AMD Says That CPU Core Race Can't Last Forever

I dunno, but I always thought that this was kinda obvious....
As they say, "All good things must come to an end".
 
Aww... and I was looking forward to the years when the general population no longer knew the term GHz, only cores instead.

"My PC has 512 Core Clusters!" 😛

Not surprising in the end though

 
He may be right. Quantum processors are the future; let's see who can bring one to the public first, AMD or Intel. You never know. By the time you look back, everything you have now will probably be junk. Always happy to see new technology for a better tomorrow.
 
So they're finally going to eliminate old instructions that aren't used? I'd love to see them get more efficient. Programmers don't seem to program for multiple cores, or at least very few do.
 
Or maybe the technology will just hit a wall. For instance, passenger jets don't travel faster today than they did 50 years ago despite years of development. A lot of technologies just didn't work out, like supersonic flight, and right now huge efforts have to be made for incremental improvements in efficiency.
 
So what is next? As Donald states, architecture, but is that all? The end of current computer tech has been predicted time and again. Every decade the tech community forecasts the fall of Moore's law, and yet it continues to flourish. A new technology will take the old one's place, but only when the current technology is sufficiently exhausted. Think vacuum tubes --> transistors --> microcontrollers. The evolution of technology will never be stagnant. It hasn't been in the past, so why would it be now?
 


/Off topic

The reason passenger jets don't travel any faster now is that they would go (locally) supersonic if they flew any faster, and that would cause all sorts of noise and regulation problems (think Concorde and how it was only allowed to fly supersonic over the ocean).

/endOffTopic

Anyway, I don't really see CPUs going to 128 cores when the majority of programs nowadays barely utilize more than two cores.
 
It's time for the software industry to mature and make the changes needed to fully utilize the multi-core environment; only then can we imagine increasing the core count, otherwise there's not much use for all those cores.
And from this point, it seems like it will take decades for that to happen.
 
I prefer speed rather than more cores. Every application benefits from increased speed, but very few applications benefit from many cores. Unfortunately not all games can be made to benefit from an increased number of cores (turn-based strategies like Total War and Heroes of Might and Magic are examples). IMO there should be some segmentation: gaming CPUs, which focus on speed and have a maximum of 8-12 cores, and workstation CPUs that focus on many cores.
 
[citation][nom]waksksksks[/nom]He may be right. Quantum processors are the future; let's see who can bring one to the public first, AMD or Intel. You never know. By the time you look back, everything you have now will probably be junk. Always happy to see new technology for a better tomorrow.[/citation]Sorry, but quantum computing can't run anything like the x86 instruction set 🙁 It can only solve certain problems that can be formulated as operations on Hermitian matrices, so even most of the computer science PhDs I know won't touch it.
Also, there is a problem with the way people are designing quantum computers:
1) Cool it down to 0 K.
2) Try to make it work some of the time.
 
[citation][nom]peterkidd[/nom]A new technology will take the old one's place, but only when the current technology is sufficiently exhausted. Think vacuum tubes --> transistors --> microcontrollers. The evolution of technology will never be stagnant. It hasn't been in the past, so why would it be now?[/citation]
Because in the past you had a few hundred computers to replace. Now you'd need the radical new technology to somehow be backwards compatible with billions of x86 machines, because no business is going to change over its entire IT framework in a day.

At some point it will happen, though. The MOSFET is far too power-hungry to be viable in the long-term future. Power consumption is what drove every other major technological transition of the most basic component in a computer.
 
How about replacing x86 instead of adding more and more stuff to it?
I'm not very familiar with x86, but is a 32-year-old instruction set still useful?
The number of transistors in a processor has grown from 29 thousand to well over a billion in those 32 years.
 
[citation][nom]dragoon190[/nom]The reason passenger jets don't travel any faster now is that they would go (locally) supersonic if they flew any faster, and that would cause all sorts of noise and regulation problems (think Concorde and how it was only allowed to fly supersonic over the ocean)[/citation]
The reason is not regulation or noise; the Concorde fleet ran flawlessly for nearly 30 years in the lucrative trans-Atlantic market.

The reason no one travels supersonic anymore is that the Concorde fleet was retired after the French failed to keep their runways clean, and because the majority of Concorde's regular passengers were killed on 9/11.

Virgin Atlantic offered to buy the Concorde fleet and bring it up to 21st-century specs, but the UK Government refused to issue a license. On top of that, no one has the money to design and build a new fleet of supersonic airliners, so the focus has become increasing passenger comfort rather than reducing flight time.
 
I want maaany cores so every program/service can run on its own core. Program for one core and let the OS decide which core to use based on what's available.
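
For what it's worth, the OS already does exactly that if you hand it the threads. A minimal C++ sketch of the idea (run_service is a hypothetical stand-in for a real program/service, not anything from an actual OS API): spawn one thread per task and let the scheduler pick the cores.

[code]
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-in for a long-running program/service.
void run_service(int id) {
    std::printf("service %d running on whichever core the OS picked\n", id);
}

int main() {
    // One thread per service; the OS scheduler places each one.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4; // the call may return 0 if the count is unknown

    std::vector<std::thread> services;
    for (unsigned i = 0; i < n; ++i)
        services.emplace_back(run_service, static_cast<int>(i));
    for (auto& t : services)
        t.join();
}
[/code]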
 
They've already effectively eliminated the old instructions, as the core is internally RISC and only externally CISC for almost all, maybe even all, x86 and x64 CPU architectures; the legacy instructions get decoded into micro-ops.
 
The whole point of multicore was not to run a separate program on each core, although that works quite nicely when your AV kicks in, because who has more than six serious CPU-intensive programs running at the same time?

The idea, and rightly so, is to split a single program up between several cores to make it run faster. Great idea in principle, but where is the slew of multicore programs? ..... Silence.

Developers, please get off your collective fat asses and write the next generation of programs that can actually utilize all this expensive hardware I already own.
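
To be clear about what I'm asking for, here's a minimal C++ sketch of that split (assuming the work items are independent, which is exactly the part that's hard in real programs): one job divided evenly across every core.

[code]
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1'000'000, 1);
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 4; // may return 0 if unknown

    std::vector<long long> partial(cores, 0);
    std::vector<std::thread> workers;

    for (unsigned c = 0; c < cores; ++c) {
        workers.emplace_back([&, c] {
            // Each core sums its own slice; no sharing, no locks.
            std::size_t chunk = data.size() / cores;
            std::size_t begin = c * chunk;
            std::size_t end = (c == cores - 1) ? data.size() : begin + chunk;
            partial[c] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& t : workers)
        t.join();

    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::printf("sum = %lld\n", total);
}
[/code]

This only works so cleanly because each thread touches its own slice; the moment the pieces depend on each other you need locks or a redesign, which is probably why the slew of multicore programs never showed up.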
 
[citation][nom]Horhe[/nom]I prefer speed rather than more cores. Every application benefits from increased speed, but very few applications benefit from many cores. Unfortunately not all games can be made to benefit from an increased number of cores (turn-based strategies like Total War and Heroes of Might and Magic are examples). IMO there should be some segmentation: gaming CPUs, which focus on speed and have a maximum of 8-12 cores, and workstation CPUs that focus on many cores.[/citation]

I don't know if this will post, but that is the dumbest thing I've heard... this year...

Getting a turn-based game to take advantage of multiple cores is probably the most basic thing ever: have it run everything, then have it display the results in graphics (something like the sketch at the end of this post).

I can't explain what I'm thinking too well.

And here's a spoiler: current CPUs are fast enough to play damn near everything at a more than reasonable frame rate. The only reason there are CPU bottlenecks is that games don't take advantage of multiple cores well enough, and there's no excuse for that anymore. Every console and computer made in the last however many years that isn't a netbook has at least a dual core.
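
Here's roughly what I mean, as a C++ sketch (GameState and compute_next_turn are hypothetical stand-ins, not from any real engine): resolve the next turn on one thread while the other keeps drawing the current state.

[code]
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

struct GameState { int turn = 0; };

// Hypothetical stand-in for turn resolution: AI, pathfinding, combat...
GameState compute_next_turn(GameState s) {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    s.turn += 1;
    return s;
}

int main() {
    GameState current;
    for (int i = 0; i < 3; ++i) {
        GameState next;
        std::atomic<bool> done{false};

        // Turn resolution runs on another core...
        std::thread ai([&] {
            next = compute_next_turn(current);
            done = true;
        });

        // ...while this thread keeps "drawing" the current state.
        while (!done) {
            std::printf("rendering turn %d\n", current.turn);
            std::this_thread::sleep_for(std::chrono::milliseconds(16));
        }

        ai.join();
        current = next;
    }
}
[/code]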
 
http://en.wikipedia.org/wiki/X86_instruction_listings

Thanks to the way it evolves, retaining backwards compatibility with the Intel Pentium® and Intel 80386DX (or better) is quite simple. Dare I say that retaining backwards compatibility 'usually' makes it easier, not harder, to keep pushing out new tech.

Read up on the differences between the Pentium and Pentium Pro, and how it all got very RISC (albeit with a very long pipeline) when the Pentium 4 hit the market. (We also have AMD to thank for a lot of tech, namely the Opteron and AMD64/x86-64.)
 
It seems backwards compatibility is, er, holding us back. We need to ditch x86. We need to change anything that isn't optimal (even things like bits per byte, or whether we even need bytes).

Note: Itanium didn't lose because it wasn't x86; it lost on its own merits.

As for specialized functions, I don't like them. We have them in GPUs, and they restrict graphics to only the methods chosen to be accelerated; nobody puts much effort into alternate methods (e.g. ray tracing has been sidelined).
 
I suppose someone had to pop our dreams at some point. Realistically I think we'd top out at around 24 cores, if we even get there at all.

When we do top out though, what will they focus on next? Instructions per cycle? Pipeline width?
 