AMD Says That CPU Core Race Can't Last Forever

[citation][nom]alidan[/nom]I don't know if this will post, but that is the dumbest thing I ever heard... this year... You get a turn-based game to take advantage of multiple cores; it's probably the most basic thing ever. Have it run everything, then have it display the results in graphics. I can't explain how I'm thinking too well. And here is a spoiler: current CPUs are fast enough to play damn near everything at a more than reasonable frame rate. The only reason there are CPU bottlenecks is that the game doesn't take advantage of multiple CPUs well enough, and there is no excuse there anymore. Every console and computer that isn't a netbook that's been made in the last however many years has at least a dual core.[/citation]
You're wrong; TBSs are serial in nature: every player must wait for the other players to finish their turns, because each player makes decisions based on the previous players' actions. Yes, it is possible to use more cores during each AI player's turn, for example for computing each hero's or unit stack's next actions, but since it's unusual for a player to have more than 10 heroes/stacks, 12 cores with good speed are more than enough.
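
A rough sketch of that per-unit parallelism during one AI player's turn (illustrative C++ only, not taken from any actual game engine):

[code]
// Illustrative sketch: each hero/unit stack gets its next action computed on
// its own task during the AI player's turn, then the results are collected.
#include <cstdio>
#include <future>
#include <vector>

int decide_action(int hero_id) {
    // stand-in for an expensive AI evaluation for one hero or unit stack
    return hero_id * 10;
}

int main() {
    const int heroes = 8;  // rarely more than ~10 per player, as noted above
    std::vector<std::future<int>> tasks;
    for (int h = 0; h < heroes; ++h)
        tasks.push_back(std::async(std::launch::async, decide_action, h));
    for (int h = 0; h < heroes; ++h)
        std::printf("hero %d -> action %d\n", h, tasks[h].get());
    return 0;
}
[/code]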
 
*Or maybe the technology will just hit a wall. For instance, passenger jets don't travel faster today than they did 50 years ago despite years of development. A lot of technologies just didn't work out like supersonic flight and right now huge efforts have to be made for incremental improvements in efficiency.*

Supersonic flight's technology worked (and still works) fine. But everyone still uses glass, which is a 3,000-year-old technology. Breaking the sound barrier breaks glass.

It's like trying to guide the rockets that launched the Mars rover using a horse-drawn carriage; the older technology doesn't work.
 
[citation][nom]Horhe[/nom]You're wrong, TBSs are serial in nature: every player must wait for the other players to finish their turn, because he makes decisions based on the previous players' actions. Yes, it is possible to use more cores during each AI player turn, for example for computing each hero or stack of units' next actions, but since it's unusual for a player to have more than 10 heroes/stacks, 12 cores with good speed are more than enough.[/citation]

That holds for now, and perhaps two years on, but think about future battle simulators with thousands of NPCs, each with unique models, skills, and human personality patterns. Spice it up with voice recognition and contextual speech.
 
[citation][nom]peterkidd[/nom]So what is next? As Donald states, architecture, but is that all? The end of current computer tech has always been predicted. Every decade the tech community forecasts the fall of Moore's law, and yet it continues to flourish. A new technology will take the old one's place, but only when current technology is sufficiently exhausted. Think vacuum tubes --> transistors --> microcontrollers. The evolution of technology will never be stagnant. It hasn't been in the past, so why would it be now?[/citation]

You're quite incorrect. There is a limit to how thin you can make things before further shrinking is simply not possible. We're approaching a point where features are only a few atoms thick, and then the constant shrinking has to end.

It's like performance for processors. We've stagnated in processor performance pretty much since the Pentium Pro. You get higher clock speeds, more cores, etc., but the instructions per cycle haven't gone up much in the past 15 years. Compare that to the stretch from a 1980 processor to the Pentium Pro. You're looking at 3x from 8086 to 286, 2x from 386 to 486, 2x from 486 to Pentium, and 30% from Pentium to PPro. So, almost a 16x per-cycle improvement up to the Pentium Pro, and maybe 40% since? That's about 40 times slower. We still haven't reached the clock-speed record from 2006, so there's essentially no growth there. Even going by clock speed, we went from 5 MHz in 1980 to 200 MHz in 1995 to 3.6 GHz in 2010. So, a 40x improvement up to the PPro, and 18x since. So, roughly, performance increased about 100x faster from 1980 to 1995 than from 1995 to 2010.

So, we're seeing processor performance stagnate badly. They do stuff like add cores because they can't improve anything else, but adding cores is less useful than making each core perform better by the same amount. And as you add cores, you lower the effectiveness of each one, since they share resources, and some software can't be parallelized very effectively.
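
Amdahl's law makes that last point concrete. A quick sketch (the 90% parallel fraction below is just an assumed figure for illustration, not a measurement of any real program):

[code]
// Amdahl's law: speedup from N cores when only a fraction p of the work can
// run in parallel. Per-core efficiency drops as the core count goes up.
#include <cstdio>

int main() {
    const double p = 0.9;  // assumed parallel fraction (illustration only)
    for (int cores = 1; cores <= 16; cores *= 2) {
        double speedup = 1.0 / ((1.0 - p) + p / cores);
        std::printf("%2d cores -> %.2fx speedup (%.2fx per core)\n",
                    cores, speedup, speedup / cores);
    }
    return 0;
}
[/code]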

Hardware will putter along for a while longer, but performance has been slowing, and will continue to. The big opportunity is in software, where grotesquely bloated and inefficient code from companies like Microsloth can be made much more efficient.

When you consider how much effective work was done in the '60s and '70s on computers with only a tiny fraction of the processing power we have today, you begin to realize how poorly we've taken advantage of the enormous increases in processing power. It's been taken for granted and used to rationalize sloppy, poor-quality coding.
 
[citation][nom]mhelm1[/nom]More cores haven't increased my processing speed as much as faster processors and more data bits. Yeah, why aren't they talking about a 128-bit or 256-bit microprocessor?[/citation]

Because a 128-bit or 256-bit microprocessor would probably be slower than a 32-bit one. 64-bit is generally slower and less efficient than 32-bit as well, except for specific apps or those needing lots of memory.

More bits only help when you can actually use them. Going from 8 bits to 16 bits was huge, because now you could add numbers bigger than a byte could hold much faster, and could generate addresses faster as well (8-bit processors had 16-bit addresses in almost all cases). Going from 16 bits to 32 bits wasn't nearly as important, but it was still significant, because there was still a decent amount of arithmetic on numbers higher than 32K. 64-bit is generally slower, since you simply do not need numbers bigger than 2 billion very often, but when you do, it's much faster. x86-64 adds some other benefits to help offset the inefficiencies of 64-bit code (like more registers), so you don't see the penalty as much as you otherwise would. But going to 128-bit would help an extremely limited number of apps and carry a penalty for almost every app. In specific applications, 128 bits could be good, but for a general-purpose processor, I don't think you'll ever see it, because the need for ultra-enormous numbers is not going to grow.
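
To illustrate the flip side, here is roughly what a 128-bit addition costs when the hardware underneath is only 64 bits wide (a hand-rolled sketch, not how any particular compiler actually emits it):

[code]
// Adding two 128-bit numbers out of 64-bit pieces: add the low halves, detect
// the carry, then add the high halves plus the carry - several operations
// where a native 128-bit machine would need one.
#include <cstdint>
#include <cstdio>

struct U128 { uint64_t lo, hi; };

U128 add128(U128 a, U128 b) {
    U128 r;
    r.lo = a.lo + b.lo;
    uint64_t carry = (r.lo < a.lo) ? 1 : 0;  // low-half wraparound becomes a carry
    r.hi = a.hi + b.hi + carry;
    return r;
}

int main() {
    U128 a{~0ULL, 0}, b{1, 0};  // a = 2^64 - 1, b = 1
    U128 c = add128(a, b);      // expect exactly 2^64: hi = 1, lo = 0
    std::printf("hi=%llu lo=%llu\n",
                (unsigned long long)c.hi, (unsigned long long)c.lo);
    return 0;
}
[/code]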
 
Seeing how their strategy is to market more cores with less performance per core, this is some pretty bad news for them. Hopefully it signals that they know they will have to improve their per-core performance significantly going forward.
 
A few years ago I heard that Intel had developed materials that would allow them to reach 10 GHz. Whatever happened with that?
 
It's about who can produce the cheapest power-efficient CPU and motherboard solution, and why three AMDs for the price of one Intel isn't a better deal when the Intel does twice as much and uses roughly the same amount of power. The future isn't blight, it's bright! ;-)
 
[citation][nom]TA152H[/nom]You're quite incorrect. There is a limit to how thin you can make things before it is simply not possible. We're approaching a point where things are only a few atoms thick, and then, the constant shrinking has to end.It's like performance for processors. We've stagnated in processor performance pretty much since the Pentium Pro. You get higher clock speeds, more cores, etc..., but the instructions per cycle are not so much higher in the past 15 years. Compare that to a 1980 processor and the Pentium Pro. You're looking at 3x from 8086 to 286, 2x from 386 to 486, 2x from 486 to Pentium, 30% from Pentium to PPro. So, almost a 16x improvement per cycle to the Pentium Pro, and maybe 40% since? That's about 40 times slower. We still haven't reached clock speed record from 2006, so there's essentially no growth there. Even going by clock speed, we went from 5 MHz in 1980 to 200 MHz in 1995 to 3.6 GHz in 2010. So, 40x improvement to PPro, 18x since. So, roughly, performance increased 100x faster from 1980 to 1995, than from 1995 to 2010. So, we're seeing processor performance stagnate badly. They do stuff like add cores because they can't do anything else better, but adding cores is less useful than making the cores perform better to the same extent. But, as you add cores, you lower the effectiveness of each once, since they share stuff, and some software can't be parallelized very effectively.Hardware will putter along for a while longer, but performance has been slowing, and will continue to. The big opportunity is in software, where grotesquely bloated and inefficient code from companies like Microsloth can be made much more efficient. When you consider how much effective work was done in the 60s and 70s on computers with only a tiny fraction of the processing power we have today, you begin to realize how poorly we've taken advantage of the enormous increases in processing power. It's been taken for granted, and rationalized sloppy and poor quality coding.[/citation]
Yes, starting off, code written in the '50s and '60s was unbelievably more efficient than code written today, since it was written directly to the hardware. Once you got away from programming in machine code, you had to introduce an entity that would control access to the hardware and interpret hardware calls (the operating system), and coding languages that were easier for the programmer to understand (assembler, COBOL, FORTRAN, C) that had to be compiled to machine code so the hardware could understand them. Each of these innovations added inefficiency to the code that finally ran on the hardware. With each new level of hardware, OS, coding language, IDE, etc., you added layers of compatibility and new features, and yet more inefficiency. With Java, Python, Perl, HTML, Flash, PHP and many others, we are looking at code never intended to run directly on actual hardware, but rather through a virtual machine, appliance or framework, which brings yet another layer that has to run before you can run the code.

Yes, code is a lot less efficient, but it has to be. There would not be enough developers in the world who could keep up with the demand if everything was still written to the metal as it was in the beginning of the computer age. So, we accept a certain amount of inefficiency in our code so we can get the code done in a timely manner with the features that we have all come to know and love.
 
[citation][nom]doorspawn[/nom]It seems backwards compatibility is, er, holding us back. We need to ditch x86. We need to change anything that isn't optimal (even things like bits per byte, or whether we even need bytes). Note: Itanium didn't lose because it wasn't x86 - it lost on its own merits. As for specialized functions, I don't like them. We have them in GPUs and they restrict graphics to only the methods chosen to be accelerated - nobody puts much effort into alternate methods (e.g. ray tracing has been sidelined).[/citation]


Actually, as I understand it, as processors shrink, the portion of the processor that does the translation (x86 -> native) becomes a much smaller percentage and doesn't take all that much additional effort. So back in the day, when x86 translation would have taken 15% of the die, it may now only take 1% of the X million transistors needed (not accurate, just making a point).


If they were to change their instructions for each generation of processor, all programs would have to be recompiled *unless written in scripting or IL code like Java or .NET*. It took years for Apple to get software companies to convert the bulk of their programs when Apple went to a new processor! Intel took a path where keeping the processors backwards compatible allows them to move forward faster... And since the cost means much less now than it did back in the 386 days, they are probably seeing a negligible downside to keeping it. Again, the alternative would be a huge downside with a much longer (and growing with each change) adoption period.

 
Also, for what you are proposing, we will need to see a paradigm shift in processor development that coincides with a synergistic change in how code is compiled or interpreted. Sure, we can add more CPU and GPU cores, but until we have a layer built into the OS that lets the OS decide which piece of hardware runs which piece of code, it is all for nothing if we want both utility and flexibility. Otherwise, we have to either write code specifically to the hardware configuration (or guess at it) or write code that will be best for 80% of the user base. Hence most gaming software only takes advantage of two cores: that is the baseline most game developers now recognize.
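
A minimal sketch of the alternative to baking in a two-core baseline, using nothing beyond standard C++ (not any engine's actual code): ask the machine how many hardware threads it has at runtime and size the worker pool to match.

[code]
// Query the core count at runtime instead of assuming a fixed baseline.
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 2;  // the query may report 0; fall back to the old two-core baseline
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < cores; ++i)
        workers.emplace_back([i] { std::printf("worker %u running\n", i); });
    for (auto& w : workers) w.join();
    return 0;
}
[/code]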
 
AMD, LMAO. Hello, AMD, don't you have 800 or 1,600 cores in the old 4870/4890 video cards? Build me a basic HTPC CPU like the ATI 4890's 1,600 cores, then bring up this comment again.

AMD, stop b*tching and start pulling your finger out from where the sun don't shine, because we as PC users are tired of AMD always crying and complaining about how lazy and incompetent AMD is and how AMD will never win.

AMD, suck it up or build an AMD processor for once.

Intel might be incompetent and expensive with the i7-975 glorified memory controllers, but at least they occasionally try to build something.

Imagine if Intel always complained about how AMD is the leader?
Oh yeah, that will not happen; AMD management are crybabies. AMD lead? Yeah, right.

AMD, suck it up or close down shop, as it's obvious you cannot and do not want to play in the big leagues.

I'm an AMD fanboy, but for G*d's sake, f-ng grow up and stop the AMD b*tching. You are not CrApple/Steve Jobs, so stop sounding like CrApple/Steve Jobs.
 
[citation][nom]Zulfadhli[/nom]Why not start a computer with two CPUs? I mean, we already have SLI and CrossFire technology. But I really hope at the end of the day Nvidia enters the CPU race; having only two choices really makes us consumers unhappy.[/citation]

They already had tech like that, and it flopped: Socket 1207FX and Skulltrail. Both are under three years old, and they flopped because of their prohibitive cost and the fact that you had to buy two FLAGSHIP-priced CPUs to make them run.

Oh, and by all means, let's have Nvidia, the creators of ever-larger GPUs, join the fray. Their slowest CPU will operate at 3.0 GHz, consume 200 watts while idle, and demand its own dedicated power supply with a third again more pins so it can get enough stable current to run. Oh, and it'll run 10% faster and cost $2000 for the CPU alone. -.- Nvidia fanboys, psssh... We have more competitors, but they are targeting the low-end markets, VIA being one of them, and they are still cleaning house.
 
[citation][nom]JohnnyLucky[/nom]Is it possible we've reached the point of diminishing returns?[/citation]

Could be. Portability is very popular. People who use computers for their social networks are more interested in something smaller than in something faster.
 
[citation][nom]Houndsteeth[/nom]Yes, starting off, code written in the 50s and 60s was unbelievably more efficient than code written today since it was written directly to the hardware. Once you got away from programming in machine code, you then had to introduce an entity that would control access to the hardware and interpret hardware calls (the operating system) and coding languages that were easier for the programmer to understand (assembler, COBOL, FORTRAN, C) that had to be compiled to machine code to the hardware could understand it. Each of these innovations added inefficiency into the code that finally ran on the hardware. With each new level of hardware, OS, coded language, IDE, etc., you added layers of compatibility and new features, yet more inefficiency. With Java, Python, PERL, HTML, Flash, PHP and many others, we are looking at code never intended to run on actual hardware, but rather through a virtual machine, appliance or framework, which brings yet another layer that has to run before you can run the code.Yes, code is a lot less efficient, but it has to be. There would not be enough developers in the world who could keep up with the demand if everything was still written to the metal as it was in the beginning of the computer age. So, we accept a certain amount of inefficiency in our code so we can get the code done in a timely manner with the features that we have all come to know and love.[/citation]

Actually, people were NOT writing most applications in machine language in the 60s. FORTRAN and COBOL were dominant languages of that time, and were not machine code. C++ is compiled to machine language, and is the dominant language today.

The problem is sloppy, bloated coding. Back in the early '90s, I wouldn't waste a single instruction, and I was very conscious of it. I remember someone asking me why I went through an array of strings backwards instead of forwards. The answer was obvious - I already had my pointer there, and it saved an instruction by not resetting it.
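
Something like this, for illustration only (not the original code): after the forward pass the pointer is already sitting at the end of the array, so the next pass just walks it back down instead of resetting it to the start.

[code]
// Reuse the pointer left at the end of the forward pass for a backwards pass.
#include <cstdio>

int main() {
    const char* words[] = {"alpha", "beta", "gamma"};
    const char** p = words;
    const char** end = words + 3;

    while (p != end)           // forward pass leaves p at the end
        std::printf("fwd: %s\n", *p++);

    while (p != words)         // backwards pass reuses p, no reset needed
        std::printf("rev: %s\n", *--p);
    return 0;
}
[/code]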

C is a very efficient language. Obviously, object-oriented languages are bloated, but not that much. They aren't necessary at all anyway, but they do make bad programmers less bad. Structured programming is better for better programmers.

But, anyway, the point is, software today does nothing that it didn't do 15 years ago, but it takes a lot more processing power to do it. Hell, go back 30 years: you had spreadsheets, bulletin boards, games, bookkeeping, etc. Nothing really changes. Games get more resolution, but do spreadsheets or word processors do anything now that you needed and couldn't do 15 years ago? What does Windows 7 do that OS/2 couldn't, except require 2 GB to run right instead of 16 MB? Oh, and run a lot slower. Why do applications like Flash keep taking more memory the longer you use them, if they're just doing the same thing? After eight hours, Flash takes roughly three times as much memory as after 15 minutes. Why does it keep growing? Bad programming and memory leaks.
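
For what it's worth, the growth pattern described above is exactly what a leak looks like in practice. A minimal illustration (nothing to do with Flash's actual source):

[code]
// Memory allocated on every event and never freed keeps the process's
// footprint climbing for as long as it runs.
#include <cstdlib>
#include <cstring>

void handle_event() {
    char* buffer = static_cast<char*>(std::malloc(4096));
    std::memset(buffer, 0, 4096);
    // ... use buffer ...
    // missing std::free(buffer): every call leaks 4 KB, so hours of events add up
}

int main() {
    for (int i = 0; i < 1000; ++i)
        handle_event();
    return 0;
}
[/code]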

That's where the opportunity is. Hardware has carried bad programmers for a long time. These stupid layers on top of layers are designed for bad programmers. Games will not keep using garbage interfaces like DX11, or whatever comes after it, and will have to start writing more directly to the hardware. If they do this, you'll see big improvements. If they keep writing game engine on top of 3D engine, on top of operating system, on top of device driver, games will suffer a lot of performance loss from MS bloatware.

If Crysis were running on DOS and writing to the hardware, no one would ask that annoying and infamous question. Not that I'm suggesting that will or should come back, but something in between almost has to for performance to improve dramatically.
 
I am sure AMD and Intel will come up with minor improvements in order to justify changing CPU sockets etc. every couple of years to keep the $$$$ rolling in.
 
[citation][nom]Stardude82[/nom]Or maybe the technology will just hit a wall. For instance, passenger jets don't travel faster today than they did 50 years ago despite years of development. A lot of technologies just didn't work out like supersonic flight and right now huge efforts have to be made for incremental improvements in efficiency.[/citation]

LOL what? There were passenger jets 50 years ago cruising at 47,000 ft and traveling at 680 mph? Which ones? Show me.
 
[citation][nom]Stardude82[/nom]Or maybe the technology will just hit a wall. For instance, passenger jets don't travel faster today than they did 50 years ago despite years of development. A lot of technologies just didn't work out like supersonic flight and right now huge efforts have to be made for incremental improvements in efficiency.[/citation]
Actually, today's planes have the ability to go almost twice as fast as they currently do. Unfortunately, fuel costs are so high that it's cheaper for them to fly at a slower speed. So you're definitely right in saying that efficiency improvements need to be attained.
 
/offtopic
The reason we don't have supersonic passenger planes is not the noise or a complex conspiracy or a technology wall; it's the simplest explanation: inefficiency. The Concorde simply burned more fuel and carried fewer passengers than a wide-body airframe. To recoup their costs, airlines had to charge more than the general public was willing to pay. It wasn't banned from flying over the US (I saw one land in Texas in '87); it just couldn't go supersonic over land because of the sonic boom. Our military jets have similar restrictions. FWIW: a sonic boom may rattle your windows, but it doesn't shatter glass. Mythbusters flew an F/A-18 at Mach 1.1 at about 500 ft over several test glass objects; one window pane fell out and shattered upon impact with the ground, but no other test piece showed signs of damage.
/endofftopic
 
[citation][nom]TA152H[/nom]Actually, people were NOT writing most applications in machine language in the 60s. FORTRAN and COBOL were dominant languages of that time, and were not machine code. C++ is compiled to machine language, and is the dominant language today. The problem is sloppy, bloated coding. Back in the early 90s, I wouldn't waste a single instruction, and was very conscious of it. I remember someone asking me why I went through an array of strings backwards instead of forwards. The answer was obvious - I already had my pointer there and it saved an instruction by not reseting it. C is a very efficient language. Obviously, object oriented languages are bloated, but not that much. They aren't necessary at all anyway, but they do make bad programmers less bad. Structured programming is better for better programmers.But, anyway, the point is, software today does nothing that it didn't do 15 years ago, but takes a lot more processing power to do it. Hell, go back 30 years, you had spreadsheets, bulletin boards, games, bookkeeping, etc... Nothing really changes. Games get more resolution, but do spreadsheets or word processing programs do anything now that you needed and couldn't do 15 years ago? What does Windows 7 do that OS/2 couldn't do, except require 2 Gig to run right, instead of 16 mb. Oh, and run a lot slower. Why do applications like Flash keep taking more memory the longer you use them, if it's just doing the same thing? After eight hours, it takes roughly 3 times as much as after 15 minutes. Why does it keep growing? Bad programming and memory leaks. That's where the opportunity is. Hardware has carried bad programmers for a long time. These stupid layers on top of layers are designed for bad programmers. Games will not keep using garbage interfaces like DX11, or whatever comes after it, and will have to start writing more directly to the hardware. If they do this, you'll see big improvements. If they keep writing game engine on top of 3D engine, on top of operating system, on top of device driver, games will suffer a lot of performance loss from MS bloatware. If Crysis were running on DOS and writing to the hardware, no one would ask that annoying and infamous question. Not that I'm suggesting that will or should come back, but something in between almost has to for performance to improve dramatically.[/citation]
While I agree that there is a lot of room for improvement simply by cleaning up sloppy code, writing directly to the hardware is not an option. There have to be abstraction layers. Are you going to develop numerous versions of your game to run on every different video card - not just every vendor, but every major grouping of cards? Are you going to develop your applications to run their own operating system? You would have to re-invent the wheel too many times to make it work. Are you going to code your own TCP/IP stack for your custom application that doesn't run in Windows? You would end up spending all your time developing and supporting multiple platforms.
 