Intel Demonstrates Haswell Processors at IDF

[citation][nom]Justposting12[/nom]Just a few examples: 286 to 386 to 486 to Pentium. 68000 to 68020/30 to 040 to PowerPC 601/603/604. A 33MHz 030 might put out about 8 MIPS, but a 33MHz 040 put out nearly 33 MIPS! Nowadays, you can expect a 5-10% gain (if we're lucky) at the same clock speed when a new CPU is released. The reason for this? Companies don't change CPU designs anymore, they just "tweak" things.[/citation]
If you look at ARM, MIPS, Power and other architectures, they are all butting heads against a performance ceiling as well; it isn't an x86-specific thing.

The reason progress was so fast before is that architecture optimization was still a black art back then, with everything still waiting to be discovered or invented. Early CPUs were linear, single-instruction affairs that could spend multiple cycles executing one instruction. Later CPUs added register renaming, superscalar execution and pipelining, which let the CPU work on more than one instruction at a time. Then they added out-of-order execution, branch prediction, speculative execution, caches, prefetching, simultaneous multi-threading, etc. to extract more performance out of a given thread and its execution units.
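A minimal sketch (my example, not from the post above) of the kind of thing those out-of-order, superscalar tricks exploit: the first loop is one long dependency chain, so each add has to wait for the previous one; the second gives the core four independent chains it can overlap. The exact speedup depends on the CPU and compiler.

/* cc -O2 ilp.c && ./a.out */
#include <stdio.h>
#include <time.h>

#define N 100000000L

static double serial_sum(const double *a) {
    double s = 0.0;
    for (long i = 0; i < N; i++)
        s += a[i % 1024];                 /* one long dependency chain */
    return s;
}

static double unrolled_sum(const double *a) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (long i = 0; i < N; i += 4) {     /* four independent chains */
        s0 += a[(i + 0) % 1024];
        s1 += a[(i + 1) % 1024];
        s2 += a[(i + 2) % 1024];
        s3 += a[(i + 3) % 1024];
    }
    return s0 + s1 + s2 + s3;
}

int main(void) {
    static double a[1024];
    for (int i = 0; i < 1024; i++) a[i] = i * 0.5;

    clock_t t0 = clock();
    double r1 = serial_sum(a);
    clock_t t1 = clock();
    double r2 = unrolled_sum(a);
    clock_t t2 = clock();

    printf("serial:   %.2fs (sum=%g)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, r1);
    printf("unrolled: %.2fs (sum=%g)\n", (double)(t2 - t1) / CLOCKS_PER_SEC, r2);
    return 0;
}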

With pretty much every trick in the book now on the table, there aren't many major breakthroughs left to expect. Cores are already about as deep and wide as they can practically be without hurting latency, clock rates or complexity/manageability. The only obvious way left to increase performance is to add physical cores for raw power and logical cores for better execution-resource utilization, but that is pointless until more software is rewritten to properly leverage them.

The internal combustion engine is much the same. Early engines could be substantially improved simply by better machining, better seals, better lubricants, and so on, and they quickly went from unreliable, clunky affairs to usable machines. Going from carburetors to fuel injection and then to high-pressure direct injection also significantly improved efficiency. The latest crop of improvements, such as variable ignition timing, variable valve timing, discrete variable transmissions, and exhaust gas recirculation, improved efficiency and emissions some more, but now there is almost nowhere left to optimize; every 1% improvement comes with substantially greater complexity. Since improving internal combustion any further is becoming cost-prohibitive, efforts are shifting to hybrid systems.

CPU cores have reached their per-thread IPC plateau, and people's need for processing power has also largely plateaued, partly because programmers are still brushing up on multi-threaded programming. It does not make much sense for CPU manufacturers to add cores almost nobody can use, so they focus on other objectives such as power efficiency and the IGP/GPGPU.
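To make "leveraging the cores" concrete, here is a minimal sketch of my own (thread count is an assumption): the same total work run on one thread, then split across four. Extra cores only pay off in the second case, which is exactly the rewrite most software hasn't had yet.

/* cc -O2 -pthread threads.c && ./a.out */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NTHREADS 4                       /* assumed core count */
#define TOTAL    400000000L

static double wall(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static void *worker(void *arg) {
    long n = *(long *)arg;
    volatile double x = 0.0;             /* volatile: keep the loop from being optimized out */
    for (long i = 0; i < n; i++)
        x += i * 0.5;                    /* stand-in for real per-thread work */
    return NULL;
}

int main(void) {
    long all = TOTAL, slice = TOTAL / NTHREADS;
    pthread_t t[NTHREADS];

    double t0 = wall();
    worker(&all);                        /* one core does everything */
    double t1 = wall();

    for (int i = 0; i < NTHREADS; i++)   /* same total work, split across cores */
        pthread_create(&t[i], NULL, worker, &slice);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    double t2 = wall();

    printf("1 thread:   %.2fs\n", t1 - t0);
    printf("%d threads:  %.2fs\n", NTHREADS, t2 - t1);
    return 0;
}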
 
Well put, Belardo.

I often forget how much trouble computers were in the 90s, when they seemed to crash all the time, but I still miss the times when, every couple of years, whole new worlds seemed to open up and I would dream about what games, for example, would look like in the year 2000... or 2010...

Now when I think about the future, I see games very much like the ones we have now.
 
Oh, and what I got from the quote people are referring to above me is that:
1) Idle power consumption for a given platform class will be 1/20 that of comparable SNB platforms.

2) Performance per watt will have doubled over IVB (rough numbers below).
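To put rough, purely illustrative numbers on those two claims (my figures, not Intel's):

  assumed SNB platform idle:    ~4 W
  1/20 of that:                 ~0.2 W  ->  connected-standby territory
  IVB performance per watt:     1.0 (normalized)
  Haswell claim:                2.0  ->  either the same work at roughly half the power,
                                         or about twice the work in the same power envelope
                                         (e.g. a 17 W part doing what used to take ~35 W)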

And now I shall read AnandTech's live blog...
 
"Sorry but I love Intel and as far as I am concerned they have never let us down and always delivered solid performance"

ROFL. I can't decide whether to go with Atom or Itanium for my next desktop rig, but I'm definitely holding out for a Larrabee GPU... My current Pentium 4 rig is still a beast because Intel made it, but I think it may finally be time to upgrade after 10 years.

I was also astounded by the gains I got when my job upgraded my laptop from a 2.4GHz Core 2 Duo to a 2.5GHz Sandy Bridge Core i5. Battery life went from 8 hours to 8.5 hours with a premium battery (thanks to going from 65nm to 32nm), and I can almost tell that it's faster (no, not really). Yes, every generation is so much better than the previous.

 
There are many performance improvements in Haswell not mentioned in this article, some of which are quite significant. Saying there is no performance improvement is ridiculous and wrong; see more detailed articles on this. Some programs will need to be recompiled to take full advantage of certain features, but they are there! It isn't just a GPU, power, and video encoding thing with Haswell. This is a very significant upgrade in many ways. I personally think this will be a boon for engineering, research, and education applications. It's not just an ultrabook/tablet thing, although that is where Intel will make huge amounts of money on this.
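One concrete example of the "recompile to take advantage" point (my illustration, not from the article): Haswell adds FMA, so a multiply-add can retire as one fused instruction, but only binaries built for it will use it. A minimal intrinsics sketch, which assumes an FMA-capable CPU and a build flag such as -mfma:

/* cc -O2 -mavx -mfma fma.c && ./a.out   (requires FMA-capable hardware) */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m256 a = _mm256_set1_ps(1.5f);
    __m256 b = _mm256_set1_ps(2.0f);
    __m256 c = _mm256_set1_ps(0.25f);

    /* eight single-precision a*b+c results in one fused multiply-add */
    __m256 r = _mm256_fmadd_ps(a, b, c);

    float out[8];
    _mm256_storeu_ps(out, r);
    printf("%f\n", out[0]);              /* prints 3.250000 */
    return 0;
}

An old binary compiled for pre-Haswell targets keeps issuing separate multiply and add instructions, which is why a rebuild (or updated libraries) is needed to see some of these gains.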
 
Larrabee = Knights Corner... whoops! If you don't know the difference between Core2Duo and SandyBridge, then you need to stop playing scrabble and freescale and actually do something. They are VASTLY different in performance.
 
[citation][nom]blazorthon[/nom].I can literally take a five-year-old or three/four-year-old [processor] and do pretty much anything that most consumers do and do it well.[/citation]
moronic comments like these really deserve a special prize

you could have said the exact same thing 5 years ago
you could have said the exact same thing 8 years ago
you could have said the exact same thing 10 years ago
you could have said the exact same thing 12 years ago

imagine if corporations had figured out how to milk a CPU monopoly structure back then
 
[citation][nom]t0mtom[/nom]"Sorry but I love Intel and as far as I am concerned they have never let us down and always delivered solid performance"ROFL. I can't decide whether to go with Atom or Itanium for my next desktop rig, but I'm definitely holding out for a Larrabee GPU... My current Pentium 4 rig is still a beast because Intel made it, but I think it may finally be time to upgrade after 10 years. I was also astounded by the gains I got when my job upgraded my laptop from a 2.4ghz Core2Duo to a 2.5ghz Sandy Bridge Core i5. Battery life went from 8 hours to 8.5 hours with a premium battery(thanks to 65nm to 32nm), and I can almost tell that it's faster(no, not really). Yes, every generation is so much better than the previous.[/citation]

A 2.5GHz mobile Sandy Bridge i5 is almost twice as fast as a 2.4GHz Core 2 Duo. Larrabee became Knights Corner and is now getting work done in the Xeon Phi co-processors. Atom has been updated and is now one of the highest-performance-per-core phone and tablet CPUs. Itanium is still the primary architecture used in some of the systems of huge companies such as HP. I've got nothing on the P4 because it kind of sucked, but it did keep improving for a few models after launch, albeit not ideally, until it was replaced by the far more powerful Core 2 micro-architecture.

The CPU isn't the biggest factor in a laptop's battery life, although it is up there. The display is easily the most power-hungry part of a laptop, except in some very high-end machines with very powerful graphics cards, so it shouldn't be a surprise that a new CPU didn't change power consumption greatly. Since Core 2 came about, every successive Intel micro-architecture has been an improvement over the previous one. Intel's not perfect, not even close, but you really should get a clue if you're going to rant about them.
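A rough power budget shows why (the numbers below are my own illustrative guesses, not measurements from that laptop):

  battery capacity:           ~56 Wh
  display + Wi-Fi + rest:     ~5 W    (dominates either way)
  older CPU, light load:      ~2 W    ->  ~7 W total    ->  ~8.0 h
  newer CPU, light load:      ~1.5 W  ->  ~6.5 W total  ->  ~8.6 h

So even a meaningful CPU-side improvement only moves the needle by roughly half an hour under light loads, which is about what the 8 to 8.5 hour anecdote above describes.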
 
[citation][nom]thecolorblue[/nom]moronic comments like these really deserve a special prizeyou could have said the exact same thing 5 years agoyou could have said the exact same thing 8 years agoyou could have said the exact same thing 10 years agoyou could have said the exact same thing 12 years agoimagine if corporations had figured out how to milk a CPU monopoly structure back then[/citation]

Maybe for general-purpose stuff such as office work, but for higher-end things such as gaming, not so much. I can use some four- or five-year-old CPUs for modern-day gaming even with high-end graphics configurations. I couldn't really do that several years ago with parts that were four or five years old at the time.
 
[citation][nom]blazorthon[/nom]Maybe for general purpose stuff such as office work, but higher end things such as gaming, not so much.[/citation]
it is not about gaming, who is talking about gaming?
 
Photoshop, AutoCAD, Maya, 3ds Max, Mudbox, CorelDRAW, Illustrator, Painter, Vue...

there is never enough power for these
these will always require a desktop pc
 


I can literally take a five-year-old or three/four-year-old Core 2 Quad or Phenom II x4 and do pretty much anything that most consumers do and do it well, even if I need some overclocking to do it. No game is unplayable with such CPUs and other affordable hardware. Some compression, rendering, general productivity, and much more can all be done in reasonable amounts of time. It'll take more than small or even moderate performance improvements to stop this trend.

If you read my post that you replied to, you'd clearly see that I was. Nice try calling me a moron for your failure.
 
MathCad, Mathematica, compiling, complex spreadsheets, cryptography, and many others to go along with colorblue's list. Tons of applications need heavy CPU power. The people running them were the ones who used computers the most in the 80's and early 90's. Now that everyone else uses a computer, simpler needs have become common, like playing around on the internet. Mobility is now important so people can be on the internet for long periods of time wherever they are. The power users of the 80's and 90's still require hefty CPUs, though.
 
SmarTPanz2 :
"Larrabee = Knights Corner... whoops! If you don't know the difference between Core2Duo and SandyBridge, then you need to stop playing scrabble and freescale and actually do something. They are VASTLY different in performance."

oO... just wow...

Intel's Core i series is actually not much faster than the Core 2 Duo/Quad series. The main differences are the integrated memory controller, Hyper-Threading, and an improved FPU. The main reason some benchmarks show Core i being so much faster is that HT is enabled. Now remember, these are synthetic benchmarks, not real-world scenarios. Disable HT and bench a Core 2 against a Core i at the same clock speed, and the results are extremely close.

Even with HT enabled, look at some of the real-world benchmarks out there. Things like encoding and Photoshop functions are barely faster when comparing Core i to Core 2 Duo or Core 2 Quad.

I think it is people like YOU that need to stop playing "scrabble" and actually test things out for yourself.
 
[citation][nom]JustPosting13[/nom]SmarTPanz2 :"Larrabee = Knights Corner... whoops! If you don't know the difference between Core2Duo and SandyBridge, then you need to stop playing scrabble and freescale and actually do something. They are VASTLY different in performance."oO... just wow...Intel core-i series is actually not much faster than the core2 duo/quad series. The main difference is the integrated memory controller, Hyper Threading, and improved FPU. The main reason fore some benchmarks showing Core-i being so much faster is because HT is enabled. Now remember these are synthetic benchmarks, not real-world scenarios. Disable HT and bench core2 against the same speed core-i series, and the results are extremely close. Even with HT enabled, look at some of the real world benchmarks out there. Things like encoding and photoshop functions are barely faster when comparing core-i to core2, or core quad. I think it is people like YOU that need to stop playing "scrabble" and actually test things out for yourself.[/citation]

Look for any benchmark comparing desktop Core 2 Quads to desktop Sandy Bridge i5s (they don't have HTT) and the results are not close even with otherwise identical hardware. You could even use Sandra to try to equalize the memory bandwidth and latency between both machines and only compare integer performance instead of floating point performance. The difference is significant in real-world work. Also, yes, I have tested this myself.

Maybe the improvements are mostly in the cache or something like that rather than in the cores themselves, but regardless, HTT isn't the kicker except maybe for the Core i3 and mobile Core i5 versus the Core 2 Quad, and the FPU is irrelevant for comparing integer performance.
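For anyone who wants to try this at home, here is the kind of crude single-threaded integer loop I'd use for that comparison (my own sketch, not from either post). It lives entirely in registers, so memory bandwidth barely matters, and the same binary runs on both machines:

/* cc -O2 intbench.c && ./a.out */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    uint64_t x = 88172645463325252ULL;       /* xorshift64 state */
    clock_t t0 = clock();
    for (long i = 0; i < 500000000L; i++) {
        x ^= x << 13;                         /* pure integer ALU/shift work;    */
        x ^= x >> 7;                          /* each step depends on the last,  */
        x ^= x << 17;                         /* so this measures per-clock      */
    }                                         /* integer speed, not memory       */
    double s = (double)(clock() - t0) / CLOCKS_PER_SEC;
    printf("%.2f s  (checksum %llu)\n", s, (unsigned long long)x);
    return 0;
}

Run it at the same clock on a Core 2 and a Sandy Bridge part and you get a per-clock integer comparison with HT and the FPU taken out of the picture. Keep in mind a dependency-bound loop like this is only one data point; throughput-bound and cache-bound workloads will show different gaps.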
 
MathCad, Mathematica, compiling, complex spreadsheets, cryptography, and many others to go along with colorblue's list. Tons of applications need heavy CPU power. The people running them were the ones who used computers the most in the 80's and early 90's. Now that everyone else uses a computer, simpler needs have become common, like playing around on the internet. Mobility is now important so people can be on the internet for long periods of time wherever they are. The power users of the 80's and 90's still require hefty CPUs, though.

My point was that the general populace doesn't need top-end performance even when using professional applications, if they're doing it for personal use. Most people don't care about something finishing two or three times faster when it's only a matter of minutes, unless they do it professionally. Even a lot of people who do it professionally don't need the highest-end hardware at any given time, although granted, there are also many who do. If Intel wants to get more people on shorter upgrade cycles, they'll have to do something more revolutionary than keep up the small and moderate improvements. They'll need to make more performance actually matter for more people.
 
I stand corrected! I just benched a Core i3-2120 (HT disabled) against an E8600, and the FP numbers are IDENTICAL (~24.5 GFLOPS)! I don't think Intel even bothered to mess with the FPU at all, just copied it straight over from Core 2. Fascinating.
 
[citation][nom]JustPosting14[/nom]I stand corrected! I just benched the core-i3 2120 (HT disabled) against an e8600, and the FP numbers are IDENTICAL (~24.5GFLOPS)! I don't think Intel even bothered to mess with the FPU at all, just copied it straight over from the core2. Fascinating.[/citation]

i3s have significantly inferior FPUs to the i5s and the i7s. They don't even have AVX support.
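For what it's worth, the matching numbers are roughly what you'd expect if that benchmark was running an SSE code path (my assumption): both designs can issue one 128-bit add and one 128-bit multiply per cycle, i.e. 4 double-precision FLOPs per cycle per core:

  E8600:    2 cores x 3.33 GHz x 4 FLOPs/cycle ≈ 26.6 GFLOPS peak
  i3-2120:  2 cores x 3.30 GHz x 4 FLOPs/cycle ≈ 26.4 GFLOPS peak

Measuring ~24.5 GFLOPS on either chip just means both are sitting near the same SSE ceiling; an AVX code path, where supported, doubles the per-cycle figure on Sandy Bridge.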
 
[citation][nom]thecolorblue[/nom]photoshop, autocad, maya, 3ds max, mudbox, coreldraw, illustrator, painter, vue, there is never enough power for thesethese will always require a desktop pc[/citation]
I think he's talking about normal office work in the quote you quoted. Most consumers, I'm afraid, don't do that.
That's why weird-ass tech journos get the liberty to write stuff like "ARM is cannibalizing Intel's sales"... and get away with it.
Most consumers = social networking, news, mail, banking/stocks, office applications, etc.
 
Photoshop, AutoCAD, Maya, 3ds Max, Illustrator, etc. will all be moved to a server compute environment (see AutoCAD WS).

It is much more time-, cost-, and energy-efficient to have a server with 96 cores, a GPGPU compute cluster, and 256GB of RAM run these applications than to buy a workstation for each engineer and have them each process it on their personal workstation.

Doubling performance per watt allows these servers to essentially double their compute density. What you can do in a single 3U rack-mount server with these chips would have taken an entire rack 4-5 years ago, and an entire warehouse 15-20 years ago.
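To put illustrative numbers on the density point (mine, not from the post):

  rack power/cooling budget:   ~10 kW (assumed)
  old generation:              1 unit of work per watt  ->  10,000 units per rack
  doubled perf/W:              2 units per watt         ->  20,000 units per rack,
                               or the same 10,000 units in roughly half the rack
                               space and half the cooling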

Not even five years ago I was working on a cost analysis for running additional 230V power lines and installing another set of four commercial AC units for the server room. Today I am taking AC units offline and our racks are sitting half empty due to the massive increase in density and performance per watt.

Keep up the good work!
 
[citation][nom]mr grim[/nom]Since heat is the overclocker's worst enemy, this can only be a good thing.[/citation]Actually, an overclocker's worst enemy is a chip that doesn't overclock worth a damn. If you have a powerful cooling setup, it just goes to waste on a chip that refuses to run stable at high speeds regardless of voltage (or simply can't handle the voltage required).

I'm not saying this is the case with Haswell; we have no idea yet. But a lower stock TDP doesn't tell you it's a better overclocking chip. Look at IVB vs. SB.
 
[citation][nom]alextheblue[/nom]Actually an overclockers worst enemy is a chip that doesn't overclock worth a damn. If you have a powerful cooling setup, it's just going to waste on a chip that refuses to run stable at high speeds regardless of voltage (or simply can't handle the voltage required).I'm not saying this is the case with Haswell, we have no idea yet. But just because the stock TDP is lower, that doesn't tell you that it is a better overclocking chip. Look at IVB vs SB.[/citation]

Ivy can easily handle higher frequencies. It simply gets hot because of the crap paste between the CPU die and the IHS. If you remove the IHS and replace the paste with some top-quality paste, Ivy can surpass Sandy Bridge in overclocking considerably.
 