Intel Finally Has a Real 4 GHz CPU

AMD Athlon II X2 240e @ 1.35 V, 3.98 GHz, stable for two weeks on air (Arctic Cooling), maxing out at 68 W. The catch is in memory selection and chipset cooling. I guess it almost qualifies as a 4 GHz processor.
 
[citation][nom]bit_user[/nom]Yes, I was comparing stock to stock. I don't mess with OC.[/citation]
You don't mess with OC? Funny thing, THEY do.


There have been so many GPUs and CPUs released that were made with the same specs, just underclocked and relabeled as a different model. You could OC it yourself and it'd literally be at "stock" speeds.


With the way Sandy Bridge is, and pretty much most Intel chips from the last 5 years, you can safely overclock to a significant level without sacrificing a single thing. Maybe an extra 50 cents on your electricity bill at the end of the month. By the time the extra heat and so on takes its toll, it'll be the year 2015.


Overclocking has changed. It used to be a risk without much of a gain, unless you went to extensive lengths with aftermarket cooling and voltage tweaks. Now you can keep the stock cooler and stock voltage and basically just profit without much risk. If it's unstable, it resets itself and you're back at square one. Hell, they have programs now where you click once and it'll optimize your speeds without you doing a single thing.


At this point, OC is just icing on the cake. We're really at the point where CPUs have gotten faster than most programs actually need. Don't take that as me saying you won't benefit from OC'ing, but instead of 115 FPS in some random game, you now get 120 FPS. An extra 5 FPS is cool, but when you're already at such a high number, it makes no practical difference.


Coding and the like will get faster, though, so there is *some* use in OC'ing the piss out of CPUs still. Besides that, it's mostly a bragging and accomplishment thing. Most stock speeds are overkill as is. Then again, you're prolonging your CPU's usefulness by squeezing more performance out of it, so you won't need to upgrade for another 1-2 years.
 
A good processor is one that will finish the End Turn in Empire: Total War and Heroes of Might and Magic V with more than 4 players in less than 5 seconds.

Imagine my disappointment when I upgraded my PC from a 1.8 GHz Athlon XP to a 3.0 GHz Core 2 Duo, only to see that the End Turn time in Heroes V decreased just slightly. Now, I know Heroes V is among the worst-optimized games when it comes to AI, but still, raw clock speed should have made up for that.
 
You know what, I put "2015" in as a year thinking it was actually a decent time away, but now that my head isn't in the clouds, I realize it's 2011 and 2015 is only 4 years away.


Make that 2020 instead. Haha.
 
[citation][nom]s997863[/nom]I'm surprised nobody on computer sites or product magazines points out that the latest CPUs on sale aren't even +50% in performance than core-2-duos which came out many years ago, nevermind double or triple which would actually be worth upgrading.[/citation]True dat. Well, I'm not going to sign on to the specific number of 50%, but that's why I've been holding off on my upgrade.

Most things I do are bottlenecked by a single thread. Sure, it's nice to have cores for the OS and multitasking, but I'm betting that most apps used by most people (gaming & video aside) are bottlenecked by one thread most of the time.

I'm actually betting on a 2x per-clock speedup on single-threaded, non-multimedia tasks vs. my P4. Some of that will be due to RAM bandwidth & latency improvements, some to cache size increases, some to micro/macro-op fusion, some to shorter pipelines, and some down to just plain focus on efficiency. I was completely amazed at the alleged 17% speed-up of Sandy Bridge over its predecessor.


[citation][nom]s997863[/nom]People keep saying more & more apps/games in the future will surely be coded to take better advantage of i-series and multiple-cores ... etc[/citation]Again, there is much truth to that. There has been a slow evolution of development tools from Intel and MS, and even GCC now has some optional auto-parallelization features.

I think we need more revolutionary developments in programming languages and models. There's been more than enough research; what we need is for this stuff to go mainstream. And it has to go beyond SMP-type parallelism: it needs to scale to NUMA and GPU-like architectures.

In the meantime, we can hope that more and more libraries and support tools will get good parallelization, so that programmers using existing languages and toolchains can gain some benefits of parallelization essentially for free.
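As a toy illustration of that "essentially for free" idea, here's a small C sketch of the kind of loop today's toolchains can parallelize with little or no source change. It's just my own example, not anything from the article; the GCC options in the comments (-fopenmp for the explicit pragma route, -ftree-parallelize-loops=N for the optional auto-parallelizer) are the ones I know of, so check your compiler's docs for the exact spelling.

[code]
/* Toy reduction loop. Built plain (gcc -O2 sum.c) it runs serially and the
 * pragma is ignored. Built with gcc -O2 -fopenmp sum.c, the loop below is
 * split across threads. GCC's auto-parallelizer (-ftree-parallelize-loops=N)
 * aims to do the same kind of thing with no pragma at all. */
#include <stdio.h>

#define N 10000000

int main(void)
{
    static double a[N];   /* static so the big array lives in BSS, not the stack */
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = (double)i;

    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.0f\n", sum);
    return 0;
}
[/code]

The point isn't the performance of this particular loop; it's that the parallel version is the same source the serial programmer would have written, which is the "for free" part.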

Serial performance improvements and clock-speed gains will come at an ever slower pace. Like it or not, the way forward is parallel. The challenge is how to make it happen.
 
[citation][nom]s997863[/nom]4GHz is too little, too late. Moore's law hasn't held for the last ~4 years IMO, and I'm surprised nobody on computer sites or product magazines points out that the latest CPUs on sale aren't even +50% in performance over the Core 2 Duos which came out many years ago, never mind double or triple, which would actually be worth upgrading for. People keep saying more & more apps/games in the future will surely be coded to take better advantage of i-series and multiple cores ... etc, but it's been years now and there are not enough real benchmarks I see to warrant an upgrade. It's still just a limited number of games where you would see significant improvement of i5 vs C2Duo. Let everyone stick to their old products and give a message to Intel that we're not dumb enough to upgrade to ~10% faster CPUs which are hardly different from our old ones except for the snazzy model name change and that it's been 'optimized' for, umm, video editing??[/citation]

Moore's law isn't about performance but about the number of transistors.
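Rough numbers, using the commonly quoted doubling period (my assumption, not something from the article): transistors(t) ≈ transistors(0) × 2^(t / ~2 years), so over ~4 years you'd expect roughly 4x the transistor budget. Nothing in that formula says single-threaded performance has to follow; lately most of those extra transistors go into more cores and bigger caches.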
 
Why can't they just release a dual-core K version of Sandy Bridge with a 125 W TDP? Pretty sure it could clock to 4 GHz easily with a 125 W TDP on only two cores.

There are still plenty of applications out there that only use two cores. Makes perfect sense to get some extreme version of a dual-core CPU out.
 
[citation][nom]Maximus_Delta[/nom]I doubt there is much changed at all with the chip, everybody knows the high-end SB series consumer market processors can happily run 4GHz on all 4 cores all day long with minimal voltage increase (and no need for especially elaborate cooling).[/citation]
Totally Agree!
 
Why are there so many idiots in this thread? I stopped caring about core speed years ago. To me, all I want is a high number of cores (8+ cores please) and an above decent APU (6850 performance level please)
 
[citation][nom]pocketdrummer[/nom]You're right. You can beat the AMD with most mid range intels[/citation]

Troll, you do realize that in game benchmarks the 3.7 GHz AMD beat out the Core i7-2600K in a majority of games at the higher resolutions? Nice try, though.
 
[citation][nom]tomfreak[/nom]Why can't they just release a dual-core K version of Sandy Bridge with a 125 W TDP? Pretty sure it could clock to 4 GHz easily with a 125 W TDP on only two cores. There are still plenty of applications out there that only use two cores. Makes perfect sense to get some extreme version of a dual-core CPU out.[/citation]

I have a quad-core i5-2500K running at 4.5 GHz 24/7. It gets to 60°C under Prime95 load with an H70.
 
I had my little E5300 running at 4 GHz ... what ... two years ago?

I'm afraid "turbo" doesn't count, though ... that's kind of cheating, don't you think?

Doesn't AMD still have the fastest "standard"-clocked CPU? I'm not sure and too lazy to check ... isn't it 3.8 GHz?

Anyway ... congrats to Intel.

/Eats a jelly snake in celebration
 
[citation][nom]jasonpwns[/nom]Troll, you do realize that in game benchmarks the 3.7 GHz AMD beat out the Core i7-2600K in a majority of games at the higher resolutions? Nice try, though.[/citation]

Game benchmarks have been GPU-centric since the mid-2000s...
 
[citation][nom]mayankleoboy1[/nom]With the Bulldozer chips, AMD is gonna reach stock 4 GHz. The B0 stepping could go only up to 3.5 GHz. But then, so will Ivy Bridge.[/citation]

Woot woot? LOL, you failed. AMD is on an older process, so they can't ship 4 GHz stock, not even a chance, while Sandy Bridge could easily be released at 4 GHz stock and even higher; they're such brilliant CPUs.
Ivy Bridge will use 22 nm, so Intel can go as high as it wants, and it would be no problem for them to release a stock 4 GHz Ivy, but they don't need to, since AMD will fail (they've already failed 4 times) with their so-called Bulldozer, which is going to get bulldozed by Sandy and Ivy. AMD = fail, like it or not.
 
Sooooo ... Intel has the technology to isolate one core and, in an automated fashion, clock the chip up to 4 GHz?

Wow! Just what everyone wants...a single-core, 64-bit 4.0GHz processor!

/sarcasm

I have a 6-core AMD 1055T chip that runs at 4.05 GHz all the time, and an Intel Core i7-875K chip running at ~4.07 GHz all the time.

I'm truly not impressed with it, other than the TDP.

Sorry, Intel.
 
Interesting conversation. 4 GHz brings back the old fight over whether it's a "true" quad-core. Personally, I don't find anything wrong with it: Turbo Boost at 4 GHz == 4 GHz.
In any case, what's up with all the thumbs down in this thread?
 
[citation][nom]Pherule[/nom]Why are there so many idiots in this thread? I stopped caring about core speed years ago. To me, all I want is a high number of cores (8+ cores please) and an above decent APU (6850 performance level please)[/citation]

Why would you be insulting and then follow it with such a stupid post? Clock speed improves every application that is CPU-bound. Core count does not; extra cores only become useful if you have enough threads.

Single-threaded performance is the hardest thing to increase now, so they throw more cores in there because it's a lot easier, even though it's less useful (but certainly not useless). Put another way, an 8 GHz single-core processor would eat a dual-core 4 GHz chip for lunch in most applications.

Even beyond this, adding cores gets less and less efficient at the hardware level. Doubling cores from 2 to 4, or 4 to 8, does not increase memory bandwidth, and it increases contention for shared caches. Then there is cache coherency, which gets more complex as you add more non-shared caches. Of course, you can increase the cache size, but that slows the cache down, and increasing cache size can be done just as well on processors with fewer cores.

Then there are the lower clock speeds, because you reach thermal and power limits sooner (Turbo Boost helps with that, though). You can also only clock the chip as high as its slowest core, so you get lower clock speeds from that as well, and nothing helps with this.

So, higher core counts are good in many ways, but there are tradeoffs too. Higher clock speeds or higher IPC make every application that uses the CPU faster and don't cause these problems, so they're clearly better. The only problem is that they're very limited in how much further they can be improved, so the industry has gone the less efficient/effective way, because it's relatively easy to double core counts and extremely difficult to double single-threaded performance. If they were equally feasible, you wouldn't see dual-core processors. In fact, you didn't see them while adding transistors to a single core still bought a lot of performance; once that had only a minor effect, adding cores became the way to spend most of the growing transistor budget.
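To put rough numbers on the clock-versus-cores point, here's a minimal Amdahl's-law sketch in C. The parallel fraction p and the clock figures are assumptions for illustration, not measurements of any real chip.

[code]
/* Minimal Amdahl's-law sketch: speedup of n cores running at some clock
 * ratio versus a single baseline core. The serial fraction (1 - p) only
 * benefits from the clock; the parallel fraction p also scales with cores.
 * p = 0.5 is an assumed, illustrative value. */
#include <stdio.h>

static double speedup(double p, double n, double clock_ratio)
{
    return clock_ratio / ((1.0 - p) + p / n);
}

int main(void)
{
    double p = 0.5; /* assume half the workload parallelizes */

    /* hypothetical 8 GHz single core vs. a 4 GHz baseline core */
    printf("8 GHz x 1 core : %.2fx\n", speedup(p, 1.0, 2.0)); /* prints 2.00x */
    /* two 4 GHz cores vs. the same 4 GHz baseline core */
    printf("4 GHz x 2 cores: %.2fx\n", speedup(p, 2.0, 1.0)); /* prints 1.33x */
    return 0;
}
[/code]

With those assumed numbers the 8 GHz single core wins easily, and the gap only closes as p approaches 1, which is exactly the "enough threads" caveat above.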
 
Minor correction: between the single-core P4s and the Core 2 Duo, Intel had the Pentium D 800 series (not the 900s; those were a re-run of the 800s using a different fab), followed by a brief run of Core Duo. My Pentium D 830 ran ridiculously hot back in 2005, at a 130 W TDP.
 
As long as Intel can sell its current chips, it has no reason to offer versions at higher clock speeds. That would simply mean selling parts with less desirable thermal characteristics for less than they charge now, lowering their margins, not to mention moving into the segment where they plan to place the LGA1366 replacement. They will introduce faster parts if Bulldozer forces them to.
 
[citation][nom]jsc[/nom]I agree. It is not a "real" 4 GHz CPU. But that is only because Intel hasn't chosen to make one.[/citation]
Just wondering how many would sell if priced at $1,500+.
 