Is This Even Fair? Budget Ivy Bridge Takes On Core 2 Duo And Quad

Page 12
Status
Not open for further replies.

Desktop GT3e is supposed to be ~3X as fast as the HD 4000, so the GT 650 would only be ~30% faster; not really worth the money anymore.

As for concerns about the IGP hindering CPU performance, I doubt this would be a significant issue for someone buying a Haswell i3/i5/i7 with the intent to use GT3e for graphics, since most games would still be IGP-bound.
 
Hey Paul, would you update the charts with the Core 4xxx series? The power consumption figures especially would be very interesting, considering the new Core CPUs will draw even less energy than Ivy Bridge.
 
Thanks so much for making this article! Owning an aging E8400, I've been trying to find articles on just how much of a performance increase I could actually see by upgrading. You've answered my question.
Cheers!
 



This is probably the best-case scenario, performance-wise, for DDR3 running on a Socket 775 motherboard: the EVGA 790i Ultra SLI board. Guru3D was pretty impressed with the performance difference, but at the time this board was over $300, and it got released near the end of life for Socket 775 chips. My brother had this board with his Q9550 overclocked to the 4.0 GHz range, running DDR3 at 1600 MHz. It looks like there was a decent difference (roughly 10% on average) over the other chipsets using DDR2.
http://www.guru3d.com/articles_pages/nvidia_nforce_790i_ultra_sli_review_(evga),1.html
This link doesn't seem to be working correctly for some reason; an issue with their old URLs, apparently. So here is the Google result for the Guru3D review of the 790i Ultra SLI motherboard.
http://www.google.com/search?q=site:www.guru3d.com%20articles%20pages%20nvidia%20nforce%20790i%20ultra%20sli%20review#sclient=psy-ab&q=guru3d+nvidia+nforce+790i+ultra+sli+review&oq=guru3d+nvidia+nforce+790i+ultra+sli+review&gs_l=serp.3...16920.23419.0.23879.28.15.0.0.0.6.328.2024.8j4j2j1.15.0...0.0...1c.1.14.psy-ab.TGLGVKpzRE8&pbx=1&bav=on.2,or.r_qf.&bvm=bv.46751780,d.dmg&fp=fe5c7b3dcb2c6542&biw=1024&bih=640
The article I was trying to link is the first result.
 


Hey Thanks....

Though I am not seeing 10% anywhere besides synthetic memory bandwidth, maybe? Was it a 24-page review with synthetic and game data only? Very little difference in games, it seems?

Also, I only glanced at it, but I can't find system specs for the 680i setup; specifically, the CL for the DDR2 (4, 5, 6, 7?).

Lastly, I may be wrong, but to me it seems the 680i used a real X6800 Conroe, while for the 790i they simulated an X6800 by disabling two QX9770 (Yorkfield) cores and downclocking to 2.9 GHz? See page 17, memory test text. Different architectures, different cache amounts, so I hope not.

I'm just trying to feel out all the variables at play here, as they can be game-changers. This may not apply in this case, since only one graphics driver is listed, but some sites recycle old data (the 680i numbers, for instance), so even the graphics drivers may have changed. Maybe you can clarify some of these questions for me? Glancing over just a few pages, I may have missed clear answers.
 


Yes, I believe they did disable cores to get some of the results, but the results without the CPU model included are actually from a Q6600 overclocked with different RAM timings; page 17, paragraph 3, they state this. Also, the results from Ghost Recon Advanced Warfighter 2 and the original Crysis seem to heavily favor the DDR3 system over the DDR2 system in all the different configurations they tested.

I also dug up some tests that Tom's ran with DDR2 vs. DDR3, although it looks like you guys were emulating DDR2-800. It seems some titles benefited from the faster RAM according to those tests. They only ran the DDR3 up to 1600 MHz, though, instead of the 2000 MHz max speed, which may or may not have added a few more frames per second. http://www.tomshardware.com/reviews/core-memory-scaling,2342-7.html

It could be interesting to see how more recent games would be affected. It seems to vary from game to game, from what I can tell with the older titles tested. Either way, there does seem to be a little difference. Maybe it would have affected newer titles more, or even less; I don't really know.
 
[citation][nom]sincreator[/nom]Yes I believe they did disable cores to get some of the results, but the results without the cpu model included is actually a q6600 overclocked with different ram timings. Page 17 paragraph 3 they state this. Also the results from Ghost Recon Advanced Warfighter 2, and the original Crysis seem to heavily favor the DDR3 over the DDR2 system in all the different configurations they tested. I also dug up some tests that Tom's ran with DDR2 vs DDR3. Allthough it looks like you guys were emulating DDR2-800mhz. Seems like some titles benefited from the faster ram according to those tests. They only ran DDR-3 up to 1600mhz though instead of the 2000mhz max speed, which may/may not of added a few more frames per second. http://www.tomshardware.com/review [...] 342-7.htmlCould be interesting to see how more recent games would be effected. It seems to vary from game to game from what I can tell with the older games tested. Either way there does seem to be a little difference. Maybe it would of effected newer titles more so, or even less, I don't really know.[/citation]
Hmmn… That is not how I read it. “All the way to the right our current high-end graphics card test platform based nForce 680i SLI and 1142 MHz DDR2 memory with a Core 2 Extreme 6800 processor. Two steps to the right I have emulated the same setup by disabling two cores of the quad core processor, and lowering the FSB towards 2.9 GHz.”

Thanks for clarifying! So it seems they overclocked a Q6600 to 2.9 GHz and then disabled two cores to emulate the X6800. That's much better than using Yorkfield at 2.9 GHz, but I'd still like to know the FSB and multiplier, as those affect performance and memory bandwidth. There are so many potential variables; the more you know, the better.

Yes, Ghost Recon indeed showed significant gains until it became GPU-limited down at 85 FPS. Frustrated by the lack of full test configurations, I stopped at FEAR, just before seeing that last game.

And the Tom's link is medium details at 1280x1024, which IMO is really more synthetic in nature than more interesting settings; similar to what I did with the first 19x10 settings, used to show some scaling. Premiere and WinRAR show nice gains, though. It's all DDR3 in that story, however; no DDR2.

But all this really goes along the lines of what I said in the story about there being very little difference between DDR2 and DDR3 without overclocking (FSB and memory). You can pull more from a fully tweaked DDR3 platform, but the real-world impact was still rather small in most cases, and pitiful compared to the cost back then. I hadn't even considered the 790i, thinking more of Intel's DDR2+DDR3 chipsets I've used.

Keep in mind, I increased memory bandwidth in Sandra by over 24% (from 6.72 GB/s to 8.36 GB/s) just through FSB overclocking and tweaking, despite a lower resulting memory frequency and at the same CL5 main timings. And that isn't even close to squeezing all you can from DDR2, either. We can't wring out S775 DDR3 for all it's worth without doing the same with the DDR2 setup. In the end, I still don't think there would be a significant impact either way (a few tests, maybe), which is why the far more popular DDR2 route seemed appropriate. The idea here was just to emulate the overclocks/performance most folks are able to reap themselves. I wasn't even supposed to overclock further for the story... that was bonus info. =)
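For what it's worth, the 24% figure above is straightforward arithmetic from the two Sandra numbers quoted in this post; a quick sketch:

```python
# Percentage bandwidth gain from the Sandra results quoted above.
before_gbs = 6.72  # baseline memory bandwidth, GB/s
after_gbs = 8.36   # after FSB overclocking and tweaking, GB/s

gain_pct = (after_gbs - before_gbs) / before_gbs * 100
print(f"Sandra bandwidth gain: {gain_pct:.1f}%")  # ~24.4%
```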
 


I won't be updating these charts at all, though there should be plenty of data out there comparing 4th-, 3rd-, even 2nd-gen parts in that area.

As a data nut, I look to fill in the missing gaps, always bothered by what isn't there. So we'll wait and see; maybe pit 4th-gen against old 1st-gen Core, for instance. But my queue is full right now; time already doesn't allow a quarter of my ideas to be put into stories.
 
Awesome article! I'm currently running a Q9550 at 3.5 GHz, and have for years, with 8 GB of RAM. This shows me I have some time left before the next upgrade. I tried lowering the RAM CAS latency a bit and upping the clock; it's been running stable for ~an hour at 3.77 GHz with 1.2 V and a 70°C full-load temp. One main issue is this: more RAM = less OC on most boards. The older they get, the more RAM you need, and you lose that precious OC that kept them running near the new chips.
 

What does the amount of RAM have to do with the highest OC?
 

When you do FSB overclocking (C2D/C2Q) with a chip that has fixed multipliers, overclocking the FSB overclocks the CPU and RAM together. Depending on the northbridge, motherboard, and DIMM combination, adding DIMMs will often require more conservative memory timings if you were already close to the limit when OCing.
 


See the previous poster.



In short, when you overclock the FSB, you overclock the RAM with it...
 


One solution is to change the RAM divider to prevent that.
Let's say you have a CPU with a 266 MHz nominal FSB and a 10x multiplier, like mine.

If you use DDR2-800, normally the RAM divider would be set to 2:3, so 266 MHz / 0.666 = ~400 MHz.
Now let's say you crank the FSB up to 320 MHz. This means the CPU is running at 3.20 GHz and the FSB at 1280 MT/s.
But uh-oh! 320 MHz / 0.666 = 480 MHz. The RAM is overclocked by 80 MHz, or in other words operating at DDR2-960.

But consider this: change the RAM divider to 4:5. Now the RAM is running at 320 MHz / 0.8 = 400 MHz, right where it should be.

Some motherboards may not have 4:5, but they will surely have at least 1:1, meaning the RAM will be clocked at 320 MHz.
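The divider math above can be sketched in a few lines. `effective_clocks` is a hypothetical helper, not a real tool; the 266 MHz FSB, 10x multiplier, and 2:3 / 4:5 ratios follow the example in this post, with the divider read as FSB:DRAM (so 2:3 means the RAM clock is 3/2 of the FSB):

```python
# Sketch of the FSB-overclocking math above (hypothetical helper).
def effective_clocks(fsb_mhz, multiplier, divider):
    """Return (cpu_mhz, ram_mhz, ddr_rating) for a given FSB setting.

    divider is the FSB:DRAM ratio, e.g. (2, 3) means the DRAM clock
    runs at 3/2 of the FSB. The DDR rating is double the DRAM clock.
    """
    fsb_part, dram_part = divider
    cpu_mhz = fsb_mhz * multiplier
    ram_mhz = fsb_mhz * dram_part / fsb_part
    return cpu_mhz, ram_mhz, 2 * ram_mhz

# Stock: ~266 MHz FSB, 10x, 2:3 divider -> ~2.67 GHz CPU, DDR2-800 RAM
print(effective_clocks(266.67, 10, (2, 3)))
# FSB cranked to 320 MHz, same 2:3 divider -> RAM overclocked to DDR2-960
print(effective_clocks(320, 10, (2, 3)))
# Same FSB with a 4:5 divider -> RAM back at 400 MHz (DDR2-800)
print(effective_clocks(320, 10, (4, 5)))
```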
 
Still have an E8500, 4 GB of RAM, and a 560 Ti in my old gaming rig. Those dual cores were really strong at the time; I bought the E8500 over the Q6600 since it did better for gaming back then and overclocks better than almost anything. Now I have a 3770K at 4.5 GHz, 16 GB of RAM, an SSD, and two 7970 GHz cards. Man, it was a nice jump in performance.
 
Ivy Bridge and Haswell show a lack of single-threaded performance increase (in Super Pi, say) over a Penryn (last-generation) Core 2 Duo; not more than 60% better clock for clock. Not very impressive. I think I'll wait until Skymont for a new laptop...
http://forums.anandtech.com/showthread.php?t=2218060&page=23

The things going for Haswell are the great power consumption, very good integrated graphics, and very good multithreaded performance. But for day-to-day responsiveness, it's not that much better than a high-clocked Penryn.
 
"Intel Core 2 Quad Q9550 (Yorkfield), 45 nm, 2.83 GHz, 1,333 MT/s, 12 MB L2 Cache
Overclocked @ 3.4 GHz (400 x 8.5), 1,600 MT/s, 1.240 V Idle/ 1.200 V Load
Overclocked @ 3.7 GHz (435 x 8.5), 1,740 MT/s, 1.328 V Idle, 1.240 V Load"

What I want to know is why the 3.7 GHz configuration wasn't in the test? I only saw the 3.4 GHz.

Please answer.
 


It's there... page 17 (applications), page 18 (games).
http://www.tomshardware.com/reviews/ivy-bridge-wolfdale-yorkfield-comparison,3487-17.html

This story was intended to test at more typical OCs. 4.0/3.4 GHz are more common. But I couldn't help myself and pressed further, including the data as bonus content.
 
I'm running an i7 2700K @ 4.8, which is fairly snappy ...
But before that an i7 920 @ 4.2 ...
And before that a Q9650 @ 4.2 ...
And before that a Q6600 @ 3.6 ...

I wish I had saved my money and kept my Q9650 ... it was the best "BANG" for the CPU buck I ever enjoyed!

This dates back beyond the 45 nm C2Qs to the 65 nm C2D Conroe OC'd @ 3.6 GHz, when Intel won back the processor performance crown from AMD in August 2006.

Regardless, I couldn't care less about who hit whom over their technical heads, how hard, with which corporate rock, or how much market-share blood was drawn during whichever financial quarter! I'm only interested in which CPU is faster.

Let's put this argument into perspective. Just like about anything else in life ... you usually get what you pay for ... plus or minus a few beer bucks! So apples-to-apples, core-per-core, clock-per-clock, an i5 3570K, OC'd or not, will pretty much monkey-stomp the large intestines and their complete contents out of any other comparable quad-core processor ... except for perhaps a 4th-gen Core i5 4670K.
 
Wish I had seen this article a few days ago; I just bought an i5-3570K and overclocked it to 4.0 GHz. I figured a five-year-newer processor and $400 upgrade would destroy my Q6600 overclocked to 3.0 GHz. Went from a 7.2 on the Windows 7 WEI to a 7.6... Now don't get me wrong, real-world performance-wise I am pretty happy; my DVDs encode about 2.5x faster on the new system. But still, I would have thought it was going to be a bigger difference. Guess I should have saved my money and bought a newer video card (or a faster SSD)!
 


And what you're experiencing is why many people think AMD is "good enough," and why so many people are still gaming just fine on their Phenom IIs, Core 2 Duos, or first-gen Core i CPUs, or are going out and getting an FX-6300. The performance spread between the highest-end Intels and the second- and third-tier CPUs of the world just isn't noticeable in 95% of your daily life, and the other 5% of the time you wouldn't notice it unless it's set side by side with a top-end Intel to show you the difference.
 
I'm running a Q9550 @ 3.4 GHz with a Radeon 7870, 8 GB of RAM, and an 830 (256 GB). The computer is super responsive. I can use Photoshop, Office, various games, etc. without ANY problems. Core 2 Quads were (are!) fantastic processors. I'm waiting for a good replacement for the Core 2 Quad, and I have to say that the i3, i5, or even i7 don't look that good to me. When I changed from a P4 3.0 GHz to a C2D 1.8 (E4300), and then to a C2Q (Q9550), that was an AMAZING gain in performance. AMAZING. If you have a decent Core 2 Quad, don't worry; just get a decent graphics card and enjoy :)
 


While I agree that they are nice processors, a 4.0-4.5 GHz i5 or i7 would smoke your C2Q. 3570Ks, for instance, hit 4.0-4.3 GHz consistently, without any real effort. I had a Kentsfield-based quad Xeon clocked @ 3.6 GHz. It was a nice chip, but I feel my i5 is a much speedier chip. I wouldn't consider anything other than an i5/i7, or maybe even an FX-8320/FX-8350, if coming from a Core 2 Quad, though. Nothing else is worth the price of an upgrade, CPU-wise. I gave my mom my old E8190 C2D and it still suits her needs just fine. For most people, a C2D/C2Q or Athlon II/Phenom II is plenty. Only enthusiasts and power users really need more.
 