Is This Even Fair? Budget Ivy Bridge Takes On Core 2 Duo And Quad

There is an abundance of people still using Socket 775, more than there were in the Athlon 64 era, I'm sure. That said, there are Socket 939 enthusiast groups out there, still OCing away. 😀 775 was a good socket too, thanks to the success of Core 2 Duo. The Core iX series will also be noted in history, I'm sure, although with all the f'ing sockets Intel put out for the varying chips, we won't look back on them quite as favorably. Personally, I really applaud AMD for putting both DDR2 and DDR3 memory controllers in some of their chips to bridge the gap between sockets. It was considerate design; they may have sacrificed a few transistors, some die space, and ultimately some speed, but they were trying hard to give consumers what they want: upgradeability. That's one reason there are still users out there who really love AMD and will keep buying their boards and chips, even if they aren't the performance king. They still compete on price, and still make systems fast enough to game on, which to a true enthusiast is all that really matters. I own and have owned a lot of both AMD and Intel systems. Both are fun to build and work on, but AMD does seem to cater more to the overclocker/upgrader in me. Intel makes some great processors, and I own an i5 IVB. The fact is, it's easier than ever to build an awesome "do everything" gaming system. Consoles suck. That's one point most (especially 30+) PC enthusiasts agree upon. :)
 

It is amazing how many sockets Intel has gone through. I mean, 775 lasted from Prescott to Penryn, while its successors only lasted one architecture and its die shrink (a la LGA 1156 and 1366). I would not necessarily say "consoles suck"; they are great tools for just sitting on a couch and gaming, albeit at lower graphical settings, and I must admit PCs are much better multi-use systems no matter how heavy a console OS becomes. My QX9770 and EVGA 780i are here to stay for a while longer.
 
A Q6600 @ 3.0 GHz (or higher) would have done just awfully in this test, as would most other old CPUs.

The E8400 and Q9550 were relatively rare and unique.

Typically, older CPUs had nowhere near the cache newer ones do. Years ago I had a Q6600, and the 4 MB of cache really hurt it, even though it's listed as 8 MB (2x4 MB). Effectively each pair of cores only sees 4 MB; unlike Intel Smart Cache, the full 8 MB was not available across all 4 cores.
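For what it's worth, if you're on Linux you can see that split for yourself. Here's a minimal sketch of mine (assuming the standard sysfs cache layout; output obviously depends on the machine) that prints which logical CPUs share each cache. On a Q6600 the two 4 MB L2 slices each show up as shared by only one pair of cores, whereas a Smart Cache part reports a single last-level cache shared by all of them:

    import glob
    import os

    # List every cache the kernel exposes and which logical CPUs share it.
    # Uses the standard Linux sysfs layout; runs fine as a normal user.
    seen = set()
    for index_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cache/index*")):
        def attr(name):
            with open(os.path.join(index_dir, name)) as f:
                return f.read().strip()
        level, ctype, size, shared = attr("level"), attr("type"), attr("size"), attr("shared_cpu_list")
        if (level, ctype, shared) in seen:      # same physical cache seen via another CPU
            continue
        seen.add((level, ctype, shared))
        print(f"L{level} {ctype:<12} {size:>7}  shared by CPUs {shared}")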
 
"God dammit! Now I feel like I need to change my C2D E8400. >"
I second that. It was a great CPU for its time, but now it's time for a new mobo/CPU/RAM.
This article proves that my core system components are the major bottleneck holding back my SSD and my somewhat outdated but still respectable GTX 650 Ti. The SATA 2 ports limit my SSD, and the C2D (even OC'ed to 4 GHz) is holding back my graphics potential.
 
[citation][nom]ceh4702[/nom]What about a 2500k or an E7200?[/citation]
A 2500K would be very good, approximately the same as the 3570K.
An E7200 would be terrible, as it only has 3 MB of L2 cache.
 


775 4 Life! Sorry, just had to throw that in there. My Q6600 G0 @ 3.2 is still folding in another rig to this day, coupled with an 8800 GTS, and I bet it can still play all DX10 games on medium settings @ 720/900p. I should thank myself for not getting the E6850, as this wonder-CPU has lasted forever, or should I say it's still hauling proteins. Even my hand-me-down QX9770 is not that much better than the G0 Q6600.

No, the Q6600 was actually really cool, because clocking it at 333x9 puts it at 3 GHz on a 1333 MHz FSB, which basically made it a QX6850 Extreme. Paired with a 560 Ti, my girl still games on it @ 1080p with mostly high settings in pretty much every game to this very day.
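For anyone wondering where those numbers come from, here's the back-of-the-envelope math (nothing exotic, just core clock = bus clock x multiplier, with the bus quad-pumped; the 333.33 MHz figure is the nominal bus clock behind a "1333 MHz" FSB):

    # Core 2 arithmetic: core clock = FSB base clock x CPU multiplier,
    # and the front-side bus moves 4 transfers per clock (quad-pumped).
    fsb_base_mhz = 333.33   # nominal bus clock behind a "1333 MHz" FSB
    multiplier = 9          # the Q6600's stock multiplier
    core_ghz = fsb_base_mhz * multiplier / 1000
    fsb_effective = fsb_base_mhz * 4
    print(f"core: {core_ghz:.2f} GHz, FSB: {fsb_effective:.0f} MT/s")  # core: 3.00 GHz, FSB: 1333 MT/s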
 
 
I have a G50VT laptop with a 3.3 GHz X9100 (Penryn) and an SSD, fast enough for me but definitely not too fast. I'm trying to wait until Skymont at 11/10 nm. Lots of heat comes out of this beast too; it helps me stay warm in winter with the CPU and GPU crunching SETI@home all the time.
 

I wouldn't say the E8400 was "rare"; it was a mainstream part late in the Core 2 Duo product cycle. Relative to modern chips, it would be in about the same spot as the i5-3470.

It was not the most popular choice for budget buyers due to its relatively high price, and not a popular choice for enthusiasts either, being a dual-core two bins below the top dual. The in-between models are for people who want better-than-low-end performance without paying the full high-end cost, so they tend to get relatively little press with enthusiasts unless they have an unexpected quirk like exceptional overclocking. IIRC, the E8400 was actually quite popular for that.
 


Except the Android OS makes everything, even web browsing, so slow, even with a 1 GHz processor and 512 MB of memory. With a slower-clocked x86 processor (such as an 800 MHz Pentium III Coppermine) and 512 MB of memory, you could browse the web just fine on Windows XP.
 
[citation][nom]logainofhades[/nom]IIRC, Core 2 didn't see much of a performance benefit from faster DDR3 RAM. Intel in general doesn't seem to benefit quite as much from faster RAM or better latency. AMD, however, has had a history of being affected by RAM speed and timings. AM2 made this painfully obvious back in the day; you had to go with fast DDR2-800 to match the performance of an S939 chip of the same 3xxx+ rating with fast DDR-400.[/citation]

I get your point, but not all tasks are parallelizable, and there are theoretical limits on maximum throughput; please see Amdahl's law and Gustafson's trend here: http://software.intel.com/en-us/articles/amdahls-law-gustafsons-trend-and-the-performance-limits-of-parallel-applications
This is a theoretical limit, and parallel applications are quite difficult to write and debug.
Yes, there are tasks with 0% serial code (HPC??), but that is not the case for gaming.

PS: the maximum number of cores for consumer CPUs has risen from 4 to 6 in 5+ years (please, do not count HT or mutilated AMD modules); heat is still a problem, I think.

I hope I'm wrong 🙂
 
[citation][nom]InvalidError[/nom]If having more threads and cores could benefit a broader selection of mainstream applications, the obvious way forward would be to do exactly that: add cores and threads. The biggest challenges to higher performance in mainstream applications are that, on one hand, software that depends heavily on user input tends to just sit there waiting for input most of the time regardless of how well or poorly it may be threaded, and on the other hand, lots of code depending on user input is usually intrinsically sequential, which makes it difficult if not impossible to thread in a meaningful and efficient manner. In other words: mainstream hardware is waiting for mainstream software to catch up.[/citation]

I get your point, but not all tasks are parallelizable, and there are theoretical limits on maximum throughput; please see Amdahl's law and Gustafson's trend here: http://software.intel.com/en-us/articles/amdahls-law-gustafsons-trend-and-the-performance-limits-of-parallel-applications
This is a theoretical limit, and parallel applications are quite difficult to write and debug.
Yes, there are tasks with 0% serial code (HPC??), but that is not the case for gaming.

The maximum number of cores for consumer CPUs has risen from 4 to 6 in 5+ years (please, do not count HT or mutilated AMD modules); heat is still a problem, I think.

I hope I'm wrong 🙂

P.S.: I'm sorry for the double post; I quoted the wrong message.
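Anyway, to put a rough number on that theoretical limit, here's a tiny sketch of Amdahl's law (the 60% parallel fraction is purely an illustrative guess for a game workload, not a measured figure from the linked article):

    # Amdahl's law: speedup on N cores = 1 / (serial_fraction + parallel_fraction / N)
    def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
        serial_fraction = 1.0 - parallel_fraction
        return 1.0 / (serial_fraction + parallel_fraction / cores)

    # Illustrative only: pretend 60% of a game's frame time parallelizes perfectly.
    for cores in (2, 4, 6, 16):
        print(f"{cores:>2} cores -> {amdahl_speedup(0.60, cores):.2f}x")
    # Prints 1.43x, 1.82x, 2.00x, 2.29x; even infinite cores cap out at 1/0.40 = 2.5x.

Which is roughly why going from 4 to 6 cores buys so little for that kind of workload.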
 
After helping numerous people on the forums with "budget" gaming rigs, for the life of me I cannot understand why they do not save their pennies a little longer and buy the 3570 instead of the myriad of cheaper choices. You can pick it up on sale for $180, and the savings from shifting down are insignificant compared to the performance jump you get for the extra $50.

This comparison demonstrated the huge difference between the 3570 and cheaper CPU choices; it really doesn't make any sense to consider the cheaper CPUs if you are going to game. Add to this that if you overclock the 3570, it keeps up with the 3770, which is a rocket-ship-fast CPU.

Yeah, I know these budget gamers are mowing lawns for their computer, but geez, wait a little longer, mow a couple of extra lawns, and get a real CPU that really smokes at gaming and anything else for that matter. This comparison just confirms it.
 
Nice article :0 but it would have been cool if you had also included the old LGA 1156 Clarkdale-based Pentium G6950. I have one clocked at 4.2 GHz, and it owns the IB Celerons/Pentiums.
 
Because most people don't understand what a balanced gaming rig means. What most people want is cheap, even if that means sacrificing performance and risking a faster gradual performance decline. They set a budget and get fixated on parts as long as they cram under the limit. Most people don't think long term; what they see is the preset budget limit and that alone.
 


Yes, I agree with you, and you make my point. "They don't think long term" is the key. Spending an extra $200 on a $500 build buys you three times the computer, is by far the smartest choice, and actually saves you money in the long run. I just wish the budget builders could see this.

I am not criticizing budget builders; I understand that finances can be short. However, the cost difference to upgrade is so small that I just don't agree with the budget mentality.
 


I'm sure you guys playing Crysis 3 would be happy with your 3570K overclocked to 4 GHz and a 7870 XT for a graphics card, but I built my miserly dual-core 7770 rig with 2 GB (originally 4 GB) of memory, cheap as it might be, with the intention of playing a little Team Fortress 2 and running some office applications. It does what I want it to do, and I'm not particularly interested in playing anything else, so this will last me 2 years at the least. Sure, sure, operating systems and applications will get more demanding. But I bet I could still do some web browsing and maybe even some office apps (I have an old copy of Microsoft Word 2000) on a 1.8 GHz Pentium 4 or a 1 GHz Pentium III. However, after my current build I will be getting a 3570K-class Haswell/Broadwell processor, and I will be overclocking.
 


I probably didn't communicate very well. I have a 2600K with a single ATI 6950, and I am not a big gamer. I don't own the 3570K, and I am not putting anyone down who has bought a budget build.

I get that many builders want to start off with a cheap build and work up, and again, I am not putting that down. I am saying that for $40-$75 more, the 3570K blows away the lower-cost CPUs, and $40-$75 is not a lot of money when you are building a computer. The 3570K will stay fast enough for much longer than the cheaper CPUs and thus will save the builder money.

For example, I paid extra for the 2600K when I built my computer, and today I feel no need to upgrade to Ivy Bridge and probably not even Haswell. The 2600K is so fast that it is only slightly slower than monsters like the i7 3930 and 3960. So I am saving money now because I spent the extra $80 a year and a half ago. Plus, I don't have to deal with the hassle and expense of a new motherboard and a new build.

So having a long-term view can save the budget builder money, a lot of money, and isn't that what the budget builder needs most? I am saying this with kindness.
 
Yes, I see. However, the 3350P combined with a cheaper non-overclocking chipset can of course provide similar performance if the buyer is short on money. I reckon it's possible to squeeze a Z68 and a 2500K into a build under $600 these days...
 