System Builder Marathon, March 2012: $1250 Enthusiast PC


Darkerson

Distinguished
[citation][nom]Crashman[/nom]Perhaps Tom's Hardware staffers are biased in favor of AMD?[/citation]

You realize people will probably take you seriously and scream foul now, right? I can hear the collective gnashing of teeth already!

XD
 

Darkerson

Distinguished


Why bother reading the articles when you can go straight to the comment sections and start posting!

...oh, wait!
 

Crashman

Polypheme
Former Staff
[citation][nom]Darkerson[/nom]You realize people will probably take you seriously and scream foul now, right? I can hear the collective gnashing of teeth already!XD[/citation]Yeah, it's called misdirection. I think the people who gave me the thumbs-ups recognized the humor behind it :)
 

Marcus52

Distinguished
Frankly, I’d be happy with 4 GB of lower-latency memory. But your feedback tells us that you want to see 8 GB in these builds. At $60, this Mushkin Enhanced dual-channel kit boasts 7-9-8-24 timings at a 1600 MT/s data rate.

Can you run benchmarks of these games using both configurations: what you'd prefer and what most of us think will give better gaming performance? Please? I mean, as far as I know (and it's certainly true for me), the reason we want 8 GB is that sites like Tom's Hardware are telling us it's better, not because we have a thing about having 8 GB of RAM in a gaming machine.

;)
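
For anyone curious, here's the napkin math on why lower-latency memory is appealing. The kit's 7-9-8-24 timings and 1600 MT/s rate come from the quote above; everything else is standard DDR3 arithmetic, so treat this as a rough sketch, not a measurement:

[code]
# Back-of-the-envelope numbers for a DDR3-1600 CL7 kit like the quoted Mushkin.
# DDR transfers twice per clock, so the I/O clock is half the 1600 MT/s rate.

data_rate_mts = 1600                    # mega-transfers per second
io_clock_mhz = data_rate_mts / 2        # 800 MHz
cas_cycles = 7                          # the first number in 7-9-8-24

# Absolute CAS latency: cycles divided by clock frequency
cas_ns = cas_cycles / (io_clock_mhz / 1000.0)   # 7 / 0.8 = 8.75 ns
print("CAS latency: %.2f ns" % cas_ns)

# Theoretical peak bandwidth: 2 channels x 8 bytes per transfer x data rate
bandwidth_gbs = 2 * 8 * data_rate_mts / 1000.0  # 25.6 GB/s
print("Peak dual-channel bandwidth: %.1f GB/s" % bandwidth_gbs)
[/code]

By the same math, a bog-standard CL9 DDR3-1600 kit works out to 11.25 ns, which is the whole argument for paying a little extra for tighter timings.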
 

Marcus52

Distinguished
But because many vendors are now encouraging overclocking through Turbo Boost offsets, the technology remains on all of the time without an option to disable it.

What are they thinking? Limiting what an enthusiast can do when it comes to overclocking is just ridiculous. Bad job, ASRock; I'm very disappointed.
 
An intriguing idea, but I could see scope creep in this too. I usually plan builds this way as well (set a rough budget for a specific goal, but stay willing to go over it just a little if that snags a strictly better component). You'd have to set some pretty strict rules for this; otherwise, comparisons between builds would be almost useless.
 

Crashman

Polypheme
Former Staff
[citation][nom]Cleeve[/nom]It is 6 GB/s. the typo is fixed.[/citation]Actually, it's SATA 6Gb/s. That's a registered name, without the extra space :) Of course the interface SUPPORTS up to 6 Gb/s, with the extra space :p
 
Who in he** would actually build this? If "we also chose the cheapest optical drive we could find" is your motto, then you have no pride invested in the build at all. Why not save yourself the headaches and buy a Walmart "special"? The i5-2400 CPU is a perfect example of (Intel) stupidity. For an extra $35, you could have the best-value, high-performance CPU (until Ivy Bridge arrives). What exactly will you have to do without for the next 3-5 years because you couldn't spend the extra $35-75 on more desirable components? And you'd still get the bragging rights of a 7970 GPU. I certainly would not build this.
 
I agree that most builders are willing to go marginally over their budget to get that extra oomph of performance / longevity. However, the SBM has some fairly hard and fast rules about going even slightly over budget.
 
[citation][nom]noob2222[/nom]looks like updating the bios for the 6100 allowed it to run in dual-channel mode instead of single channel. Too bad everyone is already convinced the entire problem with last quarters build was just the cpu.[/citation]

Even the FX-8150 bottlenecks two Radeon 6950s, so the fact that the 6100 is now a smaller bottleneck doesn't matter; it would still be a bottleneck.

Also, for everyone complaining about the use of the i5-2400: it can be overclocked significantly, just not as far as the 2500K. Through manipulation of the Turbo multipliers and the BCLK, it can easily be overclocked to float between 3.78 GHz (minimum) and 3.99 GHz (maximum) depending on how well-threaded the workload is, usually staying closer to 3.9 GHz. I don't know why Tom's hasn't covered this, nor why they don't seem to have tried it. This trick doesn't work on the i3s and lower, but all i5s and i7s can do it, and depending on their stock clocks, some of them can go over 4 GHz this way.
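
To put numbers on that: the stock turbo bins below are my assumption for the i5-2400, and the +4 bin headroom plus a ~105 MHz BCLK are the usual ingredients of this trick, so this is a sketch of the arithmetic rather than a guide:

[code]
# Sketch of the limited non-K Sandy Bridge overclock described above.
# Assumed i5-2400 stock turbo multipliers per active-core count: 34/33/32/32.

bclk_mhz = 105.0   # BCLK nudged up from the stock 100 MHz
stock_turbo = {1: 34, 2: 33, 3: 32, 4: 32}

# Non-K Sandy Bridge chips allow raising each turbo bin by up to +4
raised_turbo = dict((cores, mult + 4) for cores, mult in stock_turbo.items())

for cores in sorted(raised_turbo):
    mult = raised_turbo[cores]
    print("%d core(s) active: %dx * %.0f MHz = %.2f GHz"
          % (cores, mult, bclk_mhz, mult * bclk_mhz / 1000.0))

# 1 core:  38 * 105 MHz = 3.99 GHz  (the quoted maximum)
# 4 cores: 36 * 105 MHz = 3.78 GHz  (the quoted minimum)
[/code]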

[citation][nom]The_OGS[/nom]Another build from the boys at Tom's Radeons...[/citation]

There were no Nvidia cards even worth considering back when these PCs were built. The GTX 680 wasn't out (and with its near-zero availability, it hardly matters that it's out now, because we can't actually buy one), and the GTX 500 cards hadn't had their price reductions yet, so the Radeons were really the only way to get the most for the money. Of course, the 7970 has poor performance for the money, and I think a dual 6950 2GB setup would have been a better idea if we didn't already have a dual-6950 PC in the last SBM.

Had the recently released drivers that finally fully support the GCN Radeons been out in January, when they should have been, I would have suggested dual 7850s instead of the 7970. If the GTX 680 had been out, I would have suggested it, and I'm sure the Tom's guys will use the GTX 680 in the next SBM once it's more available. It's not Tom's fault that Nvidia has had notoriously poor performance for the money on its cards. Honestly, the GTX 680 is probably the best performance for the price that Nvidia has had in a while. The price cuts on the Fermi cards also really helped Nvidia's performance for the price.

However, those price cuts happened after these parts were bought, so it should come as no surprise that Tom's went with the brand that offered the most performance for the money. Besides, it's not like previous builds haven't had Nvidia cards. Didn't the most expensive PC in the last SBM have dual GTX 580s?
 
Guest
I just spent some time this morning explaining why I choose AMD over Intel for “best bang for the buck,” and the bottom line is that while AMD doesn’t offer a CPU that matches higher-end Intel offerings such as the Core i7 series, if you start taking into account the amount of performance you get *for the price of the chip* then you find that AMD consistently beats Intel. I found your post while searching for recent “AMD vs. Intel” articles, and you might be interested in what I’ve written. It’s posted at http://nctritech.wordpress.com/2012/04/01/amd-beats-intel-on-price-versus-performance-every-single-time/
 
[citation][nom]Guest[/nom]I just spent some time this morning explaining why I choose AMD over Intel for “best bang for the buck,” and the bottom line is that while AMD doesn’t offer a CPU that matches higher-end Intel offerings such as the Core i7 series, if you start taking into account the amount of performance you get *for the price of the chip* then you find that AMD consistently beats Intel. I found your post while searching for recent “AMD vs. Intel” articles, and you might be interested in what I’ve written. It’s posted at http://nctritech.wordpress.com/2012/04/01/amd-beats-intel-on-price-versus-performance-every-single-time/[/citation]

I read your link, and your methodology is wrong. CPUbench loads all cores on a system, even though games won't use them all. Looking at the Phenom II X6 1045T versus the i3-2130 (which nobody cares about, by the way; we only care about the i3-2100 and i3-2120, because although they're almost identical to the 2130 in performance, they're $20-$30 cheaper), you find that the i3 is almost twice as fast for single- and dual-threaded work. For quad-threaded work, it also wins significantly. Considering that the games that actually use more than one or two threads use only four, the Phenom II X6s all lose to the i3s by large margins in gaming performance.

This is also why the FX-4100 and FX-4170 are currently the best FX gaming CPUs despite being the lowest-end Bulldozer FX chips, and why the quad-core Phenom IIs beat the six-core Phenom IIs in gaming. Four faster cores pretty much always beat six slower cores in gaming. For single- and dual-threaded games, the i3s are beaten only by the i5s and i7s (every time I mention i3, i5, or i7 in this post, I mean the Sandy Bridge versions, not Nehalem).

The only way the AMD CPUs even come close is with large overclocks. For example, the FX-4100 needs an overclock to about 4.5 or 4.6 GHz to match the i3-2120 in games that don't make excellent use of four threads. This is the problem with games that use more than one or two threads: they often don't make good use of the extra ones. For example, on a quad-threaded CPU, WoW heavily loads one or two threads and lightly loads the others. Sure, it helps to have more than two threads available for WoW, but nowhere near as much as the first or second. This is a recurring problem in many games (especially Blizzard games; StarCraft II has the same issue).
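
That "the extra threads help far less than the first two" point is basically Amdahl's law. Here's a toy sketch; the 70% parallel fraction is made up for illustration, not a WoW measurement:

[code]
# Amdahl's law: speedup is capped by the fraction of work that can't spread
# across cores. Illustrates why extra cores help less and less in games.

def speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

p = 0.70  # hypothetical: 70% of frame time can use extra threads
for cores in (1, 2, 4, 6):
    print("%d cores: %.2fx" % (cores, speedup(p, cores)))

# 1 -> 1.00x, 2 -> 1.54x, 4 -> 2.11x, 6 -> 2.40x
# The jump from 1 to 2 cores dwarfs the jump from 4 to 6, which is why four
# faster cores tend to beat six slower ones in games.
[/code]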

CPUbench doesn't show any of this. All it shows is the approximate performance of the CPU with all threads fully loaded, which only tells you that the AMD CPUs have more highly threaded performance for the money. That means little to the gaming community, because the AMD CPUs have less gaming performance, especially the lower-clocked, high-core-count models. It's the same reason many server CPUs are poor for gaming but are huge powerhouses for highly threaded work. If I bought a machine costing who knows how many thousands of dollars with four 2 GHz, ten-core Xeons, it would slaughter an i5-2500K overclocked to 5 GHz in highly threaded work. However, the i5-2500K would slaughter it in lightly threaded workloads. Same concept here.
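
The Xeon-versus-2500K point in numbers (everything here is hypothetical and deliberately ignores IPC differences; it's only meant to contrast aggregate throughput with per-thread speed):

[code]
# Aggregate "clock throughput" vs. per-thread speed, using the example above.
# GHz x cores is a crude stand-in for throughput; IPC is ignored on purpose.

xeon_cores, xeon_ghz = 4 * 10, 2.0   # four 10-core Xeons at 2 GHz
i5_cores, i5_ghz = 4, 5.0            # i5-2500K overclocked to 5 GHz

print("Fully threaded:   Xeons %.0f GHz vs. i5 %.0f GHz aggregate"
      % (xeon_cores * xeon_ghz, i5_cores * i5_ghz))   # 80 vs. 20
print("Lightly threaded: Xeons %.1f GHz vs. i5 %.1f GHz per thread"
      % (xeon_ghz, i5_ghz))                           # 2.0 vs. 5.0
[/code]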

Like Cleeve said in your comments section, your methodology is flawed, and PassMark is a very imperfect measure of performance too. When he said you were wrong, he should have explained why, but he was right. Synthetic benchmarks often don't reflect real-world performance. Real-world benchmarks are things like the tests built into games and into software such as archiving applications (WinRAR, 7-Zip, etc.).

Synthetic benchmarks tend to measure extremely specific things. Say a synthetic benchmark measures some operation "a", but "a" applies only to parts of different programs, and unevenly at that; suppose a certain program relies on "a" about 35% of the time. If one CPU runs "a" quickly but runs the rest of the program more slowly than a rival, the rival loses the benchmark yet wins on real-world performance with that software. This is a common situation, although the CPU that wins in the real world does tend to win the synthetics as well. There are very good reasons synthetics aren't treated as rock-solid measurements.
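
Here's that 35% example worked through; all the relative speed numbers are hypothetical, just to show how a synthetic can crown the wrong CPU:

[code]
# A synthetic that only measures operation "a" vs. a program that spends
# only 35% of its time in "a". All relative speeds are made up.

frac_a = 0.35                     # share of real runtime spent in "a"
cpu_x = {"a": 2.0, "rest": 1.0}   # wins the synthetic: 2x faster at "a"
cpu_y = {"a": 1.0, "rest": 1.8}   # slower at "a", faster at everything else

def program_time(cpu):
    # Total runtime, normalized so a speed of 1.0 means baseline pace
    return frac_a / cpu["a"] + (1.0 - frac_a) / cpu["rest"]

print("CPU X real-world time: %.3f" % program_time(cpu_x))  # 0.825
print("CPU Y real-world time: %.3f" % program_time(cpu_y))  # 0.711

# CPU X wins the synthetic benchmark, but CPU Y finishes the actual program
# sooner -- the benchmark measured the wrong 35% of the workload.
[/code]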

Hopefully this explains why you were wrong.

Also, consider that the AMD CPUs use a lot more power than the Intel CPUs. The FX-4100 at stock uses about 75% more power than the Sandy Bridge i3s, and Phenom II is also far more power-hungry than Sandy Bridge. The difference in power usage means the $110 FX-4100, which is already outperformed by the $130 Sandy Bridge i3s, can end up costing about the same over roughly three years. (I already had this argument with someone else; we both concluded that the FX-4100 isn't a bad value, but it's worse than the i3s for lightly threaded work and a little better for highly threaded work, and the differences are small enough overall that it comes down to personal preference.)
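
The three-year cost claim as napkin math; the wattages, daily hours, and electricity price are all assumptions of mine, not measurements:

[code]
# Rough total-cost-of-ownership math for the FX-4100 vs. a Sandy Bridge i3.

i3_load_w = 40.0                 # assumed average load draw for the i3
fx_load_w = i3_load_w * 1.75     # "about 75% more power" -> 70 W
extra_w = fx_load_w - i3_load_w  # 30 W difference under load

hours_per_day = 5.5              # assumed hours at load per day
years = 3
price_per_kwh = 0.11             # assumed USD per kWh

extra_kwh = extra_w * hours_per_day * 365 * years / 1000.0
print("Extra energy: %.0f kWh -> extra cost: $%.2f"
      % (extra_kwh, extra_kwh * price_per_kwh))   # ~181 kWh -> ~$19.87

# ~$20 of extra electricity closes most of the $110-vs-$130 price gap.
[/code]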
 
... unless we see more program/game developers making lighter, less bloated software and operating systems. At least M$'s Windows 8 seems to be taking a step in this direction, although Metro doesn't seem very good. Oh well, that's what a workaround is for!
Yes, Win 8 is more efficient with RAM, though I have to ask why you think Metro "doesn't seem very good." I've talked to quite a few fans of both Android and iOS, and most are quite impressed with the Metro interface. I've even heard from a few self-proclaimed Apple fanboys who think Metro is easier to learn than iOS.

As for bloated software, that's usually not an accurate term to apply to games. Since devs are usually pushing the boundaries of speed and performance, they can't afford "bloated" designs.
 
Guest
Most PCs I build that are performance-oriented get an aftermarket cooler. I have yet to use a factory cooler that would sufficiently cool a high-performance quad-core, and I've tried them all. The Corsair sealed water coolers seem to do the most for the money: under a hundred bucks and all your cooling needs are satisfied! So if you need a factory cooler, I can check my garbage or my junk-parts pile! As for the SSD, I think it's best for just running your OS and the software pertaining to it. That's it; most gamers get their benefit from the OS itself performing better on the SSD. I think that's why most SSDs are small in capacity: running all your software off an SSD would be very expensive compared to the amount of performance you gain. Whatever happened to using 10,000 RPM hard drives?
 


alpha was referring to Metro on Windows not being as good as it could be, not on the phones...

Also, games on PCs are usually very bloated and inefficient, especially if they are ports. Consoles are the devices with efficient and non-bloated games so long as the games were developed natively on the console and not through an emulator or a port.
 

OK, yes, I wasn't considering console ports, and those can be very... interesting.
 