Self-confident bastard needs rig advice

Jay_83

Hi,
My friend, who has been out of the PC game for some time, is building a new rig. He asked me for advice, and the following is what I just typed up and mailed him. I'm posting it here a) for fun and b) for comments, so that I am aware of any SIGNIFICANT changes I should pass on to him. I am especially shaky on the whole Quick Sync subject.
Have fun and please let me know what you think :)

VERY IMPORTANT DISCLAIMER:

1) I COMPLETELY realize that there are many factual errors in the text below.
2) I COMPLETELY realize that there are many terrible simplifications in the text below.
3) I am VERY biased.
4) I am an nVidia fanboy.
5) I am NOT an Intel fanboy.
6) I hate Apple more than vegetables.
7) I swear too much and enjoy it.
8) Remember: the GLARING OVERSIMPLIFICATIONS AND OUTRIGHT LIES are there to paint a clear, unambiguous picture for someone who has not followed recent tech, but wants to build a decent rig on a decent, but not unlimited, budget. I lie about some things to spare him the cost of slightly better but significantly costlier parts that will not benefit the rig's speed in any meaningful way.
9) I completely understand people with money to burn who get the best of the best, even if it's not cost-effective. I respect everyone's right to spend the money they earn themselves on whatever they want.
10) I aim to offend no one. If I offended you, try to forgive and remember this mail was not intended for you personally.
11) Be constructive, please. I want my friend to have the best gear AND the best bang for his buck.
12) Try to enjoy yourself.

Here we go.

=========================================

OK dude, here it is.

First off, a bit of description, just so you know what the *** is going on
with the tech nowadays. Then on to specific rig builds.

CPU
===
Lemme start by saying we're not considering any AMD processors here. This
is because a) they suck and b) when they don't suck, they blow.
Performance-wise, AMD was on par with Intel about TWO CPU GENERATIONS ago.
High-end hex-core AMD desktop chips are now getting smacked around in some
tasks by Intel's current dual-core parts. Laughable, really. Die-hard AMD
fans keep pointing out (quite correctly) that AMD's chips are priced VERY
competitively, but then I always ask myself: how much would I be willing to
pay for a steaming pile of *** anyway?

In Intel's tick-tock cycle, the tick improves and shrinks existing
architecture, while the tock introduces something new. Current timeline:
tock: Nehalem architecture, 45 nm
tick: Nehalem shrinks into Westmere, 32 nm
tock: Sandy Bridge, 32 nm <-- from Jan 2011, current architecture
tick: Sandy Bridge shrinks into Ivy Bridge, 22 nm <-- expected Q4 2011 - Q1
2012

Before Nehalem, Intel ran with the Core architecture (what you're running
atm), and that's also when AMD introduced the architecture it rolls with to
this day. All they've been doing is increasing clock rates as their fab
process matures, essentially evolving slow piles of *** into slightly less
slow piles of ***. They've introduced their Fusion APUs this year (Brazos, and now Llano), basically low-power all-in-one CPU+GPU chips for netbooks and budget machines. 'Course the problem is, nobody gives a flying *** about netbooks. Their desktop
Bulldozer architecture is slated to arrive this year. Bulldozers are
expected to be decent people, possibly on par with Sandy Bridge, but no
definitive dates have been set and I have a feeling they'll arrive later
rather than sooner. Pity really, since Intel could use some competition in
the desktop space and the incentive to keep prices in check. As it is, you
just gotta pay up, since their CPUs are
shag-your-dog-and-piss-in-your-cereal-before-you-notice fast.

Anyway, here is the breakdown of CURRENT Intel CPUs (they're still using the Core brand name, as you can see):
Core i3 - entry level crap
Core i5 - mainstream
Core i7 - enthusiast

All processors in these three groups drop into socket 1155 boards, and the upcoming Ivy Bridge tick will stay compatible with them. Intel will also add a socket 2011 platform (Sandy Bridge-E) for the Extreme Edition parts, sporting 6 and maybe even 8 cores. Current CPUs rock 2 or 4 cores, some with, some without
HyperThreading. There's really only one processor worth considering among
them: the Core i7 2600K. The difference between non-K and K models is that the Ks have unlocked multipliers - fuckers can rev up to 6+ GHz (given enough liquid nitrogen). The 2600K is currently the most powerful Sandy
Bridge CPU. That's 4 physical cores with HT for a total of 8 logical cores
@ 3.4 GHz stock. I was considering the i5 2500K for a moment - it's basically the same processor with a smaller cache and HT disabled. That's 4 physical cores for 4 logical ones. The problem with that is that
more and more mainstream software makes use of 4+ threads. So while it's
still an awesome chip, it's just not as future-proof as the 2600K. Makes me
laugh when I see some leet n00bs strut around forums with the 2500Ks daddy
bought them. Dipshits.
If you're thinking about waiting for the Ivy Bridge parts, it's not worth it. I got my Extreme Edition CPU purely out of fancy. While it's still Intel's flagship model, even kicking the 2600K's ass in well-threaded applications, it's less efficient clock-for-clock: running at the same clock speed, with 4 or fewer cores utilized, the 2600K is faster - though the difference won't be noticeable outside of benchmarks. Likewise, the new Extreme Edition parts will not be faster than the 2600K in everyday tasks but WILL cost significantly more.
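
Back to the 2500K-vs-2600K point: if you want to see why the extra threads matter as software goes wider, here's a back-of-the-envelope Python sketch using Amdahl's law. The numbers are my own rough assumptions (HT on 4 cores is worth maybe one extra "core", not a doubling), not benchmarks:

  # Amdahl's law: how much faster a workload gets when its parallel
  # part can be spread across more hardware threads.
  def amdahl_speedup(parallel_fraction, effective_cores):
      serial = 1.0 - parallel_fraction
      return 1.0 / (serial + parallel_fraction / effective_cores)

  # Assumed effective core counts - HT is NOT a doubling:
  chips = {"i5 2500K (4C/4T)": 4.0, "i7 2600K (4C/8T)": 5.0}

  for par in (0.50, 0.80, 0.95):  # how well-threaded the software is
      results = ", ".join(f"{name}: {amdahl_speedup(par, n):.2f}x"
                          for name, n in chips.items())
      print(f"parallel fraction {par:.0%} -> {results}")

The point: the better-threaded the software, the more the 2600K pulls ahead.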

CHIPSET
=======
Now, obviously the chipset will affect the mobo choice. Currently, three
chipsets are available - the H67, P67 and Z68. They are all designed around
socket 1155, so you can drop ANY current Intel CPU into a mobo equipped
with ANY of these chipsets. The differentiating factors revolve around
enabling different features, with the H67 catering to office space, P67 to
mainstream use and Z68 being most suitable for you. The H67 can access the
graphics engine built into every SB CPU. P67 cannot, relying instead on the
immensely more powerful discrete GPUs. Z68 can use BOTH. "Why the *** would I need Z68 if I'm buying a kick-ass GPU anyway?", I hear you ask. Well, shut the *** up and let me explain. You see, Intel's HD Graphics 3000, the GPU integrated into the 2600K, is what contains Quick Sync. And Quick Sync is the fixed-function video encoder/decoder I told you about, the module that simply shreds video tasks without touching the CPU proper. So, on
Z68 you can use your discrete GPU to render 3D game graphics, and then
employ CPU-integrated Quick Sync to transcode video. Very roughly speaking,
Quick Sync will reduce transcoding time by 50% compared to a CUDA GPU, and
by 80%-90% compared to pure software mode that actually uses the CPU.

Still, there ARE some caveats here:
1) You gotta use a piece of software called Virtu. What it does is let you hook your monitor up to the discrete GPU while it virtualizes the on-CPU graphics. Then you just use the Virtu control panel to switch rendering engines - either let the GPU do its job as usual, or use Quick Sync. Without Virtu, you would have to open your case and actually remove the ******* GPU to use Quick Sync for video encoding.
2) Not all content-creation software uses Quick Sync yet. And some of the programs that do have a problem with Virtu, so you may end up removing that ******* GPU ANYWAY.
On a brighter note: Quick Sync kicks ass, pure and simple. Any serious video-editing software maker will have to start supporting it sooner or later. Same goes for compatibility with Virtu, since the whole thing is too bothersome without it. Also, virtualizing your CPU-integrated graphics while running natively on the GPU has ZERO impact on performance, so that's good news.

So, to sum up chipsets: H67 ain't an option (doesn't allow overclocking, even on K-series CPUs. Did I mention that? No? *** man, IT DOESN'T ALLOW OVERCLOCKING!). P67 will actually give you the same performance as Z68 - except for one very specific case, which is video encoding/decoding. In all other respects, there is absolutely no performance to be gained by taking Z68 over P67. However, imagine yourself encoding a long video on P67. Takes, say, an hour and a half to complete. Would you honestly not call yourself a dipshit when you realize it would only take 10 minutes on Z68?
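
Just to show those numbers hang together, here's a tiny Python sanity check on the transcode-time claims above (the 90-minute software encode is a made-up example, like I said):

  # Rough figures from this post: Quick Sync cuts transcode time by
  # ~50% vs a CUDA GPU and by ~80-90% vs pure software (CPU) mode.
  cpu_only_min = 90.0                         # software encode on P67
  quick_sync_min = cpu_only_min * (1 - 0.89)  # ~89% reduction -> ~10 min
  cuda_min = quick_sync_min / (1 - 0.50)      # QS halves the CUDA time

  print(f"CPU only:   {cpu_only_min:.0f} min")
  print(f"CUDA GPU:   {cuda_min:.0f} min (assumed)")
  print(f"Quick Sync: {quick_sync_min:.0f} min")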

RAM
===
RAM is easy. All three chipsets use dual-channel memory, so you've got two
or four slots to populate. Options: 2x4GB or 4x4GB. I consider considering
less than 8 GB futile, all things considered :) You should have no problems
WHATSOEVER with 8 GB. However, RAM is an extremely ******* sensitive
animal. If you populate 2 slots now, and later decide to get some memory
for the other two, chances are you're gonna end up buying four brand new
sticks, instead of just two. This may happen even if you buy the same
brand, same model with the same timings. All just because the sticks came
from different batches and SOMETHING, *** knows what, changed in the meantime. And now the ******* sticks hate each other as bitches hate reason. So
if you want to get 16 GB in the future, get it NOW.
Also, do not even consider those sticks rated for 2000+ MHz. They're a scam - you pay an exorbitant amount of money for "gaming grade" RAM that gives you literally *** ALL improvement over decent 1600/1866 MHz modules. Seriously, the platform already has more memory bandwidth than everyday software can come close to using. In everyday use, you couldn't tell the difference between a rig running 800 MHz and 2000 MHz RAM, all else being equal.

Just for clarity - the upcoming socket 2011 platform (Sandy Bridge-E CPUs on the X79 chipset) will run quad-channel RAM. Not worth waiting for, for the same reasons super-fast RAM isn't worth the money.
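
For the curious, here's the theoretical-peak arithmetic in Python. These are raw bus numbers (DDR3 moves 8 bytes per channel per transfer, and "1600 MHz" sticks are really 1600 MT/s); everyday software doesn't come anywhere near them, which is exactly the point:

  # Theoretical peak bandwidth = transfers/s * 8 bytes * channels.
  def peak_gb_per_s(mt_per_s, channels):
      return mt_per_s * 1e6 * 8 * channels / 1e9

  for label, mts, ch in [("DDR3-1600, dual channel", 1600, 2),
                         ("DDR3-2000, dual channel", 2000, 2),
                         ("DDR3-1600, quad channel", 1600, 4)]:
      print(f"{label}: ~{peak_gb_per_s(mts, ch):.1f} GB/s peak")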

SSD
===
Two generations worth looking at out there atm: those rocking first-gen SandForce controllers, and those with SandForce 2. Both kinds have native support for TRIM, which ensures performance doesn't drop over time (provided you run Win 7 / a recent Linux distro. OS X will implement it "soon". Apple dipshits.). I remember your attitude towards SSDs, dude. Wake the *** up and believe me when I tell you it's a ******* ESSENTIAL component. Oh, and the lifetime we're talking about here is roughly 10 years with the SSD in use 24/7, so no biggie. In fact, show me an HDD that will give you that.
SandForce gen 1 is simply slower than SandForce 2. 1st gen will give you sustained transfer speeds at least twice as fast as those of the fastest HDDs. More importantly, random reads are several times faster, so Windows can quickly pull all those few-kilobyte drivers and libraries when it boots/runs. SandForce 2 is roughly twice as fast again, although that's impossible to observe when just loading/using the system. Loading LARGE files into memory WILL speed up perceptibly, though. Here, the fastest (and I MEAN the FASTEST) HDDs will stream ~150 MB/s. SandForce 1 will do ~260 MB/s, SandForce 2 upwards of 400 MB/s. SF2 thus saturates SATA II bandwidth and needs to be hooked up to SATA III, which comes with basically any P67/Z68 mobo.
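
Quick Python arithmetic on those sequential rates, assuming a 4 GB file (a made-up size) and the ~300 MB/s SATA II ceiling; real-world overhead makes things a bit worse:

  # How long streaming one big file takes at each drive's rough
  # sustained rate, with and without the SATA II cap.
  SATA2_CAP_MB_S = 300.0
  FILE_MB = 4 * 1024

  rates = {"fastest HDD": 150.0, "SandForce 1": 260.0, "SandForce 2": 400.0}
  for name, mb_s in rates.items():
      capped = min(mb_s, SATA2_CAP_MB_S)  # what a SATA II port allows
      print(f"{name}: {FILE_MB / mb_s:5.1f} s on SATA III, "
            f"{FILE_MB / capped:5.1f} s on SATA II")

Notice only the SandForce 2 drive actually loses anything on SATA II.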

HDD
===
With the OS on the SSD, what I look for in an HDD is reliability, first and foremost - it's a ******* mechanical thing, spinning platters and heads seeking back and forth hundreds of times a second, right? Pick what you like. Myself, I've always had an affinity for Hitachi's Deskstar line, so that's what I'll drop in the rigs below. Same drive I'm rocking now - 2 TB, decent speeds (nothing impressive at all, though), rock solid.

PSU
===
Look for low electrical noise/ripple and high efficiency. Maybe even more importantly, proper protection circuitry. Well duh, you know the drill. I'll link the PSU I told you about in the setups below, as well as the unit I'm using now, if you really want more power. Nevertheless, 850 W should do you fine even with dual GPUs, especially since Sandy Bridge draws significantly less power than Nehalem/Westmere.
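
Here's a rough power-budget sketch in Python. The per-component draws are my ballpark assumptions from period reviews, not measurements, so don't quote me:

  # Worst-case load estimate for the dual-GPU build on an 850 W unit.
  draws_w = {
      "i7 2600K, overclocked": 150,
      "2x GTX 570":            2 * 220,
      "mobo + RAM + drives":    75,
      "fans and misc":          50,
  }
  total_w = sum(draws_w.values())
  print(f"estimated load: {total_w} W")
  print(f"headroom on an 850 W PSU: {850 - total_w} W")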

GPU
===
Like I admitted, I went with nVidia purely out of love. That is to say, I
would have chosen them even if I had decided to build a single-GPU rig.
That's not the whole story, though:
You see, nVidia's multi-GPU implementation (SLI) is much more mature than
ATI's (CrossFire). The latest generation of the ATI Radeon family is the
6000-series, which is said to have VASTLY upgraded multi-GPU scaling, on
par with nVidia's cards. All the benchmarks seem to point there. I have my
reservations, though: benchmarks are done on a very limited number of games
most popular at the time of a GPU's launch. But there are thousands of games out there, and nVidia's multi-GPU setups scale extremely well in a great majority of those titles. How compatible will the Radeons really be? *** knows. Maybe two Caymans (the current-gen GPU codename for the Radeons) DO play just as nicely with each other as two Fermis (nVidia's chip). It's just that I don't give a *** about maybes. If I were to get two Radeons, I'd wait at least until their next-gen offerings. For a single-GPU rig though, take your pick.
From nVidia, I recommend my GPU - the GTX 570, the second-best card in nVidia's current line-up after the GTX 580. Do not go with the 580 - the price increase is completely disproportionate to the moderate (up to 15%) performance gain. There is also the GTX 590 - technically the fastest, but practically ***. That's a dual-GPU card, sporting two processors ripped straight from GTX 580s. However, to keep heat and noise in check, it had to be severely downclocked. As a result, it performs slightly worse than my two GTX 570s, but costs about the same.
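
To put rough numbers on that 590-vs-two-570s comparison, a little Python sketch. The scaling efficiency and downclock factor here are pure assumptions for illustration, not measured values:

  # One GTX 570 = 1.0; the GTX 580 is ~15% faster per the text.
  gtx570, gtx580 = 1.00, 1.15
  sli_scaling = 0.85        # assumed average dual-GPU efficiency
  gtx590_downclock = 0.85   # assumed clock cut vs a stock GTX 580

  two_570s = 2 * gtx570 * sli_scaling
  gtx590 = 2 * gtx580 * gtx590_downclock * sli_scaling

  print(f"two GTX 570s: ~{two_570s:.2f}x a single 570")
  print(f"GTX 590:      ~{gtx590:.2f}x a single 570")
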
If you don't wanna run dual GPUs, you could go for the Radeon HD 6970. It
is ever so slightly faster than the GTX 570 in most games, and ever so
slightly slower in some. Price-wise, they are now in exactly the same range
on scan.co.uk.
If you ever want to try stereoscopic gaming, 3D Vision is nVidia's tech, so go with a GTX. The Radeons have nothing that comes close atm.

Note about cooling: I know you're hot for H2O, but it is not necessary even
for a dual setup. My cards do get hot, like I said, but never even approach
the danger zone. And a single card would of course have an easier time of
it. I know you'd like to do it just for the sake of it and the mod value, but I'm simply mentioning this for completeness - it's not necessary.

BUILDS
======

Erm... actually, I'm pretty tired. Tomorrow, k? Awesome.

J.
 
Seems rather too strongly biased towards the 2600K over the 2500K, IMO. Choosing between the two should only lean towards the 2600K if HT is actually needed/used.

Hyperthreading is just short of, well... useless in gaming systems.

But if someone thinks paying almost 50% more in CPU cost for 100 MHz more makes sense, more power to them.

(I frankly don't know how anyone who *ever* paid the exorbitant amounts for an Intel Extreme Edition CPU could give advice with a straight face on which CPU is the best recommended value.)
 

Jay_83



He won't be gaming a whole lot, but he wants to have it smooth and looking good when he does. Dabbles in video quite often.

(Frankly, you make a good point. But I did it anyway.)