dragonfly522 :
I presume that most people know the answer to this one, or else I probably would have found it when searching for the answer. I am wondering why people don't buy server chips for personal use when they are faster or have more cores. Are there features in them that make them unsuitable for personal use? I see in many benchmarks that the fastest Opterons smoke many of the Core i7 chips and the fastest AMD desktop ("personal use") chips.
Server CPUs differ from client CPUs (desktop and laptop CPUs) in a few ways.
- Server CPUs almost always support features like error-correcting (ECC) memory, which can increase the reliability of a machine that is always left running. Some client CPUs support this technology, some do not: almost all AMD CPUs made in the past half-dozen years support ECC memory, Intel's FSB-equipped CPUs could support it if the chipset did, and none of the LGA1156/1366 Core i-series CPUs have ECC support.
- Server CPUs can sometimes support registered memory, which allows for more than two memory modules per channel and more than two ranks of memory ICs (more than 8 or 9 ICs per side) per DIMM.
- Server CPUs can sometimes support multi-socket operation so you can have more CPU cores on your motherboard. No client CPUs since the Pentium III support being run in multiple-socket mode, even if the CPUs will fit in the motherboard.
- Server CPUs sometimes have more cores than client CPUs. AMD's Opteron 6100s have 8 and 12 cores while their client CPUs top out at 6 cores; Intel's server CPUs go up to 8 cores while their client CPUs top out at 6 cores.
- Server CPUs sometimes have different sockets than client CPUs, but not always. Right now AMD uses completely different sockets for servers (C32 and G34) than for clients (AM3, S1), while Intel shares LGA1156 and LGA1366 between client and server parts. Intel does still have LGA1567 solely for the largest Xeon MP setups, though.
dragonfly522 :
Thanks for the insight on this, but I am confused. In layman's terms, how do multiple sockets affect the performance of processors? Tell me if the answer is too long to explain in a single paragraph and I will research this. My other main objective now is to find out what kind of task(s) a server does that is so labor-intensive.
Multiple sockets allow for multiple processors. Adding a second CPU adds another set of CPU cores and another set of memory slots, so you get more cores, more memory bandwidth, and more memory capacity from adding a second CPU. That second CPU doesn't help any unless you need more memory, more memory bandwidth, or more CPU cores to do your work. Many client applications such as games do not gain much when going from 3-core to 4- or 6-core CPUs, and the 2-memory-channel LGA1156 Core i7s perform similarly to their 3-memory-channel LGA1366 counterparts. As such, there's typically not a lot of benefit in using multiple processors for client programs. Video encoding is the big exception, as a good encoder will use as many cores as you can throw at it.
Servers run things like virtual machines and Web serving programs, which are highly multithreaded and scale well on servers with lots of CPU cores.
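To make the "more cores only help when the work splits up" point concrete, here's a toy Python sketch (the numbers and the crunch_chunk stand-in are made up, it's nothing like a real encoder): the same CPU-bound job run once on a single core and once spread over every core with a process pool.

```python
# Toy illustration of why extra cores only help parallel workloads.
# crunch_chunk is a made-up stand-in for "encode one chunk of video":
# pure CPU work with no dependency on any other chunk.
from multiprocessing import Pool
import time

def crunch_chunk(n):
    total = 0
    for i in range(2_000_000):
        total += i * n
    return total

if __name__ == "__main__":
    chunks = list(range(32))

    start = time.time()
    [crunch_chunk(c) for c in chunks]      # everything on one core
    print("serial:  ", time.time() - start)

    start = time.time()
    with Pool() as pool:                   # one worker per CPU core
        pool.map(crunch_chunk, chunks)
    print("parallel:", time.time() - start)
```

The parallel run only wins because the chunks are independent of one another; a mostly single-threaded game loop gets no such benefit from a second CPU.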
dragonfly522 :
Also, I have been looking into getting a new CPU and I find cpubenchmark.net useful. Do you use their site? It looks like a good site to use because, from what I have read, they use the average or overall score of all of the performance tests.
It's PassMark's CPU benchmark website. I take it with a BIG grain of salt, as four of the eight tests the benchmark runs are completely synthetic benchmarks with no real-world-application applicability (integer math, prime number test, FP math, SSE/3DNow! test). Case in point: the Pentium 4s were champs of a lot of synthetic benchmarks yet lost to just about everything else out there on real programs. I'd also wonder about the exact code they use, their compiler, and their compile-time optimizations. The benchmark seems to scale well with both clock speed and core count, but it seems to absolutely kneecap AMD's CPUs in comparison to Intel's. Lo and behold, on their "About Us" page they have an "Intel Software Partner" logo. Maybe it would be okay for getting a VERY rough idea of how two different CPUs from the same maker stack up against each other, but I'd certainly NOT use it to compare AMD vs. Intel CPUs.
dragonfly522 :
More people these days are using their computers for multi-core and multi-threaded operations, including multi-tasking, so an overall score in a benchmark seems particularly useful.
The best benchmarks are the ones that reflect what YOU do with the computer and use real life applications. If you do a lot of gaming, look for tests that run real games in their benchmarks rather than some synthetic "gaming" score from a benchmark like 3DMark. It is awfully tempting to want to see one number that says "this CPU is faster than that one," but that number really doesn't exist any more. It used to when everybody copied Intel's designs and different makers' CPUs were very similar except for clock speed and price, but now they are very different and much more difficult to compare.
Also realize that a lot of "performance" is responsiveness and not speed. A computer with plenty of RAM and fast disk drives will feel a lot snappier than one with a stonking huge CPU but a mediocre amount of RAM and a slower disk drive. Also never overlook the effect of your software on how snappy your computer feels. A well-tuned OS free of cruft feels fast, while the same OS with a bunch of crap running in the background feels dog-slow.
dragonfly522 :
I have found out by reading a couple of the comment sections after articles, including on Tom's Hardware, that people think both AMD and Intel have reached a standstill. They think these companies aren't going to get much more MHz out of their processors, if any, because their engineers haven't figured out a way to improve upon the CPU's architecture. They also think that adding more cores as a way to improve overall performance isn't working, and they say this is because today's software isn't designed to make optimal use of multiple cores. According to an AMD spokesperson, the Bulldozer chips will put these criticisms to rest for future processors. I have read up on how their new chips are going to be better and I am very impressed. I want one, but I think AMD is going to price them a little closer to Intel's range for its top-of-the-line chips. Not to the degree that Intel prices them, but starting to approach the prices of their server chips.
CPU engineers reached a point in the early 2000s where CPU clock speed could not increase very quickly due to thermal issues. Thermal output increases at a 1:1 ratio with clock speed and with the square of the voltage, and running at a higher clock speed requires more voltage, so you can see how a small bump in clock speed can result in a big bump in heat output. That was fine in the Pentium III days when chips threw off 25 watts of heat, but not so much when you had P4s throwing off 100 watts already. Thus, to increase performance, they wanted to find a way to make the CPU do more with every clock cycle, since they couldn't just keep getting more clock cycles per second like they used to. That's why we have things like multiple-core CPUs; it's all a way to have a CPU do more in a given period of time.
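If you want to see the numbers, here's a quick back-of-the-envelope sketch of that relationship (the scaling factors below are made-up illustrations, not measurements from any real chip):

```python
# Rough sketch of the dynamic-power relationship described above:
#   P ~ C * V^2 * f   (switched capacitance * voltage squared * frequency)
# The constant doesn't matter here; the point is how quickly heat grows
# once a clock-speed bump also needs a voltage bump.

def relative_power(freq_scale, voltage_scale):
    """Power relative to baseline when frequency and voltage are scaled."""
    return freq_scale * voltage_scale ** 2

# Example: a 20% clock bump that needs a 10% voltage bump to stay stable
print(relative_power(1.20, 1.10))  # ~1.45, i.e. roughly 45% more heat
# On a 25 W Pentium III-class chip that's ~36 W; on a 100 W P4 it's ~145 W.
```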
The thing that has really been lagging is programmers. Writing good multithreaded code is difficult, but it can yield huge performance gains in most programs if done well. Many programmers whine about CPU clock speeds remaining more or less constant, since it means more work for them; they would rather write simpler single-threaded code and rely on the CPU makers to keep jacking up clock speeds to make their programs run faster. The programmers really need to wake up and realize the days of more GHz are over unless there is some paradigm shift in CPU manufacture that solves the thermal problem. That paradigm shift will most likely be a big change in what CPUs are made of, such as going from silicon to diamond or using carbon nanotubes or some other not-here-yet technology.
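As a tiny illustration of why that's hard (a made-up Python sketch, not anyone's production code): whether a loop can be spread across cores depends entirely on whether its iterations are independent, and restructuring code so they are is the extra work programmers are grumbling about.

```python
# Why parallelizing isn't free: the shape of the loop matters.
# The first loop's iterations are independent, so a process pool can farm
# them out across every core. The second carries a running value from one
# iteration to the next, so it can't be split up without rethinking the
# algorithm.
from multiprocessing import Pool

data = list(range(1, 1001))

def square(x):
    return x * x

if __name__ == "__main__":
    # Embarrassingly parallel: each result depends only on its own input.
    with Pool() as pool:
        squares = pool.map(square, data)

    # Sequential dependency: each step needs the previous step's result.
    running = 0
    running_totals = []
    for x in data:
        running += x              # depends on the previous iteration
        running_totals.append(running)

    print(squares[:5], running_totals[:5])
```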