Rumor: Xeon E7-v3 CPUs To Carry Up To 18 Cores

First of all, Intel is expensive for a reason. Also, server computers today are really expensive. Consoles have 8 cores because the XB1 and PS4 need to multitask alongside games; not all cores are used when gaming on one particular game. The XB1 usually has a lot of background processes running because it's made for more than gaming.

Also, I know a server needs a huge amount of storage space, considering it's the computer that is updating and storing all the info. Do they use 10,000 RPM drives or even SSDs? Would they completely rely on SSDs?
 
You should check out Xeon Phi. Its successor will be even more appealing, since you'll be able to drop one right into an LGA 2011-3 socket.

Yeah I know about it, but it (still) requires special programming. It won't just show up as extra cores on your machine, nor run your normal multi-threaded code.
 

Apologies on that, I was talking about the 2630L and accidentally left off the final SKU digit. Apart from the SKU typo, all other specs and stats I listed are correct. Regardless, the base of my argument, that you can get superior performance for lower power draw, still stands with the regular 2630 v3. It's 30 W higher than the low-power model (and still 30 W lower than the 6376), but the clock speeds are also significantly higher at 2.4 - 3.2 GHz. If the L would trade blows with the 6376, the regular 2630 would completely knock it out of the park.



Now you're "spreading false information." I didn't make that suggestion or assumption anywhere. If you care to reread my actual words, I said, "And unlike casual consumer space, power draw is a much bigger concern in server world." I didn't say no casual consumers care about power draw, because many of us do (I am one of them). Nor did I say all people who run server equipment have power consumption as their primary concern. I said that if you consider those that focus on server hardware as a whole, and compare them with the casual crowd, you'll find that, on average, power consumption and performance per watt are much higher on the priority list for the server people than they are for the casuals or prosumers.



Yep, at that time you likely would've been limited to a 6C/12T Xeon for that money. However, two years ago has little to do with right now and the release of the new E5s and E7s. Your claim was that "[AMD] dwarfs Intel in terms of price/performance." You see, "dwarfs" is a present-tense verb. You said that a two-year-old AMD chip currently offers superior price-to-performance value compared to anything Intel offers. That is simply false.
 
They have big profit margins, which means they could be less expensive, if there were more competitive pressure.

Not more than about one core, max, would be off-limits to the game. Background processing, like checking for updates or displaying a small video window, really doesn't take much CPU time.

Try this: go to Dell or HP's website, navigate to the business section, and spec out a server. You'll see a range of storage options covering everything from big, slow HDDs to small 15,000 RPM HDDs to various SSDs. Which config to use depends on the purpose of the server.

At the high end, like for big server farms and mainframes, there are consultants whose job it is to model the transaction mix and figure out exactly what hardware to buy. But more and more, people are just using the cloud and they simply increase the number of virtual machines they use based on the current load. If you're doing that, you don't care as much about the efficiency of a single instance (partly because you have less control over it). I think some of the cloud providers do give you some choice in certain aspects of the machine, such as GPU.
 
The next Xeon Phi CPU is supposed to run stand-alone, meaning it'll look and act just like a normal multi-core x86-64 CPU and can run an OS and programs as normal.

http://www.hpcwire.com/2015/02/05/practical-advice-knights-landing-coders/

The current Xeon Phi supposedly doesn't require much effort to port, especially if your code runs under Linux. I think you can just compile your code for it and it should just run. But you can also use other acceleration APIs, like OpenCL, OpenMP, and perhaps OpenACC.
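For instance, anything already written with plain OpenMP should in principle just recompile and spread across however many cores the chip exposes. This is only a minimal, generic sketch of that kind of loop, nothing Phi-specific, and I haven't run it on a Phi:

#include <omp.h>
#include <stdio.h>

#define N 10000000

int main(void)
{
    static double a[N], b[N];
    double sum = 0.0;

    /* Fill the input arrays with some dummy data. */
    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5;
        b[i] = i * 0.25;
    }

    /* This loop spreads across every hardware thread the OS reports,
       whether that's 4 on a desktop or 60+ on a many-core chip. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i] * b[i];

    printf("dot product = %f, max threads available = %d\n",
           sum, omp_get_max_threads());
    return 0;
}

Build it with something like gcc -fopenmp and the same source scales from a quad-core desktop to a many-core part without code changes; that's the appeal over a GPU-style offload model.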
 
Thanks for correcting the SKU and clarifying.

I'm actually not so sure that the 2630 would be so much faster than the Opteron 6376 (when using all cores). The clock speeds are similar, 2.4 GHz vs 2.3 GHz (I think it's fair to compare the base clock, not Turbo), and the 6376 has twice as many cores. The higher IPC of Haswell will probably fully compensate for this, but I don't think it will make the 2630 very much faster than the 6376.
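Just as a rough back-of-the-envelope, ignoring Turbo, Hyper-Threading scaling, and memory behavior:

Opteron 6376: 16 cores x 2.3 GHz = 36.8 core-GHz
E5-2630 v3: 8 cores x 2.4 GHz = 19.2 core-GHz (plus whatever SMT adds on top)

So each Haswell core (with SMT helping) would need to do roughly 36.8 / 19.2 ≈ 1.9x the work per clock of a Piledriver core just to pull even on a fully threaded load. Haswell's IPC advantage plus SMT might get there, but it doesn't leave a huge margin.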

And still, we're comparing a CPU introduced at the end of 2012 (the 6376) with one from the end of 2014, at about the same price. One would expect Intel to be able to beat the price/performance of its competitor two years later. That was basically the point of my original post, although saying that the Opteron "dwarfs Intel in terms of price/performance" was perhaps a bit of a stretch, and certainly doesn't seem to apply any longer.
 
Intel's products are more expensive than AMD's, so I guess AMD's price-to-performance ratio is actually pretty good. As for Intel, I get that you're paying more for something longer-lasting and maybe better efficiency; I just think their price/performance ratio may be a bit lower. I don't know. Intel performs really well, it's just about $100 more between the FX-8350 and the i7-4770.
 

Short answer: both chips address 16 threads, Haswell has better IPC, and Haswell is also running 100 MHz faster than PD. When Haswell has both the efficiency and speed advantage, why wouldn't it be significantly faster?

Detailed answer: go check the original Piledriver review. Specifically, compare how the 8350 fares against the i7-3770. The 3770 (the review uses a K, but it's left at stock clocks) beats the 8350 soundly except in a couple of benchmarks where the 8350 edges it out. However, the 8350 also had a 500 MHz advantage over the 3770. If the chips were at the same frequency, the 8350 wouldn't score a single win over the 3770. And keep in mind, that was against the older Ivy Bridge, not the current Haswell.

So a 4C/8T IB chip significantly beats an 8C/8T PD chip even when the IB is running a 12% slower clock. Why would you think an 8C/16T HW chip would barely edge out a 16C/16T PD when the HW is running 4% faster than PD?



And I've already said that when it was released, it was a good deal for $700. To get equivalent performance from Intel back then would've cost $1100. However, AMD hasn't updated their server portfolio since then, so the 6376 is still considered AMD's "current" chip. This means if you were to buy it right now, you'd still be saddled with all of PD's problems, notably the poor IPC, the horribly slow cache (IB/HW can hit its L3 faster than PD hits its L2), and the slower memory controller (IB/HW can hit its RAM faster than PD accesses its L3). Back then you would just deal with those problems because there was nothing better available in that price range. But now you do have better options, so why would you still recommend that chip?
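If anyone wants to sanity-check those cache and memory latency differences on their own hardware, the usual trick is a pointer-chasing loop over working sets of different sizes. This is only a rough sketch of the idea (the array sizes are just guesses at typical cache levels, not measured figures), not how the review sites do it:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static volatile size_t g_sink;

/* Average time per hop through a randomly ordered array approximates
   load-to-use latency for a working set of the given size. */
static double chase(size_t bytes, size_t hops)
{
    size_t n = bytes / sizeof(size_t);
    size_t *next = malloc(n * sizeof(size_t));
    for (size_t i = 0; i < n; i++)
        next[i] = i;
    /* Sattolo's shuffle: one big cycle, so the walk visits every slot
       in an unpredictable order and the prefetcher can't help. */
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }
    size_t p = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t h = 0; h < hops; h++)
        p = next[p];               /* each load depends on the previous one */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    g_sink = p;                    /* keep the loop from being optimized out */
    free(next);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / hops;
}

int main(void)
{
    /* Rough working-set sizes: L1-ish, L2-ish, L3-ish, RAM (varies by CPU). */
    size_t sizes[] = { 16u << 10, 256u << 10, 8u << 20, 256u << 20 };
    for (int i = 0; i < 4; i++)
        printf("%9zu KB: %.1f ns per access\n",
               sizes[i] >> 10, chase(sizes[i], (size_t)20e6));
    return 0;
}

Compile with gcc -O2 on Linux and you should see the per-access time step up as the working set falls out of each cache level; comparing those steps between an FX and a Core i7 box shows the gap I'm talking about.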
 


Every CPU, even within the same model, behaves differently. I see your point that there should be a speed increase, but maybe it's the kind of thing you don't notice. Would you notice a 4-5 FPS change in games? With the human eye, not so much. Unless you mean potential compute power, then yeah.

 


I highly doubt RedJaron is referring to games in his comments comparing two server-type CPUs. His point was that there's no rationale for expecting the AMD chip to be quicker based on existing data from other CPUs, and that's correct as far as I can see.

Ian.



 
Thanks, Ian. No, JKL, I wasn't referring to games. I was saying that if you were able to keep the threads more or less saturated, the 2630 would offer a very noticeable performance increase over the 6376.

Gaming is much more GPU dependent, and that's where you'd see the bottleneck if you decided to game on these server CPUs.
 
Battlefield 4 is the only game I know of that uses all cores, so that could be a plus. In any other game we would see some extreme bottlenecking on single-core performance along with the GPU.
 

No, because in order to feed that many threads, the 2630 has to spin down toward 2.4 GHz. Even the lowest HW i7 can maintain 3.4 GHz under full load. You'd need 11-12 threads at 2.4 GHz to match the aggregate clock of eight threads at 3.4 GHz. Besides, BF4 actually doesn't care much as long as you have four cores/threads. Notice that an i3 is only a few frames behind an IB-E chip ( http://www.techspot.com/review/734-battlefield-4-benchmarks/page6.html ).

In short: buying a server CPU for any kind of gaming emphasis is a waste of money. You can get a 4790K and GPU for the cost of the 2630 alone, and that's one of the cheaper server options out there. You can play games on a server, but don't expect to get better performance than a more traditional enthusiast desktop.
 
Then prove it's wrong. You've got that link right there. A 2C/4T i3 @ 3.3 GHz, a 4C/4T i5 @ 3.4 GHz, and a 4C/8T i7 @ 3.5 GHz are within 2 FPS of each other. Hell, a 4C/4T FX-4100 @ 3.6 GHz with much poorer IPC is only 1 FPS behind that. So please tell me how you figure a chip running a full GHz slower will somehow yield faster gaming performance. Even if it were able to run eight threads at its turbo clock (doubtful), it's still operating 200 MHz slower than the i7. But you're saying it might have some magical HT variant that can make up for that 10% slower clock speed? And games largely don't care about L3 cache beyond a few MB.
 
It's funny how you made an assumption when clearly I didn't say that the E7 v3 would win.

Let me refresh you here, because clearly you can't read comments. Read my comment, then come back and tell me what you learned :)
 
Fight? What fight? I've laid out logical facts. Your response was basically "Uh uh!" And if you already knew this, then why make the suggestion it wouldn't be the case?


Explicitly say it? No. Arguing for it? Yes. And BTW, I'm talking about an E5, not an E7.


Hypocrite, who's trying to pick a fight? I provide solid facts and supporting figures. You call me wrong while providing no actual supporting facts and then insult me. What I've learned is you simply want to provoke others without contributing to the discussion.

Begone.
 
Providing solid facts while attempting to overthrow someone's theories.

How ya doing on reading?

I never said you're wrong. I was just amused at how you can make an incorrect assumption about my comments, saying that I said something I didn't say. Try saying that to a person in real life; look carefully at the first line.

Try saying that to a bear; that is how you challenge someone.
 
If we go back to your original comment, you explained that the server processors would be a waste for gaming, and you said a 3.4 GHz quad core is better than a 2.4 GHz 8-core. All I am saying is that you can judge single-core performance by clock speed, a 0.2 GHz 100-core will be faster in single-core performance than a 0.1 GHz 101-core processor, but you just can't judge a whole processor by clock speed alone :)
 
Wow, some people need to relax, here.

First, what's really at stake? You don't have to prove to the internet that you're right.

Second, the best way to support your argument is with data. To the extent you care, go dig up some relevant benchmarks on the closest CPUs you can find. You might find something on specbench.org or in the databases of popular benchmarking tools like SiSoft's Sandra. Or try OpenBenchmarking.org.

I think it's healthy to compare and contrast the characteristics of different architectures & implementations and their impact on performance. But bickering over each others' speculations seems like a pointless waste of time and energy. The world really doesn't need any more geek on geek violence. Go make something. Or read a book.

That's just my 0.02 bitcoins.
 
There's an art in choosing the right mix of benchmarks to estimate performance on the expected workload. That's one reason there are so many of them.

Also, if you understand a bit about the benchmark, you can see how it might be affected by different component choices (often, with the help of other benchmarks). For example, I might have slower memory, but I can go look at memory benchmarks to figure out whether that should affect the gaming benchmarks I'm looking at to evaluate the GPU I'm planning to buy.
 
"They don't have to... We need some very big player to compete Intel, and there is no one. Only Samsung seems to be big enough to compete Intel, and they are not in PC business, so 8 cores are going to cost more than 1000$ for long, long time..."

AMD basically divested itself from manufacturing; you might have heard of GlobalFoundries, who now make CPUs for AMD and Snapdragon. The AMD K12, which should be released in 2016, is based on the GlobalFoundries/Samsung 14 nm FinFET manufacturing process. Whether it's any good, or whether it actually is released in 2016, is yet to be seen.
 