Rumor: Xeon E7-v3 CPUs To Carry Up To 18 Cores


bit_user

Polypheme
Ambassador
Believe me, if you had 12-16 cores rendering, you would feel it, even with two cores to spare.

All of these cores compete for memory bandwidth, and the whole system is competing for I/O bandwidth. And there are points of contention in the OS (memory management, etc.).
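To put rough numbers on that (a back-of-the-envelope sketch only; the 25.6 GB/s figure assumes dual-channel DDR3-1600 at theoretical peak, which real workloads never sustain):

```c
#include <stdio.h>

int main(void)
{
    /* Assumed peak: dual-channel DDR3-1600 = 2 x 12.8 GB/s.
       Real sustained bandwidth is lower, so the per-core
       figures below are optimistic. */
    const double peak_gbs = 25.6;

    for (int cores = 2; cores <= 16; cores *= 2)
        printf("%2d busy cores -> ~%.1f GB/s each\n",
               cores, peak_gbs / cores);
    return 0;
}
```

With 12-16 render threads splitting a couple of GB/s each at best, the two "spare" cores are still fighting for the same memory.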

But I agree with your basic premise, that there are real uses for more cores, and paying for some extra capacity does help with multitasking.

That said, when I bought my Sandybridge-E (actually, an equivalent Xeon model), I went for fewer cores and more GHz, because I felt I'd be more constrained by single-thread performance and have limited chances to use 8 cores.
 

adambomb13

Distinguished
I think it's great to ask more of Intel and any other technology company. What's wrong with asking for more power? Let's get everyone to 18 cores at a reasonable cost, which would push Microsoft, Apple, or whatever OS vendor to take advantage of it. We should always want more.
 

mapesdhs

Distinguished
InvalidError writes:
> Practically no mainstream software exists that can make reasonable use
> of more than two cores, ...

Can't imagine where you get that idea from. As others have said, the OS
itself needs resources, so 2 definitely isn't enough for smooth running,
and benchmark after benchmark shows that plenty of modern games
gain from at least 4 cores. Bear in mind the increasing popularity of online
multiplayer games - such games definitely work better if a system has more
cores to handle situations involving a lot of players.


> ... no hurry for Intel to bother producing mainstream chips with more than four

They're not bothering because there's no competition. Why do people keep
forgetting, again and again, that Intel already produced an 8-core consumer
chip, crippled to be a 6-core? Has everyone's brain gone to mush? The same people
are reading these articles today, surely? The 3930K is an 8-core chip with 2 cores
disabled (the Tom's review of the 3930K made this quite clear), which proves Intel
could have released a consumer 8-core a long time ago at a reasonable cost,
and far cheaper than the 5960X is today. The pricing is totally arbitrary. They didn't
do so with SB-E because there was no need to. It was bad enough that IB-E
offered no 8-core, but now we don't even have a mid-range 8-core after all this time.


> ... Most applications people care about rely heavily on single-threaded
> performance and tuning CPUs for high single-threaded performance
> means giving up on extra cores.

Most?? I don't know what planet you're living on. I know loads of pro users
who absolutely need all the threaded performance they can get, from quantum
chemistry research to After Effects; plenty of apps out there need it. In the
consumer world, the ability to handle background tasks with ease via SSDs very
much depends on a system's threaded performance. There are some pro apps
that haven't been updated for it, like ProE, but they're an ever-shrinking minority
these days. You may have been correct 5+ years ago, but not anymore.


> ... AMD has been failing to close the gap for years. If improving mainstream
> CPU performance beyond where it stands now was so easy, AMD should
> have at least caught up by now.

That logic chain is completely the wrong way round. Intel hasn't bothered
precisely because there's no competition from AMD - or has everyone suddenly
changed their opinions from the days of the IB fallout? IB was bad because SB
was far too good, and there was no competition. AMD is behind because they
quit the market; the company stated this quite clearly some time ago. They don't
care about desktop CPU performance anymore. If AMD did care so much, they
could have sidestepped the multi-year delays and at the very least released
a die-shrunk Ph2 with a couple more cores and some IPC tweaks - call it the Ph2
2x00 series or something (x = no. of cores) - which would at least have been
usefully competitive for long enough to sort out a proper new chip architecture with
some useful innovation. Instead all we got was the auto-designed joke that
was BD, a chip so embarrassing at launch that I remember people who'd pre-
ordered new motherboards simply dumping them on eBay over the days that followed.

There's rather too much selective memory floating around here IMO.

Ian.

 

alidan

Splendid


I'm currently running a Phenom II 955 and DDR2-800 with no overclock...

Yeah... that chip with just one core would probably perform better than mine does currently. And I have to disable a core when I render stuff, because I've got no other computer to use and I'm not waiting hours to use a computer... I also don't want to render at night, since the machine is in my bedroom and I can't sleep when the room is hot.

As to whether I'd feel it: considering that before I would have been using 18 cores and now I'd only have 2-4... yeah, but it wouldn't impact performance the way a 95% load across the board would.
 

VIPChristian

Honorable
We'll STOP buying the newest, greatest thing. Consumers control the prices and don't even know it. No sales = no revenue.
It won't have a huge impact, and Intel will probably be just fine without our hard-earned slave money; we're just test subjects to this sick, greedy, and misguided group of people who are ruining our planet.
Here's a challenge for you:
What are the oldest PC components available to build a PC for entertainment? Being old, the cost should be low.
Ex: 1996
Gateway PC
$4,000.00 open box
Played Silent Steel, Phantasmagoria, Siberia
On a 40 GB HD
I think it was a BTX motherboard?
Windows 95
That PC was more enjoyable than the one I bought in 2014.
 

InvalidError

Titan
Moderator

Consumer-grade CPUs are not intended for high-end computational professionals. Mainstream CPUs are meant to provide sufficient processing power for the vast majority of people's everyday needs, with the Extreme models covering people whose requirements are a notch or two above that. There are plenty of multi-core, multi-socket CPUs in Intel's lineup if your compute time is valuable enough to justify upgrading to Xeons. Most normal people cannot afford or justify paying more than $300 for a CPU.
 

bit_user

Polypheme
Ambassador
You should upgrade to... pretty much anything!

Seriously, the 955 has a TDP of 125 W (I know - I've got one in my fileserver), which is much higher than even the i7-4790K.

If you're willing to consider used/refurb, there are probably some good deals on Ivy Bridge-E/Xeon with 6 cores (specifically, E5-1650 v2, i7-4930K) or even 8. If you're buying new, then I'd imagine your budget would probably limit you to a quad-core Haswell i5 or i7. Or wait for Skylake.

But if you're doing serious rendering workloads, I'd imagine most software would benefit from AVX & AVX2 added in Sandy Bridge and Haswell, respectively. Altogether, you should notice a huge step up, even though the core count would stay the same. And at lower power.
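For what it's worth, a lot of the AVX benefit comes "for free" when the software is simply rebuilt: a plain inner loop like the made-up one below gets auto-vectorized by GCC or Clang at -O3 -march=native into AVX (and, on Haswell, FMA) instructions, no hand-written intrinsics needed. Whether any given renderer was actually built that way is another question.

```c
#include <stddef.h>

/* Hypothetical inner loop from a shading/compositing pass:
   y[i] += a * x[i].  Built with -O3 -march=native, GCC and
   Clang vectorize this with AVX (8 floats per instruction)
   and, on Haswell, fold the multiply-add into an FMA. */
void saxpy(float *restrict y, const float *restrict x,
           float a, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        y[i] += a * x[i];
}
```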
 

bit_user

Polypheme
Ambassador
And with both major consoles being 8-core systems, you can bet that game developers are optimizing their engines to scale well beyond 4 cores.

You could pretty much end the argument here.

I basically agree. For the average user, even 4 cores is a bit more than they need. But certainly enthusiasts and power users could benefit from more. So, due to lack of competition, Intel is pushing these folks into a higher price bracket. If they faced stronger competition, they might instead decide to offer 6 cores in the mainstream bracket.

But there's a real barrier if you go much beyond 4, in a dual-channel memory config. They want to keep costs down for mainstream users, so it doesn't make sense to add more memory channels in that socket. And once you create a higher-end socket to serve a fundamentally smaller market, prices will be higher due to lower volume.

So, is Intel extorting these higher-end users? Certainly. Prices on their quad-channel CPUs could be lower (and probably would be, if they faced any real competition in this segment). But I don't fault them for not trying to squeeze 8 cores in a dual-channel socket. That just wouldn't make much sense, from a performance standpoint.

Even with quad-channel, look at how clock speeds drop off once they get much above 8 cores. There are two reasons for that - one being power efficiency, the other being that it's difficult to feed more cores running at full speed. I mean, they certainly could have made the chips burn more power - IBM has mainframe CPUs that burn something like 2 kW. So, if it were a cost-effective way to add performance, they could have gone above 145 W.
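The power side of that is easy to see with the usual dynamic-power approximation P ≈ C·V²·f: pushing clocks up usually needs more voltage too, so power climbs much faster than frequency. The voltages below are made-up illustrative numbers, not anything from an Intel datasheet:

```c
#include <stdio.h>

int main(void)
{
    /* Dynamic power is proportional to V^2 * f (the capacitance
       term cancels when comparing the same chip at two operating
       points).  The voltages here are illustrative guesses. */
    const double f_lo = 2.5, v_lo = 0.90;   /* many cores, lower clock */
    const double f_hi = 3.5, v_hi = 1.10;   /* few cores, higher clock */

    double ratio = (v_hi * v_hi * f_hi) / (v_lo * v_lo * f_lo);
    printf("~%.0f%% more clock costs ~%.0f%% more dynamic power per core\n",
           100.0 * (f_hi / f_lo - 1.0), 100.0 * (ratio - 1.0));
    return 0;
}
```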

Um, I don't think the issue is one of caring. If you haven't noticed, Intel gained an unassailable lead in manufacturing tech, and AMD has had some architectural and managerial missteps. The combined effect has hammered their revenue, and they've been trying to use what limited resources they have just to stay in business (e.g. by going after consoles and ARM-based servers).

I have great hopes for their K12. I was really disappointed not to see Carrizo at 20 nm and in a desktop variant. I've been waiting for their APUs to become a bit more competitive, before buying one to dabble with HSA.

I've also wondered whether something like this would've been competitive, but I think they just didn't have the resources to hedge their bets like that. They already had two new architectures and were probably devoting all their resources to making them as good as possible.

Remember, die shrinks sound simple enough, but there's a ton of work that goes into layout and routing of a high performance CPU. It's not as automated as you'd think, if you want the best performance. And the amount of testing required is typically comparable to the engineering work.

Anyway, there seem to be some new MIPS SoCs on the market. Perhaps they'll even run IRIX!

Hands-On With The Creator Ci20
; )
 

alidan

Splendid


Largely I render video and some amount of 3D; thankfully I only ever render single frames or try to keep the render as close to real time as possible... I can imagine Photoshop would probably feel better too.

As for my budget: barring rendering, this computer does everything I ask of it. My first computer was a 333 MHz P2 that I kept until I was finally given a 3 or 3.2 GHz P4 with HT as a Christmas present, and I kept that until the motherboard died, which put me on my current Phenom II 955. I can't imagine upgrading until I have disposable income or the thing dies. I have 2 grand in savings that's there just for an "oh crap" situation where something major breaks.

Just looked now and found an Intel Core i7-3820 for $180. I think price/performance-wise that would be the best, and if it were just the cost of the CPU I would probably jump on it even used... but needing to buy new RAM and a motherboard, each of which would probably cost as much as or more than the CPU, is something I'd have hangups about.
 

bit_user

Polypheme
Ambassador
I actually have the Xeon version of that chip. If you look at the benchmarks, it's easily beaten by mainstream Ivy and Haswell CPUs.

The main reason I got it was for PCIe 3.0, which wasn't available in mainstream Sandy Bridge. The more-than-double PCIe lane count, double the memory channels, and a couple hundred MHz higher clock than the LGA-1155 chips offered were nice bonuses. But you don't really need quad-channel memory for only four cores; all the benchmarks show that clearly. The mobo cost me about $100 more than I'd have spent on a socket 1155 board, and, of course, I had to get a full set of 4 identical DIMMs. 1.5 V DIMMs are required for full 1600 MHz operation, as I learned. Also, it's best to run with one DIMM per channel, so either buy all the memory you're ever going to have, or plan on swapping out the old DIMMs when upgrading.

In retrospect, I don't regret it, but I would steer others away from the quad-core socket 2011 processors unless you really need the extra PCIe lanes. Otherwise, if you're buying quad-core, you're better off with an LGA 1155 Ivy Bridge or LGA 1150 Haswell: faster, cheaper as a system, and lower power.
 

mapesdhs

Distinguished


It's irrelevant what anyone thinks such CPUs are meant to be used for. Fact is, many solo pros
simply cannot afford Xeon setups, certainly not dual-socket systems anyway (I've talked with numerous
users in this situation). An oc'd top-end consumer build with quality parts is a good compromise until
they can afford something better (an oc'd SB-E offers about the same threaded performance as a 10-core
Xeon from the same era, at substantially less cost), the irony being that they have far better single-threaded
performance than any Xeon config available. Sure there are sacrifices, such as no ECC, and for some
pro tasks that's not viable at all, but for many it is; I've helped numerous solo pros obtain systems of this
kind as a practical stepping stone to something more powerful in the future, often mixing Quadro cards
and GTX cards up front to at least give quality output plus extra CUDA oomph (ideal for AE), e.g. a Quadro
K5000 plus 2x GTX 580 3GB.

It's silly to say that consumer CPUs are not intended for pros and thus imply they shouldn't use them. That
contradicts the huge shift in the way consumer GPUs have been supported by pro apps and drivers, with
the latest consumer cards showing stronger performance than pro cards in Viewperf 12, at the expense
of image quality, features, support, etc. For some tasks this is a bad idea, but for many users it's a very
useful & productive tradeoff. The industry is actively encouraging the idea within the GPU space, so why
not with CPUs as well? Writing off the notion so casually when there are thousands of users making
a living right now in this way is bizarre. Not everyone can afford Xeons, and until they can, a good oc'd
prosumer build is an excellent compromise (I've built several such systems, mainly for AE users).

Ian.

 

mapesdhs

Distinguished
bit_user writes:
> But there's a real barrier if you go much beyond 4, in a dual-channel memory config. ...

I agree there, but I was referring to the fact that the 3930K is an 8-core chip with 2 cores disabled.
It doesn't bother me much that S1155 never had a 6-core, but it's ridiculous that X79 didn't have an 8-core
at all, even though the 3930K shows it's perfectly possible. Intel didn't do it because they didn't have to.

Remember IBM's chips have a heck of a lot of cache and other features which make them rather
different to consumer designs.


> just didn't have resources to hedge their bets, like this. ...

They wasted their resources on automated design methods. An ex-AMD guy posted this a while back.
It resulted in less performance for a big increase in transistor count & power consumption, hence BD.


> Remember, die shrinks sound simple enough, but there's a ton of work that goes into layout and
> routing of a high performance CPU. It's not as automated as you'd think, if you want the best
> performance. And the amount of testing required is typically comparable to the engineering work.

:D Heh, I know, I used to have lengthy conversations about all this a decade ago with people at SGI,
MIPS, John Mashey, etc.


> ... Perhaps they'll even run Irix!

Neeeever gonna happen... Casey Leedom did create a newer IRIX prototype: dual-core Broadcom MIPS
CPU, PCI Express, NVIDIA gfx, etc. Management let him build it, then they killed it off. Shame.

Ian.

 

bit_user

Polypheme
Ambassador
Oh, very different. I was merely pointing out that it's certainly possible to build chips consuming far more power than Intel's top-line Xeons. I believe that's simply because it's not a cost-effective way to offer more performance. At the top of the market, I think Intel will build whatever they think customers will pay for. So, the lack of more cores or faster clock speed probably means that's where even the least bandwidth-intensive applications hit a bottleneck on memory.

The economics driving IBM's mainframe division are very different, though. Customers are already paying vast sums of money per machine and IBM is already making custom chips, so why not make them eye-wateringly fast? If you're buying a mainframe, you probably care much less about power consumption than reliability. And because they're so expensive, you can only afford relatively few, so it's important that each one performs well.

I've heard that Intel has traditionally had an edge on tools. I think they use all their own custom tools. Nothing from Cadence, Synopsys, etc.

I just think it'd be funny if a little SBC ran IRIX faster than their fastest workstations from 10 or 15 years ago. They should open-source IRIX and let it happen. Maybe Imagination will buy it from them.

But the OS I'd most like to see revived would probably be Symbolics' Genera.
 

mercedesbenz

Reputable
I really don't understand all the negativity about AMD in this discussion.
AMD has had 16-core Opteron CPUs for years that have been selling for around $700. That's for a 2.3 GHz, 115 W TDP part (the AMD Opteron “Abu Dhabi” 6376).

This dwarfs Intel in terms of price/performance.
 

RedJaron


The E5-2630L v3 is an 8C/16T chip for ~$600 that runs at 1.8 GHz and turbos up to 2.9 GHz. Considering the IPC advantage of Haswell over Piledriver, at the very worst you're at about a dead heat with the Opteron in processing speed. Considering the E5 is a 55 W chip, and the kind of people who use these server CPUs like to keep them very busy, you're getting that performance at about half the power of the Opteron. And unlike the casual consumer space, power draw is a much bigger concern in the server world.

And pro tip: upvoting your own comment doesn't help your cause.

EDIT: Correction, I meant the 2630L
 

InvalidError

Titan
Moderator

Hmm, I'm a mod and I can't upvote my own comments... my account must be broken!

AMD does have a long uphill road ahead of them if they want to claw back some server market share, especially with the focus on power-efficiency that started nearly ten years ago.
 

Here in the normal forum view you can't, but when you're looking at the comments while reading the actual article, you can. Similar thing with down-voting someone: I can't down-vote at all in the forums proper, but in the article view I can.
 

bit_user

Polypheme
Ambassador
A lot of the comments are focused on the mainstream, and the price & core count of their mainstream and enthusiast CPUs. The conjecture seems to be that if AMD were more competitive in the mainstream, Intel would increase core count or lower the prices on Extreme CPUs.

Few of us probably have any actual experience with AMD's server CPUs.
 

bit_user

Polypheme
Ambassador
Yes, and that's why they're betting on ARM.
 

mercedesbenz

Reputable
RedJaron, you are spreading false information: the Intel Xeon E5-2630 v3 is 85W TDP, not 55.

http://ark.intel.com/products/83356/Intel-Xeon-Processor-E5-2630-v3-20M-Cache-2_40-GHz

Apparently you assume that everybody who is not from the "server world" is but a mere "casual consumer". We happen to use AMD Opterons for scientific modeling and simulation. This requires many parallel threads, so we also like to keep our CPUs very busy. However, we don't have them by the thousands, so TDP is not our first concern.

Besides, the AMD Opteron 6376 was introduced nearly 2 years BEFORE the Intel Xeon E5-2630 v3. From what I recall, at the time we selected our components (early 2013) the Xeon line was far less interesting than Opteron for our needs. In fact it now seems like Intel did choose to compete with AMD in this segment after all.
 

Justkeeplookin

Reputable
Personally I think this is pointless. I see it as optimal only for server computers; it wouldn't be a help for today's gaming. I can also see it being useful for rendering.

Also, I'd like to know whether they plan on offering these for the LGA 2011 socket or the Haswell sockets.

Can anyone tell me or show me how far they're planning to let it overclock?
 

bit_user

Polypheme
Ambassador
Wait, is it pointless, or useful for servers and rendering?

I think nobody here is arguing that 18 cores makes sense for gaming. 6 or 8 cores... probably, if you plan on keeping your CPU for 3-4 years. Don't agree? Go back and read the Haswell review, where the 6-core Sandy Bridge-E i7-3930K beats the i7-4770K on most benchmarks.

http://www.tomshardware.com/reviews/core-i7-4770k-haswell-review,3521-12.html

Given that the previous E7's seem to use LGA 2011, I'd expect these to use LGA 2011-3. But you'd wanna check on that. Also, don't underestimate the price. List prices on the E7 v2's range from $1200 to $7000. These things are mostly targeted for big servers with > 2 CPUs. The E5 Xeons are more suitable for workstations, and priced a bit closer to earth.

And I expect they don't overclock at all. Some models do support Turbo.
 

Justkeeplookin

Reputable


I do know their prices are really high. Do they already have a motherboard that can support 18 cores? I know most modern Dell workstations can hold 2 CPUs, not cores. If they plan to clock it that low, I would suggest they add Turbo, maybe 3.3 GHz. My point is they aren't the best for gaming; in fact most games and software don't take advantage of 4+ cores. Another question: how many memory channels would they pair with the 18 cores in total? I'm expecting the Xeon E7 v3 CPUs to start maybe around the $1,200-$2,500 range. I'd be surprised if they go above $4,000.

 

bit_user

Polypheme
Ambassador
Well, the TDPs really aren't that high, so it might work. Perhaps the number of QPI links could be an issue. If you're truly interested, I'd recommend looking up the CPU compatibility lists on some workstation boards. Perhaps you'll have to look at some LGA 2011 (Sandy/Ivy Bridge-E) workstation boards and see if they accept E7 v2 CPUs, to get a sense of whether it's possible, since even if a 2011-3 board could accept E7's, they probably haven't yet been tested with the v3's.

The question is whether this is due to the way the game is written, or just because the CPU stops being a bottleneck. Like I said above, consoles have 8 cores, so you can expect most game engines to have enough concurrency to scale above 4. If you plan to hang onto the same CPU for a while, I think you'll see newer games utilizing more of the cores, as game developers target ever faster CPUs.
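To illustrate what I mean by "enough concurrency": once the per-frame work is written as a parallel-for over entities or jobs, the same engine code spreads across however many cores exist, whether that's 4, 8, or more. A bare-bones pthreads sketch (invented entity update, not any real engine's code; build with -lpthread):

```c
#include <pthread.h>
#include <stdio.h>

#define ENTITIES 100000
#define THREADS  8          /* a real engine would query the core count at runtime */

static float pos[ENTITIES], vel[ENTITIES];

struct slice { int begin, end; };

/* Each worker updates its own contiguous slice of entities. */
static void *update_slice(void *arg)
{
    struct slice *s = arg;
    for (int i = s->begin; i < s->end; ++i)
        pos[i] += vel[i] * (1.0f / 60.0f);   /* toy per-frame update */
    return NULL;
}

int main(void)
{
    pthread_t tid[THREADS];
    struct slice sl[THREADS];
    int chunk = ENTITIES / THREADS;

    for (int t = 0; t < THREADS; ++t) {
        sl[t].begin = t * chunk;
        sl[t].end   = (t == THREADS - 1) ? ENTITIES : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, update_slice, &sl[t]);
    }
    for (int t = 0; t < THREADS; ++t)
        pthread_join(tid[t], NULL);

    printf("frame updated with %d worker threads\n", THREADS);
    return 0;
}
```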

The socket supports 4 channels of DDR4. Beyond that, it should have a few QPI links, through which it can steal some bandwidth from its peers, in a multi-CPU config.

Really? Based on what?
 

bit_user

Polypheme
Ambassador
You should check out Xeon Phi. Its successor will be even more appealing, since you'll be able to drop one right into an LGA 2011-3 socket.
 