Report: Intel Haswell-E Halo Platform Will Have Eight Cores


It's Haswell-E. I think they like releasing the Extreme edition chips a year later, don't ask me why...


Are you even serious? They're going from 6 cores within 130 W to 8 within 140 W, and that's not enough for you? They're not going to pull physics from their backside, you know. Wait till they hit 5 nm; maybe then you'll get 8 cores @ 4 GHz under 70 W.

Look at their standard mainstream parts if you'd like to see stuff around 70 W.
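
Just to put rough numbers on that (my own back-of-the-envelope arithmetic, not anything from the article): going from 6 to 8 cores while only adding 10 W already squeezes each core's budget, and a 70 W cap would roughly halve it again.

```python
# Back-of-the-envelope watts-per-core, using the TDP figures being discussed.
# (My arithmetic only; TDP is a thermal budget shared with the uncore, not a
# per-core power measurement.)
configs = {
    "6 cores in 130 W": 130 / 6,
    "8 cores in 140 W": 140 / 8,
    "8 cores in 70 W":  70 / 8,
}
for label, per_core in configs.items():
    print(f"{label}: ~{per_core:.1f} W per core")
```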
 

Dude, did you even read the whole thing? Socket LGA 2011-3. LGA. Land Grid Array. Not BGA. Socket LGA 2011-3. SOCKET.

All current desktop chips come in some sort of LGA packaging. Broadwell is supposed to be the entirely soldered (BGA) generation, but Broadwell won't release for the desktop (at least not in a socketed form), and we'll skip straight to Skylake, which WILL NOT be soldered.

Hope that helps.
 


Extreme series parts are based on the Server EP line of parts, which lags a year behind the desktop parts. As such, Ivy Bridge server parts are coming out this year, while Haswell server parts are coming out next year (assuming they maintain their roughly yearly cadence). The reason for this is that server parts require much more validation because there is so much more on the die. In fact, it takes about a year of extra validation for server parts to be released.


 
I'd also love to see the power requirement go down from 130 W to 80 W on 8 cores, or maybe even 96 W. With consoles finally up to date later this year, I am hoping that, for gaming at least, this means our systems will be fully utilized.

However, with DDR4, quad-channel memory, etc., I still have not seen in which combinations my system can be maxed out.
 

There is no benefit to going beyond x8 on PCIe 3.0 for GPUs (almost none over x4 most of the time), so you can use 2x x8 for the GPUs, which leaves you with 24 lanes (192 Gbps) for everything else.

Alternatively, you can get a board that uses PLX or similar chips to use the CPU's lanes more efficiently... 10GbE only requires 1.25 PCIe 3.0 lanes, so dedicating 4 lanes from the CPU to it would waste 2.75 of them. Using a 40-lane PLX switch to expand the 8 "extra" lanes would make 64 Gbps available for IO; enough to handle all but the craziest prosumer IO needs.
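
If anyone wants to sanity-check the lane math, here's a quick Python sketch. The 40-lane CPU and the 2x x8 GPU split are just the scenario described above, and I'm using the raw 8 Gbps per lane that the round numbers in this post assume, so treat the outputs as approximations.

```python
# Quick check of the PCIe 3.0 lane budget described above.
# Raw rate is 8 GT/s per lane; 128b/130b encoding means usable bandwidth is
# slightly lower (~7.9 Gbps), but the round numbers below match the post.
GBPS_PER_LANE = 8                       # raw Gbps per PCIe 3.0 lane, per direction

cpu_lanes = 40                          # Haswell-E class CPU
gpu_lanes = 2 * 8                       # two GPUs at x8 each
leftover = cpu_lanes - gpu_lanes        # lanes left for storage, NICs, etc.

print(f"Per x8 GPU link : {8 * GBPS_PER_LANE} Gbps")                 # 64 Gbps
print(f"Leftover {leftover} lanes: {leftover * GBPS_PER_LANE} Gbps")  # 192 Gbps
print(f"10GbE needs     : {10 / GBPS_PER_LANE} lanes")                # 1.25 lanes
```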
 


You are just as wrong as the guy you are correcting.

Broadwell will offer LGA just like Intel always has.

AMD would love for Intel to only offer BGA, but it's not going to happen.
 
This is a very absolutist statement and is rooted very much in the current state of affairs, rather than looking towards 2014 and beyond.

The first exception I'd cite is that of partially-resident textures, where main memory is used to hold textures too large to fit on the video card. This puts much more strain on the bus.

Second, consider the fact that scene geometry continues to increase and the GPU is increasingly involved in tasks like physics and AI. Whenever the CPU and GPU are collaborating like that, it's not just the bandwidth that counts, but also the latency. The sooner the CPU gets the answer back from the GPU, the sooner it can start performing the next operation (and faster bus speeds reduce transmission time).

Finally, consider applications specifically involving GPU-compute. Depending on the application, the bus can quickly become a bottleneck.

If you look back through previous advances in bus architecture, you'll see that the first couple generations of games and graphics cards didn't benefit much from a new standard (I'll make an exception for AGP, which was long overdue). But well before the next generation comes along, products and applications have evolved to take advantage of the capacity of the previous generation.

Since both of the major new consoles have APUs with extremely high-bandwidth CPU <-> GPU communication, I suspect we're in for a wave of games that are increasingly sensitive to GPU bus bandwidth.
 
Don't you guys get it? Intel can & does build lower power chips, but they're not as fast. They could build more power-hungry, faster chips than they currently do, but they're under no competitive pressure to do so.

Look at it this way: Intel & AMD will both build high-end CPUs that burn as much power as the market will accept. If Intel decides to make their fastest CPU burn only 70 W, then AMD will come along and blow it out of the water with a 140 W chip. In fact, this is what AMD is currently trying with the 220 W chip they just announced, but I think they've considerably overshot with that one.
 

I still stand by what I said.

Partially resident textures? With 2-3GB GPUs? Really? How many games use enough textures at sufficiently ludicrous resolutions to actually require that?

Latency? Going over PCIe already requires hundreds of cycles, so the drivers and software already need to be written to accommodate very high latencies. A few cycles more or less should make little to no difference to properly written software unless you are attempting to do something that exceeds on-card resources, but then you would be screwed anyway, since system RAM is nowhere near as fast as on-board RAM even on the best of days... this effectively becomes a case of "wrong hardware for the job."

x8 currently has very little benefit over x4 most of the time even though PCIe 3.0 has been out for nearly two years already, so it will still be a few more years before x8 starts becoming a bottleneck actually worth worrying about. By the time it does, PCIe 4.0 will likely be out, since the optimistic ETA is late 2014.

BTW, PCIe 4.0 is coming out in late 2014 or early 2015, so, "looking toward 2014 and beyond", PCIe 4.0 would be my answer. By the time PCIe 3.0 x4 becomes a bottleneck, PCIe 4.0 x4 will be available, and enthusiasts will likely have an urge to upgrade regardless of what they have today.
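
For what it's worth, here's the same rough arithmetic extended to PCIe 4.0, which doubles the per-lane rate to 16 GT/s (the release timing above is still just an ETA, and these are raw rates, ignoring encoding overhead):

```python
# Raw per-direction bandwidth by link width (Gbps), PCIe 3.0 vs 4.0.
PCIE3_LANE = 8      # GT/s, roughly raw Gbps per lane
PCIE4_LANE = 16     # PCIe 4.0 doubles the per-lane rate

for width in (4, 8, 16):
    print(f"x{width:<2}  3.0: {width * PCIE3_LANE:3d} Gbps   4.0: {width * PCIE4_LANE:3d} Gbps")
# A PCIe 4.0 x4 link matches PCIe 3.0 x8, which is why x4 stops being a worry.
```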
 
8 cores, DDR4, 140W... sounds pretty tempting to me.

How about an 8-core, non-HT, non-iGPU part at a lower price point specifically for gamers? I am not a huge gamer, but at this point I would much rather have a native 8-core part than a quad core with HT or an 8-core with HT. AMD's CPU division has not gotten a whole lot right over the last few years, but their version of HT, which can be used by any software (compared to Intel's HT, which has to be specifically programmed for), seems like a great direction to go. Now if only they could get the rest of their platform to be just as innovative...
 