InvalidError :
Normal DRAM chips are standard, manufactured by the billions, and there are over a dozen fabs dedicated to them. The 128MB chip, on the other hand, is proprietary, using some proprietary process to combine DRAM with high-speed logic, and only ships a few million units per month, which is not quite the same cost amortization curve.
You're forgetting that while eDRAM hasn't shown up that frequently in PC devices, we've had plenty of cases of it seeing
heavy use in consoles. Namely, the Xbox 360, Wii, and Wii U all make critical use of eDRAM... And even "a few million units a month" is still quite high; even if we take the minimum reading of "a few" as 2 million, that still equates to 24 million units a year, putting you within an order of magnitude of those consoles' lifetime sales: roughly 83, 101, and 10 million respectively... And I don't think any of them hit 24 million units sold in a single year! (Granted, we may be overstating how many of the relevant CPUs Intel sells...)
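Just to make that back-of-the-envelope math explicit, here's a quick sketch in Python. The 2-million-per-month floor and the lifetime console totals are simply the figures quoted in this thread, not official shipment data:

# Rough annualized comparison; all inputs are the thread's own estimates.
edram_units_per_month = 2_000_000                 # minimum reading of "a few million"
edram_units_per_year = edram_units_per_month * 12  # 24 million units per year

console_lifetime_sales_millions = {"Xbox 360": 83, "Wii": 101, "Wii U": 10}
for name, total in console_lifetime_sales_millions.items():
    years_to_match = total / (edram_units_per_year / 1_000_000)
    print(f"{name}: {total}M lifetime ~= {years_to_match:.1f} years of eDRAM shipments")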
And do remember, the first two were hampered by having to use 90nm silicon, which makes a big impact versus the 22nm Intel used for Haswell (or the 14nm that may be in use for the Broadwell-generation parts). That puts Crystalwell's 22nm size of roughly 85 mm² in rough line with what was seen for the 360 and Wii: about 70 mm² for the Xenos "daughter die" and 94.5 mm² for "Vegas."
And in both cases, the eDRAM chip was not one of the biggest cost sources of the machine; that was the main GPU core and, especially for the 360, the CPU, with the DVD drive also being a major cost contributor. Yet in spite of all that, the Wii was known to be sold at a profit from day 1 (at $200 for the hardware alone; $250 when bundled with a game), and the 360 was at first sold at only a minor loss from its $300 launch price.
Given that both of those occurred when there was likely little to no economy of scale in place, chances are that at worst the pricing of those eDRAM dies was comparable to Crystalwell's, if not higher (especially given that Intel has already shipped a few million of them).
InvalidError :
If you look at GDDR5 prices instead of DDR3/4, cost per bit already doubles and we are still talking industry-standard chips that trade in the hundreds of millions annually.
I have my doubts that GDDR5 is the best comparison for the type of memory involved; its main argument as a comparison point seems to be "it's more expensive." Certainly, I'd expect the pricing on an eDRAM module to be higher; that's why I didn't outright suggest it'd cost less than $1-2. However, GDDR5 is likely unsuited for such an application, given that it has much higher latency than would be desired for its function as L4 cache. The biggest cost factor for eDRAM tends to be the extra area commanded by the larger amount of control circuitry, which makes DDR4 probably a better comparison, given its higher percentage of control space relative to the space used by the actual DRAM cells. Granted, there's no readily readable open market for DDR4 chips to judge the actual price of the DRAMs, though it's likely not double DDR3, given that full DIMMs aren't even really going for double the price, and they're still solidly in the premium-pricing phase.
InvalidError :
The eDRAM chip is a specialty product half-way between DRAM and SRAM, and I bet manufacturing DRAM on a high-speed logic process comes with a few extra challenges (costs) beyond merely the sum of the two processes. For starters, cell density is 3-4X worse than standard DRAM simply to mitigate the higher leakage on logic silicon wafers, so that's already a 3-4X higher manufacturing cost per bit right there.
Once you have the eDRAM chip, you also need to consider the extra cost of adding layers to the LGA/BGA substrate to run traces between the eDRAM and CPU, then the extra circuitry in the CPU to manage the eDRAM. On Broadwell, the eDRAM interface uses about as much die area as two of its x86 cores, or about 9% of the total die area.
Most of the above was already addressed by the likes of AMD and others when they built the Xenos and Hollywood GPUs, among other things. I'm not so certain that the per-cell cost of eDRAM is THAT much higher than conventional DRAM, and it's certainly WELL below the cost of SRAM. While not much has been said about the specific design of Crystalwell's eDRAM, I'd imagine it's not that much different from the heavily advertised "1T-SRAM" design patented by MoSys, which is also the type of eDRAM used in the Wii and Wii U.
In their design, it offers a few major advantages over SRAM: first, each cell is about 1/3 as large; second, the cell design (unlike 6T SRAM) does not require more than one layer of interconnects inside the cell itself (additional layers are only needed for the bit/word lines and control circuitry); and lastly, the design can be used on just about any logic process without extra issue (i.e., it costs no more per mm² to fabricate than whatever other logic is on the same die, and it is interfaced with as if it were SRAM).
I'm imagining that Intel, being Intel, will likely be able to roughly match those advantages, and thus, per cell, Crystalwell's eDRAM is costing them no more than 30-40% of what CPU-cache SRAM would cost them... and likely a good deal less. What I've seen so far suggests Intel's logic fabrication costs only about 50% more per mm² than DRAM fabrication on the same node and wafer size. In short, the extra cost of eDRAM over DRAM is likely less than 100%: it's merely that the overall area per cell is higher (due to added control circuitry) and that it's made on a more expensive logic process rather than a DRAM process.
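To spell out how those ratios combine, here's a rough Python sketch. The 1/3 cell-size and 1.5x process-cost figures are the estimates argued above; the 30% control-circuitry overhead is purely my own placeholder, not a published number:

# Back-of-the-envelope relative cost per bit, using the thread's ratios.
sram_cell_area      = 1.0      # normalize a 6T SRAM cell to 1
edram_cell_area     = 1.0 / 3  # "about 1/3 as large" per the 1T-SRAM claim
logic_cost_per_mm2  = 1.5      # logic process assumed ~50% pricier per mm2 than DRAM
edram_area_overhead = 1.3      # assumed ~30% extra control circuitry vs. commodity DRAM

edram_vs_sram = edram_cell_area / sram_cell_area            # ~0.33, same die/process
edram_vs_dram = edram_area_overhead * logic_cost_per_mm2    # ~1.95, i.e. under 2x DRAM

print(f"eDRAM per bit ~ {edram_vs_sram:.0%} of SRAM cost on the same die")
print(f"eDRAM per bit ~ {edram_vs_dram:.2f}x commodity DRAM cost")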
While I'll admit that a lot of my information has largely been "pieced together," what I've seen suggests the cost for Intel's full-scale 22nm logic is around 6-7 cents per mm², which, multiplied by the roughly 85 mm² of the (22nm) Crystalwell die, comes to about $5.10-$5.95. That price assumes the Crystalwell die uses the full set of interconnect layers that Intel uses for, say, the main Haswell die. Integration costs would likely be significantly less than the cost of fabricating the die itself.
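For anyone who wants to check that math, here it is as a one-liner sketch; again, the cents-per-mm² figure is a pieced-together estimate, not a disclosed Intel cost:

cost_per_mm2_low, cost_per_mm2_high = 0.06, 0.07  # USD per mm2, 22nm logic estimate
die_area_mm2 = 85                                 # approximate Crystalwell die size
print(f"Estimated die cost: ${cost_per_mm2_low * die_area_mm2:.2f} to "
      f"${cost_per_mm2_high * die_area_mm2:.2f}")  # ~$5.10 to ~$5.95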
InvalidError :
The all-inclusive cost of adding that 128MB of eDRAM is fairly significant. Add Intel's typical 50-60% margins on top, and putting eDRAM on lower-end chips ends up not being worth the trouble unless they can charge at least $50 extra for it. For higher-end chips, they can absorb part of the cost without hurting their margins much, but I bet there will be another round of $10-20 hikes with Skylake to restore those margins.
Well, I'd already posited that, even if a large price hike were needed, if they took the Pentium G3258 as a basis (currently retailing for $69 apiece), they could push up to an additional $50 onto it; at $119 they'd have a very attractive part that would likely still carry a very healthy profit margin for Intel, given that I don't think they maintain the same 50% profit margin on their Pentium lines that they have on, say, their i5 and i7 lines.
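As a purely illustrative sketch of why that could still be comfortable for Intel: the $69 retail price, $50 surcharge, and ~$6 eDRAM die estimate come from this thread, while the $30 base production-cost figure is an assumption I'm inventing just for the example:

base_retail = 69.0         # Pentium G3258 street price quoted above
edram_premium = 50.0       # hypothetical surcharge for adding Crystalwell
new_retail = base_retail + edram_premium     # $119
assumed_base_cost = 30.0   # hypothetical CPU + packaging cost (my assumption)
edram_added_cost = 6.0     # upper end of the die-cost estimate above
gross_margin = (new_retail - (assumed_base_cost + edram_added_cost)) / new_retail
print(f"Implied gross margin at ${new_retail:.0f}: {gross_margin:.0%}")  # ~70%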
Given the massive price hikes Intel has put in for the existing Iris Pro products, my guess is that Intel is trying to feel out the highest possible profit margin it can ask for; i.e., they're pushing for a MUCH higher margin on their Iris Pro products. A definite confirmation would be if, over the next few months, Intel gradually drops the prices to bring them much closer to comparable non-Iris Pro CPUs.
flushfire :
1366x768 is the second most-used single-display resolution (26%) among Steam customers, according to their hardware survey.
That's a very good point, and all the more reason to recognize that there's a purpose for benchmarks below 1920x1080; the rest of the world doesn't strictly follow the enthusiasts' high-end preferences.