Broadwell: Intel Core i7-5775C And i5-5675C Review


GPUs have always been heavily constrained by available memory bandwidth; that should not come as a surprise to anyone. GPU designers have been working on tricks to reduce dependence on memory bandwidth for the past 20 years. This will get much worse at 14nm, where compute density will leap forward while memory bandwidth from GDDR5 remains unchanged, hence AMD's and Nvidia's focus on HBM.

Although Zen's higher-end APUs may have HBM, they are still the better part of a year in the future. By then, Intel will likely be about to switch from CrystalWell to HMC for 5-10X more bandwidth and much higher capacity. This is where IGPs should start getting exciting.
 
About the GTA V chart: when you are comparing integrated graphics with discrete graphics, the testing should have been done at higher graphics settings. At the lowest settings the test becomes CPU limited, and the discrete cards are paired with an X4 860K. It's kind of pointless.
Indeed. Anandtech's results are much more realistic.
 


You are assuming that whoever AMD chooses to fab their Zen CPUs will also have the 3D NAND technology Intel is using for the 128MB L4 cache that feeds the IGP. We will have to wait and see what the specs are.

And even with a decent CPU, the R7 240 is a pretty low-end GPU. It wouldn't have done much better, nor would AMD's current setup with faster memory.

The biggest benefit of Broadwell is that you can get playable FPS at 1080p, even going by Anandtech's results, without having to worry about a dGPU using more power. That lets you shrink the space taken, making it the perfect (current) HTPC setup. That might change with Zen or it might not; there is no real way to know. Remember that when Bulldozer was coming out it looked great on paper but fell way short of the very hyped-up expectations. I want to think that Zen will be decent competition from AMD, but their past 5 years of history say differently.
 

NAND is a type of flash memory; it has nothing to do with CrystalWell, HBM or HMC.

AMD and Nvidia will be using HBM memory next year, which is simply a fancy new JEDEC-standard type of ultra-wide DRAM chip, currently manufactured only by Hynix and designed for stacking alongside CPUs/GPUs using silicon interposer packaging technology.

The fab that makes AMD's APUs/GPUs does not need to manufacture HBM dies. AMD/Nvidia will buy HBM dies or assembled stacks from Hynix and whichever other DRAM manufacturers decide to sign up, get their silicon interposers made by whoever has spare low-cost fab capacity, then have the whole thing (APU/GPU + HBM dies + stack controllers + interposer + carrier substrate + passives + etc.) assembled on the APU/GPU package by their chip packaging/integration house.
 
The conclusion text is pretty close to my thoughts, though I might've shortened it to "cool, but impractical." I'm thinking of this on multiple levels: not ONLY are a lot of us holding our breath for Skylake, one also has to wonder precisely what sort of market segment these CPUs are targeted at.

Though I don't think we have EXACT retail pricing available, it might be safe to say they'll be comparable to the existing similar "Devil's Canyon" SKUs; i.e., the same range most enthusiasts and gamers currently go to for building serious machines, which means we're looking at an MSRP of $250/$350. (While clocks are lower, as palladin9479 pointed out, there's that eDRAM die to consider, which will add to the manufacturing cost.)

Just about anyone building in that price range is likely going to be shelling out some money for a discrete GPU, which renders the Iris Pro rather moot; so it's not powerful enough for the enthusiast grade, and certainly too expensive for the more mainstream/entry-level grade. Sure, there's the whole "that'd be great for my HTPC" crowd, but in practice that's a tiny niche market. So while it's definitely WAY more potent than any Fusion APU, the price difference means it doesn't matter.

Now, if they could couple this into one of their Pentiums... then that would truly be a terrifying piece to behold. If they could make something like a counterpart to the G3258 linked in the intro, instead of targeting their existing unlocked i5/i7s... even at, say, $100 it'd probably find a HUGE following there... and would actually pose a grave threat to AMD in the way the 5675C does not.

This does kind of bleed over into the whole debate about "lack of competition" in the CPU arena, and it really DOES kind of go both ways: Intel's actual core performance has budged only very slightly since Sandy Bridge came out over 4 years ago; of course, with how much of a flop AMD's Zambezi was, they didn't have to move. Perhaps Skylake may yet prove to be more akin to the Nehalem-to-Sandy Bridge transition than the yawn-inducing Sandy Bridge-to-Haswell shift. On the flip side, AMD's held a near vice-grip on the inexpensive segments; for the most part, beyond some curiosities (like the current G3258), Intel really has nothing to challenge AMD's low-cost APUs, especially for systems that forgo any discrete graphics.

In the end, what we've really seen from BOTH companies over the past 4-ish years has been a lot of promise... And not a whole lot of practical delivery.
 
"The two Broadwell processors serve up 22 and 21 FPS at 1920x1080 with Ultra settings. Stepping down to the Medium preset gets you an average of 44 and 41 FPS."

I don't see that in the charts at all.
 
Half-Life 2, a DX9 title = fail. Not even relevant, as everything from the A8 7600 up can crank out 70 fps or better.
 
GTA V is obviously not stressing the GPU at those settings if even a lowly GT 440 can pull 55 fps. How about using some realistic settings?
 


Sorry, I meant 3D stacked RAM, not NAND. I work a lot, so my mind is half asleep most of the time now.



Titanfall uses the same game engine, Source, so I highly doubt the A8 7600 can crank out 70 FPS there. It is also a very popular game engine, considering it is used in some of the most-played games out there, including DOTA2, CS:GO and TF2.
 


1) They didn't test Titanfall
2) According to the chart in the article, the A8 7600 did indeed pull 70 fps (70.3 to be exact) at the settings used.
 


1. I know that; however, Half-Life 2 is always getting updated to newer versions of the game engine. Either way my point stands: it shows how the APU handles DX9, because there are still a lot of games out there that run DX9.

2. I meant in a game that pushes the hardware more, like Titanfall or CS:GO. Of course it would in Half-Life 2; the game is 11 years old.
 


Exactly, it's 11 years old. And at 70+ fps it doesn't stress the GPU enough to be relevant. It would have been nice to see benchmarks from the other games you mentioned, especially Titanfall, since it's the most recent Source game.
 


What you define as realistic may not be the same as someone else's realistic. Not everybody, I hope you realize, needs high settings. You can play games on low and still enjoy them. If people enjoyed games in the 90s, believe me, they can enjoy low settings now.
 

When even IGPs can manage well over 80 fps, I think re-evaluating what is considered baseline settings and reference games for benchmarking is warranted; the workload is becoming too trivial to mean anything on modern hardware. At 100+ fps even on an IGP, I would consider the benchmark effectively broken.

I would start by ditching 720p testing and making 1080p the new lowest resolution - new desktop displays with lower resolutions than that have been nearly extinct for a few years already and running displays at their native resolution is usually the best bang-per-buck visual quality enhancement you can get. (I like my graphics sharp, so re-scaling blur is no-go for me regardless of what other details I may have to sacrifice to get playable frame rates at native resolution.)
 
With this chip, if Intel wanted, they could kill AMD in a couple of years. Just make an i3 with Iris graphics at sub-$150 and AMD would go bankrupt.
 
Are these numbers real?... Not only does it match lower mid-range cards, but it completely destroys AMD's APUs...
:shock:

OMG a $270 APU is faster than a $130 one. Who would have thought.
Prices aren't even announced yet...

Either way, Intel ate AMD's lunch on this one. It's a one-chip gaming solution, and even a discrete R7 250 can't come close to the iGPU in an i5. It's impressive when you consider where Intel graphics were even 5 years ago.
 

The i3 already sells quite well on its own at $130 and Iris 6200 competes with $80-90 discrete GPUs. If Intel released an i3 with Iris 6200, I would expect it to sell reasonably well at $170-180.

I am pretty sure the CrystalWell chip on the i5/i7 costs Intel considerably more to put on there than the extra $20 Intel tacked onto the price tag. With the lower margins on the i3, they cannot afford to cut as deeply into their profits. The same goes for the hypothetical Iris 6200 Pentium.
 

For some people, having 60 FPS is the highest priority... which means that dips below 60 (which will occur even when the AVERAGE framerate is much, much higher) can often be unacceptable. And yes, sometimes these people work within constraints where a pair of Titan X cards isn't an option.


That's because Titanfall itself is an ABSOLUTELY TERRIBLE game to use for a benchmark:

- It has a hard cap on the framerate that can't be bypassed.
- It has no benchmark tool of its own, so you have to improvise.
- It's online-multiplayer only, which also means it's effectively impossible to even get a fully consistent "improvised" benchmark.

As Half-Life 2: Lost Coast was made as a form of tech demo, it is exceptionally well suited as a benchmark of the Source engine. And while the tech demo ITSELF may no longer be particularly strenuous, a performance comparison made with it will show a trend that holds true for OTHER Source engine games... including Titanfall. (E.g., if a GPU performs twice as well as another in Lost Coast, it will perform by very close to the same margin in Titanfall.)


I actually HEAVILY doubt that the cost of Crystalwell is even close to $20, let alone above it: most of the silicon is used for eDRAM, which, cost-wise, is pretty close to any other DRAM. (And current spot prices for even 256MB of DRAM stay consistently below $2.) I'd probably peg the maximum actual cost to them at $5, maybe $10 at most...

...Enough that I do feel an Iris Pro-equipped Pentium would, on its own, still maintain quite a substantial profit margin, even if the price WASN'T changed... And to be honest, they could still boost the price without noticeably dropping demand; anywhere from $99-119 (a price jump of, say, $30-50 over the 3258) would likely still see staggering demand of the kind the i3 has never seen.
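To put that hunch in rough numbers, here's a quick back-of-envelope sketch (every figure is either the assumption stated above or an outright guess, nothing official from Intel):

```python
# Back-of-envelope sketch of the estimate above. Every figure here is an
# assumption from the post (or a plain guess), not Intel data.
commodity_dram_spot_per_256mb = 2.00   # USD, claimed ceiling on 256MB DRAM spot price
edram_capacity_mb = 128

# Naive baseline: price the eDRAM die as if it were commodity DRAM per bit.
baseline_cost = commodity_dram_spot_per_256mb * (edram_capacity_mb / 256)

# Generous fudge factor for the custom logic-process eDRAM and packaging.
custom_process_multiplier = 5          # pure guess
estimated_crystalwell_cost = baseline_cost * custom_process_multiplier

print(f"Commodity-DRAM baseline: ${baseline_cost:.2f}")
print(f"With a {custom_process_multiplier}x overhead guess: ${estimated_crystalwell_cost:.2f}")
```

Even doubling that fudge factor only lands around $10, which is the upper end of the guess above.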

Granted, there's one possibility: Intel may be afraid that such an attractive low-end option would drive too much demand away from their (definitely more profitable) Core i5 lines. A lot of people would quite willingly settle for less: sure, all of us enthusiasts know that pairing a 4690K with anything from a GTX 750 Ti or 260X and up is a shocking amount of power for the price, but for many budget-minded buyers, if there's a cheap option that still delivers impressive (or possibly better) bang for the buck, that's what they'll go for.
 
The concerns have merit. Using very low settings and an 860K paired with the discrete cards resulted in an overly positive showing for the 5675C. And yes, some of the settings are unrealistic. Once you hit a 60 FPS average there's really not much reason not to use higher settings, especially when you're sitting at the lowest settings with a 120+ FPS average; even dips aren't that big of a deal unless they happen so often that the game stutters.

AT's review is IMO more representative.
 

First off, I agree that a good game is enjoyable on lowered settings. As I recall from viewing a lot of game reviews, "Medium" is consistently a lot better than "Low," and is where testing should probably begin.
A lot of laptops and dirt-cheap desktop monitors still have 1366x768 screens, so I think this resolution is still valid for testing.
---
Otherwise, some of the numbers associated with this Broadwell testing look unrealistic and/or cherry-picked, which is vaguely reminiscent of how Bulldozer was presented; we all know how that turned out.
I'd like someone to sit down in front of one of these systems, crank up some games and put them on the sort of settings he might typically use, and see how playable it is.

 

Normal DRAM chips are standard, manufactured by the billions, and there are over a dozen fabs dedicated to them. The 128MB chip, on the other hand, is proprietary, made on a custom process that combines DRAM with high-speed logic, and it ships only a few million units per month; not quite the same cost amortization curve.

If you look at GDDR5 prices instead of DDR3/4, the cost per bit already doubles, and we are still talking about industry-standard chips that trade in the hundreds of millions annually.

The eDRAM chip is a specialty product halfway between DRAM and SRAM, and I bet manufacturing DRAM on a high-speed logic process comes with a few extra challenges (costs) beyond merely the sum of the two processes. For starters, cell density is 3-4X worse than standard DRAM simply to mitigate the higher leakage on logic silicon, so that's already a 3-4X higher manufacturing cost per bit right there.

Once you have the eDRAM chip, you also need to consider the extra cost of adding layers to the LGA/BGA substrate to run traces between the eDRAM and CPU, then the extra circuitry in the CPU to manage the eDRAM. On Broadwell, the eDRAM interface uses about as much die area as two of its x86 cores, or about 9% of the total die area.

The all-inclusive cost of adding that 128MB of eDRAM is fairly significant. Add Intel's typical 50-60% margins on top and putting eDRAM on lower-end chips ends up not being worth the trouble unless they can charge at least $50 extra for it. On higher-end chips they can absorb part of the cost without hurting their margins much, but I bet there will be another round of $10-20 hikes with Skylake to restore those margins.
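If you want to see how those pieces stack up, here's a rough sketch of the arithmetic (every input is an assumption from this post or an outright guess; none of these are known Intel figures):

```python
# Rough counter-estimate following the reasoning above. All numbers are
# assumptions stated in the post or plain guesses, not known Intel figures.
baseline_dram_cost_128mb = 1.00   # USD, commodity DRAM cost for 128MB (assumed)
density_penalty = 4.0             # eDRAM cells ~3-4X less dense on logic wafers
logic_process_premium = 3.0       # guess: logic wafers + extra steps vs. a DRAM fab
edram_die_cost = baseline_dram_cost_128mb * density_penalty * logic_process_premium

cpu_die_cost = 60.0               # assumed manufacturing cost of the CPU die
edram_interface_share = 0.09      # ~9% of the CPU die spent on the eDRAM interface
interface_cost = cpu_die_cost * edram_interface_share

packaging_extra = 5.0             # extra substrate layers and assembly (guess)
added_cost = edram_die_cost + interface_cost + packaging_extra

gross_margin = 0.55               # Intel's typical 50-60% gross margin
required_price_bump = added_cost / (1 - gross_margin)

print(f"Added manufacturing cost: ${added_cost:.2f}")
print(f"Price bump needed to keep margins: ${required_price_bump:.2f}")
```

With those guesses the added cost lands in the low $20s and the margin-preserving price bump comes out around $50, which is why a cheap Iris Pro Pentium looks unattractive from Intel's side.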
 