Broadwell: Intel Core i7-5775C And i5-5675C Review

Status
Not open for further replies.

InvalidError

Titan
Moderator

If you want to push the 50GB/s (×2) bandwidth through a 64-bit-wide (4×16) interface each way, that interface would need to operate at a minimum of 6.25Gbps per bit lane. With a 1.6GHz clock going in, that would likely round up to 6.4GT/s.
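That arithmetic can be checked in a few lines (the figures are the ones quoted in the post; the rounding to a multiple of the 1.6GHz reference clock is the assumption being illustrated):

```python
import math

# Bandwidth needed per serial lane for a 50 GB/s-each-way link
# over a 64-bit-wide (4 x 16) interface.
target_gb_s = 50                 # GB/s in each direction
bits_per_s = target_gb_s * 8     # -> 400 Gb/s
lanes = 4 * 16                   # 64 bit lanes
per_lane = bits_per_s / lanes    # minimum Gb/s per lane: 6.25

# With a 1.6 GHz clock going in, the practical rate rounds up
# to the next multiple of 1.6 GT/s:
rate = 1.6 * math.ceil(per_lane / 1.6)
print(per_lane, rate)
```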

And PCI Express is considered a serial interface even in its x16 configuration: the serial lanes operate independently, and the data then passes through small FIFO buffers to align bytes across lanes. That eliminates the need to delay-match PCB traces against the clock and to implement de-skew circuitry on the chip, as would be necessary on parallel interfaces.
 

Aurunemaru

Reputable
Jan 26, 2015
15
0
4,520
Perfect choice for mini-ITX/Steam machines, but I hope Intel gives us the option of a cheaper i5 without an iGPU for those who will use a discrete card anyway.
 


Assuming the 1.6GHz interface is then DDR'd, we are talking 3.2Gbps per bit line. To get 100GB/s we'd need 800Gbps, so possibly a 256-bit-wide interface. That's expensive to do, as demonstrated by our friendly GPU makers who use that technique. It's not impossible nor that difficult, just expensive.
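The width estimate works out as follows (figures as quoted; the round-up from 250 to a 256-bit bus is the usual power-of-two padding, not something the post states):

```python
# Width estimate for 100 GB/s from a 1.6 GHz double-data-rate interface.
clock_ghz = 1.6
per_line_gbps = clock_ghz * 2          # DDR: 3.2 Gb/s per bit line
target_gbps = 100 * 8                  # 100 GB/s -> 800 Gb/s
lines_needed = target_gbps / per_line_gbps
print(lines_needed)                    # 250 lines -> a 256-bit bus in practice
```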
 

nottheking

Distinguished
Jan 5, 2006
1,456
0
19,310
if it helps:
according to intel:
Well, I was referring to more specific, technical information than that, which I was already aware of. That, and as you might now be seeing... Some of the information seems to contradict other parts of it.

And PCI Express is considered a serial interface even in its x16 configuration - serial lanes operate independently,
Yeah, that's what's weird about the listed Intel bus: PCIe currently offers up to 16 lanes, but each lane is 1 bit wide. By contrast, the very definition of "4-wide, 16-bit" can't be serial, because while the four pathways may all operate independently, each path is 16 bits wide.

Assuming the 1.6GHz interface is then DDR'd, we are talking 3.2Gbps per bit line. To get 100GB/s we'd need 800Gbps, so possibly a 256-bit-wide interface. That's expensive to do, as demonstrated by our friendly GPU makers who use that technique. It's not impossible nor that difficult, just expensive.
Though it's actually a lot easier when you don't need to connect that interface to anything outside the same package; it's a step easier than what GPU makers do, since they still have to connect to other DRAM packages, even if on the same PCB (which in turn is vastly simpler than connecting to DIMMs clear across a separate board, as on quad-channel motherboards).
 

shamejais

Reputable
Jun 10, 2015
1
0
4,510
I've loved AMD's CPUs since 2002. But the truth is, I always suffered from overheating and noisy coolers. I'm sick of it! :(
I'd rather buy Intel's CPUs than listen to cooler noise :( Sad, but true.
 

InvalidError

Titan
Moderator

There is nothing weird about it. A serial bus simply means that each lane is independently clocked and serialized, requiring separate clock recovery and alignment with the other lanes of the same port, while a parallel bus has a shared clock or strobe for all related signals. Parallel interfaces are falling out of favor because maintaining exact timing relationships and margins between control, data and clock/strobe signals becomes increasingly difficult as data rates increase.
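The lane alignment described above can be illustrated with a toy model. This is only a sketch: the lane count, skew values, and byte striping are made up for illustration, and real PHYs do this in hardware with elastic FIFO buffers rather than software queues:

```python
from collections import deque

# Toy de-skew model: bytes are striped across 4 independently clocked
# lanes; each lane's data arrives after its own arbitrary delay (skew).
# Per-lane FIFOs absorb the skew so the receiver can rebuild the stream.
LANES = 4
payload = list(range(16))                       # bytes to send
striped = [payload[i::LANES] for i in range(LANES)]

skew = [2, 0, 3, 1]                             # hypothetical per-lane delays
fifos = [deque() for _ in range(LANES)]

received = []
cycle = 0
while len(received) < len(payload):
    for lane in range(LANES):
        idx = cycle - skew[lane]                # this lane's progress so far
        if 0 <= idx < len(striped[lane]):
            fifos[lane].append(striped[lane][idx])
    # Once every FIFO holds data, pop one byte per lane, in lane order:
    while all(fifos):
        for lane in range(LANES):
            received.append(fifos[lane].popleft())
    cycle += 1

print(received == payload)  # stream reassembled despite the skew
```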

A 4x16-wide interface can just as easily mean four ports of 16 serial lanes each.

BTW, the HMC2 specification, which Intel is a backer of, calls for four ports of up to 16 serial lanes each, operating at up to 30Gbps per lane. That sounds similar enough to suspect Intel may have designed its eDRAM to be interchangeable with HMC.
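For scale, the peak aggregate bandwidth implied by those quoted HMC2 figures (a raw lane-rate total, before any link or protocol overhead) comes out well above the eDRAM numbers discussed earlier:

```python
# Peak aggregate bandwidth of a full-width HMC2 configuration,
# using the figures quoted above: 4 ports x 16 lanes x 30 Gb/s.
ports = 4
lanes_per_port = 16
gbps_per_lane = 30
total_gbps = ports * lanes_per_port * gbps_per_lane   # raw lane rate
total_gb_s = total_gbps / 8                           # in GB/s
print(total_gbps, total_gb_s)
```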
 

Mohamed Ayach

Reputable
Jun 12, 2015
2
0
4,510
Hello, I'm in the middle of building my new workstation (VMs & emulation); the only constraint is DDR3 (I already have 16GB). I've been doing some research, and up until now the i7-4790K comes out on top, but since I don't intend to overclock, I've decided to go with the i7-4790. What do you think? I want to keep this machine for at least 5 years. Should I buy now or wait? Also, should I go with a six-core, or ditch the DDR3 I have and go with DDR4? Will socket 1150 be around for a while longer, or should I go with something else?
 

logainofhades

Titan
Moderator
If not overclocking, the Xeon E3-1231 v3 is the chip to get. The leaked preliminary benchmarks for Skylake don't show a huge performance increase, 5-15% I believe. By the time you really need a new CPU, Intel will already be 2-3 generations ahead of what you have.
 


I am waiting for official benchmarks for Skylake. I was originally going to stick with my 2500K until at least Skylake, but a job I had offered to pay for me to build a system, so it was a free upgrade; not easy to say no to.

However, depending on that performance, I will probably wait for Skylake's successor now that I have a 4690K.
 

logainofhades

Titan
Moderator
I really wanted to do a mini-ITX build, and will probably have to wait till Skylake, or later, to do it. That, or just get a HAF XB and put it on a shelf of my desk. My 932 Advanced is just too big for the room it will end up in.
 

drobillka

Reputable
Jun 13, 2015
2
0
4,510


So sad. Also, H97 supports it too; I've already checked motherboard compatibility on ASRock's website.


 

eriko

Distinguished
Mar 12, 2008
212
0
18,690
Are these numbers real? Not only does it match lower-midrange cards, it completely destroys AMD's APUs...
:shock:

OMG a $270 APU is faster than a $130 one. Who would have thought.

I think what you guys are missing is the (likely) intended market for these things: there are system builders out there who might not *want* to add a discrete GPU, due to space/noise constraints, but still want enough graphics power to do a fair amount. Think home-theater setups, etc.

And in that regard, the price difference and performance benefits sound worth it to me.
 

eriko

Distinguished
Mar 12, 2008
212
0
18,690
I am very much loving the development of on-die graphics. For the budget gamer, our dollar is going farther and farther with every new generation. And before someone blows up at me: yes, we would all love a trio of Titan Xs and a 5960X, but some of us have to think about college for our kids and still want to game.

Nothing 'budget' about Iris Pro, but I think I know what you mean.
 

darkcoder

Distinguished
Jan 15, 2010
3
0
18,510
I would like to see more than a minimum-settings comparison between integrated video solutions from both companies. Intel's integrated GPUs have an old reputation for not rendering complex scenes correctly and for not being up to par in the frame-rate department.
 

wewum

Reputable
Aug 30, 2014
422
0
4,810
We don't want AMD to go out of business. That would get rid of competition, which would result in Intel's and Nvidia's products increasing in price and decreasing in quality.
 

yankeeDDL

Distinguished
Feb 22, 2006
100
17
18,685
I've read quite a few reviews of Broadwell, and they all reach quite similar conclusions.
So I am starting to think that I'm the one who is wrong, but I can't wrap my head around it.
The Core i7-5775C costs $480. Yes, it is clearly faster than the A10-7850, which, however, costs $130. With the $350 difference you can buy an R9 290 and go to Applebee's for dinner with the whole family. It's not an apples-to-apples comparison, obviously, but let's just say the gap is huge.

I'm just hoping that the Zen APUs will feature HBM, like Fury, to see a similar jump in performance for AMD's platform.

But the bottom line remains: in my view, reviews like this are DISinformative and skew customers toward extremely pricey Intel chips, when at nearly 1/4 of the cost you get excellent AMD CPUs with an incredible price/perf ratio.
 

AngeloX

Reputable
Jun 2, 2015
2
0
4,510


No one is going to buy these particular Broadwells solely for the integrated GPU, and they're clearly not intended for the general mainstream public. This iGPU improvement will, however, very likely make it into the upcoming Skylake chips, so anyone buying a new Intel CPU will get a nice backup GPU. Some will probably even use it as a primary, since it can handle most games at 1080p at low settings while adding less noise and heat to the build.
 


Zen will not include HBM. There are too many issues to work out, and the costs would make it prohibitive. If the CPU is not good enough and can't compete, how would you justify the extra price added by HBM?
 

InvalidError

Titan
Moderator

HBM2 cost will likely end up in the same range as GDDR5 when it eventually reaches mass production. Since the main selling point of hypothetical APUs with a $30 HBM2 stack would be the IGP, the CPU would not be as critical beyond being fast enough to drive the IGP. I would not be too worried there as long as AMD achieves at least half of their claimed 40% IPC improvement with Zen.
 


The biggest issue I see is failure rate. Adding a component like DRAM onto a CPU adds to the possible failure rate. If it does fail, will the chip be able to turn it off and re-route to use board memory?

The other issue: how will it work with motherboard RAM? Will it be able to, or will it have other issues? I could see it working if both were the same standard, but if one is HBM/HMC and the other is DDR4, there will be a lot of fun issues to iron out.

Of course, I guess we shall have to see, but I still don't see AMD throwing HBM onto Zen yet. I still think we are two or three years out before either Intel or AMD attempts that.
 

InvalidError

Titan
Moderator

Adding components anywhere adds to the hypothetical failure rate. The general trend though is that tighter integration reduces average failure rate by eliminating intermediate manufacturing, packaging, shipping, assembly steps, external intermediate busses, etc. where the individual components are most vulnerable to ESD and mechanical damage.

If integration genuinely scares you, would you like to go back to the days of every subsystem being on separate chips and add-in boards? Those computers may have been far more serviceable, but from memory they were also far less reliable.

How often does memory randomly fail in the field under normal operating conditions? The most common cause of RAM damage is user error but that is not going to happen on an eDRAM chip sealed under the CPU/GPU heat spreader. The rest is mostly making sure no intermittent bits slip through QA.
 