Introducing Intel's 14nm Node and the Broadwell Processor


ceeblueyonder

Distinguished
Nov 6, 2011
69
0
18,630
This is why AMD is focusing on APUs: Intel is focusing on mobile platforms too, with all the die shrinks. The FX series and the AM3+ chipset are also stagnant because AMD knows the performance delta from when they introduced the FX-8350 to now isn't much. I mean, an FX-8350 is a great gaming CPU and even a video editing CPU. It's a good multi-tasker because it has 8 cores.

The only reason I think people will shy away and keep buying Intel for their gaming and video editing needs is that magazine editors think old is dead, when a computer that is two years old is not the same as, say, an old bridge. An old bridge needs repair as it ages; a CPU will not wither and die with the passage of time. An FX-8350 and an AM3+ mobo still have SATA 6Gb/s and adequate PCIe 2.0 lanes, since there isn't a GPU today that will even saturate a PCIe 2.0 slot. I mean, an AMD 990FX mobo has more SATA ports and PCIe lanes than Intel's boards. Intel just wows people today with "Thunderbolt," "SATA Express," "M.2 SATA," and other things that seem superfluous to me. Am I alone in this?
 

Menigmand

Honorable
Jul 27, 2012
128
0
10,680
I can't believe Tom's Hardware doesn't know how to do percentages, but here we go:

"In fact, the Broadwell-Y die has about 63% less area than the Haswell-Y die." (page 1)

"The Broadwell-Y chip is 82mm2, scaled down about 63% compared to Haswell-Y's 130mm2 die size." (page 2)

No. It's scaled down 37%. It has 37% less area. So, the new chip is 63% of the original size.
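A quick sanity check on the arithmetic, using the die sizes quoted from the article:

```python
# Die areas in mm², as quoted in the article.
haswell_y = 130.0
broadwell_y = 82.0

ratio = broadwell_y / haswell_y  # new area as a fraction of the old
shrink = 1.0 - ratio             # fractional reduction in area

print(f"Broadwell-Y is {ratio:.0%} of Haswell-Y's area")  # -> 63%
print(f"That is a {shrink:.0%} reduction")                # -> 37%
```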
 

blppt

Distinguished
Jun 6, 2008
579
104
19,160
"AMD patented x64. x64 is better than x86 which intel uses. correct me if i'm wrong."

While AMD64 does have certain slight feature-set advantages, Intel has its own implementation of it called EM64T. Since AMD has to license x86 from Intel, any fruit from that tree also has to be available to Intel, and thus AMD64 (aka x86-64) is cross-licensed. Intel has shipped chips with EM64T since the later Prescott P4s (with a slight gap for the very first Core Solo/Duo chips, which were 32-bit only).
 

blppt

Distinguished
Jun 6, 2008
579
104
19,160
Also, for what it's worth, the 2600K Sandy Bridge I have trades blows with my 9590 box, which, considering the gap in TDP (95W vs 220W), is just sad. And in games that are heavily CPU-dependent and don't use all 8 cores of the 9590, the 2600K stomps all over it (see: Skyrim). Hopefully, with the advent of Mantle, and with a lot of future games being designed with the 8-core consoles in mind, this will be reversed. But right now it's a very narrow range of people who would choose these power-hungry monsters over a (now-ancient) 2600K, never mind the even more power-efficient Ivy and Haswell.
 

LookItsRain

Distinguished
A lot of people don't seem to realize that the regular computer/laptop market is much larger than gaming; these optimizations target what that market actually needs. CPUs have been fast enough for gaming for the past 3-4 years.
 

childofthekorn

Honorable
Jan 31, 2013
359
0
10,780

Even if you compare Sandy Bridge (32nm) Intel CPUs with AMD's FX83xx (28nm) which theoretically gives the advantage to AMD, Intel's older chips still win most benchmarks. Intel being one process node ahead has very little to do with their performance lead; their architecture itself is just that far ahead.

The FX-83xx series is 32nm, BTW/FYI. FX-8350 vs. i7-2600K is probably a fair fight; I bet they'd trade blows, or the FX-8350 is not far behind if it is behind at all. And AMD has a software/platform/optimization disadvantage, meaning programs are not optimized for AMD chips, since most PCs have Intel chips inside them.

The ALU and FPU units used by the FX series are also very low quality. The rumor mill has it that AMD is custom-making the ALU/FPU units for Excavator. Even for software that's going to be made to utilize the FX series, it's still a lower-quality processor compared to Intel, hence the price point.
 

army_ant7

Distinguished
May 31, 2009
629
0
18,980
Here are some corrections that I think need to be made. Just doing my part as a community member. :)

The Broadwell-Y chip is 82mm2, scaled down about 63% compared to Haswell-Y's 130mm2 die size.
I think it should say "...scaled down to about 63% of Haswell-Y's 130mm² die size." The reason being that the original statement seems to imply that Broadwell-Y shrunk by 63%, which is a significantly larger shrink than the roughly 37% it really shrunk by.
(old − new) / old × 100% = shrink %, e.g. (130 − 82) / 130 × 100% ≈ 37%

...Haswell-Y's integrated graphics has a maximum of 20 AUs...
I could be mistaken or behind the times, but shouldn't that be "Execution Units," or "EUs"?
 

gsxrme

Distinguished
Mar 31, 2009
253
0
18,780
Bah! My 2600K @ 5.1GHz @ 1.5V (a real water setup) will just have to stay. It's a shame, too; I can't push my memory bus past 2200MHz either. I hate you, Intel and AMD! I want to build something!
 

blppt

Distinguished
Jun 6, 2008
579
104
19,160


Geez, my 9590 needs north of that to hit 5GHz (all cores, not turbo). And at 5GHz it's about equal to a 2600K @ 3.8GHz (all cores, not turbo).
 

balister

Distinguished
Sep 6, 2006
403
0
18,790


It was never a Law as such, merely an observation that seemed to be conveniently accurate many years ago, but it's often quoted as being either a performance doubling or a density doubling every year and a half (not 24 months). And for the other poster, Wikipedia is not the word of god. :D

Either way, my point still stands - neither version has been true for a long time now.

Ian.

I'm pretty sure you're wrong on the transistor side of that argument, as they have continued to double the number of transistors every 2 years, which still holds with what Moore said in his original paper.

And in this case, Wikipedia is correct, as the information there is pulled from Moore's original work.
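As a back-of-the-envelope illustration of what a steady 2-year doubling implies (the starting count is a made-up round number, not real transistor data):

```python
# Moore's observation: transistor count doubles roughly every 2 years.
start_transistors = 1_000_000   # assumed round figure, purely illustrative
doubling_period_years = 2

for years in range(0, 11, 2):
    count = start_transistors * 2 ** (years / doubling_period_years)
    print(f"after {years:2d} years: ~{count:,.0f} transistors")
```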
 

ceeblueyonder

Distinguished
Nov 6, 2011
69
0
18,630

Even if you compare Sandy Bridge (32nm) Intel CPUs with AMD's FX83xx (28nm) which theoretically gives the advantage to AMD, Intel's older chips still win most benchmarks. Intel being one process node ahead has very little to do with their performance lead; their architecture itself is just that far ahead.

The FX-83xx series is 32nm, BTW/FYI. FX-8350 vs. i7-2600K is probably a fair fight; I bet they'd trade blows, or the FX-8350 is not far behind if it is behind at all. And AMD has a software/platform/optimization disadvantage, meaning programs are not optimized for AMD chips, since most PCs have Intel chips inside them.

The ALU and FPU units used by the FX series are also very low quality. The rumor mill has it that AMD is custom-making the ALU/FPU units for Excavator. Even for software that's going to be made to utilize the FX series, it's still a lower-quality processor compared to Intel, hence the price point.

How do you know they're low quality? What do you mean by it? Did you take the CPU apart and look at the actual silicon under a microscope to examine its "quality"? I don't even know what you're talking about. But if I may guess, maybe you mean slower? If so, you are right: AMD needs to work on their modules. Even though AMD has 8 real logical and physical cores, each module's two cores share resources like the FPU and L2 cache, among other things, which hampers performance. At least they're not "hyperthreading," or what I call "fake" cores. Lol, just kidding.

Anyway, the modules and cores sharing resources is probably how AMD is able to deliver more cores for the price. But it doesn't mean the actual silicon making up those units is low quality. If it were, those chips would be rejected or not used, because the chip itself would not function; maybe your computer would not even start because of it.

Also, if you want to talk about quality, isn't Intel the one skimping on it? There's the thing about solder TIM not being used on their CPUs now to save money. Also, I have built an FX-8320 PC and an i7-3770K PC, and the box the AMD FX chip comes in is metal and seems higher quality than the cardboard box the i7-3770K came in. It seems like the CPU could easily get damaged without a metal box. But I digress.

Also, the CPU cooler that comes with the Intel chip seems "cheaper" than the one that comes with the AMD chip.
 

balister

Distinguished
Sep 6, 2006
403
0
18,790
Bah! My 2600K @ 5.1GHz @ 1.5V (a real water setup) will just have to stay. It's a shame, too; I can't push my memory bus past 2200MHz either. I hate you, Intel and AMD! I want to build something!

So build it...

...for someone else
 

leeb2013

Honorable
Hmm, not much is moving performance-wise, just power consumption. I wonder how much the next Tock will improve performance.
It's a moot point anyway, when the latest games don't even push an i5 or i7.
 

none12345

Distinguished
Apr 27, 2013
431
2
18,785
"*reads about people complaining not having a reason to upgrade and spend more money* what am I reading "

As a gamer, I'm tired of games stagnating. The long console cycle deserves a lot of the blame, but the stagnation of hardware is just as guilty. Nothing has really changed in the gaming world in the last 7 or so years. Graphics haven't really improved, nor has AI, or anything else. Sure, there have been some small improvements, but it's all pretty boring.

If processors were still doubling in performance every 18 months or so, the games of today would make the current stuff look like old-school Nintendo games.

I've been waiting for another spurt of innovation since 2009. Yet my 6-year-old system still plays every new game just fine. So why upgrade? I wish Intel/AMD would give me a reason to.

I wish someone would blow me away again with a hardware advancement.
 

bin1127

Distinguished
Dec 5, 2008
736
0
18,980
It's good to see Intel working so hard on the thermal front. Gaming is great, but you can't help feeling guilty about Mother Earth every time you fire up your PC. Meanwhile, AMD "innovates" with a 220W processor. XD

Intel always does great on TDP/performance. I worry more about graphics cards. I wish people would scale down their graphics settings after every new game, once the smooth, shiny tree leaves with swaying shadows in the background lose their novelty.
 

dovah-chan

Honorable
I would say that after reading a few of these comments, I do feel for some people a bit now. That excitement over new tech being dished out, and all the rumors and facts surrounding it, is what brings us to debate all this stuff in the first place. Because we like to be impressed by technology.

I was just stating my viewpoint as a consumer: I'm pleased my system will have nice longevity. I can focus on buying other things and not have to worry about it as much.

Still, it's kind of sad that a game like Crysis will never be released again. Back in 2007 it was unbelievable just to watch a video of it, let alone run it.

While I already see a good bit of improvement in this next generation of titles in terms of graphics and physics, none have come out that really wow people like Crysis did.

Sure, Crysis 3 is a great-looking game, but it doesn't have the same notoriety as the first one. As hardware improvement has slowed down greatly, so have graphics. Although, really, I feel like in terms of graphics we should be upping the polygon budget more.

Until I can't see any triangular bumps on Agent 47's head, I think we still have room for improvement.
 

dovah-chan

Honorable
Oh, and I thought Experts were supposed to have an ad-free experience on Tom's. Can you please extend that to the mobile version of the site as well? It gets really intrusive. I really don't want to have to go write and compile my own ad blocker for Firefox Mobile.
 

Shneiky

Distinguished
Some people do need to realize that gaming is only part of the world, and not even the largest piece of the cake. Our world keeps spinning because we have the power of computers. Even phones have turned into mini computers.

If gamers have no reason to upgrade, well, blame the consoles and their manufacturers. All of the "current generation" consoles arrived already outdated. Remember the "hack" or "mod" to unlock all of Watch Dogs' potential?

The reason why a lot of us, and specifically me, are "upset" by the current CPU development is that it dictates what the average level of performance will be. Intel launches Sandy Bridge and then Sandy Bridge-E and EP, Ivy Bridge and then Ivy Bridge-E and EP, Haswell, etc. The moment the mainstream line launches, you know what to expect. Anyone here who has renders out of Mental Ray (I only have Mental Ray at home) that run for 28 hours for a single frame will get where I'm coming from. And extra cores never give a linear improvement: the more cores you add, the longer a single pixel takes to render. Pixar's RenderMan is the best example - 4 cores render faster than 24. But even in the best-threaded render engine, if you have 2 render nodes with 2 i7s running at 3GHz, they will render 30% faster than a single 8-core Xeon at 3GHz, and you can get those 2 render nodes for less than the price of the Xeon by itself. And if you get more performance out of your cheap render nodes, all the better.

If Broadwell's performance is 5% on top of Haswell's, then Broadwell-E and EP will be 5% on top of Haswell-E and EP. Everything is linked to the mainstream part: Intel launches a mainstream part and adjusts the number of cores and the TDP for the enthusiast/professional line. The moment this Broadwell article came out, we already knew what is in store for the next 3 years.

And if some people think "get a 12-core Xeon or something," that is not always possible or smart. A lot of software scales badly past 8 cores/threads (see the sketch below). Even half the functions you use in a workflow are single-threaded (diffuse-to-object baking, modeling tools, deformers, conversions, etc.). A lot of animation studios use high-clocked, low-core-count machines for their animators for exactly these reasons. Just piling on screaming "cores" does not work. My home Sandy at 4.5GHz renders only 10-20% slower than the Xeon 2650v2 at work.
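To put rough numbers on why scaling falls off, here is Amdahl's law sketched in Python. The parallel fraction is an assumed, illustrative value, not measured render data:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
# where p is the fraction of the work that can run in parallel.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.9  # assumed: 90% of the job parallelizes, 10% stays serial
for cores in (1, 2, 4, 8, 16, 24):
    print(f"{cores:2d} cores -> {amdahl_speedup(p, cores):.2f}x speedup")
# Tripling from 8 to 24 cores only moves ~4.7x to ~7.3x,
# because the serial 10% increasingly dominates.
```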

The software is too far behind. If in the late 1990s and the beginning of the 2000s the hardware was holding back the software, then ever since the late 2000s, and specifically since 2010, the software has been lagging. V-Ray 3.0, which is available for 3ds Max and will soon be out for Maya, was rebuilt from the ground up to use AVX, and when I was at a presentation of V-Ray 3.0, there was a 30% improvement in render speed compared to the older version. And this is happening in 2014 and 2015. And AVX is? Technology from 2008, first implemented in 2010/2011.

And now imagine all those i7s/i5s, or their Xeon versions (mainstream-socket Xeons perform exactly the same as their Core counterparts), arriving in cheaper workstations or in Macs. This is the main hardware of the working force. Not only Adobe products, 3D packages, and SDKs; a lot of software developers and scientists sit on this hardware too. And all the indie studios.

If you have something to blame for there being no reason to upgrade, blame the software. Don't blame Intel; blame the console manufacturers for their outdated consoles. Don't blame AMD; blame all those lazy programmers who either can't, won't, or don't have the resources to program for a multi-threaded environment and are stuck in functions that use 1-2 threads. And also blame the world: if you all bought fewer phones and tablets and more high-performance PCs, the market's interest and innovation would have been different. It is the mainstream that defines the performance increase for enthusiasts. Cheers.
 

InvalidError

Titan
Moderator

If you look at the pattern over the last couple of chips, improvements are around 5-7% regardless of Tick or Tock, so I would expect up to 7% IPC improvement from Skylake, since that is what we got from IB to Haswell.
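For perspective, those per-generation gains compound like this (6% is an assumed midpoint of the 5-7% range, not a benchmark result):

```python
# Compound a steady per-generation IPC gain over several generations.
gain_per_gen = 1.06  # assumed midpoint of the 5-7% range above
generations = 4      # e.g. Sandy -> Ivy -> Haswell -> Broadwell -> Skylake

cumulative = gain_per_gen ** generations
print(f"~{(cumulative - 1) * 100:.0f}% cumulative IPC gain")  # -> ~26%
```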
 

edlivian

Distinguished
Dec 10, 2009
96
2
18,635
I don't give a hoot about power savings anymore. Intel has to start finding a way to gain 25% performance per cycle, or it will never make sense to upgrade from Sandy and Ivy Bridge i7s.
 

InvalidError

Titan
Moderator

Gaining 25% performance per cycle is simple: make the core wider and add two extra threads per core to make sure those extra execution resources have work to do. Alternatively, they could add cores.

Either way, the extra throughput per clock from larger-scale thread-level parallelism is pointless without massively threaded code to actually use it.
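A toy illustration of that last point in Python; the workload and chunk sizes are made up purely to show the shape of the problem:

```python
from multiprocessing import Pool

def busy_work(n: int) -> int:
    # Deliberately CPU-bound toy task: sum of squares.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [2_000_000] * 8  # 8 independent pieces of work (assumed sizes)

    # Serial: one core does everything, extra cores sit idle.
    serial = [busy_work(n) for n in chunks]

    # Parallel: extra cores only help because the work is split
    # into independent chunks -- i.e. the code is actually threaded.
    with Pool(processes=4) as pool:
        parallel = pool.map(busy_work, chunks)

    assert serial == parallel
```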
 