Samsung Reveals 4nm Process Generation, Full Foundry Roadmap

Status
Not open for further replies.

AgentLozen

Distinguished
May 2, 2011
527
12
19,015
Intel has made me cynical about smaller process nodes over the last few years.

"February 3, 2018"
Intel Announces Next Generation Processors

In a press conference today, Intel announced that its next-generation "Alien Lake" processors will soon be available for purchase.

Intel has described Alien Lake as a dramatic improvement over last-generation Kaby Lake processors. An Intel spokesperson had this to say about the new architecture:

"Intel has indicated that Alien Lake will be built around a sub-1nm process and is at least 40 years more advanced than any of its competition. Stay tuned to Tom's Hardware for our full review of Alien Lake."

Then, a week later, the reviews show a +3% increase in IPC and a 0% increase in clock speed over Kaby Lake. However, it DOES use 10% less power. So there's that....

It's been this way since Ivy Bridge, and I'm now convinced that die shrinks offer no perceptible improvement over their larger counterparts.
 

InvalidError

Titan
Moderator

Die shrinks still enable chip designers to pack more transistors in the same power and area budget. Intel simply chose to apply most of those gains to increased IGP performance and smaller die sizes instead of increasing core count. Now that AMD has been reasonably successful with Ryzen, you can expect more aggressive movement from Intel over the next few years.
 


That is only valid for desktop processors. For GPUs, mobile systems, NAND, XPoint, and a lot of other stuff, these things are GREAT. Smartphones are advancing a lot every year, and we're not far from the moment a high-end smartphone equals a lowish-tier desktop and high-end ultrathin/ultralight laptops have no more heat issues. This will help enable that.

For desktop processors, on the other hand, die shrinks usually mean harder-to-cool processors that consume less energy, which balances out, and we get a refreshed last generation.
 

iPanda

Reputable
Mar 25, 2015
87
0
4,640
LOL, Agent, you are spot on about the meh-ness of the last few gens. Perhaps with the push for higher-pixel-count monitors, UHD media/streams, AR/VR, and Ryzen's high core counts... we will see Intel push out something more substantial this time around.
 


That's pretty much it: mobile is seeing the vast majority of the improvements from die shrinks. Honestly, what impresses me the most is that AMD was able to take this stinker: http://www.cpu-world.com/CPUs/Bulldozer/AMD-FX-Series%20FX-4100.html, refine it, improve it, shove in a GPU, take it down to 15 watts, and create this: http://www.cpu-world.com/CPUs/Bulldozer/AMD-FX-Series%20for%20Notebooks%20FX-9800P.html, which was then promptly ignored by the OEMs, even though it would have made a damned decent, relatively gaming-capable thin-and-light laptop.
 
@AGENTLOZEN, your point is valid in the desktop/HEDT segment, as Intel has been stagnating a lot since Ivy Bridge, but saying that process shrinks don't matter is just silly. It's not the die shrinks that matter here, it's what Intel chooses to do with them, i.e. make more money. Watch the HEDT lines from Intel and AMD get announced next week; that is what 14nm can do in desktops on a technological level. We are about to have 12-core Intel CPUs and 16-core AMD CPUs; without 14nm, you would never have that many strong cores running at decent frequencies.
 

bit_user

Titan
Ambassador

Agreed, although NAND actually tends to get slower as the cells get smaller. But it's good for improving GB/$.

And you forgot server CPUs, which have experienced a remarkable run-up in core count over the past decade. Their performance (across all cores) has been increasing substantially.

No, smartphones still have much worse single-thread performance than any Kaby Lake-based desktop or mobile processor. They can only win via core count, which helps in only a small but growing number of cases.

The reality of mobile performance is that it's hitting the same wall we've seen with x86 CPUs. Mobile had a great run, but it's now on roughly the same performance-per-watt curve (once you factor in the x86 vs. ARM architectural differences).
 

bit_user

Titan
Ambassador
BTW, don't forget AVX2. That came in Haswell and nearly doubled the throughput of vectorizable integer operations. We're about to see the same thing with AVX-512, although it'll be dribbled out over several generations, similar to what we saw with SSE and AVX.

Of course, it's not good for everything. And many SIMD-friendly workloads can run even faster on a GPU. So, its value is limited but noteworthy.
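For anyone wondering what "vectorizable integer operations" look like in practice, here's a minimal sketch using AVX2 intrinsics. The function name is mine, and it assumes a compiler with AVX2 enabled (e.g. gcc -mavx2) and an array length that's a multiple of 8, just to keep it short:

```c
#include <immintrin.h> /* AVX2 intrinsics */
#include <stddef.h>
#include <stdint.h>

/* Adds two arrays of 32-bit ints, 8 lanes per instruction.
 * n is assumed to be a multiple of 8 for brevity. */
void add_i32(const int32_t *a, const int32_t *b, int32_t *out, size_t n)
{
    for (size_t i = 0; i < n; i += 8) {
        __m256i va = _mm256_loadu_si256((const __m256i *)(a + i));
        __m256i vb = _mm256_loadu_si256((const __m256i *)(b + i));
        _mm256_storeu_si256((__m256i *)(out + i), _mm256_add_epi32(va, vb));
    }
}
```

With AVX-512 the same loop would process 16 lanes per instruction (_mm512_add_epi32 instead of _mm256_add_epi32), which is where the doubling comes from, at least for code that vectorizes cleanly.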
 

CatalyticDragon

Honorable
Jun 30, 2015
19
5
10,515
"Samsung Reveals 4nm Process Generation"

That headline insinuates they have a 4nm process. What they have is a roadmap. I don't need to tell you anybody can write one of those.
 

bit_user

Titan
Ambassador

They seem to have a fairly specific idea of how they're going to get there. I doubt you could say what it'd take to build working chips at 4 nm.

And it's news that they announced it. The sad part is no dates. I guess they'd be pretty soft dates, though. Almost guaranteed to slip.
 

CatalyticDragon

Honorable
Jun 30, 2015
19
5
10,515


I would actually know what it takes, which is why I know a 4nm process does not yet exist and won't for some time. In fact, nobody knows exactly how to build one. IBM, along with partners GlobalFoundries and Samsung, was showing off some 7nm test wafers back in 2015, and Intel is only just getting ready to mass-produce 10nm wafers later this year.

That is the leading edge, and as of right this second no commercial 5nm process exists; none is expected until the 2020-2023 time frame.

So to even suggest 4nm is running, ready, or close is an outright misrepresentation of the truth. If you are wondering why there is no date yet, it's because nobody knows if they can even make these chips commercially viable. It wasn't that long ago that 5nm was considered the end of the road, where quantum tunneling would prevent any further progress. The only potentially workable transistor type was the lateral nanowire FET, which is expensive and difficult. If we get lucky and can extend FinFETs down to 5nm, things will be easier and quicker to market, but we don't know if that is possible. Then there are still challenges with extreme ultraviolet lithography.

So yeah, Samsung has whacked 4nm on its roadmap knowing the technology does not yet exist and hoping one day it will. Will it happen? Yeah, most probably. But you still don't throw out a headline like this.

This is a more honest headline, because it conveys the aspirational nature of their press release: "Samsung Wants to Lead the Foundry Business to 4nm and Beyond"

- https://www.extremetech.com/computing/249791-samsung-wants-lead-foundry-business-4nm-beyond
 

bit_user

Titan
Ambassador

Okay, so you obviously understand that research into later nodes overlaps with development and commercialization of nearer-term ones.

Who said that? I took the announcement to basically mean that, based on their research, this is what they think they can achieve and how they would do it.

Yes, I haven't been living under a rock. I saw how hard it was for the mighty Intel to scale up 14nm, as Broadwell slipped into Skylake's release window. And now they're backsliding on the node for desktop Cannonlake.


IMO, that's why the announcement is interesting. They're publicly stating what they think is achievable. Everybody knows it's speculative, at this stage.

Dude, Samsung clearly held some kind of press conference. If they make these bold pronouncements, it's totally legit for the press to cover it. They might add the appropriate caveats, but it's unfair to criticize Tom's for reporting on it.


Well, I think we can agree that the word "Reveals" goes too far. The headline should have said they revealed their roadmap, perhaps mentioning that it extends to 4nm.
 

InvalidError

Titan
Moderator

Nobody said they had working 4nm stuff. You don't need chips anywhere near production to put a node on a roadmap telling investors where you want the company to go - Cannonlake has been on Intel roadmaps for something like five years now, and it got pushed back by three years due to hold-ups in 14nm and 10nm process maturation. Samsung was wise not to include dates here. They'll get to each step of their roadmap whenever they can.

Given the number of complications everyone in the business has been having below 20nm, I would expect lots of calendar slips along the way.
 


This whole discussion just turned into a semantics argument.
Catalytic was just complaining about the title's wording, and after that it all went nowhere.

I agree that the title can be misinterpreted if you read it at a glance. But if you read it more carefully, no part of it is actually incorrect.

It's only a little bit misleading, you could say, just like any well-done clickbait. One objective of a headline in the press is to grab your attention without lying, and it achieved that spectacularly.

(Another objective is usually to give a brief abstract of the contents, and it might be a bit misdirected in that respect.)
 

bit_user

Titan
Ambassador

No, it sure sounded like CatalyticDragon criticized the very idea of reporting their roadmap:
What they have is a roadmap. I don't need to tell you anybody can write one of those.

I think we all agree that the headline was poorly worded. I think it was just sloppiness - not necessarily intentional clickbait. If we start seeing more misleading headlines on Tom's, then we should probably start complaining to the editors.
 

Kewlx25

Distinguished


It may seem "meh" from a home-user standpoint, where you don't care about your electric bill, but datacenters love these new chips. Even when spending several thousand dollars on a server, it has become cost-effective to replace servers every two years because of the energy savings.

Think of all of the web services made possible by these gains in efficiency.
 

ammaross

Distinguished
Jan 12, 2011
269
0
18,790


Could have been "Groom Lake," but I doubt as many people would get the reference....
 

InvalidError

Titan
Moderator

If you build a server that consumes 500W, it will cost you ~$500/year to power and cool but way more than $2000 to build. If you manage to double performance per watt, it'll take you about eight years to recover your investment. Power savings from improved efficiency come nowhere close to being cost-effective on their own.
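To make that arithmetic explicit, here's the back-of-the-envelope version. The electricity rate is my assumption (roughly $0.11/kWh), and I'm assuming the server runs 24/7:

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative numbers only: 500 W server, ~$0.11/kWh assumed. */
    double watts = 500.0;
    double rate = 0.11;                             /* $/kWh, assumed */
    double kwh_per_year = watts * 8760.0 / 1000.0;  /* 4,380 kWh/year */
    double cost_per_year = kwh_per_year * rate;     /* ~$480/year */
    /* Doubling performance per watt halves the power bill for the
     * same work, so the yearly saving is half the yearly cost. */
    double yearly_saving = cost_per_year / 2.0;
    printf("yearly power cost: $%.0f\n", cost_per_year);
    printf("payback on a $2000 server: %.1f years\n",
           2000.0 / yearly_saving);
    return 0;
}
```

That prints roughly $480/year and an 8.3-year payback, which is where the "about eight years" figure comes from.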
 

Kewlx25

Distinguished


But you're not thinking about opportunity costs. If you cannot add any more servers, then a 10% increase in performance means a 10% increase in revenue. But a 20% reduction in power means you can have 20% more cores, giving you 20% more revenue.

Very simplified, but that's how my friends' datacenters work. They're not a Google, but they have a 10-30 megawatt power budget, depending on the datacenter. My cousin has several recent $60k AMD servers sitting unpowered because they only purchased them to decide which hardware to buy. It's not that the servers aren't fast, it's that they're not as power-efficient as the Intel servers at the same price, and they'd rather let the servers rot than use them.
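A toy model of that opportunity-cost argument, with made-up numbers (the power budget and the per-core wattage are purely illustrative):

```c
#include <stdio.h>

int main(void)
{
    /* All numbers are made up for illustration. */
    double budget_watts = 10e6;       /* assumed 10 MW datacenter budget */
    double old_watts_per_core = 5.0;  /* hypothetical */
    double new_watts_per_core = 4.0;  /* 20% less power per core */
    double old_cores = budget_watts / old_watts_per_core;
    double new_cores = budget_watts / new_watts_per_core;
    /* Note: at a fixed power budget, a 20% power cut actually buys
     * 25% more cores (1 / 0.8 = 1.25), a bit better than 20%. */
    printf("cores: %.0f -> %.0f (+%.0f%%)\n", old_cores, new_cores,
           (new_cores / old_cores - 1.0) * 100.0);
    return 0;
}
```

With revenue roughly proportional to core count at a fixed power budget, that's where the extra revenue comes from.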
 

bit_user

Titan
Ambassador
I'd never heard of a 2-year upgrade cycle. According to this, 3 years is/was more common. If it's gone in any direction, I'd guess it has increased a bit since then.

https://www.nextplatform.com/2015/11/05/three-is-the-magic-number-for-hardware-cycles/


That's some pretty cheap electricity! At $0.11/kWh, it's cheaper than my residential rate. And that's not even including cooling. Where did you get those numbers?

BTW, they do need to add capacity, over time. And they have finite space in which to do it. Just another factor to consider, but the above article goes into considerably more depth (if you care about that sort of thing).
 


I pay $0.09/kWh; that's in Texas. Albeit I signed a 3-year contract when rates had dipped low.
 