News: Forget 10nm? Intel May Change CPU Naming Scheme

Page 2 of the Tom's Hardware community discussion.
We are talking about chip/die manufacturing and semiconductor engineering. So PCB design is largely irrelevant.

Well, strictly speaking, AMD has no manufacturing division, so they cannot possibly take TSMC's approach. They simply use TSMC's manufacturing facilities and the design tools available for them. Of course, the big manufacturing decision AMD made with regard to yields was to use multiple dies/chiplets rather than follow the monolithic path that Intel had taken.
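As a rough illustration of the chiplet yield argument above, here is a back-of-the-envelope sketch using the simple Poisson defect-yield model (yield = exp(-defect density × die area)). The defect density and die areas below are illustrative assumptions, not actual foundry figures.

```python
import math

# Simple Poisson yield model: probability a die has zero defects.
defect_density = 0.1          # defects per cm^2 (assumed)
monolithic_area = 6.0         # one big 600 mm^2 die, in cm^2 (assumed)
chiplet_area = 0.8            # one 80 mm^2 chiplet, in cm^2 (assumed)

monolithic_yield = math.exp(-defect_density * monolithic_area)
chiplet_yield = math.exp(-defect_density * chiplet_area)

print(f"monolithic die yield: {monolithic_yield:.0%}")   # ~55%
print(f"per-chiplet yield:    {chiplet_yield:.0%}")      # ~92%
```

Under these assumed numbers, a big monolithic die loses nearly half its candidates to defects while small chiplets mostly survive, which is the economic logic behind AMD's approach.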

That's the beauty of higher IPC and higher core counts: you can use a lower frequency and still achieve higher performance, all while consuming less power. That of course applies to heavily threaded applications, which is what server workloads usually are. It should be noted here that IPC is a per-workload metric. Up to and including the 3000 series, AMD's IPC improvements were what I would call cheap: make the core wider so you increase throughput per core, much as a larger core count would. That benefited workloads like tile-based rendering, which would also scale with more cores, but did little to help workloads that wouldn't. The 5000 series is a different beast. That being said, some of the architectural elements of Zen 3, such as the unified L3 cache, should have been there since Zen 2 at least, if not since the original Zen. It's almost as if AMD purposefully introduced a knowingly flawed design with the original Zen only to correct one flaw at a time and appear to be making massive IPC improvements each time. Couple that with the fact that they have already moved to bigger caches, and AMD is slowly running out of easy core-design tricks. They have one more with DDR5, and then I am wondering how good they will be at IPC improvements. I hope they are.
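The "wider and slower" trade-off in the post above can be sketched with simple arithmetic: throughput scales with IPC × frequency × cores, while dynamic power scales roughly with f·V², and since voltage tends to track frequency, per-core power grows roughly with f³. All numbers here are illustrative assumptions, not measured chip data.

```python
# Back-of-the-envelope model of the IPC/frequency/core-count trade-off.
def relative_perf(ipc, freq_ghz, cores):
    # Throughput for an embarrassingly parallel (server-like) workload.
    return ipc * freq_ghz * cores

def relative_power(freq_ghz, cores, base_freq=4.0):
    # Crude f^3 dynamic-power scaling per core, normalized to base_freq.
    return cores * (freq_ghz / base_freq) ** 3

# 8 fast, narrow cores vs. 16 slower, higher-IPC cores (assumed figures):
perf_fast = relative_perf(ipc=1.0, freq_ghz=4.0, cores=8)    # 32.0
perf_wide = relative_perf(ipc=1.2, freq_ghz=3.0, cores=16)   # 57.6
power_fast = relative_power(4.0, 8)                          # 8.0
power_wide = relative_power(3.0, 16)                         # 6.75

print(f"wide/slow: {perf_wide / perf_fast:.1f}x perf at "
      f"{power_wide / power_fast:.2f}x power")
```

Under these assumptions the wider, slower configuration delivers more throughput for less power, exactly as the post argues, but only for workloads that actually scale across all those cores.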

I don't think anyone doubts the semiconductor engineering advances made by TSMC and Samsung. What is controversial is the naming of the nodes and how nodes from different manufacturers can be compared with one another. And since Intel compares unfavorably under the current state of affairs, in the minds of the uninitiated at least, and their position appears worse than it actually is, they react. The fact of the matter is that Intel's 10nm+/10nm SuperFin is comparable to TSMC's 7nm/7nm+, so clearly nm naming is no longer a scientific or technical metric but a purely marketing one.
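One metric often used to back up this comparison is transistor density. The figures below are commonly cited third-party estimates (in millions of transistors per mm², for high-density logic cells), not official foundry specifications, so treat them as ballpark values only.

```python
# Approximate, commonly cited density estimates (MTr/mm^2, assumed ballpark).
density_mtr_mm2 = {
    "Intel 14nm": 37.5,
    "Intel 10nm": 100.8,
    "TSMC N7":    91.2,
    "TSMC N5":    171.3,
}

# By this metric Intel's "10nm" lands slightly above TSMC's "7nm", which is
# why the nanometer labels alone are a poor basis for comparison.
for node, d in sorted(density_mtr_mm2.items(), key=lambda kv: kv[1]):
    print(f"{node:>10}: ~{d:.1f} MTr/mm^2")
```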

As for the future, Intel's 7nm slots between TSMC's 5nm and 3nm nodes. TSMC will be using GAA for their 2nm node, Samsung for their 3nm one, and Intel for their 5nm. In any case, at least until 2025 Intel will be trailing TSMC. AMD is likely to move to TSMC's 5nm by the start of next year and stay there until at least late 2023/early 2024. Meanwhile, Intel will be using 10nm until mid-2023, so again they will be a node behind for the next two years. In mid-2023 Intel will move to their 7nm, at least for mobile, surpassing AMD as with Ice Lake vs. mobile Ryzen 3000 (Zen+). But then AMD moves to TSMC's 3nm, though this time around Intel will also utilise TSMC's 3nm for some logic tiles. It will be interesting to see how all of this pans out, for sure.

We are talking about chip/die manufacturing and semiconductor engineering. So PCB design is largely irrelevant.

Lol, do you actually realize that PCBs are not just confined to the motherboard? The multilayered board that the processor die is placed on is an ultra-thin-layer PCB made in the same way, just for a different package. The materials can differ slightly (as in the case of Intel with its fusion-generator design (joke!)). So I'm not sure you really understand the world of ASIC design. But oh well, another one loves to try to school me on matters they have no real knowledge about. Hence I was referencing my points to factual matters of ASIC-level design, talking about the real problems of the lithographic process that Intel tried and largely failed at (in yield success).

Well, strictly speaking, AMD has no manufacturing division, so they cannot possibly take TSMC's approach. They simply use TSMC's manufacturing facilities and the design tools available for them. Of course, the big manufacturing decision AMD made with regard to yields was to use multiple dies/chiplets rather than follow the monolithic path that Intel had taken.

AMD used to own GlobalFoundries, which followed the same ethos that TSMC uses. AMD had to sell it because of the nefarious practices of Intel and the cash-flow problems stemming from them. AMD from the K6 onwards followed its own direction and still kept up its ingenuity, despite Intel getting the game developers and Microsoft to be tardy with the use of 3DNow!...
Your use of "monolithic path" was very grand and also pointless, as even when AMD owned GlobalFoundries, they came at this with the same ethos against Intel. And Samsung has an even more monolithic approach when it comes to finding success doing the same thing, with the same tri-gate die approach for its 5 and 3nm products. Meaning that Intel's metrics are directly relevant to Samsung, and Samsung is streets ahead. TSMC is moving theirs over to tri-gate at 3nm, and Intel will have to cry, cry, cry, and buy the patents from Samsung to catch up.

It is for this purpose that Intel wishes to change the subject with a dynamic naming system. They simply have no argument against Samsung on size, and soon the same will be true for TSMC, which will use the layered tri-gate approach as well.

My point was: the size issue that Intel touts comes down to two things.

1. Intel has pressed its transistors too close to each other to reduce die size, making the chips furnaces in the process. They will not last long in servers or data-processing centres, so Intel will lose business.

2. The competitors are about to use their own tri-gate transistors, meaning that Intel will lose that argument too. It must therefore change the naming game now.

Arguably, AMD changed the performance metric in the early 2000s to equivalent-performance ratings, because back then, as is the case now, Intel was just cranking up frequency rather than IPC for performance.

That's the beauty of higher IPC and higher core counts: you can use a lower frequency and still achieve higher performance, all while consuming less power. That of course applies to heavily threaded applications, which is what server workloads usually are. It should be noted here that IPC is a per-workload metric. Up to and including the 3000 series, AMD's IPC improvements were what I would call cheap: make the core wider so you increase throughput per core, much as a larger core count would. That benefited workloads like tile-based rendering, which would also scale with more cores, but did little to help workloads that wouldn't. The 5000 series is a different beast. That being said, some of the architectural elements of Zen 3, such as the unified L3 cache, should have been there since Zen 2 at least, if not since the original Zen. It's almost as if AMD purposefully introduced a knowingly flawed design with the original Zen only to correct one flaw at a time and appear to be making massive IPC improvements each time. Couple that with the fact that they have already moved to bigger caches, and AMD is slowly running out of easy core-design tricks. They have one more with DDR5, and then I am wondering how good they will be at IPC improvements. I hope they are.

Possibly true, but where is the evidence for this? What leads you to this belief? Without it, claims like this are fantastical, though you could, after all, be right.

I don't think anyone doubts the semiconductor engineering advances made by TSMC and Samsung. What is controversial is the naming of the nodes and how nodes from different manufacturers can be compared with one another. And since Intel compares unfavorably under the current state of affairs, in the minds of the uninitiated at least, and their position appears worse than it actually is, they react. The fact of the matter is that Intel's 10nm+/10nm SuperFin is comparable to TSMC's 7nm/7nm+, so clearly nm naming is no longer a scientific or technical metric but a purely marketing one.

Again, you are missing the point. Intel's 10nm, as with its 14nm, uses a tri-gate design, so three transistors in one gate space, whereas TSMC can fit two on planar. This is the only reason Intel can make this claim, and once TSMC moves over to the new tri-gate process (which is soon; three years), Intel knows its argument is gone.
Intel is struggling to shrink its process to 10nm, on a hit-and-miss basis. It probably has a yield success of 50% or less, so its shrink to 7nm is also a dream if it cannot master 10nm. It tried to use cobalt to resolve the wire-connection problems, but this is difficult to achieve, and it made little noticeable change in yields.
In other words, Intel must either buy the tech from its competitors (or at least the rights to patents), or fall further behind.

TSMC is investing billions over the next three years. They know that Intel is losing profit margins (despite what is claimed), and they have the most advanced lithography available for contract manufacturing. Samsung's is the most advanced, but it is kept in-house.

But then AMD moves to TSMC's 3nm, though this time around Intel will also utilise TSMC's 3nm for some logic tiles. It will be interesting to see how all of this pans out, for sure.

Considering Intel pays more for R&D than any other fab company, this is sickening for them.
AMD's success has come on the back of far less, and from the brink; they are now the leader in CPUs. Intel may claim some speed records, but at the cost of its reputation.

But yes, the future can yield many different possibilities, and it will be interesting, to be sure. Remember, an atom is reportedly around 0.1nm in size... It will be interesting...
 
Establishing a new naming standard that's actually relevant to these technology nodes is a great idea. I've been hoping a change in naming schemes would be adopted by the industry.

Additionally, I know people are having fun dunking on Intel with this story, but if it means anything to you, here is the Vice President of Corporate Research from TSMC saying something very similar during his keynote speech at Hot Chips 31, Tuesday, August 20, 2019.

"Okay, so having said that, density is really important because many good things come out of density. Speed, energy efficiency, power efficiency, all the things that we somehow associate Moore's law with. All the good things, the good attributes that we associate Moore's law with come from density. And so it is often the case that we conflate these other attributes that we'd like to have, energy efficiency, power efficiency, speed and so on with Moore's law. So the fact that processor speed has saturated does not mean Moore's law is dead. Densities keep increasing..."

"Now all these numbers today don't mean a lot, because it used to be the technology node, that node number, means something, some features on the wafer. Today, these numbers are just numbers. They're like models in a car - it's like BMW 5-series or Mazda 6. It doesn't matter what the number is, it's just a designation for the next technology, the name for it. So let's not confuse ourselves with the name of the node with what the technology actually offers." (quote starts at 7:00 timestamp)
- Dr. Philip Wong, TSMC Vice President of Corporate Research

These quotes are from the first 8 minutes. Give it a watch if you have time.
View: https://youtu.be/O5UQ5OGOsnM
 
Their TDP is a digital cap in the BIOS that no software can go over, let alone use twice as much power.
If someone disables the TDP limit in the BIOS or sets it to some ridiculously high value, then it stops being Intel's TDP.
Even when a BIOS is following Intel's guidelines, the chip will exceed the advertised TDP by huge margins until the specified boost period is exceeded, typically after around 40 seconds or so for the standard desktop parts. During that time, even a processor advertised as having a "65 watt TDP" can potentially be drawing over 200 watts. So for most common workloads, that TDP is meaningless, only coming into play when the CPU has been under extended load for some time, assuming the motherboard follows the boost duration guidelines, which practically none of them do by default.

And Intel knows the motherboards don't follow those guidelines in their default configurations, and actively encourages that. They don't want them to, since the processors wouldn't be able to compete at extended workloads if limited to their base clocks after the defined boost period. Intel is absolutely manipulating the numbers with their recent higher-core count processors to avoid advertising them with 200+ watt TDPs. That hadn't always been the case, but has been for recent generations, particularly once they started ramping up the number of cores.

https://www.extremetech.com/computi...onger-useful-to-predict-cpu-power-consumption
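The boost behavior described above is commonly modeled as two power limits (PL1, the sustained "TDP", and PL2, the short-term boost limit) governed by a moving average with time constant tau. The sketch below is a deliberately simplified model with illustrative values; real firmware uses a more elaborate budget algorithm and board vendors often change or disable the limits entirely.

```python
# Simplified sketch of PL1/PL2/tau turbo power limiting, modeled as an
# exponentially weighted moving average (EWMA) of package power.
PL1 = 65.0   # sustained limit, i.e. the advertised "TDP" (watts, assumed)
PL2 = 200.0  # short-term boost limit (watts, assumed)
TAU = 28.0   # averaging time constant (seconds, board-vendor configurable)
DT  = 1.0    # simulation timestep (seconds)

avg = 0.5 * PL1          # moving average of power, starting from a light load
trace = []
for _ in range(120):
    # Boost to PL2 while the moving average is under PL1, else hold PL1.
    power = PL2 if avg < PL1 else PL1
    alpha = DT / TAU
    avg = (1 - alpha) * avg + alpha * power
    trace.append(power)

boost_seconds = sum(DT for p in trace if p > PL1)
print(f"boosted at {PL2:.0f} W, then settled at {PL1:.0f} W "
      f"after ~{boost_seconds:.0f} s")
```

Even in this toy model, a "65 W" part draws 200 W for a stretch before the average catches up, which is why TDP says little about short-burst power draw.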
 
So let's not confuse ourselves with the name of the node with what the technology actually offers.

We're not. But it would appear that Intel is hell-bent on confusing the terminology (e.g. 2D/2.25D/2.5D/2.75D/3D). The only thing that matters is what is done per unit of time and at what cost (e.g. platform and energy). The fastest is never the smartest. But then again, you have gaming and crypto mining.
 
We are talking about chip/die manufacturing and semiconductor engineering. So PCB design is largely irrelevant.

Lol, do you actually realize that PCBs are not just confined to the motherboard? The multilayered board that the processor die is placed on is an ultra-thin-layer PCB made in the same way, just for a different package. The materials can differ slightly (as in the case of Intel with its fusion-generator design (joke!)). So I'm not sure you really understand the world of ASIC design. But oh well, another one loves to try to school me on matters they have no real knowledge about. Hence I was referencing my points to factual matters of ASIC-level design, talking about the real problems of the lithographic process that Intel tried and largely failed at (in yield success).
PCB design is relevant and important in the packaging and assembly processes, sure, but it is irrelevant to the die manufacturing itself, which is what the nm naming scheme is all about.
 
Doesn't matter.
All that really matters is performance, TDP, fps, and heat.
If any one of them lags behind, you have a weak point.

The thing is, that higher density also came with drawbacks, namely TDP and heat.

People working on laptops will look for more performance and battery life. What can 14nm offer them, even if it has the same performance as 7nm?
That's just one use case. A datacenter would weigh it even more heavily.
The only ones who could not care less are the big Intel fans and gamers with deep pockets.
 
Intel's TDP ratings for their recent generations of processors have been nowhere near representative of their real-world power draw and heat output under typical operating conditions. In many cases these processors draw over double the power under heavy loads than what their TDP would imply, or up to several times the power in the case of their "low-power" chips. The TDP only applies to their base frequency, which one is unlikely to encounter under load on a properly configured system. The power draw of AMD's Ryzen processors, on the other hand, tends to match their advertised TDP almost perfectly.

Exactly... it looks like Intel wants to change the definition of the process spec, just like it reinvented TDP to disguise the power-guzzling of its current CPUs, and how it would like to refer to silly "productivity benchmarks" rather than performance benchmarks, simply because Intel can't win them, and so on. What's so funny here is that Intel can't even put these CPUs on its own 10nm process, but has to use its 14nm+++++++++++ process. So if Intel's 10nm = AMD's 7nm, Intel can't even manufacture its latest desktop CPUs at 10nm! AMD has been on 7nm for going on 21 months now, with full PCIe 4 system buses at the same time. Intel is still far behind, and its latest-gen 14nm CPUs prove it conclusively, imo. Yep, the 12th generation of same-old-same-old from Intel does nothing for me.
 
Exactly...looks like Intel wants to change the definition of process spec like it reinvented TDP to disguise the power guzzling of its current CPUs...
In the case of process nodes, I wouldn't really care much if Intel changed the way they describe them, though, so long as it's done within reason, since nodes have already become a bit abstract. If their "10nm" process is actually comparable to what TSMC describes as "7nm", you can't really blame them for wanting to market it in a different way. So long as they are not hand-picking a metric that would make their process look better than it really is compared to the competition, it wouldn't be a problem. Of course, their marketing team has a tendency to do just that. However, the way a process is described is less important than how it actually performs, and to the end user a node on its own isn't going to be all that meaningful.
 
Intel's TDP is super accurate because it's just a simple cut-off point that you can set yourself in the BIOS
According to this article: https://www.anandtech.com/show/13544/why-intel-processors-draw-more-power-than-expected-tdp-turbo you may be wrong, and I quote:
"
For the last however many years, this is the definition of TDP that Intel has used. For any given processor, Intel will guarantee both a rated frequency to run at (known as the base frequency) for a given power, which is the rated TDP. This means that a processor like the 65W Core i7-8700, which has a base frequency of 3.2 GHz and a turbo of 4.7 GHz, is only guaranteed to be at or below 65W when the processor is running at 3.2 GHz. Intel does not guarantee any level of performance above this 3.2 GHz / 65W value.

On top of the base values, Intel implements Turbo. As mentioned, something like the Core i7-8700 can have a turbo of 4.7 GHz, which draws a lot more power than the processor running at 3.2 GHz. The all-core turbo value for a processor like the Core i7-8700 is 4.3 GHz, which is well above the guaranteed 3.2 GHz. What makes it all the more complicated is when none of those turbo modes go down to the base frequency. It means that the processor will be operating above its TDP rating all the time, and that 65W cooler you purchased (or perhaps it even came with the processor) has become a bottleneck of sorts. If more performance is required, it needs to go in the bin, as you’ll need something better. "

Sorry Terrylaze, no matter HOW you spin it, bend it, or try to reword it, Intel's TDP rating is AT BASE frequency. If you keep the CPU at that frequency, it will only use that much power, and with ANY type of turbo it will use more than that rated TDP.
And someone who only buys a computer from a store, prebuilt, isn't going to delve into the BIOS and change any of the settings at all. While you may go into it and change every setting that you want, that doesn't mean everyone else will.

If someone disables the TDP limit in the BIOS or sets it to some ridiculously high value, then it stops being Intel's TDP.
So that means practically ALL motherboard makers, NOT the person using the system?
 
PCB design is relevant and important in the packaging and assembly processes, sure, but it is irrelevant to the die manufacturing itself, which is what the nm naming scheme is all about.

You really don't know much about PCB manufacturing, do you? The CPU die sits on a PCB inside its package, usually a 48-to-60-layer substrate interconnected with vias and routing. The top layer is custom-layered with three different metals (sometimes more) to create each individual transistor. Will you just stop trying to talk about something you know nothing about? I actually design PCBs in the form of chips. Even simple chips like the ancient 741, when you put them inside the thermal slug, sit on a PCB wafer pasted to the top as transistors and covered in a solder mask so that the heat-dissipating slug does not damage the transistors.
CMOS and TTL use different technologies; CMOS is the current tech used, but TTL could take over if the design is improved the way it has been since 2010.
The die manufacturing is just a core or memory cluster in and of itself and is not relevant to the PCB process. If you can fabricate a die at 7nm, you can come and work for us and show us your amazing new sub-atomic process... Each transistor is measured individually for its physical size and the spacing between it and other transistors, and this is what determines the 14nm and 7nm sizing. Intel is trying to make the point that it uses three transistors per unit space versus TSMC's two. But it looks like Intel cannot make its 3D FinFET transistors reliably below the 14nm mark because of the complexity of the connection process. TSMC has a 2D planar process that connects reliably at the 7nm node and uses two transistors per unit space. This is why Intel claims that direct spatial measurements are not comparable. So yes, it has a point, but since Intel cannot make its 10nm reliable enough, it cannot claim the advantages if it cannot print them reliably. So Intel has moved back to its 14nm process, and two-thirds of 14nm is almost 10nm, so the 2D planar tech has the upper hand, as the 10nm only works in Intel's 3D design when it increases the space between its 10nm transistors, making the overall design larger than the TSMC process in actual metrics (as the complete size is not just transistor size but separation space too).

So your statement is not only untrue, but I think you are just a keyboard warrior who has read a few articles and thinks he understands PCB manufacturing and ASICs.

Maybe you have dabbled too much in Proteus, Multisim, or TINA and think that PCB design is limited to just surface-mount and through-hole tech. Maybe that is the justification for your argument.

Lithographic writing uses the same PCB process of routing and via connection; the only real difference is that the surface is layered in three metals, similar to 3D printing (in Samsung's case) and more like moulding in Intel's. Both high-level and low-level PCBs use the same tech and the same methods; the only difference is whether you are mounting components or etching cuts and laying metal. The rest is exactly the same.

https://resources.pcb.cadence.com/b...s involves the,exposed thin film or substrate.
 
In my opinion, Intel is already winning the silly marketing game. The fact that bringing this "controversy" up gets us talking about the actual near-parity shows that they are the marketing geniuses we all hate them for being.
I don't consider this to be a marketing issue. People mistakenly comparing Intel nm to TSMC nm is probably the second-most common error I have to correct in tech discussions. (People comparing SSD sequential speeds is the most common. The 4k speeds are much more important.)

Intel's TDP is super accurate because it's just a simple cut-off point that you can set yourself in the BIOS; it's PL1 and PL2, and the only issue is that Intel doesn't enforce it, so you can put in whatever you want. Intel suggests you set it to 125 but doesn't force you.
AMD, on the other hand, claims a 105W TDP but then forces mobo makers to supply 140W to the socket... it's only 35% more.
TDP isn't a power consumption figure. It's a thermal dissipation figure: how big a cooler you need for the CPU. The time-average of heat produced will equal the time-average of power consumed. But the presence of a large metal heatsink acting as, well, a heat sink means there will be substantial smoothing of the thermal output curve relative to the power input curve. So it is completely permissible for the instantaneous power consumption to significantly exceed the TDP rating.
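That smoothing effect can be illustrated with a toy lumped RC thermal model of a heatsink: heat flows in from the die and out to ambient through a thermal resistance, while the heatsink's thermal mass absorbs short bursts. The R and C values below are assumptions for illustration only, not real cooler specifications.

```python
# Toy lumped RC thermal model of a heatsink under a power spike.
R = 0.5      # heatsink-to-ambient thermal resistance (K/W, assumed)
C = 400.0    # heatsink thermal capacitance (J/K, assumed)
DT = 0.1     # timestep (seconds)

temp_rise = 65.0 * R            # start at steady state under a 65 W load
steady = temp_rise              # 32.5 K above ambient
peak = temp_rise
for step in range(int(60 / DT)):
    t = step * DT
    power = 200.0 if t < 10.0 else 65.0   # 10 s spike, then sustained TDP
    # Heat in from the die, heat out to ambient through resistance R.
    temp_rise += DT * (power - temp_rise / R) / C
    peak = max(peak, temp_rise)

print(f"steady rise: {steady:.1f} K, peak during 200 W spike: {peak:.1f} K")
```

With these assumed values, a ten-second burst at roughly three times the TDP raises the heatsink only a few kelvin above its steady-state level, which is exactly why instantaneous power can exceed TDP without the cooling being undersized.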

AMD used to own GlobalFoundries, which followed the same ethos that TSMC uses. AMD had to sell it because of the nefarious practices of Intel and the cash-flow problems stemming from them. AMD from the K6 onwards followed its own direction and still kept up its ingenuity, despite Intel getting the game developers and Microsoft to be tardy with the use of 3DNow!...
And given the current situation today, I think most people would agree that was a good thing. Otherwise AMD would be manufacturing on Globalfoundries' 12nm, and Apple (on TSMC's 5nm) would be completely wiping the floor with both Intel and AMD.

Having semiconductor manufacturing independent of processor design is good for the industry, good for consumers, good for the fabs, and good for the processor companies. It lets everyone concentrate on competing against their equivalent competitors, instead of against some hybrid combo of processor design + fab size. I've felt since the 1990s that Intel should've been broken up into two companies as an anti-trust measure - one designing processors, one manufacturing semiconductors. Samsung is in a similar situation, but its divisions tend to operate independently of each other. Samsung's semiconductor division prioritized manufacturing Apple's Ax SoCs over Samsung Mobile's Exynos, because Apple was willing to pay more. The opposite of how Intel kept its fab technology to itself to use it to help leverage its domination of the processor market.

Arguably, AMD changed the performance metric in the early 2000s to equivalent-performance ratings, because back then, as is the case now, Intel was just cranking up frequency rather than IPC for performance.
And Intel paid dearly for marketing solely to MHz when they ran into the thermal brick wall of too-high clock speed on the Pentium 4. It's why the 64-bit instruction set is amd64, instead of x64. Intel stumbled and AMD actually took the lead for a few years. The subsequent (and current) Intel Core processors were based on their mobile processors, which (due to power constraints on laptops) had been insulated from their marketing division beating the MHz drum. If Intel hadn't had that rabbit in their hat, AMD might have dominated for 2 straight decades.
 
And Intel paid dearly for marketing solely to MHz when they ran into the thermal brick wall of too-high clock speed on the Pentium 4. It's why the 64-bit instruction set is amd64, instead of x64. Intel stumbled and AMD actually took the lead for a few years. The subsequent (and current) Intel Core processors were based on their mobile processors, which (due to power constraints on laptops) had been insulated from their marketing division beating the MHz drum. If Intel hadn't had that rabbit in their hat, AMD might have dominated for 2 straight decades.

That statement is part truth and part conjecture. Yes, the P4 was a disaster with heat, but the P4 was not necessarily a disaster because of it. After all, Intel was hampering AMD with its compiler and leveraging gaming companies to go with its own extensions rather than AMD's.
The IA-64 and AMD64 story is another issue entirely. Intel wanted the industry to change completely to a 64-bit-only instruction set, thus making all the 32-bit software obsolete and unable to run on Intel's new 64-bit processors. AMD had a better idea: make the 64-bit registers able to split into four parts so they could still run 16-bit, 32-bit, and the new 64-bit software, and it was this backwards-compatibility idea that got the software guys' juices flowing. But besides that, game coders were still coding games in 32-bit for some time afterwards, and it is only now that this is shown to have been a poor choice. Games like Sins of a Solar Empire could only leverage 32-bit addressing for their RAM-hungry needs, which proved to be a bad choice, and others made the same mistake.

Intel didn't have a rabbit in its hat; it tried to cheat AMD out of the market, and this was largely what kept AMD in the game. Intel's nefarious cheating practices were constantly exposed and litigated, over and over again, for large settlements to AMD, which made up a good proportion of their bread and butter.

AMD has always taken a technological sidestep to try to beat Intel, and as with Bulldozer, they often didn't succeed, but Ryzen is a different beast. Zen 3 has allowed AMD not only to catch up with Intel but to exceed them in many metrics, even against these new power-hungry beasts of Gen 11. This is an embarrassment for Intel on many levels. Even when AMD was on 28nm and Intel on 14nm, the consoles still went with AMD, and not just because of cheaper licensing, but because AMD always offered what Intel could not: good efficiency, considering its process was more than twice the size, and better integration with graphics. Considering Intel's R&D costs, this never should have happened, but it is what it is, and Intel is now not only red-faced over its latest generation, it is losing to underdogs.

Apple's dominance is another issue for another thread. And though these synthetic benchmarks show Apple's new chips leading the pack, in real applications and programs I bet it doesn't pan out like that. Benchmarks are notorious for linear, heavily scaled integer-math calculations, which RISC loves and in which CISC does not show its true power.
Apple will not be so big when it tries to port games to its Metal layer.
 
And given the current situation today, I think most people would agree that was a good thing. Otherwise AMD would be manufacturing on Globalfoundries' 12nm, and Apple (on TSMC's 5nm) would be completely wiping the floor with both Intel and AMD.
That's assuming AMD would have followed GlobalFoundries path. By the time GloFo announced they were ending development of their 7nm node, it had already been nearly a decade since AMD spun the company off, and over 6 years since they sold their remaining stake in it. Had AMD still been in charge, there's a good chance they would have continued 7nm development, though that's not to say they would have necessarily been better off. At the very least, having another company making chips on cutting-edge nodes might have eased the current 7nm shortages though.

And technically, AMD is still manufacturing on GlobalFoundries 12nm node. Their core chiplets might be on 7nm, but the IO chips are on 12nm, so there's more silicon from Globalfoundries in many of their processors than there is from TSMC.
 
The glossy graph at the start of this article, showing Intel leading by producing 10 nm CPUs is just misleading to the point of being dishonest.
 
but I think you are just a keyboard warrior who has read a few articles and thinks he understands PCB manufacturing and ASICs.

Maybe you have dabbled too much in Proteus, Multisim, or TINA and think that PCB design is limited to just surface-mount and through-hole tech. Maybe that is the justification for your argument.
That's funny, because that's exactly what I was going to write about you. If you really are an engineer (and I think you are), then let's put this down to a disagreement about semantics. But if you really are just a keyboard warrior, you should probably start from the basics. In that case I am linking some popularized articles/videos (the kind you are used to) for you to read/watch.
  1. How Intel Makes Chips: Concept to Customer - YouTube
  2. Chip Manufacturing - How are Microchips made? | Infineon - YouTube
  3. https://www.techspot.com/article/1840-how-cpus-are-designed-and-built-part-3/
 
That's funny, because that's exactly what I was going to write about you. If you really are an engineer (and I think you are), then let's put this down to a disagreement about semantics. But if you really are just a keyboard warrior, you should probably start from the basics. In that case I am linking some popularized articles/videos (the kind you are used to) for you to read/watch.
  1. How Intel Makes Chips: Concept to Customer - YouTube
  2. Chip Manufacturing - How are Microchips made? | Infineon - YouTube
  3. https://www.techspot.com/article/1840-how-cpus-are-designed-and-built-part-3/
You are not really disproving my point here. Video 1 just shows the top-layer design of the lithographic cut and the 48-layer (up to 60-layer) routing, which is still all PCB(A) manufacturing. And yes, indeed, it comes down to semantics. PCBA and PCB are near enough the same process, except for the lithographic cutting and layering of the metals, and Intel's cutting and layering is very much inferior to Samsung's, not only in size but in the technology used to lay down the metals and connectors. Nearly 25 years ago, when ASIC design was simple, I did my degree specializing in it. We called it PCB then as we call it PCB now, as a PCB is just a printed circuit board, and whether it is lithography or pick-and-place, it is still the same silicon/fibreglass etc., just in different thicknesses.
But I am fascinated by how Samsung resolved the problems that Intel cannot solve effectively. They are using a layered approach to the 3D transistors that allows the connectors to the gate to be directly connected to the bus lines without error, similar to the way some 3D printers layer plastics together.

My point is still this: Samsung's new 3nm production is measured the same way as Intel's 22nm, 14nm, and 10nm, and at 3nm it is kicking Intel's azz. TSMC will transition to 3D transistors on the node after 5nm, and Intel will have no excuses then at all... which is why it needs a naming revamp.
 
Until Intel demonstrates it actually can produce high performance 10 nm chips (not laptop chips) in commercial quantities (by actually selling them) I think it's an open question whether or not they can do that.

All this talk of how superior Intel's hypothetical 10 nm chips will be to 'others' chips is just more Intel hubris.

Existence is a quality of perfection that these hypothetical Intel chips lack, unlike the rival chips from AMD, Samsung, Apple and TSMC.

Yet again, instead of actually delivering the goods Intel offers more promises and a dishonest smear campaign directed at rivals.

Good stuff, that's the way to build confidence Intel.
 