News Raja Koduri Flashes 'Petaflops Scale' 4-Tile Xe HP GPU at Hot Chips

That guy worked for AMD, as we know :)

4 tiles ... wonderful. He is stealing AMD's technology from the Threadripper/Epyc design ... I think AMD is waiting for the release, and then "court time" ...
 
The conference-call info was that the CPUs planned for 7nm were delayed because the 7nm process was yielding poorly. There was no mention of delaying the only 7nm GPU scheduled for 2021, which was Ponte Vecchio.

By the way, the Xe HPG chip, which was listed as being fabbed externally, was stated in David Blythe's presentation to be back in the lab already. It looks like this was not a last-minute decision; they just didn't announce it.
 
That guy worked for AMD, as we know :)

4 tiles ... wonderful. He is stealing AMD's technology from the Threadripper/Epyc design ... I think AMD is waiting for the release, and then "court time" ...

There's a difference between an idea and a technology. The first multi-core CPU was made by IBM, the first dual-GPU card by 3dfx. Stacking silicon is not a new idea.
 
That guy worked for AMD, as we know :)

4 tiles ... wonderful. He is stealing AMD's technology from the Threadripper/Epyc design ... I think AMD is waiting for the release, and then "court time" ...
AMD themselves don't have a tiled GPU yet. How can you be so sure the technology came from AMD's CPU department when he was working in the GPU division? This might be a totally different design internally, for all you know.
 
That guy worked for AMD, as we know :)

4 tiles ... wonderful. He is stealing AMD's technology from the Threadripper/Epyc design ... I think AMD is waiting for the release, and then "court time" ...

Did you forget Intel used tiles (chiplets) 25 years ago with the Pentium Pro? Or that Clarkdale also had tiles 10 years ago?
I understand fanboys gotta fanboy, but at least get your info straight.
 
The Pentium Pro merely packaged its cache separately; there was no core separation. If you're going to define the issue so loosely, then "chiplets" began all the way back in the 1970s with IBM.
And there's your problem: if you define it too tightly, then Intel's method is going to be different enough, and if you define it too loosely, then you can't defend it as a patent/IP.
Unless Intel copied some part of AMD's design 1:1, there will be no case to be had.
 
It doesn't surprise me that ignorant AMD fanboys are giving AMD credit for things that AMD never innovated. Here they are even giving AMD credit for "their" 7nm node (... TSMC's 7nm node!!!). Anyway, the fact of the matter is that there is virtually nothing new AMD has done with Threadripper/Epyc on the MCM front. AMD was not the first to create (or even the first to commercially use) an MCM (not even the first to do so with different CPU dies/chiplets; in fact, they were attacking Intel for "stitching CPU dies together" back in the Core 2 Duo/Quad era). AMD was not the first to create or commercially use 2.5D packaging technology. And AMD was not the first to create or commercially use silicon interposers for 2.5D packaging.

As for patents, Intel already holds several patents on 2D, 2.5D and 3D packaging, all of them pre-dating anything AMD may hold a patent for. Intel has held Foveros patents since 2008 and was publishing about EMIB in journals at least as far back as 2011. By the way, EMIB (which has been in commercial use since 2015) is a superior implementation to a silicon interposer for 2.5D packaging.

In established duopolies, competitors generally don't sue each other in court over patents on the basis of 'equivalency' alone. It would have to be a straight rip-off in every implementation detail. Anything else would be a colossal waste of time and money on a pyrrhic war in which they would just end up cancelling each other out (and if there were a winner, it would be a pyrrhic victory akin to a draw). Once a decades-old, strong duopoly is established, there is simply acceptance by both parties that they will have to live with each other. You see this with Intel vs. AMD, Nvidia vs. ATI/AMD, Xilinx vs. Altera/Intel, etc. Any newcomer, however, is fought by both duopoly strongholds.
 
There's a difference between an idea and a technology. The first multi-core CPU was made by IBM, the first dual-GPU card by 3dfx. Stacking silicon is not a new idea.

Not the same technology ... it is the way you connect them together. Many, many AMD patents are there. Intel could not touch this in GPUs until they hired Raja ...
 
For decades, Intel's graphics have been absolute garbage compared to Radeon and GeForce. Intel has been talking about making a serious GPU seemingly forever and they've always managed to either fall flat on their faces or come up with something that went nowhere. Suddenly Intel is talking about this "Xe" thing and people are taking it seriously like a bunch of lemmings.

Has everyone forgotten the claims that Intel made about Iris and Larrabee so soon? I'm not saying that they don't have something, but the odds that it will be competitive with Radeon and GeForce are slim at best. I think a "wait and see" attitude is probably the best policy until we see something actually on the market.

Raja Koduri's "big accomplishment" at AMD was Vega, and we all know how that panned out (it didn't). The corporate culture at Intel has been rather toxic for quite a while, and that is not congruent with great accomplishments in the tech sector. First, there was CEO Brian Krzanich banging his employee in 2018. Then there was Chief Engineering Officer Murthy Renduchintala pissing off everyone to the point that Jim Keller, a man who truly loves chip design, couldn't deal with it anymore and walked away.

Then Intel hires Raja Koduri, a man with a great legacy of mediocrity, to head up their attempt at a serious GPU, because his mediocre Vega is still light-years ahead of anything that Intel could come up with.

I don't think that either ATi or nVidia need to start shaking in their boots just yet. Maybe S3 should be concerned. 😉
 
GPU or CPU are the same; the difference is the program inside ... but connecting tiles together is the big thing
I don't know where you get the idea that CPUs and GPUs are the same. CPUs are integer-based compute cores that may or may not have floating point cores included on-die. GPUs are FAR more complex than CPUs and it's not even close. The basic idea of computation may apply to both of them but they are as different from each other as a boat is from an airplane.

For example, an R9 3950X has 16 x86-type integer cores and 16 x87-type floating-point units, and can run 32 threads.

A Radeon RX 5700 XT, on the other hand, has 40 compute units, 160 texture units and 2,560 stream processors.

These are nowhere near being equivalent.
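For reference, those numbers check out against the published specs; here's a quick sanity check in Python, using AMD's documented per-CU figures for RDNA (64 stream processors and 4 texture units per compute unit):

```python
# Sanity check of the figures above (vendor-published specs).
cpu_cores = 16          # Ryzen 9 3950X physical cores
threads_per_core = 2    # SMT
print(cpu_cores * threads_per_core)   # -> 32 hardware threads

compute_units = 40      # Radeon RX 5700 XT
sp_per_cu = 64          # stream processors per RDNA compute unit
tmu_per_cu = 4          # texture units per compute unit
print(compute_units * sp_per_cu)      # -> 2560 stream processors
print(compute_units * tmu_per_cu)     # -> 160 texture units
```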
 
I don't know where you get the idea that CPUs and GPUs are the same. ... GPUs are FAR more complex than CPUs
He's right; you're wrong. Actually, you're wrong on two separate counts. First of all, from an Intellectual-Property perspective, there is no difference between a CPU and a GPU. They are both silicon-based processing units. What they are designed to process is irrelevant, unless a specific patent refers to that content. And, more importantly, to a federal judge presiding over a patent dispute, they will be considered functionally equivalent.

Secondly, the cores in a modern CPU are -- in the Intel/AMD world, at least -- much more complex than a GPU core. That's the reason why a 3950X (to use your own example) runs only 32 threads at once, whereas an NVidia GPU might have 8,000 or more CUDA cores. Yes, the number in the latter case is bigger, but those actual cores are much simpler in overall transistor count. The total transistor counts for the entire chip in each case are driven by overall die size, not the complexity of a single core.
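A toy illustration of that split, taking the thread and lane counts above at face value (the chunk sizes are plain arithmetic, not a performance model):

```python
import numpy as np

n = 1 << 20                      # a million data elements to process
work = np.arange(n)

# CPU-style: a few complex cores, each taking a large contiguous slice.
cpu_threads = 32                 # e.g. a 3950X with SMT
cpu_slices = np.array_split(work, cpu_threads)

# GPU-style: thousands of simple lanes, each taking a tiny slice.
gpu_lanes = 8192                 # e.g. a large CUDA-core count
gpu_slices = np.array_split(work, gpu_lanes)

print(len(cpu_slices[0]), len(gpu_slices[0]))   # 32768 vs 128 elements each
```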
 
He's right; you're wrong. Actually, you're wrong on two separate counts. First of all, from an Intellectual-Property perspective, there is no difference between a CPU and a GPU. They are both silicon-based processing units. What they are designed to process is irrelevant, unless a specific patent refers to that content. And, more importantly, to a federal judge presiding over a patent dispute, they will be considered functionally equivalent.

Secondly, the cores in a modern CPU are -- in the Intel/AMD world, at least -- much more complex than a GPU core. That's the reason why a 3950X (to use your own example) runs only 32 threads at once, whereas an NVidia GPU might have 8,000 or more CUDA cores. Yes, the number in the latter case is bigger, but those actual cores are much simpler in overall transistor count. The total transistor counts for the entire chip in each case are driven by overall die size, not the complexity of a single core.
Sure, and boats and planes are both vehicles powered by the burning of petroleum to move from one place to another. They just use different ways of getting there. Does that make them the same? No, it doesn't. Now, if you're talking about engine-design intellectual property, then yes, they are, but all I read was "They're the same, it's just software that's different," which, to me, implies that they're as similar to each other as x86 and x87 are, which they are not.

As for complexity, the fact that the GPU uses several different processing units to do what it does, at the speed it does, with the level of parallelism that it does, means that it has a more complex blend of parts, and it's not even close. This is why GPUs are used for machine learning and AI while CPUs are used for straight-forward computation and data analysis.

IF, as you think, CPUs and GPUs were the SAME THING, then AMD wouldn't have needed to purchase ATi to get GPU tech, Intel wouldn't suck at GPUs, and nVidia would have some great CPUs out there. None of the above is true, so they are NOT the same thing.
 
IF, as you think, CPUs and GPUs were the SAME THING, then AMD wouldn't have needed to purchase ATi to get GPU tech, Intel wouldn't suck at GPUs, and nVidia would have some great CPUs out there. None of the above is true, so they are NOT the same thing.
Well, to be fair, this is more down to licensing and patents than it is about actual abilities.
AMD wanted to get a head start by buying ATI; it didn't work out very well in the beginning, though. Intel designed their own thing, which, without a head start, was just as bad as you'd expect.
Nvidia wanted a license for x86 to make CPUs, but Intel just laughed in their faces.


Oh, and I don't want to say that CPU and GPU are the same thing; there are a lot of differences, but on the other hand there are also a lot of similarities.
 
I don't think that either ATi or nVidia need to start shaking in their boots just yet. Maybe S3 should be concerned. 😉
For GPUs in the region of 4 petaflops, it's not only about performance anymore; it's just as much about driver features/stability. Intel is very much at the top as far as working with customers goes, and Nvidia does very good work on that as well, but if a single company can help you with both the CPU and the GPU aspects of a problem, that's going to be a big argument, at least for some people.
oneAPI.
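To put "petaflops scale" in concrete terms, peak throughput is just ALU count × ops per clock × clock speed. A sketch with placeholder numbers, since Intel hadn't disclosed the real ones:

```python
def peak_tflops(alus: int, clock_ghz: float, ops_per_clock: int = 2) -> float:
    """Peak FP32 TFLOPS; ops_per_clock=2 counts a fused multiply-add as two ops."""
    return alus * ops_per_clock * clock_ghz / 1000.0

# Hypothetical tile: 8192 FP32 ALUs at 1.3 GHz (placeholders, not Intel's specs).
one_tile = peak_tflops(8192, 1.3)
print(one_tile, 4 * one_tile)   # ~21.3 TFLOPS per tile, ~85 TFLOPS for 4 tiles
```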
 
Sure, and boats and planes are both vehicles powered by the burning of petroleum...Does that make them the same? No.
You're missing the point. If your patent refers to how petroleum is burned within those vehicles, then they are functionally equivalent from an IP perspective, and the same patent would apply to both. Or a patent on any of a thousand other commonalities the two vehicles share. This is the doctrine of equivalents I referred you to earlier.

Having gone through the process many times myself, I'll give you a semi-concrete example. The AMD engineer sends off a write-up to the patent division. Then a patent attorney returns him a document that begins something like "Method and Apparatus for Interconnecting Multiple Processing Units on a Silicon Substrate". Then, if the patent attorney is worth his salt (and AMD's are), he broadens the patent by including subclaims which specifically relate the patent to a general-purpose CPU, a GPU, an APU, a TPU, an FPGA, and even types of processing units not yet invented. It's intentionally made as broad as possible. That's why you pay a patent attorney.

The GPU...has a more complex blend of parts. This is why GPUs are used for machine learning and AI while CPUs are used for straight-forward computation and data analysis.
No again. GPUs are used for machine learning because their massively parallel architecture more closely models the underlying algorithm being used. And in point of fact, this is only true for some types of machine learning, e.g. neural networks. Other types of machine learning are better suited to a general-purpose CPU.
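A minimal sketch of why: the core of a dense neural-network layer is a matrix multiply, where every output element is an independent dot product, which is exactly the massively parallel shape of work a GPU is built for:

```python
import numpy as np

batch, d_in, d_out = 256, 1024, 1024
x = np.random.rand(batch, d_in).astype(np.float32)   # a batch of activations
w = np.random.rand(d_in, d_out).astype(np.float32)   # one layer's weights

# 256 x 1024 = 262,144 independent dot products that can all run in parallel;
# branchy, sequential ML (e.g. decision-tree induction) fits a CPU better.
y = x @ w
print(y.shape)   # (256, 1024)
```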
 
Not the same technology ... it is the way you connect them together. Many, many AMD patents are there. Intel could not touch this in GPUs until they hired Raja ...
Can you elaborate on these patents? I don't see how they could hold patents related to MCM given they weren't the first to do MCM. Except maybe if they are narrow patents relating to specific manufacturing methods or something.
  • Connecting chip(lets) via routing in the organic substrate (which is what Threadripper/EPYC and Zen 2 do) has been around for ages
  • Connecting chip(lets) using a silicon interposer was first done by Xilinx, I believe
  • Connecting chip(lets) using silicon embedded in the organic substrate (EMIB, which is what is being used for Xe) is an Intel invention. And, as someone else noted, it was first used years ago and first written about (and presumably patented) by Intel years before that.

The Pentium Pro merely packaged its cache separately; there was no core separation. If you're going to define the issue so loosely, then "chiplets" began all the way back in the 1970s with IBM.
Even if we limit ourselves to cases where multiple processor dies were packaged together, there's still Intel Kentsfield/Clovertown, which had two separate dual-core dies in a single package almost 15 years ago.
 
Can you elaborate on these patents? I don't see how they could hold patents related to MCM given they weren't the first to do MCM. Except maybe if they are narrow patents relating to specific manufacturing methods or something.
Pretty much. Here's a few of the many they have:

Multi-chip package with offset 3D structure: here.
Multi-RDL structure packages and methods of fabricating the same: here.
Molded die last chip combination: here.
Configuration of Multi-Die Modules With Through-Silicon Vias: here.
Reducing chiplet wakeup latency: here.
 