News: Intel's Ponte Vecchio Xe Graphics Card Shows Up in Add-In Card Form Factor

That's a good sign, given that the company's previous attempts at developing its own Larrabee GPUs ended without a single productized model.

I mean, not quite true, considering that Knights Ferry was launched and was basically the HPC Larrabee. We just never saw a consumer version of it, since it probably wouldn't have kept up with AMD and Nvidia well enough to sell. But in HPC, it performed well and could be sold for a healthy margin.
 
When the Larrabee-derived Knights Landing was released, the move to GPU-based accelerators was in its infancy - they were NEVER going to be a consumer card. Same with Itanium - it was never going to be a consumer-facing product (Itanium was a VLIW 64-bit CPU developed with and for HP).
 
When the Larrabee-derived Knights Landing was released, the move to GPU-based accelerators was in its infancy - they were NEVER going to be a consumer card. Same with Itanium - it was never going to be a consumer-facing product (Itanium was a VLIW 64-bit CPU developed with and for HP).

Intel's original plan for Itanium was actually to go server first and eventually move consumers to it, since it was a distinct uArch from x86 and Intel would not have to share licensing with AMD and VIA, who would have x86 and, of course, x86-64.

I am somewhat sad, as I think a pure 64-bit uArch would have been better than holding onto the ancient x86 base.
 
Intel's original plan for Itanium was actually to go server first and eventually move consumers to it, since it was a distinct uArch from x86 and Intel would not have to share licensing with AMD and VIA, who would have x86 and, of course, x86-64.

I am somewhat sad, as I think a pure 64-bit uArch would have been better than holding onto the ancient x86 base.
Itanium was designed with HP, for HP. It was never to be released outside of that agreement, and was never going to be the next gen, replace x86, or do anything other than meet HP's goals for the CPU.

Intel could revoke AMD's x86 license, which means that AMD would have to recall 100% of its products from the channel and stop taking delivery from TSMC. AMD would be bankrupt overnight, and Intel could offer AMD a lifeline - the purchase of x64 outright.

Thing is, AMD serves a useful function for both Nvidia and Intel - they are considered "competition" and as such allow both Nvidia and Intel to stave off antitrust / monopoly charges, while still not providing real competition.
 
Itanium was designed with HP, for HP. It was never to be released outside of that agreement, and was never going to be the next gen, replace x86, or do anything other than meet HP's goals for the CPU.

Intel could revoke AMD's x86 license, which means that AMD would have to recall 100% of its products from the channel and stop taking delivery from TSMC. AMD would be bankrupt overnight, and Intel could offer AMD a lifeline - the purchase of x64 outright.

Thing is, AMD serves a useful function for both Nvidia and Intel - they are considered "competition" and as such allow both Nvidia and Intel to stave off antitrust / monopoly charges, while still not providing real competition.


Wrong: "Although Itanium did attain limited success in the niche market of high-end computing, Intel had originally hoped it would find broader acceptance as a replacement for the original x86 architecture" (from Wikipedia). Also an interesting read about IA-64: https://www.techworld.com/tech-innovation/will-intel-abandon-the-itanium-2690/

Intel can't revoke AMD's x86 license, because then AMD could revoke Intel's right to use AMD64, and they would both have to recall their respective products that use x86 and AMD64.

" and Intel could offer AMD a lifeline - the purchase of x64 ouright. " um amd owns x64, aka amd64 i assume that should of been x86

A simple Google search would have told you this.
https://www.quora.com/Could-Intel-revoke-AMD-s-licence-to-produce-x86-cpu-if-they-wanted-to
 
Itanium was designed with HP, for HP. It was never to be released outside of that agreement, and was never going to be the next gen, replace x86, or do anything other than meet HP's goals for the CPU.
Oh, this is very wrong, indeed.

The first gen Itanium processors even had a hardware engine to accelerate emulation of x86. Intel's plan was that IA64 (the name for Itanium's ISA) would trickle down to consumers and be the 64-bit replacement for x86.

Why else do you think AMD beat Intel to extending x86 to 64-bit? Intel didn't want 64-bit x86, but AMD succeeded to such an extent that Intel had to embrace it and scrap its plans for world domination with IA64.

Intel could revoke AMD's x86 license,
I'm not sure about that. Is that an assumption you're making, or is it based on actual information about the license terms?
 
When the Larrabee-derived Knights Landing was released, the move to GPU-based accelerators was in its infancy - they were NEVER going to be a consumer card.
Not true. The original Larrabee in fact was intended to be a consumer GPU[1]. It had hardware texturing engines, and there are prototype boards floating around that even have video outputs for not just Larrabee, but the next generation or two (Linus Tech Tips got a hold of one of these cards and tried to get it up and working as a GPU).

As for Knights Landing being released when GPU compute was in its infancy, I have no idea where you got that. CUDA and Nvidia's Tesla line of data center accelerators launched way back in 2007[3,4]; Knights Corner didn't ship until 2012, and Knights Landing itself not until 2016[2].

References:
  1. https://www.anandtech.com/show/3738/intel-kills-larrabee-gpu-will-not-bring-a-discrete-graphics-product-to-market
  2. https://en.wikipedia.org/wiki/Xeon_Phi
  3. https://en.wikipedia.org/wiki/CUDA
  4. https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#Tesla
 
Oh, this is very wrong, indeed.

The first gen Itanium processors even had a hardware engine to accelerate emulation of x86. Intel's plan was that IA64 (the name for Itanium's ISA) would trickle down to consumers and be the 64-bit replacement for x86.

Why else do you think AMD beat Intel to extending x86 to 64-bit? Intel didn't want 64-bit x86, but AMD succeeded to such an extent that Intel had to embrace it and scrap its plans for world domination with IA64.


I'm not sure about that. Is that an assumption you're making, or is it based on actual information about the license terms?
Wrong. For HP, with HP. It was NEVER going to be a consumer chip - and no, AMD did not beat Intel to 64 bits because of this. I worked at HP during this time, and I know of what I speak. What you know, you synthesized yourself, and it is based on nothing in reality.

It was to replace PA-RISC for HP - it was a VLIW processor and server-specific. It had nothing to do with "AMD beating Intel to x64".

Head canon is not canon.
 
Wrong. For HP, with HP. It was NEVER going to be a consumer chip - and no, AMD did not beat Intel to 64 bits because of this. I worked at HP during this time, and I know of what I speak. What you know, you synthesized yourself, and it is based on nothing in reality.

It was to replace PA-RISC for HP - it was a VLIW processor and server-specific. It had nothing to do with "AMD beating Intel to x64".

Head canon is not canon.
Nope. I don't agree.

You can work at HP and be focused on it as a replacement for PA-RISC, while still missing Intel's larger plans for it as their 64-bit successor to x86. Both can be true.

Why the heck do you think Intel messed around with PAE, and why wasn't Intel the one to extend x86 to 64-bit? It's because their only plan for 64-bit was IA64, and they did the PAE hack because it was running late.

All of the tech press, at the time, was focused on IA64 as the 64-bit successor for x86. It's the main reason Intel had a hardware x86 front end integrated into Itanium. In the late 90's, all of the messaging was around Itanium being the next big thing. Sure, it was going to start out in the server realm, because consumers had no need for 64-bit, but then it was going to trickle down.

See: https://en.wikipedia.org/wiki/Itanium#Other_markets (which cites: http://features.techworld.com/operating-systems/2690/will-intel-abandon-the-itanium/ ).

Unfortunately, it was late, slow, and expensive. And the Pentium 3 turned out to be quite good, for the time. Then, when AMD came along and dropped Opteron, it was game over for IA64. Its window had closed, and Intel had no choice but to embrace AMD64.

BTW, if you wanted to show real street cred, you'd describe IA64 as EPIC or "VLIW-like". EPIC was constraint-based, still requiring run-time scheduling by the CPU, which was done for binary compatibility between various models and generations. VLIW is entirely statically scheduled, and most commonly used in embedded scenarios, where having to compile for a specific CPU model isn't necessarily a problem. The main benefit of EPIC is that it saves the CPU from having to work out data dependencies on the fly.

I think some of these ideas could resurface, as x86-64 eventually falls out of favor, with CPU designers struggling to find ever more ways to increase perf/W.
 
P.S. I'm glad you're back.

Your extensive knowledge is welcome here, as long as you buttress your strong opinions with sound logic and quality sources, rather than insults or vitriol.

Something I keep facing, myself, is the fact that internet arguments are fundamentally unwinnable. All you can really do is make your best case. If your counter-party remains unconvinced, accept that there's nothing more you can do and move on. I usually let them have the last word - especially if I'm the one who "started it" (i.e. called them out on something they said).
 
Sorry Deicidium369, you are the one that is wrong:
"Although Itanium did attain limited success in the niche market of high-end computing, Intel had originally hoped it would find broader acceptance as a replacement for the original x86 architecture." (Source: Wikipedia)
Also an interesting read about IA-64: https://www.techworld.com/tech-innovation/will-intel-abandon-the-itanium-2690/
Intel intended its IA-64 to replace x86, but the mass market didn't want it. Instead, AMD extended x86 to 64-bit with AMD64, which, due to Microsoft, pretty much forced Intel to adopt it, as Microsoft didn't want to have to program Windows for two x86-64 instruction sets. So Intel had to license AMD64 from AMD so they could include it with their own CPUs.
 
P.S. I'm glad you're back.

Your extensive knowledge is welcome here, as long as you buttress your strong opinions with sound logic and quality sources, rather than insults or vitriol.

Something I keep facing, myself, is the fact that internet arguments are fundamentally unwinnable. All you can really do is make your best case. If your counter-party remains unconvinced, accept that there's nothing more you can do and move on. I usually let them have the last word - especially if I'm the one who "started it" (i.e. called them out on something they said).
Well, thank you. I still am not sure why I was banned...

I enjoy a good back and forth - some people here are knowledgeable and some are operating with a very thin understanding of something that happened before their time, and some of us were there.

But seriously - IA64 was never meant for the desktop...

We would not care if we didn't love this sh*t. Ever since I got my Atari 400 + cassette drive back in 1978 or 1979, I was hooked - went to the just-released 800XL + 5.25" floppy, a Tektronix composite monitor and either an Epson or Star Micronics 9-pin dot matrix printer... that led to a C64 + 5.25" floppy (from my wealthy Aunt in Cali), to the Atari ST (520ST, TOS on disk, later on ROMs, and later ROMs clipped in) and the Amiga 500, another Atari Mega 2 STE (upped to 4MB) with a Megafile 30, Moniterm Viking 19" greyscale + Atari Laser & Calamus SL, and eventually an Amiga 1200, then the Falcon 030 - and then it was the bleak times of the early IBM PC - the 1st system I bought was an 80286 with a whopping 120MB HD.

If I come off as derogatory, I apologize, as my brain never learned to express emotion in text - I hate emojis, even the primitive ones. At the end of the day, I don't really care much what happens here.

Funny thing is, this thread is about the add-in card version of Ponte Vecchio and whether it is Xe HPC (NO) or Xe HP (YES)... PV will be on 7nm... and is still under development.

"Here are the actual EU counts of Intel's various MCM-based Xe HP GPUs along with estimated core counts and TFLOPs:

  • Xe HP (12.5) 1-Tile GPU: 512 EUs [Est: 4096 Cores, 12.2 TFLOPs assuming 1.5 GHz, 150W]
  • Xe HP (12.5) 2-Tile GPU: 1024 EUs [Est: 8192 Cores, 20.48 TFLOPs assuming 1.25 GHz, 300W]
  • Xe HP (12.5) 4-Tile GPU: 2048 EUs [Est: 16,384 Cores, 36 TFLOPs assuming 1.1 GHz, 400W/500W]
A 4-tile GPU would be seriously impressive and that may be why Raja Koduri mentioned it as the 'Father of All GPUs' in a recent tweet"

So it looks like the consumer variant of PV/Xe HPC is... drum roll... Xe HP. It is possible that the 7nm version will be 6 or 8 tiles - and each of those would comprise 1 of the 6 GPUs on that blade.
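
For what it's worth, the quoted TFLOPs figures line up with the usual back-of-the-envelope math - a minimal sketch, assuming 8 ALUs ("cores") per EU and 2 FP32 FLOPs per ALU per clock, with the clock speeds taken from the quote above:

```python
# Back-of-the-envelope Xe HP estimates (assumptions: 8 "cores" per EU,
# 2 FP32 FLOPs per core per clock; clocks are the guesses from the quote).
def estimate(eus, clock_ghz, cores_per_eu=8, flops_per_clock=2):
    cores = eus * cores_per_eu
    tflops = cores * flops_per_clock * clock_ghz / 1000.0  # GFLOPs -> TFLOPs
    return cores, tflops

for tiles, eus, clock_ghz in [(1, 512, 1.5), (2, 1024, 1.25), (4, 2048, 1.1)]:
    cores, tflops = estimate(eus, clock_ghz)
    print(f"{tiles}-tile: {eus} EUs -> {cores} cores, ~{tflops:.1f} TFLOPs FP32")
```

Running it gives roughly 12.3, 20.5, and 36.0 TFLOPs for the 1-, 2-, and 4-tile configurations, which matches the quoted estimates to within rounding.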

See you on the next Thread.
 
I still am not sure why I was banned...
That's unfortunate. Not speaking from any position of authority, but I think that should be clearly communicated.

some are operating with a very thin understanding of something that happened before their time, and some of us were there.
During the late 1990's, I was programming true VLIW chips and consuming all the information I could find about Itanium, in the run-up to its launch. I'm one of those geeks who was actually looking forward to it, and I was disappointed by its failure. It was much more interesting to me than DEC's Alpha, which is what all the cool kids were into, back then.

Ever since I got my Atari 400 + cassette drive back in 1978 or 1979, I was hooked
I started on an 8088 PC clone from the company that would later become Dell. But I didn't really get into programming until I got a 386. I didn't get very interested in PC hardware until years later. In the meantime, I had an after-school job at a PC repair shop.

dot matrix printer
OMG, yes. I had so much tractor-feed paper in my life. Either printouts with the feed still attached, or just printer paper that I grabbed to write on.

At the end of the day, I don't really care much what happens here.
Yeah, it's indeed quite low-stakes, but people have egos...

The way I look at it is I ask myself: what's my goal, here? What I settled on is simply this: to exchange information. With that as my guiding principle, I try to focus on discourse that leads to improving the volume and quality of information on these forums. Good information isn't just correct, but it's verifiably so, by having sources and references. Of course, I don't add sources to most of my posts, but when conversations get heated, then it's time to add citations.

The cool thing is that I learn stuff. In debates, sometimes when I try to confirm what I believe to be true, I discover I'm at least partially wrong. Or, when I'm trying to find sources to back up my assertions, I learn additional things. However, I've found that the more open-minded you are, the more you can learn.

A 4-tile GPU would be seriously impressive and that may be why Raja Koduri mentioned it as the 'Father of All GPUs' in a recent tweet"
What I find most exciting about that is the potential it has to speed up general-purpose compute tasks. In Gen9 GPUs, each EU has 7-way SMT. Assuming Xe is the same, that means you could concurrently run 14k threads on it. And real, CPU-equivalent threads that can each branch independently. I wonder what it'd be like to port a C compiler, a SQL database, or perhaps a stateful packet inspection engine to run on it. That's one of the few advantages I can see of Intel's narrow-EU approach.
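
Just to show where that ~14k number comes from - a quick sketch, assuming Xe keeps Gen9's 7 hardware threads per EU and the rumored 2048-EU, 4-tile configuration:

```python
# Rough thread-count estimate (assumes Xe keeps Gen9's 7-way SMT per EU,
# and the rumored 2048-EU, 4-tile Xe HP part).
THREADS_PER_EU = 7   # Gen9 EUs each expose 7 hardware threads
EUS = 2048           # 4-tile Xe HP, per the quoted estimates

print(EUS * THREADS_PER_EU, "concurrent hardware threads")  # 14336, i.e. ~14k
```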
 
Eh... so, as a layman, is this Aurora stuff basically groundwork to get future consumer mobos that will feature a CPU and a GPU socket, so you can switch out and upgrade each one whenever you want? It would make sense as a cost-efficiency thing: no need to make PCBs and whole GPUs, no need for PCIe 4.0, no need to make different masks/modules for different products, and so on.

Also, a direct connection between CPU and GPU would improve performance by a lot, right?

Or is this going to stay only in datacenters? That would seem like a terrible waste, though.
 
Do you have a source on that? Just curious.
I will see if I can find something, but I remember reading this like 10+ years ago. So far I have only seen forum posts from 2002 or later about how Microsoft worked with AMD on 64-bit Windows back then, and how some people thought Intel could just force the market to use its own version of x86-64, but it was Microsoft back then that was in the driver's seat, not Intel - again, just forum posts from people. I'm also reading, as I look for this, that Intel had to license AMD64 from AMD or be left behind, as by this point a lot of people went with the A64 over the 32-bit-only P4, and for the most part IA64 wasn't getting anywhere because of the lack of backwards compatibility and the need to rewrite software for IA64. The best I can find is that Intel was more or less forced to use AMD64. There seems to be a lot of info about this here: https://www.quora.com/What-is-X86-X64-X86-64-AMD64-and-Intel64-Whats-the-difference-between-them although it doesn't seem to be "official" sources.
 
is this Aurora stuff basically groundwork to get future consumer mobos that will feature a CPU and a GPU socket, so you can switch out and upgrade each one whenever you want?
AFAIK, GDDR memory won't work with DIMMs, and I'm not sure you can even get away with using it from socketed chips. So, it could be limited just to HBM2?

no need for PCIe 4.0
Uh, they have to communicate, somehow. They could use CXL or one of the competing standards, but my understanding is that the PHY layer is basically the same.

no need to make different masks/modules for different products and so on.
You mean different mobos, right? I think mobo fabrication overheads are fairly low, by comparison with chips.

Also, a direct connection between CPU and GPU would improve performance by a lot, right?
I think we haven't seen evidence of that, in APUs, but it really depends on what you're doing.

Games are designed so the CPU queues up batches of asynchronous commands for the GPU, which insulates them from CPU <-> GPU latency. For some hybrid compute tasks (i.e. involving both the CPU and GPU), having a lower-latency connection could be more important.
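
As a purely illustrative toy model (the latencies and counts below are made-up numbers, not measurements), here's why batching command submissions insulates a game from the CPU <-> GPU round trip:

```python
# Toy model of command-submission cost (illustrative numbers, not measurements).
ROUND_TRIP_US = 5.0    # assumed CPU <-> GPU submission round trip, microseconds
GPU_EXEC_US = 2.0      # assumed GPU execution time per command
NUM_COMMANDS = 10_000  # commands issued over a frame
BATCH_SIZE = 500       # commands recorded per batch in the batched model

# Naive model: submit each command individually and wait for it to complete.
per_command_total = NUM_COMMANDS * (ROUND_TRIP_US + GPU_EXEC_US)

# Batched model: pay the round trip once per batch; the GPU then works
# through the whole batch without further CPU involvement.
batches = NUM_COMMANDS // BATCH_SIZE
batched_total = batches * ROUND_TRIP_US + NUM_COMMANDS * GPU_EXEC_US

print(f"per-command submission: {per_command_total / 1000:.1f} ms")  # 70.0 ms
print(f"batched submission:     {batched_total / 1000:.1f} ms")      # 20.1 ms
```

The absolute numbers are arbitrary; the point is that the round-trip term shrinks with the batch size, which is why games are far less sensitive to CPU <-> GPU link latency than a naive submit-and-wait workload would be.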
 
Microsoft worked with AMD on 64-bit Windows back then, and how some people thought Intel could just force the market to use its own version of x86-64, but it was Microsoft back then that was in the driver's seat, not Intel
Certainly, MS had just invested huge amounts of resources porting Windows NT & their development toolchain to IA64. And I'm sure they weren't turning a profit from those efforts. So, if Intel did try to get MS to adopt an incompatible x86-64, they would be advocating from a much weaker position.
 
AFAIK, GDDR memory won't work with DIMMs, and I'm not sure you can even get away with using it from socketed chips. So, it could be limited just to HBM2?
They do state unified memory architecture on the slide; this might mean DDR4, though, but it would still cover a lot of use cases.
They could stick this stuff into a NUC-type case and sell it as a console.