Question Intel's Xe Graphics Architecture to Support Hardware-Accelerated Ray Tracing

OctaBotGamer

Prominent
Apr 30, 2019
Yup, Intel is finally getting into dedicated graphics cards. It will be a brand new thing, but I expect enabling ray tracing will drop the fps to below 10 :tearsofjoy: I also think that will only last for a very short period of time and will probably be solved by the time the graphics card is actually launched, rest assured.
 
This is not surprising. Intel demonstrated real-time ray tracing back in 2007. As their team includes one of the original Larrabee architects, it would make sense for them to include hardware support.

View: https://www.youtube.com/watch?v=blfxI1cVOzU


Intel's original approach was non-ASIC small CISC cores (Larrabee/KC/KL); this sacrifices efficiency for a more flexible approach that can be updated.

But their implementation is no different from NVIDIA's or AMD's: define a bounding box for the ray-trace hit test, extend a ray forward from the viewer into the scene, then calculate caustics/IOR, ambient/diffuse, single reflections, shadows, etc.

I don't want to say these are elementary calculations. They aren't. The algorithms have been around for decades now; I used to dive into the POV-Ray source to see how it worked. But the mechanisms for speeding it up with ASIC circuits have remained out of the spotlight because of the amount of circuitry demanded by even the basic ray hit test and shading calculations.
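To make that concrete, here is roughly what that basic ray hit test looks like in software: a slab-method ray vs. axis-aligned bounding box check, the kind of test a BVH traversal runs millions of times per frame. A minimal Python sketch, purely illustrative, not any vendor's actual implementation:

```python
import math

def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab-method test: does the ray origin + t*direction (t >= 0)
    hit the axis-aligned box [box_min, box_max]?"""
    t_near, t_far = 0.0, math.inf
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            # Ray is parallel to this slab: miss unless the origin lies inside it.
            if o < lo or o > hi:
                return False
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            t_near = max(t_near, t1)
            t_far = min(t_far, t2)
            if t_near > t_far:
                return False
    return True

# Example: a ray shot down the +X axis against a unit box centered at (5, 0, 0).
print(ray_aabb_hit((0, 0, 0), (1, 0, 0), (4.5, -0.5, -0.5), (5.5, 0.5, 0.5)))  # True
```

It is only a handful of divides, compares, and min/max operations, but it has to run for every ray against every BVH node the ray touches, which is why baking it into fixed-function circuits starts to look attractive.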

We've come to a point now where adding those circuits might be more beneficial to image quality than say creating more raw FLOPS throughput. (For several technical reasons)

In the future you will see architectures perform some interesting balancing acts. A lot of this depends on which takes greater hold. Ray Tracing OR VR. You can't do both.

If I were a betting man and ray tracing does take off, I would say we'll see a new sub-standard of chiplet architecture where chiplets are assigned to rendering viewports on draw calls: one dedicated to the left lens, one to the right for VR. I've been working on algorithms to solve the efficiency issues here. At what point, when dealing with Z-buffer depth comparisons, can you render a background with one draw call and duplicate it across both viewports? A 0.5-pixel calculated distance change in the main FOV with a 1.5-pixel max delta at the edges of the common FOV?
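To put some numbers behind that question: for a pinhole-style per-eye projection, the screen-space shift of a point between the left and right views is roughly focal_length_px * IPD / depth, so you can estimate the depth beyond which the two viewports differ by less than half a pixel. A rough Python sketch; the resolution, FOV, and IPD below are made-up assumptions, not any particular headset:

```python
import math

# Assumed HMD parameters (illustrative only).
IPD_M = 0.063          # stereo baseline / interpupillary distance, metres
EYE_RES_X = 1080       # horizontal pixels per eye
EYE_FOV_DEG = 100.0    # horizontal field of view per eye, degrees

# Pinhole focal length in pixels for that resolution and FOV.
FOCAL_PX = (EYE_RES_X / 2) / math.tan(math.radians(EYE_FOV_DEG / 2))

def stereo_disparity_px(depth_m):
    """Pixel shift of a point between the left-eye and right-eye views."""
    return FOCAL_PX * IPD_M / depth_m

def shared_draw_depth(threshold_px=0.5):
    """Depth beyond which the two views differ by less than threshold_px,
    i.e. where duplicating one background draw across both viewports
    might be visually acceptable."""
    return FOCAL_PX * IPD_M / threshold_px

for z in (1, 5, 20, 50, 100):
    print(f"depth {z:>3} m -> disparity {stereo_disparity_px(z):5.2f} px")
print(f"under 0.5 px beyond ~{shared_draw_depth():.0f} m")
```

With those assumed numbers, anything past roughly 55-60 m shifts by under half a pixel between the eyes, which is the regime where rendering the background once and duplicating it across both viewports starts to sound plausible.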
 

bit_user

Polypheme
Ambassador
the Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of APIs and libraries.
This could actually be taken to mean it's not coming in their first gen products. The fact that they cite their "roadmap", rather than being more specific, could be very deliberate.

Also, the term "hardware acceleration" can be taken to mean anything that runs on the GPU, itself.

So, we could actually see first gen products that implement it in software, like what Nvidia recently enabled on some of their GTX 1xxx-series GPUs, followed by true hardware support in the second generation. This progression also ties in with talking about a "roadmap".
 

bit_user

Polypheme
Ambassador
The cards will come wielding the 10nm process and should arrive in 2020
Are we sure about that?

I know GPUs run at lower clocks than most CPUs, but their dies are also bigger and tend to burn more power. So, given that their roadmap shows no desktop CPUs @ 10 nm in 2020, and confirms there will be Xe chiplets at 14 nm, I'm a bit skeptical we'll see Xe at 10 nm, in 2020.
 

bit_user

Polypheme
Ambassador
In the future you will see architectures perform some interesting balancing acts. A lot of this depends on which takes greater hold. Ray Tracing OR VR. You can't do both.
Oh, sure you can. You just have to dial back the complexity, accordingly. You won't get fancy effects like global illumination, but the flip side is that ray tracing is a much more natural fit for foveated rendering.
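As a toy illustration of "dialing back the complexity" with foveated rendering: spend the ray budget near the gaze point and taper it off with angular eccentricity. The falloff constant and pixels-per-degree figure below are made-up assumptions, not measurements from any real headset:

```python
import math

def rays_per_pixel(px, py, gaze_px, gaze_py,
                   px_per_degree=20.0, max_spp=8, min_spp=1, falloff_deg=15.0):
    """Toy foveated ray budget: full sample count at the gaze point,
    decaying exponentially with eccentricity (a very rough acuity model)."""
    dist_px = math.hypot(px - gaze_px, py - gaze_py)
    eccentricity_deg = dist_px / px_per_degree
    spp = max_spp * math.exp(-eccentricity_deg / falloff_deg)
    return max(min_spp, round(spp))

# Example: gaze fixed at the center of a 2160-pixel-wide eye buffer.
for x in (1080, 1280, 1600, 2000):
    print(f"pixel x={x}: {rays_per_pixel(x, 540, 1080, 540)} rays")
```

Rasterizers can approximate this with variable-rate shading tiles, but when you are already tracing rays, a per-pixel budget like this is trivial to express, which is why ray tracing and foveation fit together reasonably well.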

I actually think their move towards ray-tracing might've even been a bet on VR going more mainstream, by now. Consider that the specs for Turing were probably baked back in 2016, when the VR hype cycle was peaking.
 

joeblowsmynose

Distinguished
I know everyone is thinking about gaming, but I'm quite confident Intel is not at this time. "Raytracing" is that cool buzzword that, if you don't toss it around in some presentations, the crowds will think you're a loser. I was into programming some basic raytracing back in my college days - on a 486 :) -- funny how everyone thinks this is new technology. ...

I picked apart the game engine initialization and other settings on one of the early S.T.A.L.K.E.R. games and found that the game engine supported raytracing ... this was over ten years ago - the engine developers actually added that feature in (I guess the game engine was named "X-Ray" for a reason). It never worked very well because the hardware couldn't cast enough rays and it messed up the shadows, but I was able to enable it and play with it a bit and get some basic results (at 10fps) ...

Raytracing has a lot more uses than just making reflections in games look prettier; it can be used for all sorts of simulations and complex mathematical calculations. If Intel is adding this at the hardware level, then it is these capabilities for the datacenters that they are after. Why market a video card for $500 when you can sell one for $5,000 to a data center? AMD already tried to convert their compute tech into a gaming device ... it's pretty meh, and bloated as a gaming card (Vega), but I am sure AMD is smiling at their massive margins on the MIxx cards.

Every CPU and GPU out there can calculate raytracing - I have been using raytracing for well over a decade to render photorealistic images; the only question is, can it be done fast enough to be useful? That's where having dedicated hardware support, as opposed to software, can come in handy ... but by sometime in 2020, Intel may be showing up late to the party when it comes to offering raytracing features.
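As a small example of the "simulations and complex mathematical calculations" side of things: the same ray-casting machinery can drive Monte Carlo estimates, e.g. estimating the solid angle a sphere subtends from a point by firing random rays and counting hits. A quick Python sketch just to illustrate the idea; the geometry and ray count are arbitrary:

```python
import math
import random

def solid_angle_mc(center, radius, n_rays=200_000, seed=1):
    """Monte Carlo estimate of the solid angle a sphere subtends from the origin,
    found by casting uniformly distributed rays and counting how many hit it."""
    rng = random.Random(seed)
    cx, cy, cz = center
    hits = 0
    for _ in range(n_rays):
        # Uniform random direction on the unit sphere.
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        dx, dy, dz = s * math.cos(phi), s * math.sin(phi), z
        # Ray vs. sphere hit test for a ray starting at the origin.
        t = dx * cx + dy * cy + dz * cz        # distance along the ray to closest approach
        if t <= 0.0:
            continue                            # sphere is behind this ray
        closest_sq = (cx * cx + cy * cy + cz * cz) - t * t
        if closest_sq <= radius * radius:
            hits += 1
    return 4.0 * math.pi * hits / n_rays

d, r = 10.0, 3.0
exact = 2.0 * math.pi * (1.0 - math.sqrt(1.0 - (r / d) ** 2))
print(f"Monte Carlo: {solid_angle_mc((0.0, 0.0, d), r):.4f} sr   exact: {exact:.4f} sr")
```

The same hit-test-and-count pattern shows up in visibility and occlusion queries, radiative transfer, and particle transport codes, which is part of why ray-tracing hardware is interesting to datacenters and not just to gamers.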
 

bit_user

Polypheme
Ambassador
"Raytracing" is that cool buzzword that if you don't toss around in some presentations, the crowds will think you are a loser. I was into programming some basic raytracing back in my college days - on a 486 :) -- funny how everyone thinks this is new technology. ...
Well, in many of their presentations on RTX, Nvidia has been giving a brief history lesson on ray tracing. Here's the official launch video:

View: https://www.youtube.com/watch?v=Mrixi27G9yM


In fact, they even made a real life replica of one of the first ray traced scenes - the one with the translucent and reflective spheres over the red-and-yellow checkerboard, from 1979. Just search for "turner whitted nvidia replica" and you can find photos of people posing in it.

BTW, I played with POV-ray, back in the day. Luckily, I had a math coprocessor for my 386. I modeled a few scenes with graph paper and a text editor.
 
This could actually be taken to mean it's not coming in their first gen products. The fact that they cite their "roadmap", rather than being more specific, could be very deliberate.

Also, the term "hardware acceleration" can be taken to mean anything that runs on the GPU, itself.

So, we could actually see first gen products that implement it in software, like what Nvidia recently enabled on some of their GTX 1xxx-series GPUs, followed by true hardware support in the second generation. This progression also ties in with talking about a "roadmap".

i think that "hardware acceleration" will meant specific hardware to calculate RT. i don't think intel want to waste time dealing with software based solution.

Are we sure about that?

I know GPUs run at lower clocks than most CPUs, but their dies are also bigger and tend to burn more power. So, given that their roadmap shows no desktop CPUs @ 10 nm in 2020, and confirms there will be Xe chiplets at 14 nm, I'm a bit skeptical we'll see Xe at 10 nm, in 2020.

I've heard rumors that Intel is looking at other foundries to manufacture their GPUs.

AMD already tried to convert their compute tech into a gaming device ... it's pretty meh, and bloated as a gaming card (Vega), but I am sure AMD is smiling at their massive margins on the MIxx cards.

Maybe the margin is a bit bigger on those pro cards, but for AMD the most money still likely comes from the gaming GPU market due to volume. We saw how sad it is right now that they have less than 20% market share in gaming discrete GPUs, but the situation is probably a whole lot more cruel in the professional market. Maybe they should exit the pro GPU market and focus entirely on gaming performance in their future designs.
 

bit_user

Polypheme
Ambassador
i think that "hardware acceleration" will meant specific hardware to calculate RT. i don't think intel want to waste time dealing with software based solution.
Maybe, but how do you explain them talking about their "roadmap", rather than simply coming out and saying their datacenter Xe GPUs will have it?

Maybe the margin is a bit bigger on those pro cards, but for AMD the most money still likely comes from the gaming GPU market due to volume. We saw how sad it is right now that they have less than 20% market share in gaming discrete GPUs, but the situation is probably a whole lot more cruel in the professional market. Maybe they should exit the pro GPU market and focus entirely on gaming performance in their future designs.
AMD needs the cloud market, because it's growing while PCs are still on an ever-downward trend. More importantly, AMD is the current supplier for Google's Stadia, and such game streaming markets stand to threaten even their console market.

So, AMD cannot afford to walk away from cloud, whether it's in GPU-compute, or more conventional graphics workloads. I'll bet they also love to sell datacenter customers on the combination of Epyc + Vega.

What AMD needs to stop doing is trying to play catchup with deep learning. Each generation, they implement what Nvidia did last, and Nvidia is already moving on from that. AMD needs to really leap-frog each thing Nvidia does to have any hope of catching them in deep learning. Unfortunately, Nvidia has been doing so much work on the software part of their solution that the situation is starting to look bleak for AMD.

But there's also a 3rd area of the server GPU market, which I'll call the "conventional GPU compute" market. These folks just care about memory bandwidth and fp64 throughput. And on that front, AMD really did best Nvidia's current leading solution (V100), and probably at a much lower price. The only question is how long until Nvidia will replace the V100, which I'm guessing will happen this year.
 
For stuff like Stadia, AMD does not need the compute portion of the GPU; they just need the GPU to be fast at game rendering. For AMD, maybe it is better to say goodbye to GPGPU altogether. AMD's current top compute card is only slightly faster than Nvidia's Tesla V100, and even if they are cheaper, almost no one really cares about them. Epyc + Vega seems nice, but in reality those that opt for Epyc instead of Xeon are still going to use Nvidia Tesla or Intel Phi.

Remember when the AMD FirePro S9150 was the fastest compute accelerator (in both FP32 and FP64) on the market? I still remember when they said that in 6 to 12 months the top 20 of the Top500 list would be dominated by AMD FirePro. That never happened. Instead, most clients chose to wait for Intel's and Nvidia's next-gen stuff and kept using Nvidia's aging Kepler accelerators in their machines. Hence, when Nvidia came out with GP100 in 2016, AMD did not see a rush to replace the S9150. Yeah, they still got some contracts here and there, but really there was nothing major for them. If I'm not mistaken, AMD's last major win was the SANAM supercomputer, which employed Tahiti-based accelerators; for the S9150 I don't think even one machine ever entered the Top500 list. And we know that spec-wise AMD's Hawaii simply destroyed Nvidia's fastest accelerator at the time (GK210-based), be it in raw performance or efficiency.
 

bit_user

Polypheme
Ambassador
For stuff like Stadia, AMD does not need the compute portion of the GPU; they just need the GPU to be fast at game rendering.
I disagree. The Stadia GPUs need to be located physically close to the users, and there needs to be enough of them to satisfy peak demand. That means a lot of idle GPU cycles, most of the time. So, I think Google would be foolish not to run background compute jobs on idle Stadia nodes.

Granted, the fact that these are basically overclocked Vega 56 means they can't do anything fp64-intensive, but I still think it's too much "free" compute power to ignore.

those that opt for Epyc instead of Xeon are still going to use Nvidia Tesla or Intel Phi.
...except Intel cancelled Phi. They replaced it with that 2x 24-core Xeon Platinum multi-die monster, and that's really not competitive at GPU-oriented workloads.
 
except Intel cancelled Phi. They replaced it with that 2x 24-core Xeon Platinum multi-die monster, and that's really not competitive at GPU-oriented workloads

And yet they still killed AMD in that market. What they wanted to show is that x86 can also do massive parallelism like a GPU without being a GPU (even if they can't compete head to head). But they specifically touted the ease of programming on Phi, because Phi is exactly the same as a CPU; both are x86-based. I think this is what made Phi end up being more successful than AMD's FirePro. Years ago, some developers said it was more problematic to run OpenCL on AMD GPUs than it was on a regular CPU. Since Phi is x86, it did not face the same issues that FirePro did with OpenCL.

In the end it was like this: if the client was using OpenCL, between Intel and AMD they would end up choosing the Intel solution. If they really wanted a GPU for their system, they most often picked Nvidia because of Nvidia's established ecosystem with CUDA in the professional world.

IMHO AMD most likely would have had much bigger success in the compute accelerator market if they had made their accelerator card the same way Intel did: x86-based instead of a GPU. CUDA is so deeply rooted in the professional world that even when Nvidia's solution was soundly beaten in raw power and efficiency, clients were willing to hold onto it and wait for Nvidia's next-gen solution that only came years later. GK110 came out in late 2012. Hawaii came out in late 2013, with the compute solution available in 2014. For the majority of professional clients, instead of opting for the readily available AMD S9150 in the 2014/2015 time frame, they chose to hold on and keep using Nvidia's GK110/210 until Nvidia replaced them with GP100 in 2016!
 

bit_user

Polypheme
Ambassador
And yet they still killed AMD in that market. What they wanted to show is that x86 can also do massive parallelism like a GPU without being a GPU (even if they can't compete head to head). But they specifically touted the ease of programming on Phi, because Phi is exactly the same as a CPU; both are x86-based. I think this is what made Phi end up being more successful than AMD's FirePro.
Don't you feel even a little embarrassed talking about Xeon Phi as a successful product, when they not only discontinued the product line, but went so far as to cancel the last announced iteration?

https://www.tomshardware.com/news/intel-knights-mill-xeon-phi-retire,39276.html

Apparently, this "ease of programming" didn't offer enough of an advantage to meet their business objective for the product line. Anecdotally, word is that Phi's strength is that it can run legacy code, but the downside is that it takes a fair amount of effort to get good utilization out of it. That's because conventional x86 programming methods, libraries, and tools aren't designed to scale up to such high core counts, whereas GPU programming models like CUDA and OpenCL are tailor-made for the problem. By offering full OpenCL support, Intel tacitly acknowledges this fact.

IMHO AMD most likely would have had much bigger success in the compute accelerator market if they had made their accelerator card the same way Intel did: x86-based instead of a GPU.
Don't you feel even a little embarrassed suggesting AMD should adopt the approach Intel is walking away from, and drop the approach (i.e. purpose-built GPU) that Intel is walking towards?

You seem so disconnected from Intel's current HPC compute plans that it's hard to take you seriously. Their Xe plans have been public for about a year, and they even delayed their contract to build the Aurora supercomputer so they could substitute Xe-based accelerators for the previously-planned Xeon Phi.

https://www.tomshardware.com/news/intel-exascale-aurora-supercomputer-xe-graphics,38851.html

CUDA is so deeply rooted in the professional world that even when Nvidia's solution was soundly beaten in raw power and efficiency, clients were willing to hold onto it and wait for Nvidia's next-gen solution that only came years later.
Ah, the wonders of vendor lock-in!

Let's see if Intel can replicate CUDA's success, with their upcoming "oneAPI". I presume that's one of its goals...
 
Yeah, I haven't followed the HPC news very closely for quite some time. But that does not change the fact that Phi (not the one being canceled for Aurora) was already very successful against AMD's compute solution; they just couldn't beat Nvidia at the time. And many thought that Phi would never stand a chance against GPUs, be it from Nvidia or AMD. If AMD had used the same approach as Intel, they probably would have won a lot more contracts back then. Intel was able to beat AMD GPUs when it came to design wins, but not Nvidia GPUs.

Also, it seems AMD just won the new contract, code name Frontier, that will use both Epyc and Instinct. To be honest, it really surprises me. We'll see how it goes when it actually comes online. I still remember what they said with Titan before: AMD won the contract with Bulldozer back then, and when they won it, Bulldozer seemed like an amazing processor, at least on paper. After that, pretty much every supercomputer used Xeon as long as it was using an x86 CPU.
 
Isn't Xe based on AMD anyway? Hasn't AMD licensed out their hardware to Intel?

View: https://www.reddit.com/r/Amd/comments/7bv3r1/amdintel_cross_licensing_agreement_explained/

AFAIK Intel only ever licensed GPU-related IP from Nvidia, which originated from the initial agreement back in 2004. It was supposed to expire in 2011 but was extended to 2017 as part of the settlement of the dispute Intel and Nvidia had back in 2008 over Nvidia's rights to make x86 chipsets for Intel. They said Intel would have to renew the license after the deal with Nvidia expired in 2017, but it turns out Intel keeps access to Nvidia's patents even after the licensing agreement expired in 2017. Meaning Intel will have no problem building their own GPU even if they did not license any patents from AMD or Nvidia. And let's remember that over the years Intel also kept building their own GPU IP. If Intel really had licensed any GPU IP from AMD, we would know about it by now.
 

bit_user

Polypheme
Ambassador
Yeah, I haven't followed the HPC news very closely for quite some time. But that does not change the fact that Phi (not the one being canceled for Aurora) was already very successful against AMD's compute solution.
They cancelled the entire Xeon Phi product line! What's notable about Knights Mill is that they already announced it and were close to delivering it, before cancelling it. That's rare. It's one thing to wind down a product line, but it's another to completely pull the rug out from an announced product generation, just as it's nearing its market introduction. That's not what you do with a successful product line.

If AMD had used the same approach as Intel, they probably would have won a lot more contracts back then.
You mean the approach Intel tried and failed? Yeah, I don't see that happening.

If anything, it's Intel who's copying AMD's approach. First, with pure (i.e. non-x86) dGPUs, and now their Cascade Lake takes a page from EPYC's playbook to make a multi-die server chip.
 

bit_user

Polypheme
Ambassador
Isn't Xe based on AMD anyway? Hasn't AMD licensed out their hardware to Intel?
At present, Intel GPUs look very different from AMD and Nvidia GPUs. We'll have to see, but I think Xe will necessarily look a bit more like them. Probably more a case of convergent evolution than anything else.

You don't have to take my word for it. If you're curious (or skeptical), here's some good reading material:
https://software.intel.com/sites/de...ure-of-Intel-Processor-Graphics-Gen9-v1d0.pdf
https://www.hotchips.org/wp-content...b/HC29.21.120-Radeon-Vega10-Mantor-AMD-f1.pdf
 
They cancelled the entire Xeon Phi product line! What's notable about Knights Mill is that they already announced it and were close to delivering it, before cancelling it. That's rare. It's one thing to wind down a product line, but it's another to completely pull the rug out from an announced product generation, just as it's nearing its market introduction. That's not what you do with a successful product line.


You mean the approach Intel tried and failed? Yeah, I don't see that happening.

If anything, it's Intel who's copying AMD's approach. First, with pure (i.e. non-x86) dGPUs, and now their Cascade Lake takes a page from EPYC's playbook to make a multi-die server chip.

It might have failed to take on GPUs, but looking at how they were able to erode Nvidia's dominance and at the same time score design wins that might otherwise have gone to AMD back then, to me that is not a complete failure. Before Intel pushed Phi into the market, that market was pretty much a duopoly between Nvidia and AMD. There was an article at Top500 listing the market share between Nvidia, Intel, and AMD for compute accelerators: Nvidia owned roughly 60% of the market, with pretty much all of the rest owned by Intel; AMD's share back then was listed as low as 0.1%. Looking at that market share made me think that if AMD had used the same approach as Intel, they probably would have been able to score more design wins. Also, since OpenCL was also made to run on CPUs, AMD probably would have had much less trouble with OpenCL than they did with their FirePro.
 

joeblowsmynose

Distinguished
...
Maybe they should exit the pro GPU market and focus entirely on gaming performance in their future designs.

Or maybe steer Vega into the "pro" world exclusively, and bring out some new gaming only focused architecture to market. They could call this new architecture "Navi" and maybe they could develop it with the console makers in mind to ensure that the focus is 100% on gaming with no wasted resources. Just an idea I had. ;)
 
Or maybe steer Vega into the "pro" world exclusively, and bring out some new gaming only focused architecture to market. They could call this new architecture "Navi" and maybe they could develop it with the console makers in mind to ensure that the focus is 100% on gaming with no wasted resources. Just an idea I had. ;)

Problem is, AMD might not have the resources to do that. They just finally recovered from the financial woes they had for more than a decade, so they probably need to be very careful with what they do next. Their CPU division will surely need more and more R&D from now on. And remember, AMD pushed things like async compute so a compute-oriented design like GCN would be more useful in gaming workloads. Thanks to async compute, Nvidia also went back to a compute-oriented design with Turing; Turing is simply Volta without the dedicated FP64 cores (which were replaced with RT cores).
 

joeblowsmynose

Distinguished
Problem is, AMD might not have the resources to do that. They just finally recovered from the financial woes they had for more than a decade, so they probably need to be very careful with what they do next. ...

It was a tongue-in-cheek comment -- AMD is already doing everything I just said. Navi comes out on 7nm later this year and is expected to have a far better price/performance ratio than Vega could ever provide, as the design was done with the consoles in mind, so the package will be efficient and relatively inexpensive to make (vs. Vega), and those qualities should carry over seamlessly to a mid-tier desktop card. I think most people are plenty happy with a mid-range GPU, but no one wants to pay $400 for one ... this should be where Navi shines, but we'll have to wait until launch to see how it ultimately stacks up.
 

bit_user

Polypheme
Ambassador
Navi comes out on 7nm later this year and is expected to have a far better price/performance ratio than Vega could ever provide,
Hmmm... why did Vega fall short? I think they took their eye off the ball of power-efficiency. When Nvidia really pulled ahead was with the Maxwell architecture, which is (not coincidentally) when they made power-efficiency a top priority.

As for price/performance, I think AMD underestimated demand for HBM2, which might've finally fallen more in line with supply. I don't know if you saw, but Vega 56 is recently going for as little as $300. If it had been selling at such a price for long (and were just a bit more power-efficient), I think it'd have been a lot more popular.
 
Hmmm... why did Vega fall short? I think they took their eye off the ball of power-efficiency.

AMD did care about power efficiency, but the problem is they needed to keep the GCN design intact for the 8th-gen consoles. They couldn't make drastic changes or come out with a completely new architecture, because by doing so they would lose the optimization benefit from the consoles. I think AMD once mentioned that the optimization done on the console version of Doom made it easy for them to optimize performance on Vega for the PC version. So since they want that optimization benefit, the only way they could improve their efficiency in a significant way was by using a smaller node. Nvidia is not restricted in this way, so they can improve efficiency both by making architectural changes and by utilizing smaller nodes. That's why AMD can't keep up with Nvidia when it comes to power efficiency.

Actually, AMD is very aware they can't fight Nvidia on the same architecture for 8 to 10 years! I still remember what they said when the 8th-gen consoles first came out: they actually expected the life cycle of the 8th gen to be shorter than the 7th gen, like 5 years or so. A day later Sony responded that they expect the 8th gen to last much longer than the 7th gen, saying how you can do more with software, bla bla bla. So to me it seems there is a bit of a conflict between AMD's and the console makers' interests there.

But personally I think that for the next-gen consoles, AMD will probably no longer be banking on "PC and console having the same architecture".