News: Coding Mistake Made Intel GPUs 100X Slower in Ray Tracing

Not to be obnoxious, but saying A is 100x faster than B makes sense; however, saying that B is 100x slower than A makes no sense.

If I have $1, and you have 100x more, you have $1x100=$100. If you have $100, and I have 100x fewer, I can't write that equation.

The correct way to say it is that the old driver was so bad that it only provided 1/100, or 1%, of the ray tracing capability of the fixed driver.

That said, I think the media has been unfairly hard on Intel. Considering AMD and NVIDIA have decades of experience developing drivers for dedicated 3D rendering GPUs, the fact that Intel released a competitive solution as quickly as they have is impressive. And I wager that their Linux support will quickly eclipse the competition's.
 
Not to be obnoxious, but saying A is 100x faster than B makes sense; however, saying that B is 100x slower than A makes no sense.

If I have $1, and you have 100x more, you have $1x100=$100. If you have $100, and I have 100x fewer, I can't write that equation.

The correct way to say it is that the old driver was so bad that it only provided 1/100, or 1%, of the ray tracing capability of the fixed driver.

That said, I think the media has been unfairly hard on Intel. Considering AMD and NVIDIA have decades of experience developing drivers for dedicated 3D rendering GPUs, the fact that Intel released a competitive solution as quickly as they have is impressive. And I wager that their Linux support will quickly eclipse the competition's.
I agree bro! The writer of this article/blog makes no sense. Shame.
 
Not to be obnoxious, but saying A is 100x faster than B makes sense; however, saying that B is 100x slower than A makes no sense.

If I have $1, and you have 100x more, you have $1x100=$100. If you have $100, and I have 100x fewer, I can't write that equation.

The correct way to say it is that the old driver was so bad that it only provided 1/100, or 1%, of the ray tracing capability of the fixed driver.

That said, I think the media has been unfairly hard on Intel. Considering AMD and NVIDIA have decades of experience developing drivers for dedicated 3D rendering GPUs, the fact that Intel released a competitive solution as quickly as they have is impressive. And I wager that their Linux support will quickly eclipse the competition's.
In the equation 100=100x; x=1
 
Must be some rank Amateur Coders if you forgot to do your RT work in Local VRAM instead of the PC's System RAM for all these years of Driver development.
Intel hasn't made a discrete GPU in 20 years, and even in the unlikely event that one of the coders from their early cards is working on Arc, that experience isn't very relevant.

If making graphics cards were easy, then 100s of companies would have jumped in when 3070s were selling for $1500.
 
It's both shocking and it isn't. AMD has spent literally decades optimizing drivers for large GPU designs, while Intel has only had to do mediocre iGPUs that only need to be good enough for video playback and general office tasks. Nobody cares if PowerPoint only gets 30 fps.

The A750 is a massive GPU, and from what I understand of the design it should be punching around 3070 Ti levels. It's all down to the driver, and Intel knows they can't compete as is, and they're sitting on a ton of back stock in a now-depressed GPU market. They're going to take a bath on this one.

If you're a Linux user, this card could prove to be a great value: open drivers that will improve dramatically over time, decent Vulkan support, and, considering DXVK, probably a good option for Linux gaming. And low power usage compared to higher-end options, though higher than the comparable 3060.

Opportunities for compute and encoding chores as well. It would be a gamble though. Also, it only works well paired with a 12th-gen Intel CPU, so it's not good for AMD rigs, which leaves me out. Also, big/little support in Linux likely won't start landing in stable releases for a few months still.

I guess it's just a mixed bag, but maybe an opportunity for some.
 
Must be some rank Amateur Coders if you forgot to do your RT work in Local VRAM instead of the PC's System RAM for all these years of Driver development.
The details on this are slim, but if it only affected the Linux drivers, how much of a priority do you think ray tracing performance in Linux is for the Intel driver development team right now? If they can't release the hardware globally because the Windows drivers are so terrible, then Intel has probably outsourced the Linux driver development to AMD's driver team.
 
I think it's overly kind to call the Intel drivers "immature" if they're making this kind of mistake.
This isn't some difficult tweak to optimize a specific application. They literally forgot to enable the VRAM.
"Making sure the ram works" is the kind of thing you check before you announce a product, let alone release it to select markets.
I get that this is a ray-tracing-specific oversight, and that DG1/DG2 didn't have ray tracing, but they still should have figured this out, behind the scenes, in the timeframe of shipping DG2.
You're talking 2+ years of active development, and shipping cards to China, before anybody checked to see if all of the hardware can be turned on. Which is, again, something you should do at the proof-of-concept prototyping stage, which happens before you finalize hardware and actually start the real software/driver development.
My point is, they should have had this kind of thing worked out before COVID could have been an excuse for delayed development.

It's like Intel hired the Windows 11 team to develop this hardware. Get it together, guys.
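For what it's worth, "forgot to enable the VRAM" mostly comes down to which memory heap the driver allocates ray tracing buffers from. Here is a minimal sketch of that decision in generic Vulkan terms; the helper name and the fallback policy are illustrative assumptions, not Intel's actual Mesa patch.

```c
#include <vulkan/vulkan.h>
#include <stdint.h>

/* Illustrative helper: pick a memory type for a ray tracing buffer,
 * preferring device-local heaps (VRAM) over host memory (system RAM).
 * This is not the real driver code; it only shows that "which heap you
 * allocate from" is a tiny decision with a huge performance impact. */
static int32_t pick_memory_type(VkPhysicalDevice phys_dev,
                                uint32_t allowed_type_bits)
{
    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(phys_dev, &props);

    /* First pass: insist on DEVICE_LOCAL memory (VRAM). */
    for (uint32_t i = 0; i < props.memoryTypeCount; i++) {
        if ((allowed_type_bits & (1u << i)) &&
            (props.memoryTypes[i].propertyFlags &
             VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT))
            return (int32_t)i;
    }

    /* Fallback: any allowed type, i.e. system RAM. Functionally correct,
     * but this is the slow path the bug effectively left ray tracing on. */
    for (uint32_t i = 0; i < props.memoryTypeCount; i++) {
        if (allowed_type_bits & (1u << i))
            return (int32_t)i;
    }
    return -1;
}
```

The fast path and the slow path are only a few lines apart, which is how a regression like this can slip through without anything visibly breaking.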
 
In the equation 100=100x; x=1
Your equation is accurate. Doesn't mean it's correct to say that 1 is one hundred times smaller than 100. You would say 1 is one hundredth of 100.

Not saying it's not done all the time, but it shouldn't be.
I think it's overly kind to call the Intel drivers "immature" if they're making this kind of mistake.
This isn't some difficult tweak to optimize a specific application. They literally forgot to enable the VRAM.
"Making sure the ram works" is the kind of thing you check before you announce a product, let alone release it to select markets.
I get that this is a ray-tracing-specific oversight, and that DG1/DG2 didn't have ray tracing, but they still should have figured this out, behind the scenes, in the timeframe of shipping DG2.
You're talking 2+ years of active development, and shipping cards to China, before anybody checked to see if all of the hardware can be turned on. Which is, again, something you should do at the proof-of-concept prototyping stage, which happens before you finalize hardware and actually start the real software/driver development.
My point is, they should have had this kind of thing worked out before COVID could have been an excuse for delayed development.

It's like Intel hired the Windows 11 team to develop this hardware. Get it together, guys.
Pretty sure the article said this was introduced in a driver update after the product released.

From a software development standpoint, this can happen for all sorts of reasons.

One would hope they would fully test the compiled update before release, but all too often changes are tested individually and only the install is tested after the package is created. If two updates conflict, or the wrong commit is pulled, or a commit is left out, stuff like this happens.

To make it clear, this is not at all unusual. I have rarely seen updates for just about any application undergo any comprehensive testing outside of the changes in the update.
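As a rough sketch of what "fully testing the compiled update" could look like, a packaged-driver smoke test might compare a measured run against the previous release's numbers. The workload stub, the baseline figure, and the tolerance below are all made-up placeholders, not anything from the article.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Placeholder workload: stands in for running a ray-traced test scene
 * through the freshly packaged driver and reporting how long it took. */
static double run_workload_seconds(void)
{
    clock_t start = clock();
    volatile double x = 0.0;
    for (long i = 0; i < 10000000L; i++)
        x += (double)i * 1e-9;
    (void)x;
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    /* Baseline from the previous release and an allowed slowdown factor;
     * both numbers are placeholders for illustration only. */
    const double baseline_s   = 1.0;
    const double max_slowdown = 1.5;

    double measured_s = run_workload_seconds();
    double slowdown   = measured_s / baseline_s;

    printf("run time: %.3f s (%.2fx baseline)\n", measured_s, slowdown);

    /* A 100x regression like the one described in the article would
     * fail loudly here instead of shipping in the update. */
    if (slowdown > max_slowdown) {
        fprintf(stderr, "performance regression detected\n");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```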
 
Not to be obnoxious, but saying A is 100x faster than B makes sense; however, saying that B is 100x slower than A makes no sense.

If I have $1, and you have 100x more, you have $1x100=$100. If you have $100, and I have 100x fewer, I can't write that equation.

The correct way to say it is that the old driver was so bad that it only provided 1/100, or 1%, of the ray tracing capability of the fixed driver.

That said, I think the media has been unfairly hard on Intel. Considering AMD and NVIDIA have decades of experience developing drivers for dedicated 3D rendering GPUs, the fact that Intel released a competitive solution as quickly as they have is impressive. And I wager that their Linux support will quickly eclipse the competition's.
Both can be said. 100 times slower is the same as 1/100 of the speed, or 1%. 100 times faster and 100 times slower are inverses of each other.
 
The inverse of multiplication is division. To invert 100x you use x/100, or x(1/100). 1% can be written as 0.01x, and 0.01 in fractional form is 1/100. So yeah, "100 times slower" is the inverse of "100 times faster", and if "100 times faster" is 100x, then "100 times slower" would be x/100, or 1%.
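Spelled out as a formula (taking "speed" to mean work per unit time, which is an assumption about what the article measured, not something it states):

```latex
\[
\text{``A is 100x faster than B''} \iff v_A = 100\,v_B
\iff v_B = \frac{v_A}{100} = 0.01\,v_A = 1\%\ \text{of } v_A .
\]
```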
 
Not to be obnoxious, but saying A is 100x faster than B makes sense; however, saying that B is 100x slower than A makes no sense.

If I have $1, and you have 100x more, you have $1x100=$100. If you have $100, and I have 100x fewer, I can't write that equation.

The correct way to say it is that the old driver was so bad that it only provided 1/100, or 1%, of the ray tracing capability of the fixed driver.

That said, I think the media has been unfairly hard on Intel. Considering AMD and NVIDIA have decades of experience developing drivers for dedicated 3D rendering GPUs, the fact that Intel released a competitive solution as quickly as they have is impressive. And I wager that their Linux support will quickly eclipse the competition's.


Both ways make sense; they just imply different units. Say some code finishes in 1 second and the change you've made makes it run for 100 seconds. That's a very clear case of a 100x slowdown, where x counts time units rather than ops per time unit. So your statement is both pedantic and incorrect, the worst kind of incorrect.
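Plugging the same numbers into both readings (just a sanity check on the example above, nothing from the article):

```latex
\[
t_{\text{new}} = 100 \cdot t_{\text{old}} = 100 \times 1\,\mathrm{s} = 100\,\mathrm{s},
\qquad
\frac{1\ \text{run}}{100\,\mathrm{s}} = 0.01\ \text{run/s}
= 1\%\ \text{of}\ \frac{1\ \text{run}}{1\,\mathrm{s}} .
\]
```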
 
Not to be obnoxious, but saying A is 100x faster than B makes sense; however, saying that B is 100x slower than A makes no sense.

"B is 100x slower than than A" is a totally sensible English statement with an unambiguous mathematical interpretation of "speed_A / 100 = speed_B".

In both colloquial English and while speaking precisely in mathematical terms it makes total sense to consider "faster" and "slower" to be opposites (mathematical inverses) of each other and the sentence makes perfect sense.