Intel Gen12 Graphics Linux Patches Reveal New Display Feature for Tiger Lake

bit_user

Tiger Lake will also feature next-gen I/O, likely referring to PCIe 4.0.
I'm guessing that's referring to Thunderbolt 3, DisplayPort 2.0, and maybe HDMI 2.1.

None of the roadmap leaks or info on LGA-1200 mention any client CPUs having PCIe 4.0 this year or next.
 

bit_user

This is also not Intel's first foray into the discrete graphics market. The 740 was dismal.
That was 20 years ago!

AMD still made uncompetitive CPUs as recently as 3 years ago, and look at them now!

It's a lot more informative to look at the strides Intel has made in their integrated graphics. They're hardly starting from zero, this time around.
 
They are late to the PCIe 4.0 party. It's not even remotely surprising that they would try to incorporate it.

None of the roadmap leaks or info on LGA-1200 mention any client CPUs having PCIe 4.0 this year or next.
And at one point their roadmaps also showed Intel would be manufacturing at 7nm nearly 3 years ago. And they're still on 14nm for the desktop, with just mediocre 10nm mobile processors...


https://www.anandtech.com/show/13405/intel-10nm-cannon-lake-and-core-i3-8121u-deep-dive-review/2
At the end of the day, they are projections. They can be, and usually are, wrong, especially when they look many months or years into the future, just like the slide above.
 

bit_user

And at one point their roadmaps also showed Intel would be manufacturing at 7nm nearly 3 years ago. And they're still on 14nm for the desktop, with just mediocre 10nm mobile processors...
It's understandable when they under-deliver on their roadmap promises, because technology is actually hard, sometimes.

However, what you're saying is that they're going to over-deliver on their roadmap promises. It could happen, but I don't think they've had a track record of doing that.

At the end of the day, they are projections. They can be, and usually are, wrong, especially when they look many months or years into the future, just like the slide above.
They're not projections as in stock market or weather forecasts - these are their plans that they're communicating to partners and customers! And work that's not planned doesn't usually get done. Furthermore, features that their partners weren't expecting wouldn't necessarily get enabled or properly supported.
 
That was 20 years ago!

AMD still made uncompetitive CPUs as recently as 3 years ago, and look at them now!

It's a lot more informative to look at the strides Intel has made in their integrated graphics. They're hardly starting from zero, this time around.
They also started Project Larrabee, but no consumer devices were sold.
So I did not count it.
Just stating that this is their third attempt to enter the discrete GPU market; the first 2 attempts were complete failures.
The article states this is their first.
 

bit_user

They also started Project Larrabee, but no consumer devices were sold.
So I did not count it.
Yeah, forcing x86 into GPUs was an exercise in trying to fit a square peg into a round hole.

Just stating that this is their third attempt to enter the discrete GPU market; the first 2 attempts were complete failures.
To be fair, there were a lot of 3D graphics chips, back then - S3, 3D Labs, Matrox, Rendition, Tseng, Cirrus Logic, Number Nine, PowerVR, 3DFX, and of course ATI and Nvidia. Plus, even a few more I'm forgetting. Most weren't very good. Even Nvidia's NV1 could be described as a failure.

From the sound of it, the i740 was far from the worst. Perhaps it just didn't meet with the level of success that Intel was used to. Being a late entrant to a crowded market surely didn't help.

http://www.vintage3d.org/i740.php#sthash.e4kIOqFj.MxbFM9tE.dpbs

The article states this is their first.
Ah, I had missed that.
 
To be fair, there were a lot of 3D graphics chips, back then - S3, 3D Labs, Matrox, Rendition, Tseng, Cirrus Logic, Number Nine, PowerVR, 3DFX, and of course ATI and Nvidia. Plus, even a few more I'm forgetting. Most weren't very good. Even Nvidia's NV1 could be described as a failure.

From the sound of it, the i740 was far from the worst. Perhaps it just didn't meet with the level of success that Intel was used to. Being a late entrant to a crowded market surely didn't help.

http://www.vintage3d.org/i740.php#sthash.e4kIOqFj.MxbFM9tE.dpbs

Ah, I had missed that.
And Intel's i740 was not the last Intel discrete GPU of that era; there was also the Intel i752, which never made it to mass market. As for the worst, probably the Cirrus Logic Laguna3D, which despite using RDRAM lasted only a single generation. Others, like the Trident 3DImage series, were very buggy, and NEC's PowerVR had quirky rendering. Companies like Tseng, along with Weitek and Avance Logic (Realtek), never made it into the 3D acceleration segment. Also, NVidia's NV1 quadratic texturing technology made its way into Sega's Saturn console.
 

kinggremlin

Hasn't it been rumored for a while that Intel would skip PCIe 4.0 and go straight to 5.0, which is only a year behind 4.0? With so little benefit from 4.0 right now for most of the market, it would seem a waste of time to implement it when it will already be outdated a year later, without ever having been of any real use.
 

sykozis

I'm curious as to where this 10TFlops figure comes from. With no real performance data available, it's impossible to even make an educated guess as to what the performance may be....
 

bit_user

Companies like Tseng, along with Weitek and Avance Logic (Realtek), never made it into the 3D acceleration segment.
Ah, you're right. I mis-remembered the ET6000 having 3D acceleration, but it seems that was to be introduced in the ET6300 that was never finished. Interestingly, I just learned that ATI bought Tseng... so, I wonder if any of that IP ever did see the light of day.

Also NVidia's NV1 quadratic texturing technology made its way into Sega's Saturn console.
Oh man. I used to think the NV1 was cool, until I read this:

http://www.vintage3d.org/nv1.php

What a disaster! If that had been a product in a larger company, they'd have killed off the entire graphics division, after such a showing. The only reason Nvidia kept going is because that's all they had.

Be sure to check out the gallery, if you can (requires flash). Its quadric patch rendering is nothing to brag about, even on games that supported it!

BTW, that article states:
under pressure from developers Sega eventually abandoned quads and NV architecture, despite their help in funding NV2
 

bit_user

Hasn't it been rumored for a while that Intel would skip PCIe 4.0 and go straight to 5.0, which is only a year behind 4.0?
Ain't gonna happen. Not for consumers. PCIe 5.0 is more power-hungry and costly to implement. It might carry other limitations, as well. Moreover, there's not a strong need for more bandwidth, in the mainstream/consumer segment. And, according to this, Comet Lake-S (mainstream CPU for 2020) will still be PCIe 3.0.

https://www.tomshardware.com/news/intel-comet-lake_s-early-impressions-amd-ryzen-3000,40260.html

However, it is on their Server roadmap for 2021, while Ice Lake servers are slated to get PCIe 4.0 in Q2 of 2020.

With so little benefit from 4.0 right now for most of the market, it would seem a waste of time to implement it when it will already be outdated a year later, without ever having been of any real use.
This argument makes no sense to me. PCIe is fully forward-and-backward compatible. If you have a PCIe 3.0 peripheral, you can still use it in your Ryzen 3k X570 board! Likewise, there's no downside in buying (or a company building) a PCIe 4.0 SSD, since it'll still work in PCIe 5.0 boards. And, for companies, I'd hazard a guess that they'd learn a few things in building PCIe 4.0 devices that would carry over to PCIe 5.0, easing the transition relative to jumping straight from PCIe 3.0 to 5.0.

In fact, the only case I can see for having PCIe 5.0 in consumer devices would be if you could reap some cost savings by cutting lane counts. However, the problem you'll run into is that people have x16 PCIe 3.0 GPUs and x4 NVMe PCIe 3.0 SSDs that they'll want to carry forward to any new motherboard, and they won't want to lose any of those lanes. So, unless a mobo somehow has an additional set of lower-speed lanes that only become active if the higher-speed lanes drop back, there's no real cost savings in it. And, for consumers, PCIe speeds just aren't a big bottleneck.
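To put rough numbers on the lane-count trade-off, here's a quick sketch of usable PCIe throughput per direction by generation, using the standard per-lane signaling rates and 128b/130b encoding (treat these as ceilings; protocol overhead eats a bit more in practice):

```python
# Approximate usable PCIe bandwidth per direction, by generation.
# Gens 3-5 all use 128b/130b line encoding; rate doubles each generation.
GENS = {
    # gen: (transfer rate in GT/s, encoding efficiency)
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

def bandwidth_gbps(gen: int, lanes: int) -> float:
    """Usable bandwidth in GB/s for `lanes` lanes of a given PCIe generation."""
    rate, eff = GENS[gen]
    return rate * eff / 8 * lanes  # GT/s -> GB/s per lane, times lane count

for gen in GENS:
    print(f"PCIe {gen}.0 x16: {bandwidth_gbps(gen, 16):.1f} GB/s, "
          f"x4: {bandwidth_gbps(gen, 4):.1f} GB/s")
```

The arithmetic makes the point above concrete: a PCIe 5.0 x4 link (~15.8 GB/s) equals a PCIe 3.0 x16 slot, so cutting lanes only saves money if nobody needs to plug older x16 cards into those slots at full width.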

Faster bus speeds are all about cloud & datacenter. For things like all-flash storage arrays, 100 & 200 Gbps networking, and AI accelerators. That's why PCIe 5.0 came on so quick, and why PCIe 6.0 is hot on its heels.

I predict it'll be quite a while, before you see any consumer CPUs, GPUs, or SSDs with PCIe 5.0. I'd bet at least 2025. Maybe Intel or AMD could upgrade their Southbridge connection to 5.0 before then, so they can cut back on direct-connected CPU lanes (and I'm really looking at AMD, here), but not for their GPU slots.
 

bit_user

I'm curious as to where this 10TFlops figure comes from. With no real performance data available, it's impossible to even make an educated guess as to what the performance may be....
Oh, they're just extrapolating. Each Gen11 EU has two 128-bit SIMD pipelines, together delivering ~16 FLOP/cycle. So, if you take the rumored 512 EUs and clock them at about 1.2 GHz, you get 10 TFLOPS. Intel tends to clock their GPUs in the neighborhood of 1 GHz, but they could obviously go a fair bit higher. They could also increase the SIMD width per EU, but then 512-bit would be really a lot.

https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units#Gen11

For reference, Vega 64 delivers 10.2 - 12.7 TFLOPS, the RX 5700 XT gives 8.2 to 9.8 TFLOPS, the GTX 1080 Ti gives 10.6 to 11.3 TFLOPS, and the RTX 2080 Ti gives 11.8 to 13.4 TFLOPS.
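The extrapolation is simple enough to check with back-of-envelope arithmetic. Assuming the rumored 512-EU part and Gen11's ~16 FLOP/cycle per EU (both figures from the discussion here, not confirmed specs):

```python
# Back-of-envelope GPU compute: EUs x FLOP/cycle/EU x clock.
def gpu_tflops(eus: int, flops_per_cycle: int, clock_ghz: float) -> float:
    """Peak FP32 throughput in TFLOPS."""
    return eus * flops_per_cycle * clock_ghz / 1000  # GFLOPS -> TFLOPS

# A Gen11-style EU: two 128-bit (4-wide FP32) SIMD pipes with FMA = 16 FLOP/cycle.
print(gpu_tflops(512, 16, 1.22))  # rumored 512 EUs at 1.22 GHz -> ~10 TFLOPS
print(gpu_tflops(64, 16, 1.1))    # Ice Lake G7's 64 EUs -> ~1.1 TFLOPS
```

Running the same formula against shipping Gen11 parts (the second line) is a decent sanity check that the per-EU figure is in the right ballpark.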
 

DavidC1

Oh, they're just extrapolating. Each Gen11 EU has two 128-bit SIMD pipelines, together delivering ~16 FLOP/cycle. So, if you take the rumored 512 EUs and clock them at about 1.2 GHz, you get 10 TFLOPS. Intel tends to clock their GPUs in the neighborhood of 1 GHz, but they could obviously go a fair bit higher.
Yup. Architectures are starting to converge, in a way. AMD/Nvidia/Intel are all going to be clocked in the same ballpark. I expect clocks in the 1.6-2 GHz range for Xe graphics.

Also, leaks show PCIe 4.0 support for Tiger Lake. Not only that, DMI is being boosted to x8. That's quadruple the bandwidth. It is needed, though, because DMI x4 on PCIe 3 is limiting.

In servers, the chip after Ice Lake/Cooper Lake is going PCIe 5/CXL. CXL is the Compute Express Link they announced quite recently; it runs over the PCIe 5.0 physical layer.

But I agree on the client side, PCIe 4 will last quite a while.

Marketing tells you technology is moving faster than before. Reality tells you scaling difficulties are slowing it down.

Just stating that this is their third attempt to enter the discrete GPU market; the first 2 attempts were complete failures.
That's true. I think at least the barriers are far lower on their third attempt. Actually I would say there were many half-attempts with Iris Pros and such.

Intel was too focused on only the hardware and fabrication side with the i740 and Larrabee. They are starting to get the drivers down with their Gen architecture, and they already have a consumer base. Gen 12/Xe builds on that.

Now they need to prove Xe can scale up, because current Intel iGPUs absolutely suck at it. The Skylake Iris Pro was a total failure scaling up.
 
Oh man. I used to think the NV1 was cool, until I read this:

http://www.vintage3d.org/nv1.php

What a disaster! If that had been a product in a larger company, they'd have killed off the entire graphics division, after such a showing. The only reason Nvidia kept going is because that's all they had.

Be sure to check out the gallery, if you can (requires flash). Its quadric patch rendering is nothing to brag about, even on games that supported it!

BTW, that article states:
Actually, it looks cool (which is why the Sega Saturn also uses the same quadratic texturing technology). And nopesies, it wasn't a disaster, just a piece of alternative 3D technology (it predates DirectX) which got left out after the industry decided to go with triangles, and thus never got mass adoption. Watch this video about the Diamond Edge 3D, which shows the differences between traditional 3D rendering and NVidia's quadratic rendering technology.
 

kinggremlin

Ain't gonna happen. Not for consumers. PCIe 5.0 is more power-hungry and costly to implement. It might carry other limitations, as well. Moreover, there's not a strong need for more bandwidth, in the mainstream/consumer segment. And, according to this, Comet Lake-S (mainstream CPU for 2020) will still be PCIe 3.0.

https://www.tomshardware.com/news/intel-comet-lake_s-early-impressions-amd-ryzen-3000,40260.html

However, it is on their Server roadmap for 2021, while Ice Lake servers are slated to get PCIe 4.0 in Q2 of 2020.


This argument makes no sense to me. PCIe is fully forward-and-backward compatible. If you have a PCIe 3.0 peripheral, you can still use it in your Ryzen 3k X570 board! Likewise, there's no downside in buying (or a company building) a PCIe 4.0 SSD, since it'll still work in PCIe 5.0 boards. And, for companies, I'd hazard a guess that they'd learn a few things in building PCIe 4.0 devices that would carry over to PCIe 5.0, easing the transition relative to jumping straight from PCIe 3.0 to 5.0.

In fact, the only case I can see for having PCIe 5.0 in consumer devices would be if you could reap some cost savings by cutting lane counts. However, the problem you'll run into is that people have x16 PCIe 3.0 GPUs and x4 NVMe PCIe 3.0 SSDs that they'll want to carry forward to any new motherboard, and they won't want to lose any of those lanes. So, unless a mobo somehow has an additional set of lower-speed lanes that only become active if the higher-speed lanes drop back, there's no real cost savings in it. And, for consumers, PCIe speeds just aren't a big bottleneck.

Faster bus speeds are all about cloud & datacenter. For things like all-flash storage arrays, 100 & 200 Gbps networking, and AI accelerators. That's why PCIe 5.0 came on so quick, and why PCIe 6.0 is hot on its heels.

I predict it'll be quite a while, before you see any consumer CPUs, GPUs, or SSDs with PCIe 5.0. I'd bet at least 2025. Maybe Intel or AMD could upgrade their Southbridge connection to 5.0 before then, so they can cut back on direct-connected CPU lanes (and I'm really looking at AMD, here), but not for their GPU slots.

blah blah blah.... You're overthinking this. I was not saying the industry should skip PCIe 4.0. I meant exclusively from a chipset maker perspective, like Intel. AMD is already there. Nobody is really going to care about PCIe 4.0 in the next year, so sit it out, leapfrog AMD, and steal the headlines with 5.0 in a year.
 

bit_user

blah blah blah.... You're overthinking this. I was not saying the industry should skip PCIe 4.0. I meant exclusively from a chipset maker perspective, like Intel. AMD is already there. Nobody is really going to care about PCIe 4.0 in the next year, so sit it out, leapfrog AMD, and steal the headlines with 5.0 in a year.
Maybe you should try reading some of the blah blahs, because I was trying to tell you:
  1. That won't happen.
  2. Why it won't happen.
You don't have to believe me. Go ahead and sit out PCIe 4.0 and wait for 5.0. I'm still waiting for 10 Gigabit Ethernet to go mainstream.
 

bit_user

Actually, it looks cool (which is why the Sega Saturn also uses the same quadratic texturing technology). And nopesies, it wasn't a disaster, just a piece of alternative 3D technology (it predates DirectX) which got left out after the industry decided to go with triangles, and thus never got mass adoption.
Nopsies, yourself!!!

Wow... where to start? Clearly, you didn't read the article I linked. So, I'll spoon-feed you a few choice excerpts:
Part of it is omission of Z-buffer, which fired back upon NV1's d3d compatibility.
NV1 was not really compatible with any of my OpenGL to D3d wrappers ... so it is all about Direct3d titles. Nvidia urged Microsoft to add support for quadrilaterals, but ... they refused. As expected, this chip has extraordinarily hard time running d3d games. Triangles can be rendered when one side of quadrilateral has zero length, sacrificing quarter of vertex performance. The bigger problem is shading and texture mapping stages were not made for such special cases, often causing warping artifacts. Complete solution would also require pre-warped textures as it happened few times on Saturn, but PC developers were of course not going into such troubles to brighten Nvidia's day. Strangely enough, resolution affects the warping. In motor-car for example, the threshold between fine image and warping textures is somewhere between 320x240 and 512x384 resolution. The Edge 3400 barely looses any speed when resolution increases to 640x480. This raises my suspicion, could Nvidia deliberately sacrifice image quality for better performance at high resolution? Another strange behavior is refusal of Direct3d rendering at lower resolutions than 512x384 in several games.
Under Direct3d real alpha blending is non-existent, no textures with alpha channel are supported.
The image quality of course suffers mostly from lack of texture filtering and alpha blending. NV uses stipple patterns but not in all cases, causing series of transparency artifacts. Texture perspective correction is only a wish, the quadratic technique or subdivision obviously did not translate well into d3d. Textures are usually heavily warped, at least color precision looks alright. Table fog is of course unsupported, but surprisingly there is no sign of vertex fog either. Only games not dependent on hardware z-buffering are running, and Carmageddon II and SOTE take advantage of NV1 only via slower emulation layer.
Direct Draw speed is usually too low
the NV1 architecture, built for quadrilaterals instead of standard triangular polygons, was too exotic and developers hated it.
When d3d games actually started spreading, compatibility was finally exposed as minimal, missing important features like z-buffering, not to mention image quality issues.
NVIDIA was in an ugly situation, unable to pursue anyone into embracing their advantages, there were plenty of NV1 cards in stores nobody wanted and console world was lost as well.
How on earth you can call that anything but a disaster is completely beyond me.

Again, just look at the gallery, if you have any way of viewing flash. Most games he tested feature artifacts that range from annoying to unplayable.

http://www.vintage3d.org/flashgallery/nv1.php#sthash.Bpw5E5Rv.dpbs

And the video you posted has so many issues that I've decided to address it in a separate reply.
 

bit_user

Watch this video about Diamond Edge 3D which shows the differences between traditional 3D rendering and NVidia's quadratic rendering technology.
Okay, here we go. Imma need a list for this...
  1. The reviewer clearly doesn't understand the difference between quadric/quadratic patches and planar quadrilateral polygons.
  2. The reviewer only tested the included games, which were 1st-party ports of Sega titles. He did not test any 3rd party games from that time. And those were basically the only Sega games ported, so it was a pretty small library of content that would work well on it.
  3. Of the 3 games he tested, only Virtua Fighter actually used quadric surfaces - the others merely used flat quads.
  4. He only compared the games to their unaccelerated, software renderers - not to any other 3D accelerators of the day.
  5. Even that guy acknowledged the relative market failure of the NV1-based cards!
I'm not saying the NV1 didn't work or couldn't do anything. I even thought it was pretty cool that they got way out ahead with quadric patch rendering! It wasn't until I read the Vintage3D review that I learned it required pre-warped textures, where the warping was apparently even resolution-dependent. So, that wasn't great.

But, when you dig into just how badly the NV1 failed at literally everything else, calling it a failure is putting it lightly. It's no small point that most 3D games were unplayable on the damned thing. Okay, yeah, I give them some credit for getting out ahead of D3D, but that actually worked against them, in the end.
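For anyone wondering what "quadric patch rendering" actually means here, a minimal sketch (my own illustration of the idea, not NV1's actual hardware algorithm, which used forward differencing): a biquadratic patch is defined by a 3x3 grid of control points and interpolated quadratically in two parameters, so a single primitive can curve, whereas a triangle pipeline only ever interpolates linearly across flat polygons.

```python
# Evaluate a point on a biquadratic Bezier patch (3x3 control points).
# Quadratic-surface primitives can bow outward mid-patch; triangles can't.

def bezier2(p0, p1, p2, t):
    """Quadratic Bezier in one parameter (de Casteljau)."""
    a = [(1 - t) * u + t * v for u, v in zip(p0, p1)]
    b = [(1 - t) * u + t * v for u, v in zip(p1, p2)]
    return [(1 - t) * u + t * v for u, v in zip(a, b)]

def patch_point(ctrl, u, v):
    """ctrl is a 3x3 grid of 3D points; returns the surface point at (u, v)."""
    rows = [bezier2(*row, u) for row in ctrl]
    return bezier2(*rows, v)

# A flat 3x3 grid whose center control point is lifted to z=1:
ctrl = [[(x, y, 1.0 if (x, y) == (1, 1) else 0.0) for x in range(3)]
        for y in range(3)]
print(patch_point(ctrl, 0.5, 0.5))  # -> [1.0, 1.0, 0.25]: the surface bulges up
```

This also makes the compatibility problem obvious: textures mapped onto such a surface are warped by the quadratic interpolation, which is why D3D's flat triangles (and unwarped textures) translated so poorly onto the hardware.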
 
Nopsies, yourself!!!

Wow... where to start? Clearly, you didn't read the article I linked. So, I'll spoon-feed you a few choice excerpts:

How on earth you can call that anything but a disaster is completely beyond me.

Again, just look at the gallery, if you have any way of viewing flash. Most games he tested feature artifacts that range from annoying to unplayable.

http://www.vintage3d.org/flashgallery/nv1.php#sthash.Bpw5E5Rv.dpbs

And the video you posted has so many issues that I've decided to address it in a separate reply.
Didn't you notice the part "(predates DirectX)" and also the part "a piece of alternative 3D technology"? As mentioned, this was before DirectX, which uses triangle (vertex) rendering instead of quadratic rendering. Yupsies, again, this was before DirectX. Of course it was rather incompatible with DirectX (Direct3D) standards. Thus the drivers and wrappers for the NV1 would have all those problems trying to translate quadratic rendering into DirectX's triangle rendering. And it wasn't a disaster, because not many games (very few games) could use that NV1 GPU, which is why it was never popular (thus no mass adoption). At that time there were no real standards for 3D rendering. It was either software-based rendering or proprietary hardware 3D acceleration. And when DirectX came, it was left behind (because of incompatible technologies, thus those problems arose).
 

bit_user

Didn't you notice the part "(predates DirectX)" and also the part "a piece of alternative 3D technology"?
It didn't predate Z-buffers, triangles, or texture filtering, yet it had support for none of those.

Thus the drivers and wrappers for the NV1 would have all those problems trying to translate quadratic rendering into DirectX's triangle rendering.
Of course, as you seem not to like reading things (especially those that might contradict you), this shouldn't come as a surprise, but I guess you didn't see the part where textures for the quadric patches need to be pre-warped and, for some reason, in a fashion dependent on the display resolution? Even for what it was built to do, that kinda sucks.

And it wasn't a disaster because not many games (very few games) could use that NV1 GPU, which is why it was never popular (thus no mass adoption).
A hardware product not selling well, in part because only a tiny minority of the targeted software supports it... not a disaster? You've got a pretty warped sense of reality.

At that time there were no real standards for 3D rendering. It was either software-based rendering or proprietary hardware 3D acceleration.
And here, you're stooping to spinning lies, hoping I'm too ignorant and naive to catch you.

OpenGL 1.0 was released on June 30, 1992. There were also earlier, competing standards, like PHIGS, and higher-level standards, like OpenInventor. But I only mention the latter to demonstrate the depth of the ignorance upon which you brazenly pontificate.

But the plot thickens, because not only were there indeed standards for hardware-accelerated 3D rendering, but there were a few dominant commercial middleware layers designed to provide high-performance software renderers and support 3D hardware, chiefly: Reality Lab, RenderWare, and BRender. At the time, a number of games already used these, which fed into Microsoft's decision to buy RenderMorphics and leverage their work to create Direct3D.

So, it's entirely misguided to suggest that Direct3D suddenly came down from on high, there was simply nothing else like it, and nVidia couldn't possibly have foreseen this turn of events. nVidia had every opportunity to see in what direction the industry was already headed, and yet they had the pomposity (not that you'd know anything about that) to think they could divert it to use their quirky primitives and their proprietary API to such an extent that they didn't even need to think about supporting what everybody else was doing.

3D graphics was a big deal, in the late 80's and early 90's. There was even VR hype, back then. Lots of companies were doing hardware-accelerated 3D: SGI, Sun, Apollo/HP, IBM, DEC, Intergraph, and Stardent, to name a few. This stuff was pretty old-hat, by the time nVidia started up, in 1993. In fact, Windows NT launched just a couple months after their founding, and included a full OpenGL stack!

And when DirectX came, it was left behind (because of incompatible technologies thus those problems arise).
Nice alternate history. But that still wouldn't make it not a disaster.
 
A hardware product not selling well, in part because only a tiny minority of the targeted software supports it... not a disaster? You've got a pretty warped sense of reality.

And here, you're stooping to spinning lies, hoping I'm too ignorant and naive to catch you.
NVidia's NV1 tried introducing a quadratic rendering technology which did not rely on many of DirectX's (or OpenGL's) "heavy" requirements. As mentioned, again and again, these early hardware 3D accelerator cards were not popular. In fact, many of the early 3D accelerators were not popular at all, and were considered expensive (since some also included a RISC CPU on-board for rendering). Also, there were no standards, again as mentioned, thus most games would use software-based rendering. And only very few games were able to use any of those proprietary 3D accelerators (they had to be custom developed).

OpenGL 1.0 was released on June 30, 1992. There were also earlier, competing standards, like PHIGS, and higher-level standards, like OpenInventor. But I only mention the latter to demonstrate the depth of the ignorance upon which you brazenly pontificate.

But the plot thickens, because not only were there indeed standards for hardware-accelerated 3D rendering, but there were a few dominant commercial middleware layers designed to provide high-performance software renderers and support 3D hardware, chiefly: Reality Lab, RenderWare, and BRender. At the time, a number of games already used these, which fed into Microsoft's decision to buy RenderMorphics and leverage their work to create Direct3D.

So, it's entirely misguided to suggest that Direct3D suddenly came down from on high, there was simply nothing else like it, and nVidia couldn't possibly have foreseen this turn of events. nVidia had every opportunity to see in what direction the industry was already headed, and yet they had the pomposity (not that you'd know anything about that) to think they could divert it to use their quirky primitives and their proprietary API to such an extent that they didn't even need to think about supporting what everybody else was doing.

3D graphics was a big deal, in the late 80's and early 90's. There was even VR hype, back then. Lots of companies were doing hardware-accelerated 3D: SGI, Sun, Apollo/HP, IBM, Dec, Intergraph, and Stardent, to name a few. This stuff was pretty old-hat, by the time nVidia started up, in 1993. In fact, Windows NT launched just a couple months after their founding, and included a full OpenGL stack!

Nice alternate history. But that still wouldn't make it not a disaster.
You contradicted yourself. Early OpenGL was only available on high-end workstations and very expensive proprietary OpenGL accelerators (from Sun, SGI, Apollo, Intergraph, etc.). The earliest OpenGL implementations were mainly software-based, and only much later hardware-based. These were very expensive and targeted at professional use. Silicon Graphics (the founder of OpenGL) had their own (early) OpenGL.DLL for Windows (just search for "sgi opengl.dll"), which is unrelated to Microsoft's own OpenGL32.DLL that came later (from joint development). Thus there were actually two different OpenGL standards on Windows. And OpenGL is not directly interoperable with DirectX (different API standards and requirements).

That is why many early 3D accelerators did not support OpenGL at all, or had problems supporting it. Early Direct3D graphics chips such as the ATi Rage and S3 ViRGE did not support OpenGL. Heck, even the SiS 6326 had an official OpenGL driver, but it was pulled later because it didn't work properly and had compatibility issues with games. There are 3rd-party Direct3D-to-OpenGL wrappers around for that, but they rarely work properly (with texture and rendering artifacts, plus occasional game compatibility problems, since the wrappers are really more of a MiniGL optimized for games like Quake). The early hardware could not handle OpenGL. The only early (expensive) consumer hardware that handled OpenGL was made by 3DLabs (the GLINT series) and had proprietary OpenGL libraries (such as the Creative Graphics Library). They also had problems with DirectX. And much like NVidia's NV1, only a few games could use those 3DLabs GLINT accelerator cards (they had to be hand-ported). That was not rectified until they developed later/newer chips (such as the Permedia series) to handle both. Thus, that is the main problem with alternative 3D technologies: they are actually incompatible with each other.

So NVidia trying to shoehorn their NV1 quadratic rendering into DirectX's triangle rendering resulted in problems, especially texture and rendering artifacts, because the hardware could not handle the requirements of DirectX. Whatever issues, spin, and excuses about how the hardware performs are really the result of (again) trying to shoehorn an alternative 3D technology into DirectX's standards and requirements.
 
