Intel Gen12 Graphics Linux Patches Reveal New Display Feature for Tiger Lake


bit_user

In fact, many of the early 3D accelerators were not popular at all and were considered expensive (since some also included a RISC CPU on-board for rendering).
The i860 was frequently used as a geometry processor, due to its massive floating point horsepower, for the time. But this is really beside the point.

Also, there were no standards, as mentioned, so most games used software-based rendering.
This whole line of argumentation seems to miss the point. Even if the PC gaming sector had no standard API (though the middleware packages I mentioned were a big step towards that), there were certainly standard practices in use, at the time. Developers had embraced triangles, pixel fog, and even some Z-buffering - none of which the NV1 supported. And toolchains for game assets were triangle-oriented, not based on quadric patches. Maybe that's why only 1 of those 3 first-party ports shown in your Youtube video actually used them. The other two just used flat quadrilaterals, and I forgot to mention the lack of fog made the draw-in on Panzer Dragoon look horrendous.

The NV1 just ignored what everyone was doing, and went completely in its own direction. Not only was there going to be a learning curve, for anyone who adopted it, but it lacked ecosystem support.

You contradicted yourself.
Where?

Early OpenGL was only available on high-end workstations and very expensive proprietary OpenGL accelerators (from Sun, SGI, Apollo, Intergraph, etc.). The earliest OpenGL implementations were mainly software-based, and only much later hardware-based.
You should @GetSmart, yourself. You're trying to win an argument out of ignorance, and it's just not working for you.

OpenGL was developed by SGI, specifically to enable 3rd party developers to more easily leverage their hardware accelerators. It was based on their earlier IRIS GL library, which was already quite popular, before they opened it.

These were very expensive and targeted for professional use.
Yes, that's where 3D hardware started, because it was expensive. The reason I brought up OpenGL is that it was an open specification, predating nVidia's founding, that established an industry-standard API for hardware-accelerated 3D graphics.

Again, I made this point to counter your suggestion that nVidia was operating in a complete vacuum and just got unlucky when MS decided to take Direct3D in a different direction. This shows extreme ignorance of the industry, at the time. Your ego cannot handle this truth.

Your wall of details about the subsequent history of OpenGL on Windows is not relevant to this point, so I won't bother to help you correct them or fill some of the more glaring holes.

Thus that is the main problem with alternative 3D technologies: they are actually incompatible with each other.
This is BS. There are lots of reasons why consumer cards had missing or incomplete OpenGL support, not least because they had a tendency to take various short-cuts that prevented full conformance with OpenGL. And, in the one example you cited of a 3D Labs OpenGL chip that lacked D3D, I'd suggest that's probably less of a technical issue and more of a market segmentation problem.

But, you don't actually care what the real truth is - you're just hoping to bog me down in a wall of details to distract from the underlying point. I'm not about to fall for that.

Go ahead and waste more time, if you like. However, you'd do well to @GetSmart and heed the advice that "when you're in a hole, stop digging".
 

GetSmart

The i860 was frequently used as a geometry processor, due to its massive floating point horsepower, for the time. But this is really beside the point.

This whole line of argumentation seems to miss the point. Even if the PC gaming sector had no standard API (though the middleware packages I mentioned were a big step towards that), there were certainly standard practices in use, at the time. Developers had embraced triangles, pixel fog, and even some Z-buffering - none of which the NV1 supported. And toolchains for game assets were triangle-oriented, not based on quadric patches. Maybe that's why only 1 of those 3 first-party ports shown in your Youtube video actually used them. The other two just used flat quadrilaterals, and I forgot to mention the lack of fog made the draw-in on Panzer Dragoon look horrendous.

The NV1 just ignored what everyone was doing, and went completely in its own direction. Not only was there going to be a learning curve, for anyone who adopted it, but it lacked ecosystem support.
Again, all those features you mentioned (including Z-buffering) were simply not possible on early 3D hardware accelerators, as they would have required more video memory capacity and higher memory bandwidth (plus higher complexity). That would have made the cards very expensive for consumers. NVidia's NV1 (and the Sega Saturn's) technology is an alternative 3D rendering method that does not have those requirements. With reduced requirements and less complexity, it was possible to build the chips (with the process technology of the time) for consumer-level graphics cards. As mentioned, early 3D accelerators were all proprietary and did not conform to any standards, and not every developer was going to spend the time to port their games over to them. Those Sega Saturn ports were easier since the original hardware uses the same technology.
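For a rough sense of the memory cost at stake here, a back-of-the-envelope sketch in C (the resolutions and bit depths below are illustrative assumptions, not figures from the NV1 or any other specific card):

```c
/* Illustrative arithmetic only: the extra video memory a Z-buffer would need
 * at mid-90s resolutions.  The modes below are assumptions for the example,
 * not specifications of any particular accelerator. */
#include <stdio.h>

static double zbuffer_kib(int width, int height, int z_bits)
{
    return (double)width * height * (z_bits / 8) / 1024.0;
}

int main(void)
{
    printf("640x480, 16-bit Z: %.0f KiB\n", zbuffer_kib(640, 480, 16)); /* ~600 KiB  */
    printf("800x600, 16-bit Z: %.0f KiB\n", zbuffer_kib(800, 600, 16)); /* ~938 KiB  */
    printf("640x480, 32-bit Z: %.0f KiB\n", zbuffer_kib(640, 480, 32)); /* ~1200 KiB */
    return 0;
}
```

On a typical 1 MB card of the day, even the smallest of those figures is more than half the total video memory before a single frame buffer is counted.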

Where?

You should @GetSmart, yourself. You're trying to win an argument out of ignorance, and it's just not working for you.

OpenGL was developed by SGI, specifically to enable 3rd party developers to more easily leverage their hardware accelerators. It was based on their earlier IRIS GL library, which was already quite popular, before they opened it.

Yes, that's where 3D hardware started, because it was expensive. The reason I brought up OpenGL is that it was an open specification, predating nVidia's founding, that established an industry-standard API for hardware-accelerated 3D graphics.
Do you know how expensive Silicon Graphics workstations were at that time? As mentioned, they were for professionals with deep pockets. Do you know how much video memory those professional OpenGL hardware cards had? Far more than an average consumer graphics card, and they were more complex as well (example), which made them very expensive. This is where you contradicted yourself, because that OpenGL hardware was clearly not consumer-pocket-friendly and was relatively obscure in the consumer space (typically available only on order, not simply off the shelf). You brought up OpenGL, assuming it was the same as DirectX when it comes to hardware. And I've shown you that was not the case. Early OpenGL graphics cards were stuck with OpenGL and had no DirectX compatibility, as with other early proprietary 3D accelerators.

Again, I made this point to counter your suggestion that nVidia was operating in a complete vacuum and just got unlucky when MS decided to take Direct3D in a different direction. This shows extreme ignorance of the industry, at the time. Your ego cannot handle this truth.

Your wall of details about the subsequent history of OpenGL on Windows is not relevant to this point, so I won't bother to help you correct them or fill some of the more glaring holes.

This is BS. There are lots of reasons why consumer cards had missing or incomplete OpenGL support, not least because they had a tendency to take various short-cuts that prevented full conformance with OpenGL. And, in the one example you cited of a 3D Labs OpenGL chip that lacked D3D, I'd suggest that's probably less of a technical issue and more of a market segmentation problem.

But, you don't actually care what the real truth is - you're just hoping to bog me down in a wall of details to distract from the underlying point. I'm not about to fall for that.

Go ahead and waste more time, if you like. However, you'd do well to @GetSmart and heed the advice that "when you're in a hole, stop digging".
That is relevant (since you brought it up). If you look at 3DLabs' early accelerators, the majority of them were geared towards OpenGL only. That GLINT example highlights this, especially the consumer version of the GLINT sold by Creative Technology. If 3DLabs had produced a DirectX driver for their GLINT chips, things would not have turned out much differently than they did for NVidia's NV1 (including rendering issues and artifacts), and there would have been Direct3D game compatibility issues as well. Thus 3DLabs decided not to shoehorn their OpenGL-oriented hardware into the Direct3D API; hence the custom OpenGL library, the Creative Graphics Library, to ensure games had better compatibility and rendered properly. The side effect is (again) that not every game developer would spend time on it, so only a few games emerged that supported that graphics card. Each API has different requirements, which means hardware support for those requirements as well. Since there were no common standards and requirements, early 3D hardware developers had their own proprietary systems and APIs.
 

JayNor

I recall articles mentioning extra cooling fans being required on the AMD chipset with PCIe 4.0. Was that specifically due to PCIe 4.0? If so, it may not even be appropriate for a low-power laptop application.
 

bit_user

Again, all those features you mentioned (including Z-buffering) were simply not possible on early 3D hardware accelerators, as they would have required more video memory capacity and higher memory bandwidth (plus higher complexity). That would have made the cards very expensive for consumers. NVidia's NV1 (and the Sega Saturn's) technology is an alternative 3D rendering method that does not have those requirements.
I'm done with your nonsense. The main problem is that you do not read. If you had, you'd have seen Z-buffering was utterly common among 1st gen D3D cards, as the lack of it is one of the main issues breaking compatibility with most D3D games. NV1 cards came with 2 and 4 MB of memory, giving them plenty of space for a Z buffer. Granted, it would've hit bandwidth, but this is not a problem that any other card didn't also have. Again, all you had to do was read the Vintage3D review I cited, which would've taken a lot less time than doubling down on your ignorant position and trying to have it out.
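As a rough check of that claim, here is a small sketch assuming a 640x480, 16-bit colour, double-buffered mode typical of the era (illustrative numbers, not taken from an NV1 datasheet):

```c
/* Illustrative only: does a 16-bit Z-buffer fit alongside a typical mid-90s
 * frame buffer in 2 MiB of video memory?  The mode is an assumption for the
 * example, not an NV1 specification. */
#include <stdio.h>

int main(void)
{
    const int w = 640, h = 480;
    const int color_bytes = 2;   /* 16 bpp colour   */
    const int z_bytes     = 2;   /* 16-bit Z        */
    const int buffers     = 2;   /* double buffered */

    int color_kib = w * h * color_bytes * buffers / 1024;  /* ~1200 KiB */
    int z_kib     = w * h * z_bytes / 1024;                /*  ~600 KiB */

    printf("colour: %d KiB, Z: %d KiB, total: %d of 2048 KiB\n",
           color_kib, z_kib, color_kib + z_kib);
    return 0;
}
```

That accounts for roughly 1.8 MiB of the 2 MiB, which is the arithmetic behind "plenty of space"; the bandwidth cost is the extra 2-byte Z read (and conditional write) per rendered pixel.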

The other problem is that you're more invested in winning arguments than the truth. This has you spinning facts to buttress your ego, and that's good for nobody. The eventual consequence of people spreading misinformation for the sake of argument is that history gets blurred in a thick, pixel fog of lies.

And I've shown you that was not the case. Early OpenGL graphics cards were stuck with OpenGL and had no DirectX compatibility.
You've shown nothing. You haven't shown one iota of evidence supporting your claim. No direct sources, no API analysis, just a smattering of circumstantial details.

The more troubling part is this entire line of reasoning is a simple diversionary tactic. You're trying to have an argument about OpenGL vs. D3D that you think you can win. You forget that I only brought it up as an example that it was well-understood how standard APIs for hardware acceleration of 3D looked. The fact that some games managed both OpenGL and D3D compatibility is pretty clear evidence that it's not so different from D3D as to be irrelevant. Nvidia went against a tide comprised of games, middleware, tools, standard practice, and even a standard API... and got burned by it. That's the simple truth that you don't want to accept. I don't know how or why your ego has gotten so wrapped up in the ideology of a failed product, but there it is.

Finally, I'd encourage you to have a good think about why you post on here. Is it to help inform people by exchanging information, or just to feed your ego? I think I know which - prove me wrong.
 

GetSmart

I'm done with your nonsense. The main problem is that you do not read. If you had, you'd have seen Z-buffering was utterly common among 1st gen D3D cards, as the lack of it is one of the main issues breaking compatibility with most D3D games. NV1 cards came with 2 and 4 MB of memory, giving them plenty of space for a Z buffer. Granted, it would've hit bandwidth, but this is not a problem that any other card didn't also have. Again, all you had to do was read the Vintage3D review I cited, which would've taken a lot less time than doubling down on your ignorant position and trying to have it out.

The other problem is that you're more invested in winning arguments than the truth. This has you spinning facts to buttress your ego, and that's good for nobody. The eventual consequence of people spreading misinformation for the sake of argument is that history gets blurred in a thick, pixel fog of lies.
Again, as mentioned earlier, NVidia's NV1 predates DirectX (and Direct3D), and it was developed before the first generation of DirectX-capable 3D accelerators. Whatever you are trying to spin (with walls of text) about the NV1's problems, I have already simplified and explained earlier: part of it comes down to there being no standardized APIs for games, and part to DirectX requirements that the NV1 hardware could not fully cope with (under Direct3D). It was not designed around DirectX requirements (NVidia hardware designed for them only arrived later, with the Riva 128). Also, 4 MB was only possible with a memory expansion, which was not easy to get. Off the shelf it was typically only 2 MB of video memory, usually on the cheapest versions (without memory expansion capability). To get 4 MB, it was either bundled with the graphics card or ordered directly from the manufacturer. In those days, a typical PCI graphics card had on average only 1 MB of video memory.

You've shown nothing. You haven't shown one iota of evidence supporting your claim. No direct sources, no API analysis, just a smattering of circumstantial details.

The more troubling part is this entire line of reasoning is a simple diversionary tactic. You're trying to have an argument about OpenGL vs. D3D that you think you can win. You forget that I only brought it up as an example that it was well-understood how standard APIs for hardware acceleration of 3D looked. The fact that some games managed both OpenGL and D3D compatibility is pretty clear evidence that it's not so different from D3D as to be irrelevant. Nvidia went against a tide comprised of games, middleware, tools, standard practice, and even a standard API... and got burned by it. That's the simple truth that you don't want to accept. I don't know how or why your ego has gotten so wrapped up in the ideology of a failed product, but there it is.

Finally, I'd encourage you to have a good think about why you post on here. Is it to help inform people by exchanging information, or just to feed your ego? I think I know which - prove me wrong.
You were the one who brought up the OpenGL comparison without actually knowing the history of OpenGL hardware. The fact is, before DirectX there were no standards at all in the consumer PC space. It is also a fact that, in the era before DirectX, OpenGL was only used in prosumer/professional applications and high-end hardware; on the PC, most 3D games were rendered primarily in software on the CPU. As for NVidia's own proprietary APIs (that "middleware"), those came before DirectX. And again you are trying to spin your way out, this time about "some games managed both OpenGL and D3D compatibility". The fact is that first-generation DirectX-capable 3D graphics cards (such as the ATi Rage and S3 ViRGE) did not have hardware OpenGL support. So when games ran in OpenGL mode without hardware OpenGL support, Microsoft's own fallback software renderer was used, which gave very low frame rates (often a slideshow); of course nobody would use it in OpenGL mode. Likewise, early OpenGL hardware accelerators did not support DirectX. That is not some small technical (driver) issue; it is the hardware itself. Only much later (from DirectX 6 onwards), when graphics hardware was advanced enough to support both DirectX and OpenGL requirements, could games take advantage of both APIs. For example, S3 did not have any real hardware OpenGL support until the Savage3D arrived. Likewise, 3DLabs did not have DirectX support until the Permedia arrived. Finally, NVidia had true DirectX and OpenGL support when the Riva 128 arrived. Unfortunately you still keep pounding on things that happened after DirectX arrived, when many of those early 3D accelerators, including NVidia's NV1, were never designed with DirectX specifications and requirements in mind. As a reminder, NVidia's NV1 finished development in 1994, while the first Direct3D version came in 1996 (and the first preliminary DirectX version in 1995).
 

kinggremlin

Maybe you should try reading some of the blah blahs, because I was trying to tell you:
  1. That won't happen.
  2. Why it won't happen.
You don't have to believe me. Go ahead and sit out PCIe 4.0 and wait for 5.0. I'm still waiting for 10 Gigabit Ethernet to go mainstream.

I didn't predict anything. Why must you make things up to "win" a non-existent argument? I asked about the validity of the rumors of Intel skipping 4.0. It was obvious after the first couple of lines of your post that you weren't going to provide any supporting evidence either way and were just going on a long opinionated diatribe so I didn't waste any of my time reading the rest of it.
 

bit_user

Off the shelf it was typically only 2 MB of video memory, usually on the cheapest versions (without memory expansion capability).
They were in the same boat as the other graphics cards of that era, which did support a Z-buffer.

As for NVidia's own proprietary APIs (that "middleware")
That is not middleware. I'd advise you not to use words you clearly don't understand.

Only much later (from DirectX 6 onwards), when graphics hardware was advanced enough to support both DirectX and OpenGL requirements,
You can "prove" almost any lie by starting with a conclusion and then cherry-picking supporting facts.

In this case, the OpenGL that existed at the time of DX 6 is already different than what came before, so it was a moving target. But, you also completely disregard other points and explanations that don't align with your narrative. Speaking of explanations, your evidence is entirely circumstantial. You literally don't know why it happened that way - you're only guessing. And you're trying to make up with quantity what your points lack in quality - that doesn't work.

you are trying to spin your way out, this time about "some games managed both OpenGL and D3D compatibility".
That wasn't spin - I was simply pointing out that the APIs are not so far apart, from a functional point of view. Thus, any hardware that had even been informed by what OpenGL was doing (much less, the entire rest of the industry!) shouldn't have been so badly wrong-footed by D3D.

So when games ran in OpenGL mode without hardware OpenGL support, Microsoft's own fallback software renderer was used, which gave very low frame rates (often a slideshow).
This can also be said of D3D games running on its software renderer. Anyway, it's utterly irrelevant.

Again, I could take your bait and litigate all of your points. But, this is just a sideshow.

Again, as mentioned earlier,
You're not listening. That's why you keep treading the same broken path.

The saddest part of all is that by reading the Vintage3D review, you could've actually learned a few things. Instead, you chose to insult me with a superficial review of the same HW by some noob with 1% the level of depth, breadth, and understanding of the Vintage3D review.

I'm done with this farce.
 

bit_user

I didn't predict anything. Why must you make things up to "win" a non-existent argument? I asked about the validity of the rumors of Intel skipping 4.0.
So, you asked about the validity of some rumors, and then decided not to read a reply speaking exactly to that point? Certainly, there's no requirement that you read all replies to your posts, but it's pretty rude to reply to posts you haven't even bothered to read.

It was obvious after the first couple of lines of your post that you weren't going to provide any supporting evidence
It's impossible to prove a negative.

either way
Either way? Why would I post evidence contradicting my own points?

and were just going on a long opinionated diatribe so I didn't waste any of my time reading the rest of it.
If you have questions about specific claims and reply in good faith, then I'll see what I can do to provide supporting evidence.

Edit: here's a short article outlining the added costs & challenges of PCIe 4 & 5:


Practically the whole article is quotable, but this should whet your appetite:
The big tradeoff of the higher speeds is that signals won’t travel as far on existing designs. In the days of PCIe 1.0, the spec sent signals as much as 20 inches over traces in mainstream FR4 boards, even passing through two connectors. The fast 4.0 signals will peter out before they travel a foot without going over any connectors.

So system makers are sharpening their pencils on the costs of upgrading boards and connectors, adding chips to amplify signals, or redesigning their products to be more compact.
 

GetSmart

They were in the same boat as the other graphics cards of that era, which did support a Z-buffer.
Not every 3D accelerator had Z-buffer support, and those that did had varying levels of Z-buffer capability (such as the supported Z-buffer bit depth). That's because a Z-buffer requires memory, and the amount depends on the screen resolution: the higher the resolution, the more memory it needs. Professional graphics cards with lots of video memory (often 16x more than an average consumer graphics card) could easily accommodate Z-buffers at 16-bit to 32-bit depths (the more bits, the higher the precision; see the problems with 3DLabs' first-generation Permedia). Accelerators without Z-buffer support usually used polygon/vertex sorting to determine which surfaces to render first (visible surfaces) and which not to render at all (hidden surfaces), as sketched below. This can create quirks in the rendered results, especially if the edges of the polygons do not line up or intersect properly.
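A minimal sketch of that sorting approach (hypothetical types and data, ordering polygons back to front by their average depth):

```c
/* Minimal sketch of painter's-algorithm-style sorting: with no Z-buffer,
 * polygons are ordered back to front by a single depth value and drawn in
 * that order.  Types and the depth metric are simplified for illustration. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int   id;
    float avg_z;   /* average depth of the polygon's vertices */
} Polygon;

static int farther_first(const void *a, const void *b)
{
    float za = ((const Polygon *)a)->avg_z;
    float zb = ((const Polygon *)b)->avg_z;
    return (za < zb) - (za > zb);   /* larger z (farther away) sorts first */
}

int main(void)
{
    Polygon scene[] = { {1, 2.0f}, {2, 9.5f}, {3, 5.25f} };
    size_t n = sizeof scene / sizeof scene[0];

    qsort(scene, n, sizeof scene[0], farther_first);

    for (size_t i = 0; i < n; i++)
        printf("draw polygon %d (z=%.2f)\n", scene[i].id, scene[i].avg_z);
    return 0;
}
```

Any single-depth-per-polygon ordering like this breaks down when polygons interpenetrate or overlap cyclically, which is exactly where the rendering quirks show up.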

That is not middleware. I'd advise you not to use words you clearly don't understand.
Middleware includes wrappers and proprietary APIs/DLLs. In other words, "software glue" between the game engine and the graphics card's (low-level) drivers.
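As a toy illustration of that kind of glue layer (every identifier below is hypothetical, not any real vendor API):

```c
/* Toy "software glue": the game codes against one neutral draw call and the
 * wrapper forwards it to whichever vendor-specific entry point is present.
 * All names here are hypothetical. */
#include <stdio.h>

typedef struct { float x, y, z; } Vertex;

/* Stand-ins for two incompatible, proprietary driver entry points. */
static void vendorA_submit_triangles(const Vertex *v, int count)
{
    (void)v;
    printf("vendor A path: %d vertices\n", count);
}

static void vendorB_draw_list(const Vertex *v, int count)
{
    (void)v;
    printf("vendor B path: %d vertices\n", count);
}

/* The face of the "middleware" that the game engine actually calls. */
static void glue_draw(const Vertex *v, int count, int use_vendor_a)
{
    if (use_vendor_a)
        vendorA_submit_triangles(v, count);
    else
        vendorB_draw_list(v, count);
}

int main(void)
{
    Vertex tri[3] = { {0, 0, 1}, {1, 0, 1}, {0, 1, 1} };
    glue_draw(tri, 3, 1);
    glue_draw(tri, 3, 0);
    return 0;
}
```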

You can "prove" almost any lie by starting with a conclusion and then cherry-picking supporting facts.

In this case, the OpenGL that existed at the time of DX 6 is already different than what came before, so it was a moving target. But, you also completely disregard other points and explanations that don't align with your narrative. Speaking of explanations, your evidence is entirely circumstantial. You literally don't know why it happened that way - you're only guessing. And you're trying to make up with quantity what your points lack in quality - that doesn't work.

That wasn't spin - I was simply pointing out that the APIs are not so far apart, from a functional point of view. Thus, any hardware that had even been informed by what OpenGL was doing (much less, the entire rest of the industry!) shouldn't have been so badly wrong-footed by D3D.
I am talking about hardware from around the DirectX 6 time period. By then, most 3D accelerators could already support both DirectX and OpenGL; that addresses your argument about games supporting both APIs. Before that era, however, games that supported both Direct3D and OpenGL would only perform better with Direct3D, as most of the early 3D accelerators did not support hardware OpenGL. That is how Direct3D took off so quickly (and why many early 3D accelerators for Windows were mainly Direct3D-only).

This can also be said of D3D games running on its software renderer. Anyway, it's utterly irrelevant.

Again, I could take your bait and litigate all of your points. But, this is just a sideshow.
Nopesies, that is incorrect. If you try running a Direct3D game on a 2D-only graphics card, it will not work (it spits out a "hardware not supported" error). Only certain games come with their own software 3D rendering engine for when there is no hardware Direct3D support (one example is Unreal in 1998). You cannot spin yourself out of this one. As for Microsoft's OpenGL software rendering, you can read about it here: Default Renderer, which works with all graphics cards. If the graphics card driver provides an OpenGL ICD, the default renderer will not be used.
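For illustration, a small sketch of how a program can tell whether it landed on a vendor ICD or on Microsoft's default renderer. It assumes a current OpenGL context already exists (e.g. created with wglCreateContext); "GDI Generic" is the renderer string the Microsoft software implementation reports:

```c
/* Sketch: report whether OpenGL calls are going to a hardware ICD or to
 * Microsoft's software fallback.  Assumes a GL context has already been
 * made current on this thread. */
#include <stdio.h>
#include <string.h>
#include <windows.h>
#include <GL/gl.h>

void report_gl_renderer(void)
{
    const char *vendor   = (const char *)glGetString(GL_VENDOR);
    const char *renderer = (const char *)glGetString(GL_RENDERER);

    if (renderer && strstr(renderer, "GDI Generic"))
        printf("Software fallback (%s / %s): expect slideshow frame rates.\n",
               vendor ? vendor : "?", renderer);
    else
        printf("Hardware ICD in use: %s / %s\n",
               vendor ? vendor : "?", renderer ? renderer : "?");
}
```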

You're not listening. That's why you keep treading the same broken path.

The saddest part of all is that by reading the Vintage3D review, you could've actually learned a few things. Instead, you chose to insult me with a superficial review of the same HW by some noob with 1% the level of depth, breadth, and understanding of the Vintage3D review.

I'm done with this farce.
Go ahead and read the rest of the Vintage3D reviews, especially the one on 3DLabs' first-generation Permedia chips, which seem to have been adapted (a "desperate hack") from their early GLINT designs. The results are not pretty in Direct3D, nor in OpenGL either (due to features being cut down from their high-end chipsets). As mentioned earlier, it's also the hardware.
 

bit_user

You forgot to type anything.

Yeah, he's nit-picking. I could & would defend those points, but it wouldn't affect the larger issue. Continuing that exchange would ultimately be unproductive.

My whole reason for following up the original counterpoint about the NV1 is that I'd long held the same opinion as @GetSmart . It wasn't until I learned the details of their implementation that it became clear to me how ill-conceived it was, and just how close nVidia must've come to the failure befalling most other early 3D chipmakers. I thought @GetSmart & possibly others would appreciate this information, but clearly I underestimated that user's ego.

Most people aren't very receptive to new information, if it contradicts a position they've staked out. Even if the position is one as tenuous as defending a failed product. In fact, the absurdity of the position probably serves only to entrench them even more. It's a good lesson for us all.
 

GetSmart

Yeah, he's nit-picking. I could & would defend those points, but it wouldn't affect the larger issue. Continuing that exchange would ultimately be unproductive.

My whole reason for following up the original counterpoint about the NV1 is that I'd long held the same opinion as @GetSmart . It wasn't until I learned the details of their implementation that it became clear to me how ill-conceived it was, and just how close nVidia must've come to the failure befalling most other early 3D chipmakers. I thought @GetSmart & possibly others would appreciate this information, but clearly I underestimated that user's ego.

Most people aren't very receptive to new information, if it contradicts a position they've staked out. Even if the position is one as tenuous as defending a failed product. In fact, the absurdity of the position probably serves only to entrench them even more. It's a good lesson for us all.
Heck, some of the things I've written can also be found here: Comparison of OpenGL and Direct3D, especially this part.
In the early days of 3D accelerated gaming, most vendors did not supply a full OpenGL driver. The reason for this was twofold. Firstly, most of the consumer-oriented accelerators did not implement enough functionality to properly accelerate OpenGL. Secondly, many vendors struggled to implement a full OpenGL driver with good performance and compatibility. Instead, they wrote MiniGL drivers, which only implemented a subset of OpenGL, enough to run GLQuake (and later other OpenGL games, mostly based on the Quake engine). Proper OpenGL drivers became more prevalent as hardware evolved, and consumer-oriented accelerators caught up with the SGI systems for which OpenGL was originally designed. This would be around the time of DirectX 6 or DirectX 7.
 

bit_user

You can look back at page 1 at his first wall-of-text.
Noun
wall of text (plural walls of text)

  1. (chiefly Internet slang) An intimidatingly large block of writing, particularly one with few or no paragraph breaks.
Source: https://en.wiktionary.org/wiki/wall_of_text

Example: https://forums.tomshardware.com/thr...-feature-for-tiger-lake.3520193/post-21279200

Yeah, go ahead and reprimand me for this, remix. I expected "smart" to have the last word, but this jab was not only uncalled for, but also just inaccurate and even hypocritical.
 

GetSmart

This is what I meant by your manner and tone, from this initial one (wall-of-text).
Nopsies, yourself!!!
My original comment was short, but your initial one was the wall-of-text I mentioned. And look above at your own replies. Did I ever accuse you of lying? Did I ever accuse you of diversionary tactics? I gave a very simple reason for that problem with NVidia's NV1, but you had to keep pushing the argument about those problems.