Intel To Release Discrete Graphics Card In 2020, GPUs For Desktop PCs Coming, Too

I really hope Intel does this justice on its attempt to get into this market. Finally, a real third competitor. This should hopefully get prices back in check and stop all these silly shenanigans like partner programs and proprietary solutions.
 
I think Intel needs to get its act together more than it needs to go into the discrete GPU market. If it can compete, good. Very risky move, though.
 
Intel's driver support should be interesting. I haven't been impressed with their drivers so far, so expectations are low.

Other than that, they have a lot of catching up to do. Not sure grabbing the marketing guy from AMD was a good move; he has a history of over-hyping, bigly.
 

someone-crazy

Prominent
More like Intel is going to copy AMD's GPU technology. First, buy GPUs for their Kaby Lake-G product with a hidden clause somewhere, then hire AMD's engineers to help reverse engineer the design and tweak it so they can call it their own. Follow up by crippling other vendors' GPUs on Intel CPU hardware.
 

PaulAlcorn

Managing Editor: News and Emerging Technology
Editor


Food for thought -- It is notable that Intel has a well-stocked war chest of graphics IP. In fact, at one time it held more graphics patents than both AMD and Nvidia, though I'm not sure whether that statistic still holds true. Intel is still the largest supplier of graphics chips in the world, largely thanks to the integrated graphics in its CPUs, so it does have a solid base of knowledge. Also, the UHD Graphics architecture is inherently scalable by design, so many opine that this will just be a scaled-up version of the forthcoming Gen 11 (.5?) graphics engine.
 

COLGeek

Cybernaut
Moderator
Back in the day, Intel actually produced discrete video cards. I had one (around 1997/1998). I hope that whatever they produce this time is, by today's standards, better than that previous attempt.
 

Intel also announced that they would have 10nm CPUs by 2015, and 7nm by 2017. :D


Keep in mind that "a long time from now" is a pretty vague time frame. The CEO obviously doesn't want to hurt sales of current cards, and you can be sure that he's not going to break news of their next generation of cards in some quick interview with a tech news site. According to unofficial sources, it sounds like they intend to start launching their next generation of cards within the next couple months or so.
 
I remember the last Intel GPUs; I think they were called the 740s, and they were terrible. After a few people tried them and realized how limited they were, they were dropped like a hot potato. Expensive, slow, and with very poor driver support, as bad as ATI's back then, if not worse.
 

Math Geek

Titan
Ambassador
Keep in mind that they can do what AMD has done and not bother trying to be the fastest out there. Mid-range is where the bulk of sales is, and starting small with low- to mid-range cards would still be worthwhile if they can compete.
 

Tanyac

Reputable
I can't imagine Intel selling anything below the ~60% profit margin it gets on CPUs. This company isn't here to provide customers with value-for-money solutions, only products it thinks it can make billions in profit on.

I doubt AMD and Nvidia are too concerned, as Intel's products are always overpriced.
 

bit_user

Polypheme
Ambassador

I've been down this road, conceptually, and have some serious misgivings about this approach. While it would've been a good route to take for something like Xeon Phi, I think its resources are partitioned poorly and its SIMD pipe is too narrow to scale up to larger graphics and dense compute workloads.

Taking the last point first, the narrow (128-bit) SIMD engines mean more work for the instruction decoders per FLOP, which hurts energy efficiency (and energy efficiency ultimately limits GPU performance). However, this is great for the sort of mostly-serial or branch-heavy code that typically runs poorly on GPUs, which is why I say it's good for GPU-compute (hence, Xeon Phi - which is sort of like GPU-scale compute for CPU-oriented tasks).
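To put a rough number on that decode overhead, here's a little back-of-envelope sketch (my own illustrative model, not anything from Intel):

    # Illustrative decode-overhead comparison (my own rough model, assuming FP32
    # lanes and one fused multiply-add per decoded instruction).
    def instructions_per_gflop(simd_bits):
        lanes = simd_bits // 32            # FP32 lanes per instruction
        flops_per_instr = lanes * 2        # FMA = 2 FLOPs per lane
        return 1e9 / flops_per_instr       # instructions decoded per GFLOP

    for width in (128, 256, 512):
        print(f"{width}-bit SIMD: {instructions_per_gflop(width):,.0f} decodes per GFLOP")

A 512-bit pipe needs a quarter as many decodes per FLOP as a 128-bit one, and every decode you skip is energy left over for the ALUs.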

The next point would be the level of SMT, which is too little. It makes the GPU highly dependent on good thread partitioning, to avoid stalling the EUs, since you probably need 2-3 threads running at all times to keep a given EU busy (and here, I'm using the CPU definition of a thread). Like AMD's GCN, threads are all allocated to specific EUs, although GCN supports almost twice as many threads per EU-equivalent.
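For a sense of how much resident work that implies on a GT2 part, here's a quick sketch using the EU and thread counts from the Gen9 whitepaper linked at the end of this post (the SIMD-16 compile width is my assumption):

    # Work items that must be resident to fully occupy a Gen9 GT2 iGPU
    # (24 EUs x 7 hardware threads each, compiled at SIMD-16).
    eus, threads_per_eu, simd_width = 24, 7, 16
    print("Work items at full occupancy:", eus * threads_per_eu * simd_width)  # 2688

That's a lot of parallelism to keep fed before you even start scaling up the EU count.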

Finally, they really need to revise their memory architecture. HD Graphics' GTI bus represents a significant bottleneck for scaling much beyond their desktop chips. I think their Iris Pro graphics even suffer as a result (the 50 GB/sec to/from eDRAM can't even match Nvidia's GT 1030's off-chip bandwidth). I'm pretty sure one of the secrets to Nvidia's efficiency is partitioning the work and data across a number of narrow memory controllers/banks, which also explains how they reach odd memory sizes like 5 GB and 11 GB.
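Those odd sizes fall straight out of the channel count; a quick sketch (the card configurations are the commonly published ones, so treat them as my assumptions):

    # Memory capacity and bus width as a function of 32-bit channels,
    # assuming one 1 GB GDDR5/GDDR5X chip per channel.
    def bus_and_capacity(channels, gb_per_chip=1, channel_bits=32):
        return channels * channel_bits, channels * gb_per_chip

    for name, channels in (("GTX 1060 5GB", 5), ("GTX 1080", 8), ("GTX 1080 Ti", 11)):
        bus_bits, capacity_gb = bus_and_capacity(channels)
        print(f"{name}: {bus_bits}-bit bus, {capacity_gb} GB")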

My money is on either a clean-sheet redesign or a complete, top-to-bottom overhaul that barely resembles the current HD Graphics architecture. It's worth considering that HD Graphics hasn't fundamentally changed in something like 15 years. In fact, the Linux driver used for them is still called i915 (referring to the 2004-era motherboard-integrated GMA graphics).

Anyone interested in learning more should give this a look:

https://software.intel.com/sites/default/files/managed/c5/9a/The-Compute-Architecture-of-Intel-Processor-Graphics-Gen9-v1d0.pdf
 

bit_user

Polypheme
Ambassador

I guess you mean dGPU? Because Intel never stopped building GPUs - they just limited them to integrated graphics - first at the motherboard chipset-level, then embedded in the CPUs.

https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units
 

bit_user

Polypheme
Ambassador

I think their chip designers do a much better job of holding to a schedule than their manufacturing process engineers.

In fairness, it's too easy to sit back and complain about their lack of progress, while they wrestle with the limits of physics, chemistry, and mass production. Only a handful of companies in the world are competitive with their manufacturing tech.
 

bit_user

Polypheme
Ambassador

Two problems with that:

    ■ Faster -> bigger -> more expensive CPUs for everybody.
    ■ iGPUs are limited by memory bandwidth.
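For a sense of how tight that second constraint is, a rough peak-bandwidth comparison (my own back-of-envelope numbers):

    # Peak theoretical bandwidth = channels x bus width (bytes) x transfer rate (MT/s).
    def bandwidth_gb_s(channels, bus_bytes, mt_per_s):
        return channels * bus_bytes * mt_per_s / 1000

    print("Dual-channel DDR4-2666:", bandwidth_gb_s(2, 8, 2666), "GB/s")              # ~42.7
    print("GT 1030 (64-bit GDDR5 @ 6 GT/s):", bandwidth_gb_s(1, 8, 6000), "GB/s")     # 48.0
    print("GTX 1060 (192-bit GDDR5 @ 8 GT/s):", bandwidth_gb_s(1, 24, 8000), "GB/s")  # 192.0

An iGPU sharing roughly 43 GB/s with the CPU cores is already behind the cheapest current dGPUs before you try to scale it up at all.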


That said, AMD's APUs have clearly shown that Intel's iGPUs aren't exactly the pinnacle of what an iGPU can be. Yet, consider that AMD only sells APUs with up to 4 cores, while Intel offers its iGPU in chips with up to 6 (and maybe soon 8?) cores. So, it looks like AMD might be devoting a bit more die area to graphics, although I can't say I know exactly how the die sizes compare (and all of AMD's APUs are much cheaper than Intel's i7-8700K).

I've also read that the graphics on AMD's APUs can generate so much heat that the CPU has to throttle when using the stock cooler.

So, I take your point: they could probably optimize their iGPUs a bit. But there's not really a free lunch to be had there. And 3x is not realistic, unless we're talking about their Iris Pro, which is literally 3x as big and features 128 MB of eDRAM, but those are all mobile chips, and they aren't cheap. Finally, even Iris Pro isn't quite 3x, due to lower clock speed.
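Rough theoretical FP32 numbers illustrate that last point (the EU counts are the published GT2/GT4e configurations; the clocks are approximate, so treat this as a sketch):

    # Peak FP32 ~= EUs x 2 ALUs x 4 FP32 lanes x 2 (FMA) x clock (GHz) -> GFLOPS.
    def peak_gflops(eus, ghz):
        return eus * 2 * 4 * 2 * ghz

    hd530 = peak_gflops(24, 1.15)    # HD Graphics 530 (GT2), ~1.15 GHz boost
    iris580 = peak_gflops(72, 1.0)   # Iris Pro 580 (GT4e), ~1.0 GHz boost
    print(f"HD 530: {hd530:.0f} GFLOPS; Iris Pro 580: {iris580:.0f} GFLOPS ({iris580/hd530:.1f}x)")

About 2.6x in theory, before thermals and bandwidth take their cut.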
 

bit_user

Polypheme
Ambassador
BTW, one positive aspect of all this is Intel's commitment to OpenCL. They might finally get Nvidia to advance its support beyond v1.2.

Also, like AMD, Intel has recently been a strong contributor to open source. That just leaves one player holding to the mantra of closed and proprietary...
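If you're curious what your own machine exposes, a quick sketch with pyopencl (assuming the package and at least one OpenCL runtime are installed) lists each platform and the version it reports:

    # Enumerate OpenCL platforms and devices, printing the version each reports.
    import pyopencl as cl

    for platform in cl.get_platforms():
        print(platform.name, "-", platform.version)
        for device in platform.get_devices():
            print("   ", device.name, "|", device.version)

Nvidia's platform string is where you'll still see the 1.2 mentioned above.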
 

Math Geek

Titan
Ambassador


I had that thought as well. Intel doesn't have to reinvent the wheel for a lot of this. FreeSync is right there for them to implement if they wish.
 

mischon123

Prominent
@mathgeek: It gives you exactly one more top card to choose from, and it will most likely be more expensive than the successor to the 1080 Ti, which will invariably be priced higher now that Intel's high-price strategy pads the bottom line of every graphics card maker.
This doesn't apply to obsolete products like the 50/60/70/80 tiers or hot-running AMD...
Graphics cards are awfully low-performing today. 8K, 10-bit, HDR, and 3D pose insurmountable challenges for today's cards.
 

hannibal

Distinguished


Yep, Intel is in it to make money. Cryptocurrency GPUs, AI GPUs, and workstation GPUs are where the money is!
 

Yeah, it seems likely that Intel will support FreeSync, seeing as it's on its way to becoming a standard feature. Microsoft recently enabled FreeSync on the Xbox One for certain games, and while FreeSync TVs are not common yet, the feature is starting to appear there as well. Samsung recently added FreeSync support to some of its televisions via a firmware update, and more screens are undoubtedly on the way. I even suspect Nvidia may be forced to support it eventually.


This is silly. 8K is completely unnecessary for playing games, and its advantages would be minimal. At most common screen sizes, even 4K is arguably overkill. The vast majority of people are still gaming at 1080p or below, which even mid-range cards are perfectly capable of handling at high settings. Past a certain point, there are diminishing returns for adding more resolution. 4K requires at least three times as much GPU performance as 1080p, and for what, a moderately sharper image? Most people simply don't care enough to spend several times as much on a video card just to make the image slightly sharper. At typical viewing distances, they'll be hard pressed to see a difference unless they really look for it.

A decade or two ago, there were much more pressing reasons for graphics hardware to improve. Characters in games were blocky, environments were sparsely detailed, and in general the hardware advancements from one year to the next could be used to make things look substantially better. It was more than just adding more pixels to sharpen the image a bit. Certainly, there's still room to make game graphics look better, but at this point it's more a matter of what a developer can do on a given budget than what the hardware will enable them to do. And as chip manufacturing processes shrink, it becomes increasingly difficult and expensive to improve performance just by shrinking the architecture further, so performance advancements will naturally continue to slow as time goes on, unless there's some major, unpredictable breakthrough.
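For reference, the raw pixel counts behind that 4K-versus-1080p comparison (simple ratios; real GPU cost doesn't scale perfectly linearly, so treat this as a first-order proxy):

    # Pixels per frame relative to 1080p -- a first-order proxy for GPU cost.
    resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440),
                   "4K": (3840, 2160), "8K": (7680, 4320)}
    base = 1920 * 1080
    for name, (w, h) in resolutions.items():
        print(f"{name}: {w * h / 1e6:.1f} MP, {w * h / base:.1f}x the pixels of 1080p")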
 

manleysteele

Reputable
Going from 1080p to 1440p costs you roughly twice the graphics computing power, but on a 27" monitor it only raises the pixel density from about 80 to 107 DPI. 2160p costs about four times the compute of 1080p to go from 80 DPI to 160 DPI on the same panel. An 8K 27" monitor would display about 320 DPI, but it would cost you a whopping 16x the compute of 1080p.
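Here's the pixel-density math for a 27-inch 16:9 panel (diagonal pixel count over diagonal inches), which lands close to the figures above:

    # PPI on a 27" 16:9 panel = sqrt(w^2 + h^2) / diagonal_inches.
    from math import hypot

    diagonal_in = 27
    for name, (w, h) in {"1080p": (1920, 1080), "1440p": (2560, 1440),
                         "2160p": (3840, 2160), "8K": (7680, 4320)}.items():
        print(f"{name}: {hypot(w, h) / diagonal_in:.0f} PPI")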
 
All I'm saying is they'd better have a darn good driver team from day one.
I know of another [primarily] CPU company that bought its way into the discrete GPU market and initially turned a good thing into garbage. It took them around 10 years, a lot of ignorant customers' money (including mine), and a FRAPS reveal to even get them to pull their heads out of their buttocks about their issues.
 