Intel Tweets First Video Of Its Discrete Gaming GPU




"They had a slow graphics driver release because there is no need to release drivers when the majority of people just need a stable working driver."

Exactly, Jimmy!
I'm one of those people. If you're a professional, or just care, do yourself a favor: take a 3rd- or 4th-generation Core i7 with integrated graphics, install a major 3D application from the same era (2012-2014), and try working with the selection tool or just drawing a rectangle, on the latest 2018 Intel drivers under Windows 10. Then you'll learn what "stable" means, and perhaps what "depression" means too. (Those chips' graphics are even certified on Autodesk's website, for example.)

Leave the benchmarks and the ads behind and see the real world of 3D, not just email and writing comments in a browser. Please don't get me wrong: Intel is a great company with great development teams, and it has devoted fans like you and me. I know that.
A powerful $300-500 Core i7 desktop APU that's three or four years old is only good for surfing the web or watching movies. Of course, Quick Sync handles all the 2D work just fine, but 3D? No way.
Buying a discrete AMD or Nvidia GPU of the same age would work just fine.
 

...yep.

If you are a mid-sized graphics/animation studio, imagine having 96 GB of VRAM and a boatload of CUDA/RT/Tensor cores at your fingertips with two NVLinked Quadro RTX 8000s.

Next, imagine if you were a major film studio and had a warehouse filled with the same systems networked together, all dedicated to rendering.

Finally, imagine you are a major research centre looking to step up to a next-generation supercomputer built on, say, 4,000 nodes, each a cluster of 6-8 NVLinked Turing Teslas paired with quad IBM POWER9s.

Yeah, Intel has just a bit of a ways to go yet to catch up.

 

Maybe when APUs have a couple stacks of HBM2 memory in-package.

However, GDDR5 isn't something you can just put on DIMMs, in place of DDR4.

The other problem with making an APU with a big iGPU is cooling. A 1060 is rated at 120 W. Some of that will be for the fan and memory, but about 100 W of it will be for the GPU, itself. This is more than the 95 W rating of processors like the i7-8700K. If the iGPU accounts for maybe 10 W of that, then replacing it with a 100 W iGPU means you need a cooling solution of like 185 W, not to mention extra PSU capacity and a larger case.
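
For what it's worth, here's that arithmetic spelled out as a rough Python sketch; the 20 W fan/memory share and the 10 W iGPU share are the same guesses as above, not measured figures:

```python
# Back-of-envelope power budget for a hypothetical "i7 + 1060-class iGPU" chip.
# The 20 W fan/memory share and 10 W iGPU share are guesses, as in the post.
gtx1060_board_w = 120                    # GTX 1060 rated board power
gpu_core_w      = gtx1060_board_w - 20   # ~100 W for the GPU itself
i7_8700k_tdp_w  = 95                     # i7-8700K package rating
igpu_share_w    = 10                     # guess for today's iGPU share

apu_package_w = i7_8700k_tdp_w - igpu_share_w + gpu_core_w
print(f"Hypothetical APU package power: ~{apu_package_w} W")  # ~185 W
```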

You're essentially talking about a console APU. Remember that AMD has been building big APUs since the original PS4 first launched. If something like this really made sense for PCs, you'd think they probably would've done it, by now.

HBM2 could change things, somewhat. However, the supply/demand situation would have to shift, before it would probably be economically viable.
 

It can't have been that successful, or why would they have just killed Xeon Phi?
 

I think that's what their RTX Server is all about:


8 Quadro RTX 8000s in a single box
General availability in Q1'2019
3250W power consumption
$125,000



They also had a slide for that; see the source link below.


Source: https://www.anandtech.com/show/13215/nvidia-siggraph-2018-keynote-live-blog-4pm-pacific
 
...yeah, but for more of an intermediate studio, $20,900 for two Quadro RTX 8000s and the NVLink widgets would still offer a lot of rendering power.
 
Intel has had issues with Spectre and Meltdown, cheap TIM used on CPUs that run as hot as a furnace, and more issues popping up daily, and they want to talk about a product two years out?
That GPU in the picture is one they made back when they still made GPUs, and nobody brags about Intel integrated graphics, so what do we have to look forward to?
Raja Koduri is known for dropping GPUs, not developing them. He took over Vega, and look what happened.
 
According to Forbes, the PNY RTX 2080 Ti OC will be released at around $1,000, with reference cards at $899. You could prolly expect FTW-type versions closer to $1,300. I'd hate to see what a Titan-class card would cost, prolly over $1,500.

Back in the day, Nvidia wasn't the giant; AMD was. I'm sure there are plenty of people who tangled with early Nvidia onboard graphics on nForce mobos. Then they got smart and absorbed companies like 3dfx and ULi (who supplied southbridge chipsets to AMD), and the ideas flew. AMD got ATI. Intel doesn't have that option: it's basically starting from scratch, with only its in-house development teams coming up with anything, in a market that's already dominated by others. Without some serious inspiration from somewhere, I doubt there's going to be much competition for high-end cards for a good long while.
 
...as I recall, the Skylake-X series of CPUs had some serious heat issues because Intel went from soldering the heat spreader to the die to attaching it with a low-quality thermal paste in order to cut costs. Not a smart idea.
 


...the 12 GB Volta Titan V retailed for $3,000, more than the 16 GB Quadro P5000. If you work in 3D and render scenes/animations, VRAM is your best friend, because if the memory footprint of the render exceeds the VRAM on the card, then all the cores/stream processors in the world won't help.
 

It's funny you should say that, because I was just reflecting on Nvidia's Turing and how Nvidia seems to be way more trouble for AMD than Intel, lately.

Nvidia has stayed a step ahead of AMD for several generations, now.

■ Maxwell introduced tile-based rendering
■ Pascal introduced packed fp16 arithmetic (although I think Broadwell's iGPU technically did this first)
■ Volta introduced Tensor Cores
■ Turing introduced RT Cores

Not since Fury introduced HBM has AMD actually done something before Nvidia. So, I say: good luck to Intel. Nvidia is surely not a competitor I would want to have.
 

Again though, we're talking about processors that would likely be released around the end of 2020 or shortly thereafter, and Intel will obviously have a more efficient fabrication process available for their CPUs than the 16nm process that the already 2+ year old GTX 1060 uses. I suspect that they could get the total power draw for a 6-core + 1060-class APU down under 100 watts combined.

As I was saying before though, by that time, 1060-class performance should be fairly low-end for a dedicated card, probably still able to run the latest games at the time of its release, but most likely at relatively low settings in most big titles. It would certainly be a lot better than Intel's current offerings though, where most new releases at the time of a processor's launch are unplayable on the integrated graphics.

It's also possible that they could sell the processors with better graphics capabilities at a premium, a bit like how they charge extra for unlocked models now. Maybe have standard models with half the graphics performance and a lower TDP, for those who are less interested in gaming or are using a dedicated card.

Of course, Intel's not the only one who could do this. Even though AMD's 2400G is built on a 14nm+ process, it already offers 1030 class performance along with 4 cores and 8 threads, while only consuming around 50 watts during a typical gaming workload. With a larger chip, I have little doubt that they could put out a 95 watt TDP processor with 1050-class graphics performance today, if they wanted to. By the time Intel starts releasing their new GPUs, AMD should be building processors on a 7nm+ or better process, and they are also supposedly working on a major overhaul of their graphics architecture, so an APU with 1060-class performance at a reasonable TDP from them also seems very possible.
 

I'm taking the poster's comments to mean 1060-class equivalent (i.e. inflation-adjusted).

However, even if we're talking about literally 1060-level performance, that will still probably be out of reach for iGPUs, in only 2 years. Maybe 1050 will be within range.


How many people are going to pay extra money for that and an upgraded CPU cooler, when just a little more would get you a much better-performing dGPU?


Where/how are you going to get the additional memory bandwidth?

I don't know if your 50 W figure is accurate, but I've heard complaints it hits thermal throttling with the stock cooler.
 

They did say "as fast as a GTX 1060", which seems to imply that they were referring to an actual GTX 1060, rather than a 3060 or whatever is available in its price bracket then, which might offer performance closer to a 1080 Ti, if performance improvements from prior generations are any indication.


With a 95 watt TDP, it could be feasible for Intel or AMD to include a better (Wraith Spire level) cooler with the CPU to make it work, and if the processor is offering graphics performance comparable to a ~$100 dedicated card, then there should still be room for the APU to provide a better value.


There's a good chance that a new CPU released around that time might be for motherboards running DDR5 memory, which should provide additional bandwidth. And maybe even a small amount of memory could be incorporated into the CPU, though I'm not sure how practical that would be.
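
For a rough sense of scale (my numbers, not the poster's; the DDR5-6400 speed is just an assumption about where DDR5 might land):

```python
# Rough main-memory bandwidth comparison for a dual-channel desktop platform.
# DDR4-3200 is a common high-end speed today; DDR5-6400 is an assumed target.

def dual_channel_gbs(mt_per_s):
    """Two 64-bit channels: 2 * 8 bytes per transfer * transfer rate."""
    return 2 * 8 * mt_per_s / 1000  # GB/s

print(f"DDR4-3200 dual channel: {dual_channel_gbs(3200):.1f} GB/s")  # 51.2
print(f"DDR5-6400 dual channel: {dual_channel_gbs(6400):.1f} GB/s")  # 102.4
# Even doubled, that's still well short of a GTX 1060's 192 GB/s of GDDR5.
```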

As for a theoretical 1050-class IGP released today, I'm sure the limited bandwidth would be detrimental to performance in many scenarios, but I also think there's still some room to get additional performance by adding graphics cores. Memory bandwidth might not necessarily need to match that of a 1050 to get performance roughly comparable to one. We'll have to see if AMD pushes graphics performance any further with their 7nm APUs.


Tom's Hardware tested power consumption of the 2200G and 2400G in The Witcher 3 and FFXV...

https://www.tomshardware.com/reviews/amd-raven-ridge-thermal-power-benchmarking,5464-2.html

The 2200G averaged just 32 and 37 watts in those games, while the 2400G averaged 40 and 49 watts, with only brief, split-second spikes significantly exceeding those values, and even on AMD's weaker Wraith Stealth cooler, temperatures hovered in the vicinity of 50C (albeit with the fan running at full speed). Since the processors can balance CPU and GPU performance based on temperatures and power draw, they don't need to run both at their maximum performance level. I do believe that Tom's was testing on an open test bench though, so it's possible that there could be a reduction in performance in a case with poor airflow or with higher ambient temperatures.

And I'm just saying that this could be possible, not that they'll necessarily do this. However, with both AMD and Intel potentially offering competitive entry-level gaming performance on their CPUs, I suspect that both will try to outdo each other. This goes for dedicated cards too. Intel is going to want to establish themselves as a serious contender in the GPU space, while AMD and Nvidia are likely to provide as much value as they can to avoid losing market share to a new competitor.
 

You're making some rather optimistic assumptions.

A stock GTX 1050's 128-bit GDDR5 at 7 Gbps effective provides 112 GB/s, which is more than twice what a Ryzen APU has to share between its CPU cores and iGPU.

The GTX 1060 steps up to a 192-bit bus at 8 Gbps for 192 GB/s. The only way you're getting that kind of bandwidth in a socketed APU is to plunk a stack of HBM2 in there, which would make it that much more expensive. As for whether all this bandwidth is necessary, consider that the 1050 can do about 15.5 floating-point operations for every byte it can read from or write to memory (and those operations work on 4-byte quantities). The 1060 can do 18.1 or 20.1 per byte, for the 3 GB and 6 GB models, respectively. Maybe that doesn't mean a lot to you, but memory bandwidth costs money, and they wouldn't have equipped these cards with much more (or less) of it than needed to keep the shaders fed.
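
Those ratios are easy to sanity-check from the cards' published bus widths, memory speeds, shader counts, and base clocks; a quick sketch, with figures pulled from public Pascal specs:

```python
# Back-of-envelope check of bandwidth and FLOPs-per-byte figures.
# Shader counts and base clocks are public Pascal specs; ratios are approximate.

def bandwidth_gbs(bus_bits, gbps):
    """Memory bandwidth in GB/s: bus width in bytes * effective data rate."""
    return bus_bits / 8 * gbps

def fp32_gflops(shaders, clock_ghz):
    """Peak FP32 GFLOPS: 2 ops (one FMA) per shader per clock."""
    return 2 * shaders * clock_ghz

cards = {
    # name: (shaders, base clock GHz, bus bits, memory Gbps)
    "GTX 1050":     (640,  1.354, 128, 7),
    "GTX 1060 3GB": (1152, 1.506, 192, 8),
    "GTX 1060 6GB": (1280, 1.506, 192, 8),
}

for name, (shaders, clock, bus, gbps) in cards.items():
    bw = bandwidth_gbs(bus, gbps)
    flops = fp32_gflops(shaders, clock)
    print(f"{name}: {bw:.0f} GB/s, {flops / bw:.1f} FLOPs per byte")

# GTX 1050:     112 GB/s, 15.5 FLOPs per byte
# GTX 1060 3GB: 192 GB/s, 18.1 FLOPs per byte
# GTX 1060 6GB: 192 GB/s, 20.1 FLOPs per byte
```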

Then, there's the question of die size. If we say RX 580 is about comparable with a GTX 1060, you're talking about adding 232 mm^2, which dwarfs the cores at about 96 mm^2 (half of Ryzen's 192 mm^2 die). That's certainly going to impact costs.

As for power and cooling, we're talking about going from 11 CU to 36 CU. So, don't forget to scale accordingly.
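
Putting the poster's own numbers together, here is a crude sketch that treats area and power as scaling linearly with CU count, which is optimistic:

```python
# Crude estimates from the figures above; real scaling wouldn't be perfectly linear.
cpu_cores_mm2 = 96    # ~half of a 192 mm^2 Ryzen die
rx580_mm2     = 232   # Polaris die, standing in for a 1060-class GPU
print(f"Hypothetical APU die: ~{cpu_cores_mm2 + rx580_mm2} mm^2 "
      f"vs 192 mm^2 for Ryzen")                                     # ~328 mm^2

current_cu, target_cu = 11, 36
print(f"iGPU power/area scaling: ~{target_cu / current_cu:.1f}x")   # ~3.3x
```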

Now, you're going to say this gets much better with 7 nm, but I think not by more than a factor of 2.

All I'm saying is that it's tempting to think monster APUs are just around the corner, but I think there are good reasons why it hasn't happened. Maybe we'll see them in laptops, where soldered CPUs and memory aren't such a big deal, but I keep coming back to the point about AMD having been there and done that since the PS4 launched. If they thought it made sense for PCs, they'd be doing it. If you look at the price (~$650) of that recently announced Chinese PC/console, maybe there are some clues as to why.
 