How AMD Could Disrupt The Graphics Card Market In 2019



If, could, maybe. Fictional journalism shilling for ad dollars. Yes, it's a genre.
 


So is the theory that we are playing both sides of the game here? I mean a month ago we were clearly on the take from Nvidia.

Or maybe it's POSSIBLE an OPINION piece could... I don't know... have an OPINION leaning toward one side? Maybe? Do you understand the concept of an editorial, or of journalism? They taught this stuff in middle school.
 


There are multiple problems AMD is facing. Even in rasterization it's definitely not going to be easy for AMD to compete, and the first big problem is their compute utilization efficiency.

Firstly, even though the RX Vega 64 Liquid Cooled at its boost clock has higher peak FP32 throughput than an RTX 2080 Ti at its boost clock (13.738 vs. 13.448 TFLOPS), the 2080 Ti is ahead by a ridiculous margin in actual games. You can't attribute that much of a difference just to higher memory bandwidth (a 27% bandwidth increase wouldn't give you a 100% performance boost), nor to a larger die; it's not a case of wider, lower-clocked cores. (All those Tensor cores and RT cores take up a lot of space, probably around 33% of the die.)
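For reference, peak FP32 throughput is just shader count x 2 FLOPs per clock (one FMA) x boost clock, so those spec-sheet numbers are easy to reproduce. A minimal sketch, assuming the published shader counts and reference boost clocks:

```python
# Rough peak-FP32 math: shaders * 2 FLOPs per clock (one FMA) * boost clock (GHz).
# Shader counts and boost clocks below are the published spec-sheet values.

def peak_tflops(shaders: int, boost_ghz: float) -> float:
    """Theoretical single-precision peak in TFLOPS."""
    return shaders * 2 * boost_ghz / 1000.0

cards = {
    "RX Vega 64 Liquid": (4096, 1.677),  # 4096 stream processors @ ~1677 MHz boost
    "RTX 2080 Ti":       (4352, 1.545),  # 4352 CUDA cores @ 1545 MHz reference boost
}

for name, (shaders, clock) in cards.items():
    print(f"{name}: {peak_tflops(shaders, clock):.3f} TFLOPS")
# RX Vega 64 Liquid: 13.738 TFLOPS, RTX 2080 Ti: 13.448 TFLOPS
```

The point is that the raw arithmetic throughput is nearly identical on paper, which is why the real-world gap has to come from how efficiently that compute actually gets utilized.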

Secondly, Nvidia has a massive clock speed/power consumption advantage: Nvidia clocks their GPUs very high yet is able to maintain quite a low TDP even on a mammoth 754 mm² die. Meanwhile, a Vega GPU clocked at just 1.6 GHz, with a smaller die, consumes a mind-numbing 350 watts. This is really important, since a higher clock speed gives you breathing room on die area. Just remember that 7nm definitely won't be cheap, especially since it will be in such high demand. For AMD to take a bite out of mainstream market share they have to improve their architecture dramatically, especially their memory compression algorithms, even on 7nm; otherwise they will end up with either a memory bottleneck or a very large die (on 7nm even 300 mm² is large, since it would be such a new node, especially for a mainstream graphics card).

Thirdly, RTX also works on Vulkan; Nvidia is bringing ray tracing support to it, so that open-source point is just moot... Support for hybrid rendering is going to be hardware-agnostic, and AMD will eventually support DXR as well... The real question is performance, and for that they definitely need specialized hardware.

http://on-demand.gputechconf.com/gtc/2018/video/S8521/

It really is going to depend on what and when Nvidia releases their 20-series mainstream cards and how they price them. Actually, would they release 7nm mainstream cards with small dies and no RT cores? That would be checkmate, as low power could become a value add-in.


 
To be honest, I think that Nvidia will reduce RTX card prices to normal ranges in 2019 once AMD cards appear...

They are just milking the rich for six months, until the end of Q1 2019...

AMD can't win anything if they don't release their stuff NOW... or make a super move and lower their Vega card prices by 50%.

No one will touch any Nvidia card if they lower the Vega prices today, in case they don't have the new generation ready.

Hey AMD, if you are reading this... just lower the Vega card prices.
 

They are going to do the same thing they did when the 1080 Ti was released. The 1080 Ti was introduced at $700 and the non-Ti was bumped down to $500. Only this time they are going to release the Titan, and that will be when they reduce prices, but the Titan will cost what the 2080 Ti costs now.
 
The method you are describing for accelerating ray tracing on AMD GPUs works on Nvidia cards as well; after all, it's just OpenCL. Nvidia's ray tracing cores are, according to demonstrations from SIGGRAPH, faster than using OpenCL or CUDA to do the same calculations.
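To make concrete what "just OpenCL/CUDA" means here: a compute-based ray tracer runs BVH traversal and ray-triangle tests as ordinary shader code, which is exactly the work RT cores move into fixed-function units. Purely as an illustration (not any vendor's actual kernel), this is the standard Moller-Trumbore ray-triangle test that such a kernel evaluates per ray:

```python
# Illustration only: the Moller-Trumbore ray/triangle test that a compute-shader
# (OpenCL/CUDA) ray tracer evaluates in software for every candidate triangle,
# and that dedicated RT hardware accelerates in fixed function.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return the hit distance t along the ray, or None on a miss."""
    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, edge2)
    a = dot(edge1, h)
    if abs(a) < eps:                      # ray is parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, edge1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(edge2, q)
    return t if t > eps else None

# A ray fired down -z at a triangle lying in the z = 0 plane hits at t = 1.0.
print(ray_hits_triangle((0.25, 0.25, 1.0), (0.0, 0.0, -1.0),
                        (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
```

Running millions of these tests per frame in general-purpose shaders is what eats compute throughput; that is the part the dedicated hardware is there to accelerate.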
 

Except they priced the Titan V at $3000. They've shifted around their product names for each price point, and the 2080 Ti already fulfills the role of what would have normally been called a Titan in prior generations. They'll no doubt release another "Titan" soon, but I doubt it would be much less than the $3000 that they priced the previous one at, and it will likely offer worse gaming performance per dollar than any other card in the series, being intended more for professional tasks than anything.
 
The only issue that I see with open source is this:

There are limitations on what you can work with. While Nvidia has its own API for ray tracing, AMD has to work on OpenCL or Vulkan. You have to work with what's already available in open source and stay within those boundaries.

That's why people are saying G-Sync is better, RTX ray traces faster, etc.

When it's your own, you have complete control and can implement it perfectly.
 

Nvidia may have good competition from AMD in the future. If AMD can step up to the plate, what I said will have to happen. However, even if they don't, prices will drop. They always do.

 
Can DXR be offloaded to a separate GPU or the CPU or does it have to be performed by the same processor as the rasterization? Is that up to the engine developers?
 


WTH are you talking about? 4K 60Hz HDR and FreeSync on a 60+ inch screen.

It is not only for the Xbox One, you know... it's for your PC as well. I am not buying monitors anymore, I am buying TVs. Nvidia will never be in the TV market, which means they failed.
 


The little issue here is that they use their biggest die ever for these supposedly gaming cards. Add the price of GDDR6 on top and you have something that is worse than Vega.
 


Interesting concept. I was going to purchase a G-Sync ultrawide, but now I might reconsider. Never say never, though. If Nvidia does release a G-Sync television, it's going to cost an insane amount of money.
 

The article says "Samsung’s FreeSync mode only supports 1080p resolution for now, and the company notes that it might cause some random weirdness with the TV’s brightness at times." That's why I said what I said.
 
A well-rasterized game (with a good engine) looks more realistic than a ray-traced game. Look at Metro Exodus, which doesn't really stand out, and then look at the UE4 demos.

It's best if AMD disrupts the GTX 1070 and GTX 1080 market with nearly equal power draw, a little more horsepower, and an unbeatable price.

The ideal scenario would be to get the RX 670 and RX 680 to fight the 1070 and 1080 with a price bump from the 570 and 580. It might cannibalize Vega, but I don't see Vega doing any better right now.

To me, RTX is a luxury in the graphics department. It needs to start somewhere, and now is the right time to test its worth, but real-world implementation is still light years away.

There are better and less demanding methods to make games look realistic. A whole compute core for just an effect that really doesn't add good photorealism, smh.
 


Yet Intel invested $7 billion into a fab, which is more than AMD's entire gross revenue.

People laugh at Intel. When Intel was getting Core ready, AMD didn't think anything of it. Now Intel controls 99% of the server market. Before that, AMD had a vastly superior uarch, especially for servers.

While I won't believe it till I see it (Intel has tried and failed before, and they cut Larrabee from a dGPU down to an HPC GPU), I don't think it's impossible. Their IGP, while not the best, still has some parts that are better: Quick Sync is still some of the fastest encoding hardware out there, and they are somewhat competitive with some AMD iGPUs.

BTW, every new tech is more silicon and more engineers. That's pretty much Moore's law in a nutshell: to go faster, add more transistors. You act as if AMD or Nvidia don't do the exact same thing, when every single new product they bring out has a larger die than the last.
 

Minor point: QS (Quick Sync) is fixed-function hardware, limited in what it can do. I certainly wouldn't use it as a primary encoder. With that being said, it's extremely fast and thus good for on-the-fly transcoding, for example. It's a good hardware block, but it's got absolutely heck-all to do with that there shady-rastery-poly business.
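As an aside, "on-the-fly transcoding" with that block usually just means handing the job to the QSV encoder, e.g. through ffmpeg. A minimal sketch, assuming an ffmpeg build with Quick Sync support and using placeholder file names:

```python
import subprocess

# Sketch only: offload an H.264 transcode to the Quick Sync fixed-function block
# via ffmpeg's QSV encoder. Requires an Intel iGPU and an ffmpeg build with QSV
# support; "input.mp4" / "output.mp4" are placeholders.
cmd = [
    "ffmpeg",
    "-hwaccel", "qsv",     # decode on the iGPU where possible
    "-i", "input.mp4",
    "-c:v", "h264_qsv",    # encode H.264 on the Quick Sync block
    "-b:v", "6M",          # target bitrate; quality knobs are limited vs. x264
    "-c:a", "copy",        # pass the audio through untouched
    "output.mp4",
]
subprocess.run(cmd, check=True)
```

Fast and cheap on CPU time, but you take whatever quality the fixed-function encoder gives you, which is why it fits on-the-fly transcoding better than primary encoding.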

Otherwise, I agree with you. Don't underestimate Intel. If they don't have the talent, they'll poach it (they already have to a large extent).
 


"Poach" is not quite the word I would use. Jim Keller was set to leave AMD after Ryzen launched anyway. I don't even think he was working for AMD when Intel picked him up. I would rather say they will find the right person for the job.

We will see. My point is that Intel can get into a market and do well. They have the funds to push R&D. Besides, I am all for it. Shake the market up and give us more choice.
 

One, I'm not against it. Two, I wasn't talking about Keller at all. I'm talking about their move into North York, and to a lesser extent Koduri (and those he has brought or will bring into the fold as well).
 


I thought about Koduri, but I'm not sure that was a win. AMD's graphics have not exactly been uber-competitive, and it shows (see RTX prices).

Keller is a win, though. He has been a great CPU designer since his days at DEC, when he designed the Alpha (which helped influence his design for the K8 uarch).
 

Keller is absolutely a win... but I'm not sure what that has to do with the topic at hand. From what I've read, they didn't bring him on board to design a discrete GPU. He's going to be primarily in a project/resource management role.

Koduri is still a win, for Intel at least. You want to talk about not competitive, look at Intel's last stab at discrete graphics. So stick Koduri in there with a pile of resources and they'll do better than they ever did before. I'm not a fan of Koduri, but he does have the potential to bring more graphics talent into the fold and get the ball rolling.

Do a search for Intel North York and tell me what they're doing up there.
 


I never mentioned Keller for graphics, just that he is a win.

Yeah, they have never been successful with GPUs, but their last effort, Larrabee, did turn into a profit monster for them thanks to HPC.