WOW, the 295X2 is 60% faster than a GTX980 at 4K!!
You keep on saying RT is like other graphical improvements, but it is not. Compare a PS5 vs a PS4 (non-Pro) game visually and tell me you wouldn't rather have had that earlier instead of wasting time with ray tracing. Textures, polygons, shaders, all those things still need huge improvements, and RT is hybrid rendering, it relies on them for its visual upgrades. Even look at Quake RTX: it is all the non-RTX things that make it look good, not the RT itself. I can easily make a new Quake that looks better without RTX.
yeah two phat chips compared to one mid range will be two apples to one orange.
GTX 980 was a midrange card? ... I never knew that ...
If Nvidia never tried anything new we'd still be playing at 640x480x16bit on TNT2, rasterized of course.

Wasn't worth the hit to frame rate in 2008, not worth it now. Maybe in 2028.
https://www.youtube.com/watch?v=mtHDSG2wNho
i didn't say card, i said CHIP.
yep. nvidia's shell game since kepler. the Gx-104 chips were in mid range cards but are now sold as high end. x80/x70 cards had their phat chips (100/110) and x60/x50 cards were mid range. they introduced the enthusiast level above high/mid/entry and now there are 106 chips in x70 cards.
but if you want to believe nvidia's marketing, have at it.
e:
but if it really bothers you that much; i'll rephrase that as, "two enthusiast chips vs one high end".
happy now? 😉
...
RTX Ray-Tracing may not be Pixar quality, but you have to admire them for trying.
...
Getting happier ...
The chips by themselves don't actually do anything, 😉 ... the comparison was between two top of the stack cards. Sure you could say the AMD card was in a higher tier - it was over $200 more expensive, so yeah ... enthusiast vs high end is valid - but at the card level.
It's worth mentioning that Navi has significantly improved gaming performance vs FLOPS compared to Vega.

Nvidia has traditionally gotten more real-world game performance per FLOP than AMD. At stock speeds, the Radeon VII has more TFLOPS (13.8) than a 2080 Ti (13.5) despite on average losing to a 2080 (10 TFLOPS). If big Navi only reaches 18 TFLOPS, that would put it in roughly 2080 Ti territory. If the 3080 Ti surpasses 20 TFLOPS, even the regular 3080 is going to blow by AMD's best while the Ti will be out all by its lonesome again.
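For anyone wondering where those TFLOPS figures come from, here's a rough sketch of the usual peak-FP32 estimate (shader count × 2 FMA ops per clock × boost clock). The shader counts and clocks below are approximate reference specs, so treat the output as ballpark numbers rather than the exact figures quoted above.

```cpp
#include <cstdio>

// Peak FP32 throughput estimate: shaders x 2 ops per clock (FMA) x clock in GHz
// gives GFLOPS; divide by 1000 for TFLOPS. Real-world performance per FLOP still
// differs between architectures, which is the whole point of the post above.
static double tflops(int shaders, double clock_ghz)
{
    return shaders * 2.0 * clock_ghz / 1000.0;
}

int main()
{
    std::printf("Radeon VII  (3840 SP @ ~1.80 GHz): %.1f TFLOPS\n", tflops(3840, 1.80)); // ~13.8
    std::printf("RTX 2080    (2944 CC @ ~1.71 GHz): %.1f TFLOPS\n", tflops(2944, 1.71)); // ~10.1
    std::printf("RTX 2080 Ti (4352 CC @ ~1.55 GHz): %.1f TFLOPS\n", tflops(4352, 1.55)); // ~13.5
    return 0;
}
```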
That way if RTX ends up a dead end, because someone else made RT an open standard, the consumers weren't forced to pay for that mistake.
the fact is one card had two CHIPS whereas the other had one. now say they don't do anything. 🙄
E:
it's funny how you're criticizing NV's marketing of RTX yet drink their kool aid of market segmentation.
From my understanding, the recent foray into ray tracing is already part of an "open standard"*, namely DirectX DXR. RTX is Nvidia's proprietary implementation of an accelerator for DXR calculations, kind of like how GPUs as a whole are proprietary accelerators for general graphics API calculations (among other things). I don't see RTX being a "dead end" unless ray tracing as a whole (or at least DXR) turns into a dead end.
* Open in the sense that any GPU manufacturer can work with it.
I could imagine DXR becoming a dead-end if more companies decide to implement Cryengine-style ray-tracing approximation - no point in using an API that relies on costly new hardware when you can get 90% of the effect for a fraction of the GPU-power using methods other than brute-force ray count.
Whether you have two chips or a hundred - unless you put them in a card, they don't do anything. Was my comment on that not correct?
What product segmentation kool aid did I drink? I don't know anything about their chips, but a GTX 980 wasn't a mid range card in anyone's book; if you're still trying to claim that the chip inside made it a mid tier card: performance and price dictate segmentation ... not sure what you are trying to get at.
I'm not an NVidia fanboy if that's what you are thinking - haven't owned one of their cards in years and years.
rough napkin math is 41% more performance at 49% less power on the same node.
small print aside (ie how the benchmarks are skewed in their favor) nvidia can do some phenomenal things when they need to - maxwell was the last time they felt compelled to compete w/AMD. since then they have been sandbagging - HARD.
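To put a number on that napkin math (taking the 41% and 49% figures at face value, small print and all), the implied efficiency jump works out to roughly:

$$\frac{1.41}{1 - 0.49} = \frac{1.41}{0.51} \approx 2.8\times \text{ the performance per watt}$$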
and what is a card w/o a chip? a piece of PCB. if you want to dive into semantics much further, you are doing that on your own.
titanX/980ti high end, 980/970 mid range and 960/950 entry level. if you allow NV to tell you otherwise w/ pricing-performance rhetoric, go educate yourself on chips then.
as i said, it was comparing two high end chips to a single mid range one. i don't care if you agree with that or not.
welcome to my ignore list, have a good day.
My understanding is that neither Crytek RT nor DXR requires specific hardware at all (and thus I consider them "open"), but RTX games need to have considerable tweaks to the game engine itself to utilize RTX cores.
That said, I haven't researched DXR that much, please correct me if I am wrong.
I don't know which exact games you consider "RTX games", but something like Battlefield V does use DXR.
And no, it doesn't strictly require specific hardware, it just tends to perform like crap without it. For example, you can enable raytracing on non-RTX 10/16 series Nvidia cards, it just usually performs so poorly it's not worth it IIRC.
The fact that it benefits from hardware acceleration doesn't make it not "open". As I alluded to, the entire concept of a GPU is about hardware acceleration. It's technically possible to render games in software with your CPU, it's just infeasible/impractical.
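To illustrate that DXR is just a DirectX 12 feature any vendor can expose, here is a minimal sketch (assuming the Windows 10 SDK version 1809 or newer and linking d3d12.lib) that asks the runtime what raytracing tier the default adapter reports. Cards that got DXR through driver updates rather than dedicated hardware still report a tier here, they just run it much slower, which is the "performs like crap" case above.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

// Create a D3D12 device on the default adapter and query its DXR support tier.
int main()
{
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    // OPTIONS5 carries the RaytracingTier field (DXR 1.0 shipped with Windows 10 1809).
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5)))
        && opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
    {
        std::printf("DXR supported, tier value %d.\n",
                    static_cast<int>(opts5.RaytracingTier));
    }
    else
    {
        std::printf("No DXR support reported by this adapter/driver.\n");
    }
    return 0;
}
```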
Well, most games seem to favor one GPU brand over the other in terms of overall performance (how much is incidental vs deliberate one-sided optimization, I couldn't say). Maybe we'll see the same thing for RT performance from one game to another.

What I mean by using an "open" standard is that game devs won't have to cater tweaks to both Nvidia RTX GPUs and an AMD dedicated-hardware GPU because one or both of them requires specific proprietary programming due to any "specialness" of the hardware - they won't be happy if they have to do that, and the innovation will be stifled.
It sounds like DXR might be an API that will make that possible though ... ok, that's good to hear.
I am sure AMD will at some point design a chip with dedicated hardware as well.
Yep, AMD is confirmed to be developing RT hardware for the PS5: https://www.tomshardware.com/news/sony-confirms-hardware-accelerated-ray-tracing-ps5,40583.html
PS5 has Ray Tracing, it was confirmed in CES 2020 slides.