Excellent launch, Nvidia! Watch AMD still make no real attempt to capitalize on it.
Don't know how they're supposed to capitalize on it. You want the AMD horse to stumble out of the gate as well? lol
> Excellent launch, Nvidia! Watch AMD still make no real attempt to capitalize on it.
They can't, because drivers/FSR4 aren't ready. The only thing worse than failing to capitalize on this situation is if they did launch now and had embarrassing problems of their own.
Probably not a whole lot of reason to do that – for one, other people have already done it, and... what do people expect? And the general consensus of people who have taken the time to test it is that PCIe 4.0 x16 offers the same performance as PCIe 5.0 x16 on the 5090. Maybe when I have time (next month, or maybe March?) I'll try to do my full test suite in PCIe 4.0 and 3.0 mode. But right now I have a lot of other GPUs to test!
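To put rough numbers on the 4.0-vs-5.0 discussion, here's a quick back-of-the-envelope sketch of the theoretical one-direction bandwidth per PCIe generation. The data rates and 128b/130b encoding come from the PCIe specs; everything else is simple arithmetic.

```python
# Back-of-the-envelope PCIe bandwidth per generation.
# Gen 3 runs 8 GT/s per lane with 128b/130b encoding; Gen 4 and Gen 5
# double the rate each step while keeping the same encoding.

GT_PER_LANE = {"3.0": 8, "4.0": 16, "5.0": 32}  # gigatransfers/s per lane
ENCODING = 128 / 130  # 128b/130b line-code efficiency

def x16_bandwidth_gbps(gen: str, lanes: int = 16) -> float:
    """Theoretical one-direction bandwidth in GB/s for a PCIe generation."""
    return GT_PER_LANE[gen] * ENCODING * lanes / 8  # /8 converts bits to bytes

for gen in ("3.0", "4.0", "5.0"):
    print(f"PCIe {gen} x16: ~{x16_bandwidth_gbps(gen):.1f} GB/s")
```

Each generation doubles the ceiling (~15.8 → ~31.5 → ~63 GB/s at x16), which is why a card that doesn't saturate 4.0 x16 gains nothing from 5.0.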
> They can't, because drivers/FSR4 aren't ready. The only thing worse than failing to capitalize on this situation is if they did launch now and had embarrassing problems of their own.
My point is they show no real interest in putting genuine effort into discrete GPUs. They develop graphics IP for APUs, big or small. Discrete cards are just a further monetization of their APU tech.
> Don't know how they're supposed to capitalize on it. You want the AMD horse to stumble out of the gate as well? lol
My point is they're not even trying to capitalize.
> Haha! Agreed.
All electronics actually run on smoke rather than electrons. They continue to work until you let the smoke out.
That's along the lines of, "any machine can be a smoke machine if operated poorly enough!"
> So, PCIe 5.0 ended up being worse than a pointless waste of money - it's downright harmful!
You have missed this part; an even higher-performance Nvidia card ran in that config:
> For reference, this same setup was used to benchmark GPUs like the RX 7900 XTX, RTX 4080, RTX 4090, and even the RTX 5090, with no issues whatsoever.
> My point is they're not even trying to capitalize.
With what product, mate? They can't capitalise on it if it's not ready; sending a duff product out early hurts market share. They would have done it by now if it was ready. Just be patient and wait. This is exactly why games etc. come out half broken.
> Probably not a whole lot of reason to do that – for one, other people have already done it.
The latest I'm aware of is TechPowerUp testing an RTX 4090 with an i9-13900K. The faster your CPU and GPU, the more of a bottleneck PCIe should become. @JarredWaltonGPU, I'd like to see PCIe scaling tested using a 9800X3D and RTX 5090 at PCIe x16 3.0 vs. 4.0 vs. 5.0.
> Once the bulk of textures have been loaded, how much data do you need to transfer on a frame-per-frame basis?
I'm sure games with larger levels are continually streaming in assets. That's the whole point of DirectStorage, in fact.
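The frame-per-frame question above can be made concrete with some rough arithmetic: even a modest slice of the bus adds up to a lot of data per frame. All the numbers below (fps, bus share) are illustrative assumptions, not measurements.

```python
# Rough per-frame PCIe streaming budget: how many MB could a game stream in
# per frame before the bus itself becomes the wall? Figures are illustrative.

def per_frame_budget_mb(bus_gbps: float, fps: float, bus_share: float = 0.25) -> float:
    """MB/frame transferable if asset streaming may use `bus_share` of the bus."""
    bytes_per_frame = bus_gbps * 1e9 * bus_share / fps
    return bytes_per_frame / 1e6

# Example: PCIe 3.0 x16 (~15.75 GB/s) at 120 fps, streaming on 25% of the bus.
print(f"{per_frame_budget_mb(15.75, 120):.0f} MB per frame")  # ~33 MB
```

Roughly 33 MB of fresh texture data per frame on plain old PCIe 3.0 x16, which is why steady-state streaming rarely exposes link-speed differences.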
> Otherwise, your time is probably better spent on more interesting topics!
I think PCIe scaling is interesting!
> My point is they show no real interest in putting genuine effort into discrete GPUs. They develop graphics IP for APUs, big or small. Discrete cards are just further monetizing of their APU tech.
I don't know how you can say that, when the RX 6950 XT managed to beat the RTX 3090 Ti!
> My point is they're not even trying to capitalize.
They launched Ryzen 9000 too early, seemingly trying to capitalize on Raptor Lake's woes, and look where that got them!
> I don't know that I'd blame the retailers with all the rumors of EXTREMELY limited stock. Large retail outlets were talking about receiving single digits of 5090s. They probably got 2 5080s.
I think the answer is a bit of both.
> You have missed this part; an even higher-performance Nvidia card ran in that config:
I saw where they ran the RTX 5090 @ PCIe 5.0. Just because some cards can work at that speed doesn't invalidate my point.
> I'd like to see PCIe scaling tested using a 9800X3D and RTX 5090 at PCIe x16 3.0 vs. 4.0 vs. 5.0.
I already wrote about this: the bus can severely limit performance at x8 width (on Z600-800 Intel and X870 boards with a Gen 5 NVMe drive) combined with DLSS/FSR/XeSS "performance"/"ultra performance" scaling, especially in the 1080p/1440p ultra-performance scenario.
> I'm sure games with larger levels are continually streaming in assets. That's the whole point of DirectStorage, in fact.
Yeah, which is why I added the "If you can find a game that constantly streams in huge textures" part.
> I saw where they ran the RTX 5090 @ PCIe 5.0. Just because some cards can work at that speed doesn't invalidate my point.
So, one card not running validates your point?
> The latest I'm aware of is TechPowerUp testing an RTX 4090 with an i9-13900K. The faster your CPU and GPU, the more of a bottleneck PCIe should become. @JarredWaltonGPU, I'd like to see PCIe scaling tested using a 9800X3D and RTX 5090 at PCIe x16 3.0 vs. 4.0 vs. 5.0.
Already been done. Not that hard to check their website.
I don't see a lot of value in testing many cards, the way TechPowerUp did. Maybe throw in a couple of other RTX 5000 cards. The 5070 should be interesting, due to being a 12 GB card.
> I already wrote about this: the bus can severely limit performance at x8 width (on Z600-800 Intel and X870 boards with a Gen 5 NVMe drive) combined with DLSS/FSR/XeSS "performance"/"ultra performance" scaling, especially in the 1080p/1440p ultra-performance scenario.
Where? Why would that be?
> Also, no one has yet shown whether the RTX 50 series uses the full PCIe 5.0 bandwidth - for that, you need to do an AIDA GPGPU test for reading and writing to memory. Because it seems that it cannot, since Nvidia has historically had problems with PCIe bandwidth.
What transaction size does that test use, or is it just timing a single transfer of that size? Were multiple concurrent transactions tried? I could certainly believe their driver or PCIe controller isn't pipelining, since game engines might generally overlap enough transfers that it's not necessary for them to.
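The transaction-size question matters because an un-pipelined transfer pays a fixed setup latency per copy, so small transfers can never saturate the link. A toy model (the latency and peak numbers here are invented, plausible placeholders, not measured values):

```python
# Toy model of why transfer size matters when transactions aren't pipelined:
# each copy pays a fixed setup latency before data flows, so small transfers
# can't saturate the bus. Latency and peak bandwidth are assumed values.

def effective_bandwidth_gbps(size_bytes: float,
                             peak_gbps: float = 63.0,    # ~PCIe 5.0 x16 peak
                             latency_s: float = 10e-6) -> float:
    """Achieved GB/s for one un-pipelined transfer of `size_bytes`."""
    transfer_time = latency_s + size_bytes / (peak_gbps * 1e9)
    return size_bytes / transfer_time / 1e9

for size in (64 * 1024, 1024 * 1024, 256 * 1024 * 1024):
    print(f"{size >> 10:>9} KiB: {effective_bandwidth_gbps(size):5.1f} GB/s")
```

Under these assumptions, a 64 KiB copy achieves only a few GB/s while a 256 MiB copy approaches the peak; a benchmark that times small, serialized transfers would "prove" a bandwidth deficit that pipelined game-engine traffic never sees.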
> I already wrote about this: the bus can severely limit performance at x8 width (on Z600-800 Intel and X870 boards with a Gen 5 NVMe drive) combined with DLSS/FSR/XeSS "performance"/"ultra performance" scaling, especially in the 1080p/1440p ultra-performance scenario.
It's debatable whether even PCIe 1.1 is "severely limiting performance."
> But it would need titles specifically selected for that, and a method that comes up with a meaningful benchmark; imho FPS probably isn't the right metric for it.
IMO, fps is all you can really measure. Asset streaming performance could vary the amount of draw-in or popping textures, but that's not something easily measurable outside of the engine.
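If streaming stalls are visible from outside the engine at all, it's as frame-time spikes rather than in the fps average, which is why reviewers usually report 1% lows alongside average fps. A sketch of those summary stats from a frametime log (the sample frame times are invented for illustration):

```python
# Summary stats a reviewer can compute from a captured frametime log (ms).
# Streaming hitches barely move the average but crater the 1% lows.

def summarize(frametimes_ms):
    n = len(frametimes_ms)
    avg_fps = 1000 * n / sum(frametimes_ms)
    worst = sorted(frametimes_ms)[int(n * 0.99):]  # slowest 1% of frames
    one_pct_low_fps = 1000 * len(worst) / sum(worst)
    return avg_fps, one_pct_low_fps

# 99 smooth 10 ms frames plus one 60 ms streaming hitch:
frames = [10.0] * 99 + [60.0]
avg, low = summarize(frames)
print(f"avg {avg:.0f} fps, 1% low {low:.0f} fps")  # avg 95 fps, 1% low 17 fps
```

One hitch in a hundred frames drops the average only from 100 to ~95 fps, but the 1% low collapses to ~17 fps, so percentile metrics are the closest external proxy for streaming trouble.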
> What I wanted to avoid is another review that benchmarks games that do classic asset pre-loading, then concludes there's hardly any gain for the (relatively!) small frame-by-frame datasets.
I get that, but the point of reviews is to guide readers about the importance of PCIe speed on their gaming experience. If most games simply do pre-loading, then it's fair to include some of those in the test suite so you're not left with an impression that PCIe speed is more important than it really is.
> So, one card not running validates your point?
If it's like 0.1% of all RTX 5000 cards, then yeah, we just call it bad luck and move on. Especially if they accept it for exchange under warranty.
> Where? Why would that be?
The lower the resolution, the more CPU-GPU transactions are made per second - it just fills the channel. This is very important for eGPUs: to minimize losses from excess transactions, it is advisable to play at the highest resolution possible. Otherwise, the losses will be too high. I had such a setup in the past.
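The eGPU argument can be put in rough numbers: per-frame command and sync traffic costs approximately fixed time per frame, so the faster the GPU renders (low resolution means high fps), the larger the share of each frame spent on bus round-trips. The overhead and render times below are invented to illustrate the shape of the effect, not measurements.

```python
# Toy model: per-frame bus round-trips cost roughly fixed time, so at low
# resolutions (short render times, many frames/s) the bus overhead eats a
# bigger fraction of each frame. All timings are assumed, not measured.

def overhead_share(render_ms: float, bus_overhead_ms: float = 2.0) -> float:
    """Fraction of total frame time lost to per-frame bus transactions."""
    return bus_overhead_ms / (render_ms + bus_overhead_ms)

for res, render_ms in [("1080p", 4.0), ("1440p", 7.0), ("4K", 16.0)]:
    print(f"{res}: {overhead_share(render_ms):.0%} of frame time on the bus")
```

With these assumed numbers the overhead share falls from about a third of the frame at 1080p to about a ninth at 4K, matching the advice to run an eGPU at the highest resolution it can handle.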
> Already been done. Not that hard to check their website.
If you're going to embed the image, then a link to the article would've been courteous.
> It's debatable whether even PCIe 1.1 is "severely limiting performance."
Again, you're taking their data out of context. This data is weird, since it doesn't align with their raster data on CP2077. They gave an explanation for it in the article.
> IMO, fps is all you can really measure. Asset streaming performance could vary the amount of draw-in or popping textures, but that's not something easily measurable outside of the engine.
Yeah, I can't think of relevant measuring points, or how to present them to users in a meaningful way – which actually makes this somewhat interesting 😉. If there's no mix of system/DirectX APIs that makes sense to generically monitor, I do wonder if it would be worth finding a few select game titles and instrumenting them specifically.
> I get that, but the point of reviews is to guide readers about the importance of PCIe speed on their gaming experience. If most games simply do pre-loading, then it's fair to include some of those in the test suite so you're not left with an impression that PCIe speed is more important than it really is.
Most definitely – if interesting outliers are found and benchmarked, the results should either note that "for your average titles the performance difference is within measurement error", or include a number of those normal titles. I'm annoyed by the constant push towards BIGGER NUMBER BETTER, and the marketing departments + influencers convincing people it'll make a meaningful difference to them...