Benchmarking should only count real frames from the game engine. Just ignore frame generation for all benchmarking purposes. Those aren’t actually frames. Why count them as such?
> And thus the other shoe just dropped, nVidia is providing "guidance" for its affiliated partners, which I suspect they are required to follow under NDA or lose future access to products. This is why I'm going to wait until Steve on GN or Jay over at JTC do fully independent, non-affiliated, unguided, no-NDA testing. Steve was talking about how their team is looking to buy or borrow cards for the testing and report back to people about it.

Wait… you think JayzTwoCents isn't a shill? OK then.
> Benchmarking should only count real frames from the game engine. Just ignore frame generation for all benchmarking purposes. Those aren’t actually frames. Why count them as such?

Because even though they're not the same as rendered frames, they can have an impact on how a game feels. There are definitely games where, even on lower-tier hardware like an RTX 4060 Ti 16GB, turning on framegen can make a game feel better than non-FG. It's a very murky situation overall, though.
> Psssst, video games aren’t real..

This!
It's sad that we're approaching the post-truth world, where everything is made up to fool people.
News, video, audio, pictures, AR, capabilities, power… etc.
> Because even though they're not the same as rendered frames, they can have an impact on how a game feels. There are definitely games where, even on lower-tier hardware like an RTX 4060 Ti 16GB, turning on framegen can make a game feel better than non-FG. It's a very murky situation overall, though.

Ok, so let's say I've got a 120Hz display but my GPU is only powerful enough for 100 fps through standard DLSS. Would that 100 fps not give me a better experience and less latency overall than using frame gen to get up to 120 but only having a "real" 60 fps?
But it's the same problem as upscaling. I've reached the point where my benchmarks will generally be at native for comparison purposes, and I may use DLSS, FSR, XeSS, etc. for additional details, but the baseline will be native. Then I'll try the various upscaling and framegen solutions where applicable and report on how that goes.
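To make the matrix concrete, here's a minimal sketch of that approach (the game names and config list are illustrative, not an actual test harness):

```python
# Sketch of the benchmark matrix described above: native is always the
# baseline; upscaling and framegen runs are recorded as extra data points.
GAMES = ["Game A", "Game B"]  # placeholders
CONFIGS = [
    {"upscaler": "native", "framegen": False},  # the comparison baseline
    {"upscaler": "DLSS",   "framegen": False},
    {"upscaler": "DLSS",   "framegen": True},
    {"upscaler": "FSR",    "framegen": False},
    {"upscaler": "XeSS",   "framegen": False},
]

for game in GAMES:
    for cfg in CONFIGS:
        label = cfg["upscaler"] + (" + FG" if cfg["framegen"] else "")
        print(f"{game}: queue benchmark pass at {label}")
```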
> Leave out the smoke and mirrors garbage entirely and test what matters: native resolution horsepower. Testing faked frames will give faked results, which is exactly what Nvidia wants you to do: sell the smoke and mirrors to the masses.

If you can't tell the difference between the two visually, does it matter? You people screaming "fake frames" make me laugh. I highly doubt you could even tell the difference.
I just had a question pop into my mind: the best justification for FrameGen would be to compensate for VSync "on" situations, so it would actually help there when Adaptive Sync is not available, making that experience better. Or when you go outside the refresh range, I guess. So, the question is: does FrameGen require VSync to be turned on?
Also, how does tearing come into play with FrameGen? Unless you have a 240Hz display, the chances of getting tearing when going over 120Hz are still there. So I have to imagine there's still tearing happening?
Regards.
> If I wanted to see frame gen numbers I would just look at the Nvidia marketing slides! Also, can you please benchmark using a closed case to check for variances vs. an open bench for potential throttling?

Unfortunately, the PCAT interposer card makes it hard/impossible to use a normal case. And I swap GPUs so often that the case just gets in the way. If you have a decent-size ATX case with good airflow, that tends to run just as cool as, or in some cases cooler than, an open test bench. If you're trying to stuff a power-hungry GPU into a mini-ITX case? Well, have fun with that, and plan on higher-speed fans that make more noise to keep things cool.
> Contrary to what's stated in the article, MFG doesn't add much latency compared to FG; most of the added latency comes from buffering the two rendered frames.

I don't think I said it added more latency than regular framegen. It should be basically the same amount of latency, because it's just interpolating more frames in between the rendered frames. Well, except that if the performance improvement from 2X framegen to 4X framegen (one vs. three interpolated frames) is only 65%, the base sampling rate will be lower for the same "framerate." And this is also why Reflex 2 exists: to try to compensate for even more latency. But Reflex 2 with no framegen will be even better than Reflex 2 plus MFG.
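To put rough numbers on the buffering cost, here's a toy model; it assumes interpolation has to hold the newest rendered frame while the in-between frames are displayed, which is an assumption for illustration, not measured data:

```python
# Toy latency model for interpolation-based frame generation. Assumption:
# the GPU buffers rendered frame N+1 while displaying the interpolated
# frames between N and N+1, so the added delay is roughly one base frame
# time, the same whether one or three frames get interpolated.

def buffering_latency_ms(base_fps: float) -> float:
    return 1000.0 / base_fps  # one buffered base frame

for base in (60.0, 50.0):
    print(f"base {base:.0f} fps -> ~{buffering_latency_ms(base):.1f} ms "
          f"of buffering latency (2x FG and 4x MFG alike)")
```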
> Without framegen, user input would get sampled every ~10ms. With framegen, that drops to every ~20ms. With MFG, it could fall as far as sampling every ~40ms.

Here, maybe I read this wrong or don't understand something, but to me it seems to imply that, for a given base framerate (no FG), MFG would double the latency compared to FG. Which is what I expected when I first learned about the technology, but it turns out my worries were misplaced. Still, I'm disappointed we're not getting FG by extrapolation, because the use case for MFG will be relatively small, I think. You'll need to hit a base 50-60 just like before, and it'll get you around 180-240 fps, and not everyone has the kind of monitor required to display that.
> FrameView is fine, AFAICT. I've tried PresentMon and OCAT in the past, and FrameView basically gave the same results. It's critical for me because I want to get the power data (and temp and clocks as well), which OCAT and PresentMon don't provide. But FrameView uses the same core code, just with some additional data collection going on.

Forgot to look them up yesterday to make sure I had the right info, but wouldn't CapFrameX + ElmorLabs Benchlab with a PMD PCIe slot adapter be the way to go to eliminate any potential vendor issues and pull power data directly?
> Unfortunately, the PCAT interposer card makes it hard/impossible to use a normal case. And I swap GPUs so often that the case just gets in the way. If you have a decent-size ATX case with good airflow, that tends to run just as cool as, or in some cases cooler than, an open test bench. If you're trying to stuff a power-hungry GPU into a mini-ITX case? Well, have fun with that, and plan on higher-speed fans that make more noise to keep things cool.

Have you seen Hardware Unboxed's new video about the B580 losing performance much more than the 4060 and 7600 with lower-end CPUs? IIRC, the 1080p results put the card on par with or sometimes slower than the 4060.
> And thus the other shoe just dropped, nVidia is providing "guidance" for its affiliated partners, which I suspect they are required to follow under NDA or lose future access to products. This is why I'm going to wait until Steve on GN or Jay over at JTC do fully independent, non-affiliated, unguided, no-NDA testing. Steve was talking about how their team is looking to buy or borrow cards for the testing and report back to people about it.

You know even Nvidia is worried about their BS when they do stuff like this… yikes.
Psssst, video games aren’t real..
> When everything renders the same way, we should have the same output, with only minor differences at most. But in the modern era of upscaling and frame generation, not to mention differences in how ray tracing and denoising are accomplished? It becomes very messy.

Sounds like going back to the days of GPU reviewing when ATI and Nvidia had different implementations for AA and AF (and other features) with different overall image quality. As well as benchmarking, actual image quality had to be compared too, so you could end up with Card A being faster but Card B providing higher image quality.
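Even a crude objective check can put a number on image-quality differences. Here's a minimal sketch comparing two same-size screenshots of the same scene with PSNR; the file names are placeholders, and PSNR is a blunt metric that won't catch temporal artifacts like framegen shimmer:

```python
# Compare two screenshots of the same frame (e.g., native vs. upscaled)
# with PSNR. Higher means closer to the reference; it only flags gross
# per-pixel differences. Paths below are placeholders.
import numpy as np
from PIL import Image

def psnr(path_ref: str, path_test: str) -> float:
    ref = np.asarray(Image.open(path_ref), dtype=np.float64)
    test = np.asarray(Image.open(path_test), dtype=np.float64)
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0**2 / mse)

print(f"PSNR: {psnr('native.png', 'upscaled.png'):.2f} dB")
```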
> Here, maybe I read this wrong or don't understand something, but to me it seems to imply that, for a given base framerate (no FG), MFG would double the latency compared to FG. Which is what I expected when I first learned about the technology, but it turns out my worries were misplaced. Still, I'm disappointed we're not getting FG by extrapolation, because the use case for MFG will be relatively small, I think. You'll need to hit a base 50-60 just like before, and it'll get you around 180-240 fps, and not everyone has the kind of monitor required to display that.

If the displayed framerate is constant, the sampling latency changes. My point was to show that if you had three different GPUs delivering 100 "FPS" to the monitor, one without framegen, one with framegen, and one with MFG, the base FPS would be wildly different. Basically, it's to prove that framegen doesn't provide frames in the same way that normal rendering does. It feels different in every game I've tested. It also looks different, usually smoother, so it's a personal decision how much it helps (or not).
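In code form, that 100 "FPS" comparison works out like this (simple arithmetic, nothing measured):

```python
# Three hypothetical GPUs, all delivering 100 "FPS" to the monitor.
# Only rendered (base) frames sample user input, so the effective input
# sampling interval diverges even though the displayed framerate matches.
display_fps = 100.0
configs = {
    "no framegen": 1,  # every displayed frame is rendered
    "2x framegen": 2,  # 1 rendered + 1 interpolated
    "4x MFG": 4,       # 1 rendered + 3 interpolated
}
for name, factor in configs.items():
    base = display_fps / factor
    print(f"{name}: base {base:.0f} fps, input sampled every "
          f"{1000.0 / base:.1f} ms")
# Output: 10.0 ms, 20.0 ms, and 40.0 ms (the numbers quoted above).
```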
> Forgot to look them up yesterday to make sure I had the right info, but wouldn't CapFrameX + ElmorLabs Benchlab with a PMD PCIe slot adapter be the way to go to eliminate any potential vendor issues and pull power data directly?

I used a custom Powenetics setup previously to capture power use. It was totally independent of AMD, Intel, and Nvidia. It also looks janky AF, but it worked. The problem isn't just getting power, but getting it for every test. I had to run a separate workload with Powenetics to capture data, and it basically wasn't feasible to do that for every game. So when Nvidia made the PCAT, I tested some cards with both devices and found the results were basically the same.
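Once the capture exists, crunching it is the easy part. A minimal sketch, assuming a CSV log with a per-sample GPU power column; the exact header varies by tool and version, so `POWER_COL` below is a placeholder, not a documented FrameView or Powenetics field:

```python
# Average and peak board power from a per-sample capture log. FrameView,
# PresentMon-based tools, and Powenetics all write CSVs, but their column
# headers differ; adjust POWER_COL to match your file.
import pandas as pd

POWER_COL = "GPU Power (W)"  # hypothetical header; check your CSV

df = pd.read_csv("capture.csv")
power = df[POWER_COL].dropna()
print(f"samples:   {len(power)}")
print(f"avg power: {power.mean():.1f} W")
print(f"peak:      {power.max():.1f} W")
```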
Obviously, this is basically the worst possible time to be making any changes, since we've got new cards coming from all three vendors. Just something to potentially look into after the vacation you hopefully get once you're done testing the seven cards (and potentially the new DLSS/FSR/XeSS versions) coming out in the next couple of months.
Screw frame gen... the only numbers I want to see between generations are for raw power, i.e., with frame gen and DLSS turned off. I want to see raw fps between the gens. I have a 4080, and I want to see raw 5080 and 5090 data to compare against my raw 4080 data.