News Cyberpunk 2077 PC Benchmarks, Settings, and Performance Analysis


jfplopes

Distinguished
Mar 26, 2009
Exactly. This was more than expected.
It's history repeating itself. With each major architecture change or new console launch, devs like CDPR push the hardware to its limits. They did it with The Witcher games and are doing it again.

Cyberpunk 2077 is one of the games that is pushing PC and console sales. And ray tracing is a key part of that.
Tech reviewers expected that, and they did advise people to wait throughout the last year or so.

Now we have the result. Only with the highest-end hardware of this year can we enjoy next-gen features like ray tracing to their fullest.
I also got the PS5, and as usual I see the typical comments claiming that the PS5 is much more worth it than spending thousands of dollars on a PC.

The PS5 is a great console. I'm greatly enjoying playing games like Miles Morales. Very decent hardware for the price.
But you get what you pay for.
 
  • Like
Reactions: RTX 2080
Dec 10, 2020
Despite everyone's whining about …
  1. Cyberpunk pre-release performance
  2. Your article being "clickbait" because the original performance comparisons were based on pre-release code
  3. Cyberpunk post-release performance
  4. CDPR making an awful game because it doesn't run well on my (insert GPU here)
  5. Nvidia making horrible GPUs because Cyberpunk doesn't run well on my (insert GPU here)
… I just want to say thanks for all the work you are putting into answering everyone's criticisms and updating the article with useful, relevant information. Thank you for trying to include hardware combinations that are still relevant to people and even moving stuff around to get it done.


I mean, if this is directed at me for my disappointment in GTX 1080 performance, I wasn't really whining or saying the game or Nvidia is awful. I'm just disappointed that my GTX 1080, which often competes with a stock-clocked 2060 Super (as my card is overclocked to 2215MHz core) in other demanding AAA titles like Metro Exodus, is barely matching the 1660 Super in this title. I'm puzzled as to why it's underperforming quite a bit relative to the norm, but it's more so disappointment towards my GPU itself (not being able to make it for one last AAA title release). He tried to explain why, I agree he's probably right, and I thanked him for including other, even older hardware with the GTX 970 inclusion. I think there is a fine line between whining and justifiable disappointment, especially with the pandemic hitting people's finances pretty hard combined with 30-series availability. Most people can't upgrade from otherwise perfectly fine hardware, and even if they want to, the cards don't exist. In my case I'm pretty much stuck with what I got for a bit (in regards to hardware), and I've literally been waiting for this game in excitement for 8 years, but again, it be what it be. I'm not too pressed about it; it's just a video game. I'll just play on lower settings till I can finally find an RTX 3080 and replay the game if I want. But correct, thanks to Jarred for his hard work. :)
 
Last edited:

Giroro

Splendid
So the question is, how long after launch is Cyberpunk 2077 going to have "pre-release" levels of performance and bugginess? The game clearly isn't finished, but I guess it had to be "out" before Xmas, even if the product is barely better than an alpha.
They should have just delayed it again instead of launching a broken product and expecting us to forgive them later. And that is IF they ever actually get the game stable enough to start putting more work into optimization. I'm getting really sick of game companies expecting customers to pay just to essentially work for them as beta testers on a broken product.

I feel bad for the dev team though, because properly managed projects usually don't need to be delayed and crunched 3 times. Plus, the end product gives the impression that they must have furloughed their hourly QA team, instead of setting them up to properly WFH.
 
I mean, if this is directed at me for my disappointment in GTX 1080 performance, I wasn't really whining or saying the game or Nvidia is awful. I'm just disappointed that my GTX 1080, which often competes with a stock-clocked 2060 Super (as my card is overclocked to 2215MHz core) in other demanding AAA titles like Metro Exodus, is barely matching the 1660 Super in this title. I'm puzzled as to why it's underperforming quite a bit relative to the norm, but it's more so disappointment towards my GPU itself (not being able to make it for one last AAA title release). He tried to explain why, I agree he's probably right, and I thanked him for including other, even older hardware with the GTX 970 inclusion. I think there is a fine line between whining and justifiable disappointment, especially with the pandemic hitting people's finances pretty hard combined with 30-series availability. Most people can't upgrade from otherwise perfectly fine hardware, and even if they want to, the cards don't exist. In my case I'm pretty much stuck with what I got for a bit (in regards to hardware), and I've literally been waiting for this game in excitement for 8 years, but again, it be what it be. I'm not too pressed about it; it's just a video game. I'll just play on lower settings till I can finally find an RTX 3080 and replay the game if I want. But correct, thanks to Jarred for his hard work. :)

This …

"The delta between the 10 series and other newer cards seems relatively larger than the norm and I'm curious as to why?"

… isn't whining.

This …

"They completely **** on the Pascal generation. Way to go nvidia (with CDPR support) on "forcing" Pascal users to upgrade to Ampere! "

"still not 1440p - RT Ultra - DLSS Quality testing? <Mod Edit> me "

"Basically if you make a game and it doesn't run flawlessly on the top 10 cards on the steam hardware survey, you have failed. "

"Nvidia and CDPR colluded to slow down older but still good cards to sell newer cards that people don't need."

… IS
 
Last edited by a moderator:

Giroro

Splendid
Exactly. This was more than expected.
It's history repeating itself. With each major architecture change or new console launch, devs like CDPR push the hardware to its limits. They did it with The Witcher games and are doing it again.

Cyberpunk 2077 is one of the games that is pushing PC and console sales. And ray tracing is a key part of that.
Tech reviewers expected that, and they did advise people to wait throughout the last year or so.

Now we have the result. Only with the highest-end hardware of this year can we enjoy next-gen features like ray tracing to their fullest.
I also got the PS5, and as usual I see the typical comments claiming that the PS5 is much more worth it than spending thousands of dollars on a PC.

The PS5 is a great console. I'm greatly enjoying playing games like Miles Morales. Very decent hardware for the price.
But you get what you pay for.

Ray-Tracing isn't a "next gen" feature. It's been out for 2 years and we are on the second generation of supporting hardware.
It's just that the hardware was/is overpriced and developer support for ray tracing has been... basically nonexistent. We are up to what now, 20ish total PC games that support it? I would rather get what I pay for with a $400 PS5 than get way less than what I pay for with a $700+ RTX card (and I even already have the $1200+ in other PC hardware needed to fully use that card).
 
So the question is, how long after launch is Cyberpunk 2077 going to have "pre-release" levels of performance and bugginess? The game clearly isn't finished, but I guess it had to be "out" before Xmas, even if the product is barely better than an alpha.
They should have just delayed it again instead of launching a broken product and expecting us to forgive them later. And that is IF they ever actually get the game stable enough to start putting more work into optimization. I'm getting really sick of game companies expecting customers to pay just to essentially work for them as beta testers on a broken product.

I feel bad for the dev team though, because properly managed projects usually don't need to be delayed and crunched 3 times. Plus, the end product gives the impression that they must have furloughed their hourly QA team, instead of setting them up to properly WFH.

Yeah, considering how many times and for how long they delayed it, it's surprising that this many readily apparent bugs weren't caught and squashed before release.
 
Ray-Tracing isn't a "next gen" feature. It's been out for 2 years and we are on the second generation of supporting hardware.
It's just that the hardware was/is overpriced and developer support for ray tracing has been... basically nonexistent. We are up to what now, 20ish total PC games that support it? I would rather get what I pay for with a $400 PS5 than get way less than what I pay for with a $700+ RTX card (and I even already have the $1200+ in other PC hardware needed to fully use that card).

Ray tracing capable hardware might have been out for two years now, but the rest of the industry has struggled to accommodate it. AMD just released ray tracing capable hardware within the last month (definitely a first-gen effort, performance-wise), and as you said, only 20 or so PC games support it. We're still in the relatively early days of this technology as far as support and adoption are concerned; performance is going to be an issue for some time.

BTW:
  1. The PS5's MSRP is not $400
  2. Good luck getting anywhere close to the same experience on a PS5 as you would with "a $700+ RTX card."
 

Sleepy_Hollowed

Distinguished
Jan 1, 2017
I cannot believe the GTX 1660 Super is equal to the GTX 1080 in this game... or how badly the GTX 1080 runs, to put it the other way around.

None of the older games has this scenario, and none of the newer games have it either.
AC Valhalla, WD Legion, RDR2, Death Stranding, Borderlands 3, Gears 5, etc - in all of them the GTX 1080 is comfortably ahead of the GTX 1660 Super/Ti.

GTX 1070 & GTX 1080 Ti are also down from where they are supposed to be...

They completely **** on the Pascal generation. Way to go nvidia (with CDPR support) on "forcing" Pascal users to upgrade to Ampere!

It's literally unbelievable. I'm going to run some benchmarks at home, because that would mean it really does have some heavy optimizations in favor of Nvidia's newer architectures.
 
  • Like
Reactions: VforV

dpeter45

Distinguished
Sep 22, 2011
It's literally unbelievable. I'm going to run some benchmarks at home, because that would mean it really does have some heavy optimizations in favor of Nvidia's newer architectures.

It makes sense if NV is pulling some <Mod Edit>. A card that is normally on par with a 1070 is suddenly faster than a 1080 or 1080Ti. NV is deliberately gimping perfectly good 10 series cards. And during a pandemic when people are broke and stuck inside on lockdown with not much to do besides play video games. What a great company. This needs to be investigated. And I hope someone files a class action lawsuit like they did with the 970 VRAM thing.

And I love how that journalist came in here and downplayed it. The same journalists who told us for years that Apple wasn't slowing down their phones. And then we caught them doing it.
 
Last edited by a moderator:
  • Like
Reactions: VforV
Dec 10, 2020
This …

"The delta between the 10 series and other newer cards seems relatively larger than the norm and I'm curious as to why?"

… isn't whining.

This …

"They completely **** on the Pascal generation. Way to go nvidia (with CDPR support) on "forcing" Pascal users to upgrade to Ampere! "

"still not 1440p - RT Ultra - DLSS Quality testing? <Mod Edit> me "

"Basically if you make a game and it doesn't run flawlessly on the top 10 cards on the steam hardware survey, you have failed. "

"Nvidia and CDPR colluded to slow down older but still good cards to sell newer cards that people don't need."

… IS

Exactly. Measured disappointment and having some questions is understandable, but pitchforks, anger, and some deep accusations are indeed a little wild. :sweatsmile: It will be interesting to see if there are some optimizations for Pascal down the line in game updates. But as mentioned, the concurrent FP + INT issue might be unresolvable through updates.
 
Last edited by a moderator:
  • Like
Reactions: JarredWaltonGPU
It makes sense if NV is pulling some <Mod Edit>. A card that is normally on par with a 1070 is suddenly faster than a 1080 or 1080Ti. NV is deliberately gimping perfectly good 10 series cards. And during a pandemic when people are broke and stuck inside on lockdown with not much to do besides play video games. What a great company. This needs to be investigated. And I hope someone files a class action lawsuit like they did with the 970 VRAM thing.

And I love how that journalist came in here and downplayed it. The same journalists who told us for years that Apple wasn't slowing down their phones. And then we caught them doing it.
Are you referring to me? Because I sure as hell never defended Apple, and I suspect none of the current Tom’s Hardware people did either. Previous writers? Maybe. But I’m certainly not an Apple apologist.

Something else to note is that the GTX 1080 card is an FE running reference clocks. The 1660 Super is an EVGA SC. Dust buildup inside the 10-series GPUs (because again, they’re four years old and have been used plenty during that time) may also be limiting performance. Then again, the 1080 Ti, 1080, 1070, and 1060 numbers all make sense compared to each other.

There’s a big difference between Nvidia not continuing to put a lot of effort into maximizing performance of older generation GPUs and Nvidia actively sabotaging older generation GPUs. You are suggesting that the latter happened, and that is extremely unlikely.
 
Someone was asking about ultrawide testing. I can't remember if they wanted it tested on RTX 3080 or some other GPU, but I tested it on a 2060 Super (with an i7-9700K, because that's what was hooked up to the 21:9 display). Here are the results:

[Attached chart: ultrawide (21:9) vs. widescreen (16:9) Cyberpunk 2077 results on the RTX 2060 Super, Ultra and RT Ultra presets]

Perhaps not the best way to show performance differences, but making a 'better' chart would take more time and I decided not to bother. Basically, at Ultra (less GPU demanding), UW runs ~17% slower than WS. At RT Ultra (heavier GPU load), it's ~20% slower at 1080p and ~24% slower at 1440p. That last one is almost perfect scaling with the number of pixels to draw, which isn't really surprising as with ray tracing every pixel matters. "No pixel left behind" or something. LOL
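
As a quick sanity check on that scaling claim, the expected slowdown if performance were purely pixel-bound can be worked out from the resolutions alone. This is a rough sketch of my own, assuming the standard 16:9 and 21:9 resolutions at each height:

```python
# Quick sanity check (my own arithmetic, not from the benchmark data):
# if fps scaled perfectly with pixel count, how much slower should
# ultrawide be than regular widescreen at the same settings?

RESOLUTIONS = {
    "1080p": ((1920, 1080), (2560, 1080)),  # 16:9 vs. 21:9
    "1440p": ((2560, 1440), (3440, 1440)),
}

for name, (ws, uw) in RESOLUTIONS.items():
    ws_px = ws[0] * ws[1]
    uw_px = uw[0] * uw[1]
    drop = (1 - ws_px / uw_px) * 100  # expected % fps loss if pixel-bound
    print(f"{name}: ultrawide draws {uw_px / ws_px:.2f}x the pixels "
          f"-> ~{drop:.0f}% lower fps if purely pixel-bound")

# 1080p: 1.33x pixels -> ~25% slower; 1440p: 1.34x -> ~26% slower.
```

The measured ~20% and ~24% drops with RT Ultra sit close to that ceiling, which matches the "every pixel matters" observation above.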
 
  • Like
Reactions: seymoorebutts

MorganPike

Prominent
Jan 8, 2020
... is ludicrous. As is the idea that Nvidia is intentionally trying to reduce performance on older GPUs.

Yeah, businesses never do that kind of thing...

https://en.wikipedia.org/wiki/Batterygate

Of course, Apple said 'I didn't do that' but they settled anyway just because they're such awesome people. :)

Arbitrarily defining how software should run on any given hardware is ludicrous; the idea that a business would nerf an older product's performance to drive current-gen sales is certainly not.

I'm not saying Nvidia has done or is doing this, testing would need to be done to show that, but the possibility is certainly there. Refusing to recognize this possibility is just being naive.
 

MorganPike

Prominent
Jan 8, 2020
So not Bethesda-like. The bugs are minimal from my experience: occasional clipping of NPCs and weird NPC stuff, but nothing unplayable.

As I said, nothing game breaking. Floating weapons and items, trying to draw a weapon and finding out, again, that my weapon in slot 1 has been put back in the inventory. Stepping close to an NPC and being suddenly snapped back a few feet.

This all within the first couple hours. So yeah, Bethesda level at first impression. I love Bethesda games and I love CDPR games but I'm not going to ignore the bugs just because I like the games.
 
Dec 11, 2020
Thanks for testing the GTX 970. Guess it’s time to upgrade finally. I really appreciate the extra work you put in on your articles as well as your balanced tone Jarred. I followed you over here from PC Gamer where you really stood out in terms of quality. Keep up the great work.
 
Is there a playable combination of RT + DLSS using lower graphics settings to get acceptable performance at 1080p on the lowly RTX 2060? Or is enabling RT on this GPU just an instant sub 30 fps brick?
 
Is there a playable combination of RT + DLSS using lower graphics settings to get acceptable performance at 1080p on the lowly RTX 2060? Or is enabling RT on this GPU just an instant sub 30 fps brick?

I'm guessing your only hope would be DLSS Performance mode, which renders only a quarter of the pixels (540p in this case) and just isn't really enough data for DLSS to upscale accurately. Thus, even if you got playable frame rates, the combination of poor resolution detail and lowered quality settings would probably result in an overall uglier image than just running DLSS in Quality mode with regular settings bumped up a bit from low.
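
For reference, here's a small sketch of what each DLSS 2.x mode actually renders internally before upscaling. These per-axis scale factors are the commonly cited ones, not pulled from the game itself, and the Balanced factor is approximate:

```python
# Approximate per-axis render scale for each DLSS 2.x mode (commonly
# cited factors; Balanced is approximate, not an official spec).

DLSS_SCALE = {
    "Quality": 2 / 3,
    "Balanced": 0.58,         # roughly
    "Performance": 1 / 2,     # one quarter of the output pixels
    "Ultra Performance": 1 / 3,
}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Resolution the game actually renders before DLSS upscales it."""
    s = DLSS_SCALE[mode]
    return round(out_w * s), round(out_h * s)

for mode in DLSS_SCALE:
    w, h = internal_resolution(1920, 1080, mode)
    print(f"{mode:>17}: {w}x{h} -> 1920x1080")

# Performance mode at 1080p output renders 960x540 (the "540p" above),
# which is why fine detail starts to fall apart at that output size.
```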
 

Olle P

Distinguished
Apr 7, 2010
My reflection, based on a few reviews, is that with RT off there seems to be very little actual difference in image quality between the "Low" and "Ultra" settings.
In TH's examples I can see differences between the pictures, but I'm unable to tell which picture uses which setting.
In the examples provided by SweClockers the only notable difference is in the leaves of a palm tree.
So as far as I'm concerned there doesn't seem to be any reason to use settings above Medium (unless also using RT).

I cannot believe the GTX 1660 Super is equal to the GTX 1080 in this game...
Nvidia's Sandbagging has begun with Pascal...
It's a standard procedure for driver development.
Video card drivers come in two types: the generic driver meant to be "one size fits all", and optimized application-specific drivers that supersede the generic one when applicable.
Nvidia has many employees in driver development and therefore excels at creating optimized drivers for most games. As a result it also doesn't spend that much effort on the generic driver. The resources for development aren't infinite though, so older architectures also get less attention when it comes to optimizing for new games.
AMD has significantly fewer developers and therefore spends more resources on its generic drivers while not being that good with game-specific optimizations.

The result of this is clearly visible in the test results here: just compare the performance of the GTX 1060 vs. RX 580 at 1080p. These two cards were essentially equal in performance at launch, but in this game, where I expect both to pretty much rely on their generic drivers, the AMD card is nearly twice as fast!
 
  • Like
Reactions: Phaaze88 and VforV

VforV

Respectable
BANNED
Oct 9, 2019
Not only is the Pascal generation left in the dust by new games' lack of optimization, but it's funny how the gap between Turing and Ampere is also getting bigger.

Look at the RTX scores of 2080 Super vs 3060Ti in 1080p, it's hilarious how the maximum fps for the 2080 Super is the minimum for the 3060Ti.

That 3060 Ti sure looks good for a Pascal owner like me... but I won't be buying for at least 3 more months (I want to see AMD's 6700/XT too), or until prices come back to normal levels, whenever that is...
 

MorganPike

Prominent
Jan 8, 2020
My reflection, based on a few reviews, is that with RT off there seems to be very little actual difference in image quality between the "Low" and "Ultra" settings.

I don't know about the difference really, didn't study it that close. I settled on the following after testing RT...

9600k
2070 (not Super, but factory OC)
32GB

1440
High Setting
Turned off motion blur, don't like it, ever
No RT
Quality DLSS

Looks amazing.

Getting over 80fps consistently as per Steam overlay. This works for me.

I think DLSS has come a long way. Very impressed with it. Impressive showing with RT on in WD Legion. Legion looks amazing and plays smooth as butter for me at 1440 with RT low and DLSS medium (forget what they named that setting).
 

salgado18

Distinguished
Feb 12, 2007
Thank you very much for testing Vega 11. It seems to be the absolute minimum the game can run on, even if not very well. It also means that anything above it can be made to work. A GTX 1060 or RX 570 is definitely better, given their 1080p minimums, but even an RX 560 or 550 can at least let us taste the game.
 
  • Like
Reactions: JarredWaltonGPU
My reflection, based on a few reviews, is that with RT off there seems to be very little actual difference in image quality between the "Low" and "Ultra" settings.
In TH's examples I can see differences between the pictures, but I'm unable to tell which picture uses which setting.
In the examples provided by SweClockers the only notable difference is in the leaves of a palm tree.
So as far as I'm concerned there doesn't seem to be any reason to use settings above Medium (unless also using RT).

It's a standard procedure for driver development.
Video card drivers come in two types: the generic driver meant to be "one size fits all", and optimized application-specific drivers that supersede the generic one when applicable.
Nvidia has many employees in driver development and therefore excels at creating optimized drivers for most games. As a result it also doesn't spend that much effort on the generic driver. The resources for development aren't infinite though, so older architectures also get less attention when it comes to optimizing for new games.
AMD has significantly fewer developers and therefore spends more resources on its generic drivers while not being that good with game-specific optimizations.

The result of this is clearly visible in the test results here: just compare the performance of the GTX 1060 vs. RX 580 at 1080p. These two cards were essentially equal in performance at launch, but in this game, where I expect both to pretty much rely on their generic drivers, the AMD card is nearly twice as fast!
As I've noted elsewhere, a lot of the differences in performance also tend to be thanks to DX12. AMD's GCN architecture with its asynchronous compute engines is simply better at handling generic DX12 code. To extract good performance from Pascal GPUs in DX12 requires a lot of fine tuning on the developer side, and there's not nearly as much that can be done with drivers. My understanding (in talking with AMD and Nvidia over the years) is that with DX12 games, a lot of times it's Nvidia looking for things that the developers have done poorly and working with them to fix the code.

With DX11, it's easier to do wholesale shader replacement -- so if a game has a graphics effect that isn't really written efficiently, the drivers can carry a precompiled optimized version, detect the game, and swap the generic shader code for the optimized code. The whole point of a low-level API is to give developers direct access to the hardware, which means the devs can seriously screw things up. Look at Total War: Warhammer 2 with its DX12 (still beta!) code, which runs far worse on Nvidia's Pascal GPUs than on AMD's GCN hardware. Actually, I think at one point even AMD's newer architectures performed better in DX11 mode. Shadow of the Tomb Raider, Metro Exodus, Borderlands 3, and many other games that can run in DX12 or DX11 modes still run better in DX11 on Nvidia (except sometimes at lower resolutions and quality settings -- so 1080p medium might be faster in DX12 on RTX 2080 and above, but 1080p ultra can still favor DX11 mode). SotTR and Metro are even Nvidia-promoted games -- it worked hard to improve DX12 performance, since DXR requires DX12. But at 1440p ultra, you'll get 5-10% higher fps in DX11 mode last I checked (and of course you can't enable DXR).

The 2080 Super vs. 3060 Ti isn't really surprising when you're running the RT Ultra preset with DLSS. Nvidia put quite a bit more effort into optimizing the RT and Tensor cores on Ampere. The 3060 Ti has 38 RT cores that are theoretically about 70 percent faster per core than Turing RT cores, while the 2080 Super has 48 RT cores. 38 * 1.7 = 65, so the 3060 Ti is potentially up to 35 percent faster in RT code. Tensor cores are a similar story: the 3060 Ti has 152 Tensor cores, but each is twice as powerful as a Turing Tensor core, and with sparsity is up to four times as powerful. So 152 Ampere Tensor cores are roughly equal to 608 Turing Tensor cores, and the 2080 Super only has 384 Tensor cores. In non-RT performance, the 2080 Super is much closer to the 3060 Ti, but even in our review of the 3060 Ti we found it was on average 5 percent faster than the 2080 Super in traditional rasterization performance. In ray tracing games the average was still only about 5%, but in Control, for example, the 3060 Ti leads by 12-15%, and in Boundary it's also around 10% faster. So seeing a 15% lead in Cyberpunk isn't impossible.
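
To make that arithmetic explicit, here's the same comparison as a few lines of Python. It just restates the core counts and per-core multipliers cited above; the multipliers are rough spec-sheet figures, not measured throughput:

```python
# Back-of-the-envelope restatement of the RT/Tensor comparison above
# (core counts and multipliers as cited in the post; rough figures).

# RT cores: Ampere RT cores cited as ~1.7x a Turing RT core.
rt_3060ti = 38 * 1.7          # ~64.6 Turing-equivalent RT cores
rt_2080s = 48 * 1.0
print(f"RT throughput edge: {rt_3060ti / rt_2080s - 1:.0%}")   # ~35%

# Tensor cores: Ampere cited as 2x per core, up to 4x with sparsity.
tensor_3060ti = 152 * 4       # = 608 Turing-equivalent Tensor cores
tensor_2080s = 384
print(f"Tensor edge (w/ sparsity): {tensor_3060ti / tensor_2080s - 1:.0%}")  # ~58%
```

That theoretical 35% RT edge is an upper bound, of course; the measured 15% lead in Cyberpunk sits comfortably under it.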

And yes, Nvidia worked extensively with CDPR to optimize Cyberpunk 2077 for Ampere GPUs as much as it could. It probably put plenty of effort into Turing as well, so that every RTX card plays the game quite well, but was it equivalent effort? Almost certainly not. This is the biggest game of the year, by far -- possibly the biggest game launch of the decade. This is supposed to be the game to move RT into the "must have" category. While I like the end result in terms of visuals, I don't think it gets there. I'm still surprised at how good the game looks even at low or medium settings, though outdoor areas don't necessarily convey the enhancements you can get from RT as well as some indoor areas. But if your GPU can't handle RT Ultra, you can still enjoy the game at medium settings -- bugs and all.
 
  • Like
Reactions: Phaaze88
Is there a playable combination of RT + DLSS using lower graphics settings to get acceptable performance at 1080p on the lowly RTX 2060? Or is enabling RT on this GPU just an instant sub 30 fps brick?
The RTX 2060 can handle the RT Medium preset okay at 1080p, but I think RT reflections are the most visible of the RT extras, so leaving those off kind of stinks (if you're going for maximum image quality). DLSS Balanced at 1080p with RT Ultra should be just above 30 fps, or you could use RT Ultra but turn the RT lighting down one notch (to medium instead of ultra) while keeping reflections on, and that should get you to 30 as well, even with DLSS Quality. Also note that I'm using a Founders Edition 2060 (FE cards were used for all of the Nvidia RTX testing), so if you have a factory overclocked card that's 5% faster, you might already be at 30 fps -- and a bit of manual overclocking would certainly get you there.