News Pre-Launch 'Jedi: Survivor' Eats 21GB of VRAM, Struggles on RTX 4090


Deleted member 2838871

Guest
Interesting. And weird. My 12700K definitely behaves differently. Is this another instance of optimization for only AMD, maybe? That doesn't make much sense, though... but I do notice that only your lower cores (CCD1?) are used. So there does seem to be some parking going on.

You might be right... I didn't have resource manager up to confirm that cores were parking. A perk of this x3D CPU... :ROFLMAO: :ROFLMAO: :ROFLMAO: It's easy though so whatever.

No two PCs are the same, even if they have the exact same parts. So this forum is full of people posting because they watched a video and can't match that performance on their PC. It happens.

Just thought I'd mention that, as sometimes you just have to be happy with what you get.

Yeah, there are so many bad reviews out there, but if I were reviewing it I'd honestly give it a positive review on performance, because it hasn't been bad, at least on my system. But... I built my system 10 days ago... I'm not trying to run it on a five-year-old 2700X that one reviewer called "a fairly recent CPU." :ROFLMAO:
 

Colif

Win 11 Master
Moderator
Your PC is far from average, so if it didn't work, you would have to wonder.

It's like everything: only people with problems report anything. The game might be running fine on a vast number of PCs, but it's the problems that get the attention. And tons of YouTube channels seem to act like vultures around any problem reported. Must get those views.
 

zx128k

Reputable

4090 is at ~50%.

The RX 7900 XTX at a random spot in the 4K video (2:07) is pulling 347 watts while the Nvidia GPU is at 211 watts. The AMD GPU is at 92% utilization and the Nvidia GPU at 50%. Even the CPU is at 48% on the AMD system versus 36% on the Nvidia one.

Specs, first video:
7900 XTX + 5800X + 32GB RAM
RTX 4090 + 7900X + 32GB RAM

Specs, second video:
RTX 4090 + 7950X + 32GB RAM

Both videos show this issue.
 
Last edited:

Deleted member 2838871

Guest
Both videos show this issue.
In the top video the 4090 is at around 50%, with the CPU around 35%.

Mine ran with 4090 at 70-80% and CPU around 20%... which is exactly what is shown in the bottom video with the same CPU/GPU.



I personally didn't have a problem with it, but all the YouTube trolls are telling me that I don't know what bad performance looks like... :ROFLMAO: :ROFLMAO: :ROFLMAO:

I mean... if a smooth 60 fps at 4K Ultra with RT on and a frametime of around 13 ms is bad performance, then I guess my system is guilty. ;)
 

Warrior24_7

Distinguished
Pre-release versions of 'Star Wars Jedi: Survivor' have revealed detrimental performance issues with the PC version. High-end GPUs like the RTX 4090 and 3080 Ti are only managing 35-50 frames per second on average.

Pre-Launch 'Jedi: Survivor' Eats 21GB of VRAM, Struggles on RTX 4090 : Read more
Nvidia to the rescue again? For the many folks who are always complaining about price and power consumption, this just makes the case for a next-gen console!
 

zx128k

Reputable
4090

0:07 VRAM 10GB, 1:19 VRAM 12GB, 2:52 VRAM 15GB, 5:15 VRAM 16GB


8GB of VRAM works just fine, and use Windows 11. Fixing CPU usage should be the primary goal. EA says it's all the gamers' fault. Nasty DRM.

Small Jedi Survivor (PC) Workaround for more FPS (if you can't hit 60 on an RTX 3080 & co)


Your trick does indeed work, and there's a reason for it (it only shows how the render engine's config, based on the display driver, is messed up; this is something they have to fix). Since it's all based on Unreal Engine, I had a look at several different config files, but the programmers at Respawn changed a whole lot of it. Best to wait for patches.

Star Wars Jedi Survivor: PC Ultrawide FSR 10900k 5.3ghz, RTX 2080 Ti game runs flawless lol


 
Last edited:

Deleted member 2838871

Guest
Installed that 3GB patch they released today. Now my game keeps crashing, often directly after loading into the save. Thanks, EA -.-

That sucks... I didn't play yesterday so I'll update when I get home and see what happens.
 
In the top video the 4090 is at around 50%, with the CPU around 35%.

Mine ran with 4090 at 70-80% and CPU around 20%... which is exactly what is shown in the bottom video with the same CPU/GPU.



I personally didn't have a problem with it, but all the YouTube trolls are telling me that I don't know what bad performance looks like... :ROFLMAO: :ROFLMAO: :ROFLMAO:

I mean... if a smooth 60 fps at 4K Ultra with RT on and a frametime of around 13 ms is bad performance, then I guess my system is guilty. ;)
I'm not sure what the COUNTLESS replies you've made on this topic are implying. Are you saying you have the most powerful gaming PC ever conceived and this game requires that extreme level of performance? Or are you stating that everybody else's PC is incapable of playing this game? Or is it that everyone is lying when they say they're having issues with the game?
 

Deleted member 2838871

Guest
I'm not sure what the COUNTLESS replies you've made on this topic are implying. Are you saying you have the most powerful gaming PC ever conceived? Or, are you stating everybody else's PC is incapable of playing this game? Or, is it that everyone is lying when they say they're having issues with the game?

I'm not implying anything other than the obvious... and that is that there's OBVIOUSLY something else going on when I see a bunch of "the 4090 is only doing 25 fps" posts/YouTube videos when that just isn't the case on my end... not even close.

By "COUNTLESS" replies are you talking about my replies without including the half dozen other posters who have also replied on the topic? My feelings are hurt.

Lastly... you mad bro? Sounds like it. Get over yourself. Your post implies that you're butthurt.

I'm truly sorry you feel that way.

PS. Find the ignore button and use it. Problem solved. 👍
 
Last edited by a moderator:

Deleted member 2838871

Guest
Interesting. And weird.

No two PCs are the same, even if they have the exact same parts.

Downloaded the patch tonight and nothing really changed. Arguably a bit worse, with a few more quick frame drops into the low 50s, but other than that it maintained 60 with the same GPU and CPU usage.

Definitely playable but I hope more updates come out to help those who are having issues. With that I'm bowing out of this thread now with no further replies. (y)
 

sitehostplus

Honorable

game doesn't use enough cores but I don't know if that will help reduce memory usage.

More games to come that use more VRAM as the devs swap from the PS4 with 8GB to the PS5 with up to 16GB.
The usage needs to be reduced in some aspects to work on console, as consoles only have 16GB of RAM.
That video you put up has a major problem.

They forgot to restart the game after changing graphics modes.
 

zx128k

Reputable
Remember how bad Cyberpunk 2077 was when it released? It seems like all games get rushed out.

View: https://youtu.be/8aB6J5xI6qo


It seems like Nvidia GPUs don't get more FPS if you change resolution.

A quick benchmark on my gaming PC with a 3080 FE and 3900X of the new Star Wars: Jedi Survivor game at 4K with FSR 2 and raytracing enabled. The game is poorly optimised and due to a big CPU bottleneck there is no improvement to my framerate by lowering the settings beyond those shown in the video (my optimised settings).
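To put rough numbers on why lowering settings doesn't help in a case like that (illustrative figures, not measurements from the video): delivered framerate is roughly min(CPU-limited fps, GPU-limited fps). If the CPU side can only prepare about 45 frames per second (~22 ms per frame), then cutting GPU work from, say, 18 ms to 10 ms per frame still leaves every frame waiting on that 22 ms CPU step, so you stay pinned around 45 fps while GPU utilization just drops.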

Star Wars Jedi: Survivor is the worst triple-A PC port of 2023 so far

Settings recommendations are therefore impossible, really, although you can disable ray tracing to claw back some performance. Basically, Star Wars Jedi: Survivor is not utilising the hardware presented to it at all in a meaningful way.

In short, Star Wars Jedi: Survivor is essentially ignoring the fact that CPUs have entered the many-core era. With higher settings it is even more disastrous – with ray tracing active, more smaller cores are tasked with maintaining RT's BVH structures, but ultimately, performance drops still further to the point where I've observed CPU-limited scenes on a 12900K that just about exceed 30fps. On a mid-range CPU like the Ryzen 5 3600, for example, it is even more catastrophic.
What do gamers think? Well...
Ruined by AMD™
Tiers of RT hardware.

Tier 1 RT is basically using shaders/compute units to do software RT.

Tier 2 is all about adding dedicated hardware support for box and triangle intersection acceleration. I believe ray tracing in RTX cards is definitely at Tier 2.

Tier 3 is all about BVH processing and memory management, which we all know as the memory bottleneck in recent RT games. This is the next step in RT hardware development and could prove to be critical in deciding which hardware (AMD, Nvidia or Intel) has better RT performance.

Tier 4 is all about making the use of RT hardware more efficient by grouping rays.

Tier 5 is all about adding coherency sorting and BVH generation for highly dynamic/moving scenes on the fly on a dedicated hardware block.

Nvidia's RT cores do all the BVH traversal, so they are at least Tier 3.

And do note that Turing had two distinct hardware acceleration features in the RT cores: a ray-triangle intersection accelerator (Tier 2) and a BVH traversal unit (Tier 3). By this tiered system, Turing cards are Tier 3.
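For a concrete sense of what Tier 1 runs in shader code and what Tier 2 hardware offloads, here is a minimal C++ sketch of the two tests involved: a ray vs. bounding-box slab test and a ray vs. triangle (Möller-Trumbore) test. It is purely illustrative textbook math, not any vendor's implementation.

```cpp
// Illustrative CPU-side sketch of the intersection math that Tier 2 RT
// hardware accelerates. Not vendor code; just the standard algorithms.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Ray { Vec3 origin, dir; float tMax; };

// Ray vs. axis-aligned bounding box (slab test): run at every BVH node visit.
bool rayIntersectsAABB(const Ray& r, Vec3 boxMin, Vec3 boxMax) {
    float tNear = 0.0f, tFar = r.tMax;
    const float o[3]  = {r.origin.x, r.origin.y, r.origin.z};
    const float d[3]  = {r.dir.x, r.dir.y, r.dir.z};
    const float lo[3] = {boxMin.x, boxMin.y, boxMin.z};
    const float hi[3] = {boxMax.x, boxMax.y, boxMax.z};
    for (int axis = 0; axis < 3; ++axis) {
        float inv = 1.0f / d[axis];   // may be +/-inf for axis-parallel rays;
                                      // production code guards the NaN corner case
        float t0 = (lo[axis] - o[axis]) * inv;
        float t1 = (hi[axis] - o[axis]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar, t1);
        if (tNear > tFar) return false;   // slabs don't overlap: miss
    }
    return true;
}

// Ray vs. triangle (Moller-Trumbore): run at BVH leaves. Hit distance in t.
bool rayIntersectsTriangle(const Ray& r, Vec3 v0, Vec3 v1, Vec3 v2, float& t) {
    const float kEps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(r.dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return false;   // ray parallel to triangle plane
    float invDet = 1.0f / det;
    Vec3 s = sub(r.origin, v0);
    float u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(r.dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * invDet;
    return t > kEps && t < r.tMax;
}
```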

Each of the following components is part of the process of ray tracing:

BVH creation and updating

BVH traversal

Ray-Gen

Ray/Triangle Intersection

Shading

Denoising

And so if I were to take the current RTX cards and assess their RT capability compared to a GTX card, I would do it like this:

RT Feature | RTX GPU | GTX GPU
BVH Creation | No | No
BVH Traversal | Yes | No
Ray-Gen | Yes | No
Ray Batching | Yes (partial) | No
Ray/Triangle Intersection | Yes | No
Shading | Yes (legacy GPU shading techniques) | Yes
Denoising | No (potential to use Tensor cores) | No
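And to see why the "BVH Traversal" row is the dividing line, here is a hedged sketch of the traversal loop itself, the part a Tier 3 unit runs in fixed-function hardware and a GTX card would have to run on its shader ALUs. The node layout and helper names are assumptions for illustration; the intersection tests are passed in as callables so the loop stands on its own.

```cpp
// Illustrative stack-based BVH traversal: the loop that dedicated traversal
// hardware (Tier 3) executes, and that Tier 1/2 hardware runs on shader ALUs.
#include <vector>

struct Aabb { float min[3], max[3]; };

struct BvhNode {
    Aabb bounds;
    int  leftChild;      // index of first child; right child is leftChild + 1
    int  firstTriangle;  // valid when triangleCount > 0 (leaf)
    int  triangleCount;  // 0 for interior nodes
};

struct Hit { float t; int triangleIndex; };

// The box and triangle tests (see the earlier sketch) are passed in as
// callables so the traversal logic here is self-contained.
template <typename RayBoxFn, typename RayTriFn>
Hit traverseBVH(const std::vector<BvhNode>& nodes,
                RayBoxFn rayHitsBox, RayTriFn rayHitsTriangle) {
    Hit closest{ /*t=*/1e30f, /*triangleIndex=*/-1 };
    int stack[64];                 // small fixed-size stack is plenty for a sketch
    int stackSize = 0;
    stack[stackSize++] = 0;        // start at the root node

    while (stackSize > 0) {
        const BvhNode& node = nodes[stack[--stackSize]];
        if (!rayHitsBox(node.bounds, closest.t))
            continue;                          // prune this whole subtree
        if (node.triangleCount > 0) {          // leaf: test each triangle
            for (int i = 0; i < node.triangleCount; ++i) {
                float t;
                int tri = node.firstTriangle + i;
                if (rayHitsTriangle(tri, t) && t < closest.t)
                    closest = Hit{ t, tri };   // keep the nearest hit so far
            }
        } else {                               // interior: push both children
            stack[stackSize++] = node.leftChild;
            stack[stackSize++] = node.leftChild + 1;
        }
    }
    return closest;
}
```

Every ray the game casts repeats a loop like this, which is why the table above treats hardware BVH traversal as the real dividing line between RTX and GTX parts.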

AMD.

US20190197761 is the original AMD patent everyone was discussing back in June 2019. Its description states the CUs are responsible for traversing the BVH.

US20200193685 is the new patent.

And the implementation of ray tracing that it describes pretty much matches Nvidia's implementation.

[0023] The ray tracing pipeline operates in the following manner. A ray generation shader is executed. The ray generation data sets up the data for a ray to test against and requests the ray intersection unit test the ray for intersection with triangles.
[0024] The ray intersection test unit traverses an acceleration structure...
...For triangles which are hit, the ray tracing pipeline triggers an execution of an any hit shader. Note that multiple triangles can be hit by a single ray.

AMD's RDNA 2 TMUs have both BVH and intersection fixed-function units.

The only thing handled by the SIMD ALUs is passing the work to the TMU to process. The LLVM commit shows how it's done: you call the RT function by using a regular texture shader with the data for the BVH start.

The TMUs also have their own cache to handle that, reducing memory access that would cause massive perf hits.

What it can't do is regular texture mapping and RT at the same time.



Nvidia already explained their RT cores in their architecture whitepapers where they describe how they do the entire BVH traversal in one go.

On AMD, initiating the TMUs to do BVH work goes through a regular texture shader. So you schedule that texture shader (with the BVH data), preferably along with other graphics shaders, and the ALUs just pass the BVH shader on to the TMUs.

The patent also says the shader units (aka SPs, cores, whatever you want to call them) decide what to do with the intersection results and where to traverse next. The PDF is images instead of embedded text or I'd quote it, but it's spelled out in section 0047.

Once a ray-triangle hit is confirmed, the return data goes to the shared cache within the CU and the ALUs then proceed to the next step; if there is a need for secondary or tertiary bounces, they initiate the TMU's BVH + intersection engine again.

The ALUs don't do any of the BVH traversal or intersection testing themselves (SIMD ALUs suck at doing that, as we know); that's handled by the new RT fixed-function unit inside the modified TMUs.
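To make that contrast concrete, here is a hedged C++ model of the two control flows being described. The "hardware" functions (textureUnitIntersect, rtCoreTraceRay) are invented stand-ins for the fixed-function units, not real driver, ISA, or API calls; this is a sketch of the division of labour, not anyone's actual implementation.

```cpp
// Hedged model of the control-flow difference described above. The two
// "hardware" functions below are placeholder stubs, not real calls.
#include <vector>

struct Ray { float origin[3], dir[3], tMax; };
struct Hit { float t = 1e30f; int primitive = -1; bool valid = false; };

// Result of one fixed-function intersection request (RDNA 2 style):
// which children the ray touched, plus a triangle hit if this was a leaf.
struct NodeResult {
    std::vector<int> hitChildren;
    bool  leafHit = false;
    float t = 0.0f;
    int   primitive = -1;
};

// Stand-in for the box/triangle intersection engine in the RDNA 2 TMU.
NodeResult textureUnitIntersect(const Ray&, int /*nodeIndex*/) { return {}; }

// Stand-in for a Turing/Ampere RT core that traverses the whole BVH itself.
Hit rtCoreTraceRay(const Ray&) { return {}; }

// RDNA 2 style: the shader (SIMD ALUs) owns the traversal loop and only the
// per-node intersection test is offloaded, issued like a texture fetch.
Hit traceRayRdna2Style(const Ray& ray) {
    Hit best;
    std::vector<int> stack{0};                        // start at the BVH root
    while (!stack.empty()) {
        int node = stack.back(); stack.pop_back();
        NodeResult r = textureUnitIntersect(ray, node);    // fixed-function step
        if (r.leafHit && r.t < best.t)
            best = Hit{r.t, r.primitive, true};
        for (int child : r.hitChildren)               // the shader decides where
            stack.push_back(child);                   // to traverse next
    }
    return best;
}

// Turing/Ampere style: the shader launches the ray and the RT core returns
// the final hit; the whole loop above runs inside the hardware unit.
Hit traceRayRtxStyle(const Ray& ray) {
    return rtCoreTraceRay(ray);
}
```

Both paths end up with the same kind of result (a closest hit), but in the RDNA 2 style the SIMD ALUs and the CU's cache stay involved at every node visited, while in the RTX style the whole loop lives inside the RT core, which is the hardware-capability gap described above.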

One of the XSX engineers described the RT hardware on the 52-CU RDNA 2 GPU as being capable of >13 TF for ray tracing, and >25 TF working in tandem with the shaders. Turing RT cores are capable of 34 "RT-FLOPS" and the 3080 of 58 "RT-FLOPS".

Lisa Su said at the launch of RDNA1 that they would not want Ray Tracing on their cards until the performance hit was mitigated enough to make it viable.

AMD only had a 256 bit memory bus and 128MB of cache to help alleviate the memory bottleneck.

If someone tells you AMD and Nvidia are different for RT, or that Nvidia RT titles harm AMD performance, note US20200193685, which implies AMD uses more or less the same implementation as Nvidia. The issue is hardware performance for AMD GPUs; Nvidia should always be ahead in RT because their hardware is more capable.

Over on YouTube, PureDark has uploaded a video showcasing a modded version of the game running at much better framerates. They've implemented a DLSS Frame Generation mod, which according to the video evidence brought their game up from 45 to 90 actual fps. That's a marked improvement over what many are seeing, especially Steam's ticked-off comment section.
"I had a breakthrough and [am] now trying to replace FSR2 with DLSS, that would make the image look much better."

So DLSS 3- and DLSS 2-style mods are on the way. The very thing AMD paid to keep out.
 
Last edited: