News GeForce RTX 4060 May Consume More Power Than an RTX 3070

lol, no thanks. I didn’t ask for ancient graphics pumped up with modern technology.
Oh right. I forgot Metro Exodus was a thing. It's the only modern AAA game that's fully path traced.

Yeah, no. Negative people have said similar things about technology in the past as well and were proven blatantly wrong in their predictions (hi, Bill Gates), so it's better not to talk like this about the future. We will see, but I'm cautiously optimistic.
The problem I had with your statement is that it didn't list any actual requirements. It just said "real path tracing", whatever that means. Since Pixar tends to be used as a benchmark for high-end CGI, well, I used that. Considering their initial stab at Coco needed 1000 hours per frame on a system of unknown spec, with what was presumed to be 100% ray tracing, then if we want to target 60 FPS we would need something about 216,000,000 times more powerful than whatever Pixar was using. Even if we used the optimized method they settled on (call it 50 hours per frame), we're still looking at something roughly 10 million times more powerful.

As a point of reference, the RTX 3090 is about 65 times faster than a GeForce 8800 GTX, and there were 12 years between them. You could point out that it only took about 11 years between the first 1 TFLOPS supercomputer and the first graphics card to achieve that, but it's been almost 14 years since the first PFLOPS supercomputer, and for the same price as that TFLOPS video card (inflation adjusted), we still haven't hit something with even a quarter of that speed. And supercomputers have only just hit the EFLOPS mark.
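To put rough numbers on that, here's a quick back-of-the-envelope sketch using only the figures above (the extrapolation simply assumes GPUs keep compounding at the 8800 GTX → RTX 3090 rate, which is a big assumption):

```python
# Back-of-the-envelope check of the numbers above (quoted figures, rough math).
import math

SECONDS_PER_HOUR = 3600
target_frame_time = 1 / 60          # 60 FPS target, in seconds

# Speedup needed vs. the quoted Coco render times (hours per frame)
for hours_per_frame in (1000, 50):
    render_time = hours_per_frame * SECONDS_PER_HOUR
    speedup = render_time / target_frame_time
    print(f"{hours_per_frame} h/frame -> ~{speedup:,.0f}x speedup needed")
# 1000 h/frame -> ~216,000,000x ; 50 h/frame -> ~10,800,000x

# If GPUs keep growing like 8800 GTX -> RTX 3090 (~65x over ~12 years)...
annual_growth = 65 ** (1 / 12)      # ~1.42x per year
years_needed = math.log(216_000_000) / math.log(annual_growth)
print(f"~{annual_growth:.2f}x/year -> ~{years_needed:.0f} years to reach 216,000,000x")
```

That works out to somewhere around half a century at the historical rate, which is the point being made below.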

I guess what I'm trying to say is that we're not going to get Pixar levels of quality in real time any time soon.

If ray tracing isn't the best way to perfect graphics, it will be replaced. Nobody said it is "the" way; it's just the best graphics technique we have today, and nothing more is certain.
In a simplified sense, ray tracing is the best way to render graphics, because it physically simulates light. You can't get more perfect than that.
 
Oh right. I forgot Metro Exodus was a thing. It's the only modern AAA game that's fully path traced.
It's not; as far as I know, it's just like any other game with partial RT. Otherwise I doubt it would be playable.

The problem I had with your statement is that it didn't list any actual requirements. It just said "real path tracing", whatever that means. Since Pixar tends to be used as a benchmark for high-end CGI, well, I used that. Considering their initial stab at Coco needed 1000 hours per frame on a system of unknown spec, with what was presumed to be 100% ray tracing, then if we want to target 60 FPS we would need something about 216,000,000 times more powerful than whatever Pixar was using. Even if we used the optimized method they settled on (call it 50 hours per frame), we're still looking at something roughly 10 million times more powerful.

As a point of reference, the RTX 3090 is about 65 times faster than a GeForce 8800 GTX, and there were 12 years between them. You could point out that it only took about 11 years between the first 1 TFLOPS supercomputer and the first graphics card to achieve that, but it's been almost 14 years since the first PFLOPS supercomputer, and for the same price as that TFLOPS video card (inflation adjusted), we still haven't hit something with even a quarter of that speed. And supercomputers have only just hit the EFLOPS mark.

I guess what I'm trying to say is that we're not going to get Pixar levels of quality in real time any time soon.
There's no problem with my statement; you just need to adjust your mindset, and I see you already did. The knowledge is there.

In a simplified sense, ray tracing is the best way to render graphics, because it physically simulates light. You can't get more perfect than that.
Of course, but there are possibly many different ways to emulate light bouncing off objects and creating an image. You said Pixar's approach is atrociously inefficient, so I doubt it's the best one. The future will rapidly evolve things, and APIs and apps will be replaced. Computers not even based on silicon, possibly quantum-based, could be the future and far faster than we can imagine today.
 
This would be a really dumb move for Nvidia. Most people don't have a custom-built PC; they're running a Dell XPS or some such. Most of those boxes don't have PSUs much more capable than 400W, and many of them are lower.

You really can't run a GPU over 200W on any of them unless you're talking about an Omen or Alienware that was designed for it. Right now, a 200W (actual draw) GPU is a 3060 Ti, with cards like a 2070 drawing less than that.

This sort of thing will just push more people to laptops and consoles.
 
A bunch of new game engines offer alternatives to RT that look just as good or sometimes better without the need for any RT-specific hardware to achieve good frame rates. In all likelihood, simulated RT will improve so much faster than raw RT compute power that full-blown RT will end up relegated to being a fallback for stuff developers haven't figured out a sufficiently convincing shortcut for yet instead of the primary render path.
You don't seem to understand that "simulated RT" was what we had before RT.

Of course it was less computationally intensive: it had to be.

And no it wasn't, still isn't, and never will be better (other than being less computationally intensive).
 
I'm predicting the ZTX 6090 will consume up to 1000W of power and will provide enough FPS for 1000Hz monitors at 4K res.
PSU requirements and the pricing will be just as insane.
You'll need 2x for VR headsets.
 
You don't seem to understand that "simulated RT" was what we had before RT.
We've had many iterations of attempts at simulating RT. At first we had baked-in effects, then we had reflection maps, now we have game engines that can do recursive reflections between combinations of partially transparent and reflective surfaces to the point that when you switch between RT and raster effects, you may not be able to tell which is which unless you know the specific cues beforehand.

And no it wasn't, still isn't, and never will be better (other than being less computationally intensive).
There is one major flaw with full RT: artists are forced into visual effects compatible with real-world physics.

Personally, I'd be 100X more interested in creative visuals that could break RT than pixel-perfect photorealism.
 
There is one major flaw with full RT: artists are forced into visual effects compatible with real-world physics.
Completely false. It's no more true than saying the use of a physics engine requires limiting yourself to real-world physics, in the same way that PBR (Physically Based Rendering) does NOT imply photorealistic rendering!

The big gain of RT over several decades of raster hacks piled on top of each other is in the development workflow. When you have an array of cubemaps, reflection maps, screen-space reflections and shadows, deferred rendering, shadow volumes, etc. all in play, it is incredibly easy for one minor art change to have knock-on effects across the whole toolchain. Or worse, for it not to have that knock-on effect and start introducing rendering errors because (e.g.) a cubemap did not get updated when a lightmap changed because someone moved a lamp from one desk to another.
Raytracing calculates all of those effects in real time at runtime. You make a change, and everything is correct immediately. The whole tower of hacks you need for, say, continuously variable time-of-day lighting and shadowing is now free and performed by default.
The current situation is akin to needing to strip a room down to bare walls, then repaint, recarpet, and replace every piece of furniture every time you want to move a chair, with the best hope being that you can automate that redecorating process and that the automation works. RT means you can just move the chair and get the same result. With the majority of a game's budget currently being spent on asset creation (versus literally the entire rest of development), that's a massive win.
 
Oh right. I forgot Metro Exodus was a thing. It's the only modern AAA game that's fully path traced.


The problem I had with your statement is that it didn't list any actual requirements. It just said "real path tracing", whatever that means. Since Pixar tends to be used as a benchmark for high-end CGI, well, I used that. Considering their initial stab at Coco needed 1000 hours per frame on a system of unknown spec, with what was presumed to be 100% ray tracing, then if we want to target 60 FPS we would need something about 216,000,000 times more powerful than whatever Pixar was using. Even if we used the optimized method they settled on (call it 50 hours per frame), we're still looking at something roughly 10 million times more powerful.

As a point of reference, the RTX 3090 is about 65 times faster than a GeForce 8800 GTX, and there were 12 years between them. You could point out that it only took about 11 years between the first 1 TFLOPS supercomputer and the first graphics card to achieve that, but it's been almost 14 years since the first PFLOPS supercomputer, and for the same price as that TFLOPS video card (inflation adjusted), we still haven't hit something with even a quarter of that speed. And supercomputers have only just hit the EFLOPS mark.

I guess what I'm trying to say is that we're not going to get Pixar levels of quality in real time any time soon.


In a simplified sense, ray tracing is the best way to render graphics, because it physically simulates light. You can't get more perfect than that.

Pixar used something very similar to Blender, and back then it was all done on the CPU, not the GPU. If you look at movies like "Monster House", the characters' hair looked like a Lego headpiece. That's because it was all done on the CPU, and things like dynamics (which Coco used) were incredibly computationally expensive, even with bounding boxes.

Since they switched over, it is orders of magnitude faster. They are also using techniques similar to what NVIDIA/AMD do with temporal accumulation to smooth out the noise from sparse renders. My favorite temporal filter uses Savitzky–Golay because it's fixed-weight and cheap to execute. In fact, a little noise has been found to make images look more real, because the ambient lighting caused by rough-surface reflections is somewhat random. For example, looking at the surface of pavement on a hot day can make it look like it's shimmering; same for water, or reflections in glass.
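As a rough illustration of that kind of temporal smoothing (a minimal sketch on a made-up 16-frame, 4x4-pixel buffer, not any studio's or vendor's actual denoiser), SciPy's Savitzky–Golay filter can be run along the time axis of a noisy sample buffer:

```python
# Minimal sketch of temporal smoothing with a Savitzky-Golay filter
# (fixed polynomial weights, so it's cheap to apply every frame).
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)

frames, height, width = 16, 4, 4          # tiny example buffer: 16 frames of 4x4 pixels
clean = np.linspace(0.2, 0.8, frames)[:, None, None] * np.ones((frames, height, width))
noisy = clean + rng.normal(scale=0.05, size=clean.shape)   # sparse-sample "render noise"

# Smooth each pixel along the time axis; 7-frame window, quadratic fit.
smoothed = savgol_filter(noisy, window_length=7, polyorder=2, axis=0)

print("noise std before:", float(np.std(noisy - clean)))
print("noise std after: ", float(np.std(smoothed - clean)))
```

The weights depend only on the window length and polynomial order, so in a real renderer they can be precomputed once and applied as a cheap weighted sum per pixel.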

Seeing what tricks they pull out of their hats to make it more realistic is amazing. I look forward to SIGGRAPH every year.
 
Case design is going to have to change. Plus, say goodbye to the idea that a computer with good performance can be compact or quiet.

There will likely have to be a redesign. Airflow is good, but surface area will become more critical. So far we are only using the top of the chip for cooling.

I still think back-side compression-fitting cooling is coming. The risk in this approach is the stress it puts on contact points, but there are more complex mounting systems to take care of that. It means the graphics card's PCIe slot will have to be moved down to make room for a heat-exchange stack on the back side.
 
We've had many iterations of attempts at simulating RT. At first we had baked-in effects, then we had reflection maps, now we have game engines that can do recursive reflections between combinations of partially transparent and reflective surfaces to the point that when you switch between RT and raster effects, you may not be able to tell which is which unless you know the specific cues beforehand.


There is one major flaw with full RT: artists are forced into visual effects compatible with real-world physics.

Personally, I'd be 100X more interested in creative visuals that could break RT than pixel-perfect photorealism.

And cubic light maps. :)

I used to work on POV-Ray, one of the first RT engines. I could optimize it myself and create toolsets for features that weren't commonly available, like meshes for spun surfaces.
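For anyone curious, a spun surface is just a surface of revolution; here's a rough Python sketch of the idea (my own illustration, not POV-Ray's actual code or file format):

```python
# Rough sketch of a "spun surface" (surface of revolution) mesh generator.
import math

def spin_profile(profile, segments=32):
    """Revolve a 2D profile [(radius, height), ...] around the Y axis.
    Returns (vertices, triangles) for a simple triangle mesh."""
    verts, tris = [], []
    for r, y in profile:                          # one ring of points per profile vertex
        for s in range(segments):
            a = 2 * math.pi * s / segments
            verts.append((r * math.cos(a), y, r * math.sin(a)))
    for i in range(len(profile) - 1):             # stitch adjacent rings: quads -> two triangles
        for s in range(segments):
            a = i * segments + s
            b = i * segments + (s + 1) % segments
            c, d = a + segments, b + segments
            tris.extend([(a, b, c), (b, d, c)])
    return verts, tris

# Example: a simple goblet-ish profile
verts, tris = spin_profile([(0.0, 0.0), (1.0, 0.2), (0.3, 1.0), (0.8, 1.8)])
print(len(verts), "vertices,", len(tris), "triangles")
```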

I was always impressed by the visuals of the BioShock series. The lighting made it feel more real, and some of the effects in BioShock Infinite were just downright impressive for the time.
 
Forcing budget PC gamers out of the PC game market is a dangerous proposition: if you make the potential market smaller, it may end up not being worth some studios' trouble, and with costs needing to be recovered across a smaller buyer base, prices will go up some more. PC gaming could be on its way down a death spiral.
PC gaming has always been a niche market within the overall gaming market. Has PC gaming been much more than a place for crappy console ports in recent years? It may take a shift in the market like this to differentiate PC gaming from consoles again. If something doesn't change, PC gaming is going to die anyway. Turning console graphics up to 10.5 on a lazy port is not going to keep people spending increasingly large amounts of money on gaming PCs.
 
How much power it'll consume is unconfirmed. The article is reporting from a leaker, and even then the leaker didn't provide any hard numbers, just a vague "more power."

Of course, but more than an RTX 3070 could then put it around the RTX 3070 Ti.

We will have to wait, probably until next year; I highly doubt we will see any of the "midrange" GPUs like the RTX 4060 line this year (I could be wrong).
 
Given the new power requirements Nvidia is moving towards... I think this signals Nvidia is willing to cede the mid and lower end of the graphics market to Intel. AMD appears to want to play in every market segment, while Nvidia becomes more expensive, demanding, and performant, leaving a hole for Intel to slot into. Businesses absolutely hate competition.
I'm just going to wait until we get an actual product before freaking out over rumors and leaks.
The problem with that line of thinking is that companies tend to be the sources and amplifiers of leaks and rumors these days. They do it covertly to gauge market reaction prior to finalizing a product -- a type of market research effort to better target new products to consumers.
PC gaming has always been a niche market within the overall gaming market. Has PC gaming been much more than a place for crappy console ports in recent years? It may take a shift in the market like this to differentiate PC gaming from consoles again. If something doesn't change, PC gaming is going to die anyway. Turning console graphics up to 10.5 on a lazy port is not going to keep people spending increasingly large amounts of money on gaming PCs.
PC gaming has been more of an incubator that unpredictably strikes gold than a niche. True, PC gaming has been saddled with mostly low-effort ports from the big-business, metrics-driven, monetization-powered AAA studios and publishers. But I think it's better to characterize the PC platform as riskier and more varied rather than as a niche that's going to die off. We will see the next Doom, Unreal, Starcraft, Half-Life, WoW, Crysis, LoL, Minecraft, or similar big hit at some point -- it's only a matter of time.
 
The upward power creep on GPUs is getting pretty ridiculous but isn't all that surprising. I bought a 750W Seasonic Platinum-rated PSU just last year for my 5800X-based build, which includes a 3080. I'm starting to think I should have given myself a bit more headroom with it, but being required to use in excess of 1000W for a top-end rig is seriously flawed. In this day and age of doing more with less, this isn't the right direction to go.
 
The upward power creep on GPUs is getting pretty ridiculous but isn't all that surprising. I bought a 750W Seasonic Platinum-rated PSU just last year for my 5800X-based build, which includes a 3080. I'm starting to think I should have given myself a bit more headroom with it, but being required to use in excess of 1000W for a top-end rig is seriously flawed. In this day and age of doing more with less, this isn't the right direction to go.
Same here. I thought 650W would do me good for a long time; suffice it to say, with the knowledge of today I would've gone for 850W minimum.
 
The way I have always bought power supplies is to get one rated 50% higher than the maximum load of the entire computer.
It's cheaper to buy something like an EVGA 1000W GT 80 Plus Gold than any 850W 80 Plus Platinum, with the only difference being an efficiency curve that's worse by 2-3%. And the efficiency when drawing 600W on an 850W Platinum (about 70% load) vs. 600W on a 1000W Gold (60% load) is probably very similar.
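To make both rules of thumb concrete (a tiny sketch; the efficiency percentages are my own ballpark assumptions, not measured figures for those specific units):

```python
# Quick sketch of the two rules of thumb above. Efficiency values are assumed
# ballpark figures for Platinum vs. Gold at those loads, not measured data.
def recommended_psu(max_system_load_w, headroom=0.5):
    """Size the PSU ~50% above the system's maximum load."""
    return max_system_load_w * (1 + headroom)

def wall_draw(dc_load_w, efficiency):
    """AC power pulled from the wall for a given DC load and efficiency."""
    return dc_load_w / efficiency

print(recommended_psu(600))       # 600 W system -> ~900 W PSU
print(wall_draw(600, 0.92))       # 850 W Platinum at ~70% load (assumed ~92%) -> ~652 W
print(wall_draw(600, 0.90))       # 1000 W Gold at ~60% load (assumed ~90%) -> ~667 W
# The difference at the wall is only ~15 W, i.e. the 2-3% efficiency gap above.
```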
 
Going backwards with power draw. That being said, I hope the 4090 needs a 2000W PSU that requires a dedicated 220V plug. 💫