Nvidia plans to radically lower the power consumption of its new-generation performance-mainstream part, a leaker says.
Leaker Suggests RTX 4060 Ti Has 160W TGP
"Nvidia makes stuff like this because they know there are suckers out there that will buy it."
That's like... business 101, no?
"how is a TDP of 160W on a card that probably draws over 250W of power 'power efficient'? What universe is this? I remember the Nvidia 970 and 1060 both drew so little power that you could almost get away with passively cooling them, as long as you weren't overclocking. But here we are calling a power draw in line with an 80-series GPU from just two generations ago 'power efficient'."
lol yep
"how is a TDP of 160W on a card that probably draws over 250W of power 'power efficient'? ..."
Compared to higher-tier GPUs, it is quite low. And sure, it may be too much for some. Personally, a GPU drawing around 200W at peak is some 130W more than my 1050 Ti drew, and that card didn't really carry a lot of newer games, let alone at 1440p. With an average of perhaps 10 hours of gaming a week, that's less than 2 kWh extra per week, or less than 10 kWh a month.
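A quick back-of-the-envelope check of that arithmetic, using the figures from the comment above (130W extra draw, roughly 10 hours of gaming a week):

```python
# Extra energy from a ~130 W higher GPU draw, per the comment's own figures.
extra_watts = 130        # additional peak draw vs. the old 1050 Ti (W)
hours_per_week = 10      # assumed gaming time per week

kwh_per_week = extra_watts * hours_per_week / 1000   # W x h -> kWh
kwh_per_month = kwh_per_week * 52 / 12               # average weeks per month

print(f"{kwh_per_week:.1f} kWh/week, {kwh_per_month:.1f} kWh/month")
# -> 1.3 kWh/week, ~5.6 kWh/month, consistent with "less than 2 kWh extra"
#    and "less than 10 kWh a month".
```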
"Compared to higher-tier GPUs, it is quite low."
160W TDP for an xx60-series card is not low at all.
"160W TDP for an xx60-series card is not low at all."
GTX 1070 = 150W TDP @ 6.42 TFLOPS ≈ 42.8 GFLOPS/W.
That's around what the GTX 1080 used, which had a TDP of 180W.
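For reference, the perf-per-watt math works out as below. The Pascal figures use Nvidia's reference boost-clock FP32 throughput; the 4060 Ti line assumes the rumored ~22 TFLOPS FP32 at the leaked 160W TGP, which is not confirmed.

```python
# FP32 throughput per watt for a few cards. The Pascal entries use reference
# boost clocks; the 4060 Ti entry assumes rumored, unconfirmed specs.
cards = {
    #                        (FP32 TFLOPS, board power in W)
    "GTX 1070":              (6.46, 150),
    "GTX 1080":              (8.87, 180),
    "RTX 4060 Ti (rumored)": (22.0, 160),
}

for name, (tflops, watts) in cards.items():
    print(f"{name}: {tflops * 1000 / watts:.0f} GFLOPS/W")
# GTX 1070: 43 GFLOPS/W
# GTX 1080: 49 GFLOPS/W
# RTX 4060 Ti (rumored): 138 GFLOPS/W
```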
The amount of power GPUs have been using over the last two generations is out of control.
And Nvidia and AMD dare to call their new GPUs efficient.
"Meanwhile, the new product will also feature a 36% lower memory bandwidth than its direct predecessor, the GeForce RTX 3060 Ti with GDDR6 memory. Of course, we would expect 32MB of L2 cache to somewhat reduce bandwidth limitations, but only tests will reveal whether this is the case."
As an arch-to-arch comparison: the 4070 Ti has effectively half the memory bus width and bandwidth of the 3080, yet it performs basically the same or better. Ada is clearly a dramatically less bandwidth-constrained architecture than Ampere for client workloads.
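For what it's worth, the article's 36% figure lines up with the rumored memory configuration. A minimal sketch, assuming the leaked 128-bit bus with 18 Gbps GDDR6 (unconfirmed) against the 3060 Ti's shipping 256-bit bus at 14 Gbps:

```python
# Peak memory bandwidth = bus width (bits) / 8 * data rate (Gbps) -> GB/s.
# The 3060 Ti figures are the shipping GDDR6 spec; the 4060 Ti figures assume
# the rumored 128-bit bus with 18 Gbps GDDR6, which is not confirmed.
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

rtx_3060_ti = bandwidth_gbs(256, 14)   # 448 GB/s
rtx_4060_ti = bandwidth_gbs(128, 18)   # 288 GB/s (rumored)

drop = (1 - rtx_4060_ti / rtx_3060_ti) * 100
print(f"{rtx_3060_ti:.0f} GB/s -> {rtx_4060_ti:.0f} GB/s ({drop:.0f}% lower)")
# -> 448 GB/s -> 288 GB/s (36% lower)
```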
"All run in the range of 1984-2025"
Wait. What? A 1070 at that core clock?
"Wait. What? A 1070 at that core clock?"
My EVGA GTX 1080 SC boosted to 2012 MHz by default (reference board). I managed to get that to about 2100 MHz.
"Nvidia makes stuff like this because they know there are suckers out there that will buy it."
Don't hate the player, hate the game. They wouldn't do it if people were smart enough to do 10 mins of research.
"lol yep"
This is completely a matter of perspective. If you downclocked a 4080 to match the performance of a 970 or a 1060, I would bet you would be far under that 160 watts. Remember, that 7-watt LED isn't putting out 10,000 more lumens than its siblings did; it's putting out the same 600 or so.
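Putting rough numbers on that analogy (typical bulb figures, assumed for illustration rather than taken from the thread): the point is output per watt, not the wattage itself.

```python
# Lumens per watt behind the LED analogy above. Typical figures, assumed
# for illustration: a ~600 lm LED bulb vs. a 60 W incandescent equivalent.
led_lm, led_w = 600, 7
incandescent_lm, incandescent_w = 600, 60

print(f"LED: {led_lm / led_w:.0f} lm/W, "
      f"incandescent: {incandescent_lm / incandescent_w:.0f} lm/W")
# -> LED: 86 lm/W, incandescent: 10 lm/W
# Same light output at roughly an eighth of the power; efficiency is
# work done per watt, not the size of the wattage number.
```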
Just bought a lightbulb the other day. It uses 7 watts, like any other lightbulb nowadays. My sister reminded me to turn them off when I'm not in my room to save energy.
Meanwhile, this GPU will use 200+ watts and is called "efficient". Just lol. It would literally be the item using the most electricity in our household. Yes, our washing machine can use more, but it is only turned on for about 40 minutes twice a week. These companies live in an alternate universe where electricity is free and they haven't been paying attention to anything going on in the world.
"My EVGA GTX 1080 SC boosted to 2012 MHz by default (reference board). I managed to get that to about 2100 MHz."
No way! Wow. That is impressive.
"No way! Wow. That is impressive."
Pretty common to see boost numbers in the mid-1900 MHz range on lots of SKUs. Here is a good distribution map from 3DMark: 1925 and 1950 MHz representing stock cards, I imagine; a big dip going into 1975 MHz, presumably the highest-performing stock cards; and then the overclockers kick in, raising the numbers on the other side and trailing down to 2150 MHz or so.
"how is a TDP of 160W on a card that probably draws over 250W of power 'power efficient'?"
Where did this 250W number come from?