News: 16GB RTX 3070 Mod Shows Impressive Performance Gains

The entire 8GB issue is a distraction from the real problem, which is Nvidia's marketing and market-segmentation policy; the memory itself is a complete non-issue. There is already a 3070 Ti with 16GB of VRAM, and its name is the A4000, as Hardware Unboxed uncovered.

It seems like a massive waste of time to have to resolder 16GB onto what is effectively a crippled A4000 just for the sake of market segmentation. Has anyone tested professional AI workflows with Microsoft's DeepSpeed on dual/triple/quad consumer cards? That would be worth benchmarking as well. Good luck blocking that one out, Nvidia.
 
If you're not on an 8K monitor, then your system is technically incapable of benefiting from what we currently call "ultra" (4096x4096) textures. I'm not just saying that the difference isn't a big deal; I'm saying that your typical gaming setup is not capable of displaying better visuals when you switch from "High" (2048x2048) to "Ultra".
You simply don't have enough pixels to see the full resolution of a single texture, let alone the dozens or hundreds of textures on screen at any given time. A 3840x2160 monitor is less than half the resolution of a single ultra texture. And at 1440p? Forget about it.
These unoptimized textures are also why games are wasting so much hard drive space. It's really easy for a game developer to just throw in ultra-high-res assets, but it's a waste of resources. Even at 4K with medium (1024x1024) textures, it's rare in most games to have a single texture so large on screen that your system would have a chance of benefiting from the extra pixels. Even then, it's usually not that noticeable a difference.

It used to be that medium textures meant 512x512 or lower, which could be a very noticeable step down at 4K... but that's not how most game developers label their settings right now.

Jumping from High (2048x2048) to Ultra textures is a 4x hit to memory, but there's usually no visual benefit whatsoever in any realistic scenario.
Lately, I hear a lot of people complaining that a given card can't even play at "1080p max settings" with 8GB of memory, but maybe they should rethink the goal of using textures that have 8x more pixels than their monitor.
As the saying goes, "Ultra is for benchmarking, but High is for playing."
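To put rough numbers on that, here's a quick back-of-the-envelope sketch (assuming uncompressed RGBA at 4 bytes per pixel and ignoring mipmaps and block compression, which change the absolute sizes but not the 4x ratio between steps):

```python
# Rough texture-memory math: uncompressed RGBA, 4 bytes per pixel, no mipmaps.
# Real games use block compression, but each settings step is still a 4x jump.
BYTES_PER_PIXEL = 4

def texture_mib(side):
    """Memory footprint of one square texture, in MiB."""
    return side * side * BYTES_PER_PIXEL / 2**20

for name, side in [("Medium", 1024), ("High", 2048), ("Ultra", 4096)]:
    print(f"{name:>6} ({side}x{side}): {texture_mib(side):5.1f} MiB per texture")

# Compare against how many pixels the display can even show at once.
screen_px = 3840 * 2160   # pixels on a 4K monitor
ultra_px = 4096 * 4096    # pixels in one "Ultra" texture
print(f"4K screen vs one Ultra texture: {screen_px / ultra_px:.2f}x")
```

Each step up quadruples the memory per texture, and a whole 4K screen has only about half as many pixels as a single Ultra texture.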
 
The entire 8GB issue is a distraction from the real problem, which is Nvidia's marketing and market-segmentation policy; the memory itself is a complete non-issue. There is already a 3070 Ti with 16GB of VRAM, and its name is the A4000, as Hardware Unboxed uncovered.

It seems like a massive waste of time to have to resolder 16GB onto what is effectively a crippled A4000 just for the sake of market segmentation. Has anyone tested professional AI workflows with Microsoft's DeepSpeed on dual/triple/quad consumer cards? That would be worth benchmarking as well. Good luck blocking that one out, Nvidia.
Compute cards need gigantic amounts of memory for the workloads they are expected to perform. I have a FirePro W8100, which is basically an underclocked R9 290 with 8GB of VRAM (the R9 290 only has 4GB) that draws less power. That memory is useless for gaming, though: in my own testing, whenever I turned the settings up far enough to actually use all of that VRAM, the GPU just couldn't push pixels fast enough for a playable experience.
 
This modded RTX 3070 uses 12.43 GB of VRAM at 2560x1080 resolution, which does not bode well for the new 12GB RTX 4070 Ti. Wake up, Nvidia. By the way, where are the double-VRAM cards we used to get, like the GTX 770 4GB?
I think those cards were only released periodically, when VRAM prices dropped enough that adding more memory wasn't a financial hardship for Nvidia. The problem with those cards is that they may not be fast enough to actually use the extra memory without killing your frame rate. I remember this being an issue with the GTX 950 after a 4GB version was released. Yes, you could turn on more bells and whistles, but the GPU couldn't handle them without slowing down to the point of unplayability, which made the extra memory worthless. It was just something Nvidia could claim as a selling point when it actually added no real value.
 
Nvidia has been running this VRAM gimping scam for a long time. They did it last gen with the 780 and 970/980 cards only having 3-4GB when the consoles had 8GB. I can't believe people are just now catching on to it. You always need at least as much VRAM as the consoles have total RAM for a consistently good experience because PC games are less optimized. Right now that's 16GB. Last gen it was 8GB. Gen before that it was 512MB.

If you buy anything with less than 16GB right now you are throwing your money away. The 8-12GB cards are going to be junk when the AAA heavy hitters from the PS5 and XSX come out in the near future.

If you can't afford a 16-24GB card you would be better off getting a console.

Tech reporters are complicit in this SCAM. If you even mention it here, you are likely to get censored or straight-up banned. They know the cards don't have enough RAM and still recommend them. They should be telling everyone to hold off until there are 16GB cards that perform well for under $400.
Which is pretty ironic, because I remember back in the day they tried to sell very weak chips with very low memory bandwidth but very large amounts of RAM, sometimes more than the xx70-series cards and mid-range GT cards had.

For example, a high-end card would have 8GB while a "mid tier" card had 4GB.
Then somehow they would release a weaker GPU with less memory bandwidth but with 6GB or 8GB of VRAM to entice buyers with "more is better".

I guess either memory is just that expensive now, or Nvidia is cheap as hell.
 
GPUs use much wider buses, 192-bit, 256-bit and beyond, so imagine putting that on a removable interface for GDDR. Keep in mind that RAM DIMMs are ~128 pins (for dual 64-bit operation; keeping it simple). The connector would increase the BOM cost even further, and volumes would be too low as well.
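To put some rough numbers behind that (a quick sketch; the 16 Gbps per-pin rate is just a typical GDDR6 figure, not tied to any particular card):

```python
# Peak memory bandwidth = (bus width in bits / 8 bits per byte) * per-pin data rate.
# 16 Gbps is a typical GDDR6 per-pin rate; actual cards vary.
def peak_bandwidth_gbs(bus_bits, gbps_per_pin=16.0):
    return bus_bits / 8 * gbps_per_pin

for bus in (128, 192, 256, 384):
    print(f"{bus:3d}-bit bus: {bus} data lines, ~{peak_bandwidth_gbs(bus):.0f} GB/s peak")
```

Every one of those data lines has to hold signal integrity at those rates, which is why the chips end up soldered right next to the GPU rather than sitting behind a connector.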

Regards.
It doesn't have to be a DIMM/SO-DIMM; many years ago GPUs had a mix of soldered and removable RAM, or a few empty sockets where you added DRAM chips yourself.

[Image: Trident 8900D graphics card with empty DRAM sockets]

So it's not like it would be an impossible task... you don't need a DIMM, but GDDR6 chips are just too small to handle individually, so some form factor holding multiple ICs would be needed.
 
It doesn't have to be a DIMM/SO-DIMM; many years ago GPUs had a mix of soldered and removable RAM, or a few empty sockets where you added DRAM chips yourself.

[Image: Trident 8900D graphics card with empty DRAM sockets]

So it's not like it would be an impossible task... you don't need a DIMM, but GDDR6 chips are just too small to handle individually, so some form factor holding multiple ICs would be needed.
I never said they had to be. I just called it a "removable interface". I used the DIMM example to show the size of an interface with only ~128 pins. I do know about the old way of putting more memory into video devices, but those were nowhere near the complexity of modern memory modules. The electrical-integrity requirements alone would make a removable interface so expensive that no one would pay for it anyway.

Regards.
 
Does anyone know of a similar mod for other 30xx cards?
I have a 3080 Ti with 12GB of memory and, although it is "fine", I would be interested in playing with it. A 24GB mod, maybe?
I am not sure if it is even possible:
 
Does anyone know of a similar mod for other 30xx cards?
I have a 3080 Ti with 12GB of memory and, although it is "fine", I would be interested in playing with it. A 24GB mod, maybe?
I am not sure if it is even possible:
Certainly possible, but not worth the cost of having someone do it or of getting the equipment to do it yourself.

The RTX 3080 12GB and RTX 3080 Ti are just crippled RTX 3090 Tis. Double the memory capacity, modify the vBIOS, and it should be good to go. If you can find a dead 3090 Ti (or RTX A5000 or A5500) that has all good memory, that would be a good card to harvest from. The RTX 3090 has the same size memory chips as the 3080 and 3080 Ti, just on the front and back.

If you can source the memory directly from Micron, Samsung, or Hynix, you could do it that way.

The work involves de-soldering the memory chips, cleaning up and prepping the board, cleaning up and prepping the memory chips, re-balling, re-soldering, then testing and confirming everything works. Not sure how to price the software work, but someone like Krisfix would charge something like 400 euros for the chip swap.
 
The RTX 3090 has the same size memory chips as the 3080 and 3080 Ti, just on the front and back.
Not true. The 3080 Ti and 3090 Ti have the same number of memory chips, 12, with double-capacity chips on the 3090 Ti. The non-Ti variants have just 10 pads populated.
From the micron link above:
Codes for the 8Gb memory chips are D8BWW (19 Gbps) and D8BGX (21 Gbps).
Codes for the 16Gb memory chips are D8BZC (21 Gbps) and D8BZF (24 Gbps).

3090Ti has D8BZC.

[Image: EVGA RTX 3090 Ti PCB]
[Image: RTX 3090 Ti reference PCB]

3080Ti has D8BWW:
[Image: RTX 3080 Ti PCB]


This 3080 is missing 2 chips, making a total of 10 D8BGX.
[Image: RTX 3080 PCB, front side]

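For anyone who wants to sanity-check those capacities, it's just chip count times chip density (a quick sketch using the configurations listed above):

```python
# VRAM capacity = number of chips * density per chip (in gigabits) / 8 bits per byte.
def vram_gb(chips, density_gbit):
    return chips * density_gbit / 8

configs = {
    "RTX 3080 (10 x 8Gb)":     (10, 8),
    "RTX 3080 Ti (12 x 8Gb)":  (12, 8),
    "RTX 3090 Ti (12 x 16Gb)": (12, 16),
}
for name, (chips, density) in configs.items():
    print(f"{name}: {vram_gb(chips, density):.0f} GB")
```

That works out to 10 GB, 12 GB, and 24 GB respectively.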
 
Jedi Survivor can use 18GB of VRAM at 1440p, so 16GB isn't looking much better than 8GB. The future is here, it seems.
Games need to be reasonable or no one will be able to play them. It feels like they're all racing to be the next Crysis.
 
Jedi Survivor can use 18GB of VRAM at 1440p, so 16GB isn't looking much better than 8GB. The future is here, it seems.
Games need to be reasonable or no one will be able to play them. It feels like they're all racing to be the next Crysis.
I've been reading a bit on it and, leaving the game-breaking bugs aside, I think that is a bit bonkers XD

I wonder if they are trying to put everything in the VRAM buffer and duplicate it in RAM so they don't need to optimize?

Regards.
 
I wonder how it's going to play on the Series X console.
 
It is just what is allocated, so actual usage is probably lower. The game only uses 4 cores, so it bottlenecks on almost every CPU right now.
That's definitely a problem from an optimization perspective, but look at how it looks... like, holy cow. The number of polygons on screen at any given time, plus the fact that most of those polygons are breakable by your lightsaber, and all that.

I think 18GB is a lot, but lookie look: both VRAM and RAM are at 18GB. That's a good hint, but just keep in mind that's allocated, not used. So... there are some optimizations they can do exclusively for PC, but that's the new reality for big AAA titles, and we have to get used to the idea that 32GB of RAM, and maybe VRAM, is something we'll need in the upcoming years.
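If anyone wants to watch those numbers themselves, here's a minimal sketch using the pynvml bindings (this assumes an Nvidia card with recent drivers; note it reports memory reserved on the GPU, not what the game actively touches each frame):

```python
# Snapshot of GPU memory via NVML (pip install nvidia-ml-py).
# Reports memory reserved on the device, not what a game actively uses per frame.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
    info = pynvml.nvmlDeviceGetMemoryInfo(handle)
    gib = 2**30
    print(f"VRAM used : {info.used / gib:6.2f} GiB")
    print(f"VRAM total: {info.total / gib:6.2f} GiB")
finally:
    pynvml.nvmlShutdown()
```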

Like, really, I watched IGN's review and the game just looks amazing, and they've added a ton more to it, which somewhat explains why it would need way more RAM; I'm not so sure about needing that much VRAM, but 16GB for this one definitely seems plausible. Did you see the desert world map and how extensively it was drawn? Like dang... This game looks close to movie quality.

Regards.
 
My Internet is dead, so I am on my phone until it is fixed, which means I haven't looked at the game; the screen is too small.
32GB of RAM is what I have already, so that's not a big leap.
Indeed.

To use my own history as a reference: in 2012 I went with 16GB when every outlet said "8GB is fine", and in 2019 I went with 32GB when all the outlets were still saying "16GB is fine". Looks like we're slowly moving into "32GB is fine" territory, so maybe we need to recommend 64GB going forward? 😀

Regards XD
 
The entire 8GB issue is a distraction from the real problem, which is Nvidia's marketing and market-segmentation policy; the memory itself is a complete non-issue. There is already a 3070 Ti with 16GB of VRAM, and its name is the A4000, as Hardware Unboxed uncovered.

It seems like a massive waste of time to have to resolder 16GB onto what is effectively a crippled A4000 just for the sake of market segmentation. Has anyone tested professional AI workflows with Microsoft's DeepSpeed on dual/triple/quad consumer cards? That would be worth benchmarking as well. Good luck blocking that one out, Nvidia.
The A4000 is meant for productivity, not gaming. You CAN play games on it, but you're not really getting your money's worth.
Calling it a 3070 Ti is a joke, too: it's not faster than a 3070, it's 6% slower! (Source: TechPowerUp)