Question Graphics card with more vRAM for AI Art Generation ?

Status
Not open for further replies.

epicevan

Honorable
Feb 10, 2017
40
5
10,545
I currently have a 2060 Super, a really nice GPU, but I want one with a bit more VRAM for art generation — embedding training in particular.

I have a $1,000 budget and I'm looking for quiet GPUs (models with three fans, etc.).
Right now I'm considering a 4070 Ti 12GB @ $800 or an RTX 3060 12GB @ $360.

My turn-off with the 4070 Ti, though, is its power consumption. I want a power-efficient and quiet card, which is why I'm leaning towards the RTX 3060.

My motherboard is a GIGABYTE B550 AORUS (AM4), my CPU is a Ryzen 7 3700X 8-core, and my PSU is 850W.

I'm also looking to still be able to play new games like Hogwarts Legacy, Starfield, and Diablo 4.

Maybe someone who knows what they're doing can lead me in the right direction and tell me whether I'm making a mistake.

Also, how significant do you think the difference in performance would be between the 4070 Ti 12GB and the RTX 3060 12GB?
 

vishvajit

BANNED
Jan 21, 2023
17
7
15
The RTX 3060 12GB would be a better choice for you, as it's both power efficient and quiet, with a reasonable price. In terms of performance, the 4070 Ti would likely perform better in games and art-related tasks, but at the cost of higher power consumption and noise.
 

Firestone

Distinguished
Jul 11, 2015
99
18
18,535
If you are on a budget then yes, the RTX 3060 12GB is fantastic. I used it with Stable Diffusion for a bit (https://github.com/AUTOMATIC1111/stable-diffusion-webui) and it worked great. Rendering times were not the best (approx. 30-90s per 6-image batch), but it was perfectly usable.

IMO, you should rank your choices by VRAM capacity. If you are comparing e.g. an RTX 4070 12GB for $800 vs. an RTX 3060 12GB for $350, there is absolutely no point in getting the 4070 — just get the 3060.

If you are gonna spend more money, and your primary goal is AI art generation, then you need to be spending it on more VRAM.

If you are willing to spend $800, then go on eBay and dig up an RTX 3090 24GB in nice condition. That will probably be all you'll ever need.

I used an RTX A4500 20GB for a while, before RTX 3090 prices started falling, and the A4500 was also great — firmly in the middle between the RTX 3060 12GB and the 3090 24GB. Not sure if there's an RTX 3070 model with enough VRAM to be analogous. Now I am using a 3090 and it's blazing fast. But with Stable Diffusion alone, I am nowhere near using its full VRAM capacity (I think I max out around 14GB on my largest batches).

If your PSU is 850W, you should be able to handle a single RTX 3090, and a 3060 would of course be trivial for your build to support.
 

Firestone

Distinguished
Jul 11, 2015
99
18
18,535
I'm looking for quiet GPUs
Something else worth pointing out: you can reduce noise on your GPU during usage by simply imposing a power limit at a level that limits heat output (and thus the fan speeds required).

This is very easy to do with Nvidia's built-in command-line tool `nvidia-smi`, and can also be accomplished, a little more jankily, using e.g. MSI Afterburner.
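As a quick sketch of what that looks like with `nvidia-smi` (the 250W figure below is just an illustration — check the supported min/max range your card actually reports before picking a number):

```shell
# Query the current, default, and min/max enforceable power limits
nvidia-smi -q -d POWER

# Cap board power at 250 W (needs root; the value must be within
# the range reported above). Illustrative number only.
sudo nvidia-smi -pl 250

# The limit resets on reboot unless persistence mode is enabled:
sudo nvidia-smi -pm 1
```

You can run these from a startup script so the limit is reapplied automatically.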

For AI art generation you really aren't gonna care as much how "fast" your GPU is, as long as it has enough capacity and completes within an order of magnitude of acceptable speed, so it's not really a big deal to tune down the performance if it makes the card more tolerable to your ears (lol).

That said, I only had to apply these settings to the RTX 3090; my RTX 3060 12GB and A4500 were both pretty much silent, or at least inaudible, even at max load. And if your PC really is that loud, just put on some headphones or move it further from your face; I got a lot of "noise reduction" by repositioning it on a shelf below my desk.
 

jahu00

Reputable
Nov 22, 2019
36
8
4,535
The RX 6800 and up do come with 16GB of VRAM and can do Stable Diffusion, but you have to jump through some hoops to get the most out of it. NVIDIA is definitely easier to use for AI and has better performance.

On Linux, with the right software installed, the RX 6800 can do about 6 it/s and the RX 6950 somewhere near 10 it/s. For now, Windows is much less usable (with AMD), as models can't be loaded in half precision, which makes them use twice as much VRAM and possibly run twice as slowly.
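To put a rough number on the half-precision point, here is a back-of-the-envelope sketch (the ~860M parameter count is an assumed round figure for a Stable Diffusion 1.x UNet, and this counts weights only — activations and intermediate buffers add more on top):

```python
# Bytes per parameter at each precision
BYTES_FP32 = 4  # full precision (what you're stuck with on Windows+AMD)
BYTES_FP16 = 2  # half precision

n_params = 860_000_000  # assumed rough size of an SD 1.x UNet

fp32_gib = n_params * BYTES_FP32 / 2**30
fp16_gib = n_params * BYTES_FP16 / 2**30

print(f"full precision: {fp32_gib:.2f} GiB")  # -> 3.20 GiB
print(f"half precision: {fp16_gib:.2f} GiB")  # -> 1.60 GiB
```

Exactly 2x for the weights, which is why losing fp16 support hurts so much on cards that were already tight on VRAM.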

The amount of VRAM needed depends on your workflow, but for generating game assets and the like with ControlNet, and sometimes at resolutions higher than 512, I was running out of VRAM on a 10GB card. For Stable Diffusion I would aim for at least 16GB of VRAM. NVIDIA, while better for AI, is quite pricey when it comes to VRAM.

I also heard that the Arc A770 16GB can do about 8 it/s, but I have no idea what hoops you have to jump through to get that performance (Windows or Linux? what app and software stack? half precision or full?). The one thing Arc has going for it is price: it's possibly the cheapest 16GB card on the market.

Personally, I jumped on the 7900 XTX bandwagon, but AI support is terrible so far. A CUDA equivalent was recently released for the 7000 series (though not officially supported), but I just can't get it to work (recompiling PyTorch is a pain). Hopefully this will get sorted out soon enough. With a bit of luck, I might be able to run 30B LLM models on this card (an 18GB model, I think).
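As a sanity check on that 30B figure, a quick weights-only estimate (a sketch — real quantized files add overhead for quantization scales, and inference also needs room for the KV cache):

```python
def weight_gib(n_params: float, bits_per_param: float) -> float:
    """Rough VRAM for the weights alone, ignoring cache/overhead."""
    return n_params * bits_per_param / 8 / 2**30

n = 30e9  # 30B parameters
for bits in (16, 8, 4):
    print(f"{bits:2d}-bit: {weight_gib(n, bits):5.1f} GiB")
# 16-bit:  55.9 GiB
#  8-bit:  27.9 GiB
#  4-bit:  14.0 GiB
```

So a 30B model only fits a 20-24GB card at 4-bit (or maybe 5-bit) quantization, which lines up with that ~18GB model file.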
 

jahu00

Reputable
Nov 22, 2019
36
8
4,535
If the rumors are true and the 4060 Ti comes in a 16GB configuration (and the price is right), then it might be a good option for you. The question is: when is it coming out?

P.S. Finally got my XTX to do Stable Diffusion on Linux. Not as fast as on Windows using SHARK, but still way more usable. Maybe AMD will release a Windows version of their software stack eventually. It's not even fully optimized for the 7000 series yet.
 