CPU or GPU for rendering?

xdacca

Honorable
Dec 14, 2013
Hello,
I'm searching for a new notebook, up to 700 EUR, and I have a dilemma about which one to buy. I have a few questions:
1st: Is it more important to have more CPU cores or a better GPU for faster rendering in ArchiCAD, Artlantis, Lumion, etc.?
2nd: Some i7 CPUs come with only 2 cores and others with 4 cores. Is there a significant difference?
3rd: How important is the number of threads? I saw AMD CPUs with 4 cores and 4 threads, unlike the i7, which has 4 cores and 8 threads.

Thank you very much!
 

lp231

Splendid
The CPU does most of the rendering, so get a notebook with the best and fastest CPU you can. Grab a Core i7, and if the budget allows, find one that also has a dedicated video card.
 

Kenny Sau Fung

Honorable
Jan 21, 2014
1st: The CPU generally matters more than the GPU for rendering (this includes ArchiCAD). However, some applications like 3ds Max and SolidWorks can also render on the GPU (typically workstation GPUs such as Quadro or FirePro), and I found that Lumion renders on gaming GPUs (where a GeForce Titan renders faster than a Quadro K5000).

2nd: Yes, the core count makes a significant difference. More cores generally mean faster rendering: 4 cores can do roughly twice the work of 2 cores in the same time. However, some applications only use a single core, so the number of cores won't help them.
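That scaling with cores can be sketched in Python. This is a toy illustration, not any renderer's real code, and names like `render_tile` are made up: a tile-based renderer splits the image into independent tiles and hands them to a pool of workers, which is why core count translates so directly into speed.

```python
from concurrent.futures import ThreadPoolExecutor

def render_tile(tile_id):
    # Stand-in for the CPU work of rendering one tile of an image.
    return sum(i * i for i in range(1_000)) + tile_id

def render_scene(num_tiles, workers):
    # With N workers (ideally one per core), up to N tiles progress
    # at once. Real renderers use native threads or processes so the
    # work actually lands on separate cores.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_tile, range(num_tiles)))

tiles = render_scene(num_tiles=8, workers=4)
print(len(tiles))  # 8
```

Doubling the workers (up to the core count) roughly halves the wall time for this kind of embarrassingly parallel workload.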

3rd: Yes, the number of threads matters much like the number of cores. 4 cores / 8 threads has twice the power of 4 cores / 4 threads, so people usually recommend an i7 rather than an AMD laptop CPU. Even the latest AMD laptop CPUs, such as the A10-5750M, are only roughly equivalent to a high-end Intel i5.
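A quick way to see the cores-vs-threads distinction on your own machine is Python's standard library, which reports the logical count:

```python
import os

# os.cpu_count() reports *logical* CPUs (hardware threads), not
# physical cores: a 4-core i7 with Hyper-Threading reports 8,
# while a 4-core / 4-thread CPU reports 4.
logical_cpus = os.cpu_count()
print(logical_cpus)
```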

(Sorry if I posted something wrong. Correct me if I'm wrong. Thanks. Peace~)
 

fjan

Reputable
Apr 1, 2014
Kenny Sau Fung summarized it well. I have a few more technical details to add, if you're interested:

1: CPU-based rendering/computing is the traditional way to compute things. CPUs support a wide range of instructions, including vector instructions (so each core can compute a set of values in parallel), and CPU cores can handle arbitrary loops and conditions. Compared to that, GPUs are much "dumber": they handle branching poorly, which rules out many kinds of loops, and if they run out of tasks they can't proceed to the next step — but they have massively more cores than a CPU.
The best example is image processing or a neural network: say you have one million copies of the same task (one per pixel or node/synapse), and each task takes only 20-100 CPU cycles. On a CPU that's a piece of cake, and it can even do 2-4 (or even 8, with AVX2 instructions) of them in parallel, though each batch pays a load latency. So you can do [number of cores] * [vector width] tasks at once. Say 4 cores and 4 doubles (64-bit) per vector: 16 tasks done every 50-150 CPU cycles.
On a GPU core you can do only 1 operation at a time (talking about 64-bit doubles or 32-64-bit pixels), but you have 32 to 480 cores (depending on the card) doing it. On top of that, if each task touches a lot of memory: a load from RAM to the CPU costs 300-500 CPU cycles, while a load from graphics RAM to the GPU costs much less.
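The back-of-the-envelope arithmetic above can be made explicit. These are the illustrative figures from the post, not measured numbers, and the model ignores clock speed and memory latency entirely:

```python
def parallel_lanes(cores, lanes_per_core):
    # Rough count of 64-bit values processed per "step", ignoring
    # clock speed and memory latency (a big simplification).
    return cores * lanes_per_core

# CPU: 4 cores, 4 doubles per 256-bit vector register.
cpu_lanes = parallel_lanes(4, 4)    # 16
# GPU: e.g. 480 simple cores, one value each.
gpu_lanes = parallel_lanes(480, 1)  # 480

print(gpu_lanes // cpu_lanes)  # 30
```

Even with the GPU's lower clock, a 30x advantage in raw lanes is why massively parallel workloads favor it.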
So even if current rendering engines don't support GPU/OpenCL rendering, they will within 1-3 years, and renderers that support it will be at least an order of magnitude faster. It's just a relatively young parallelization approach, and complex software is hard to adapt or needs fundamental structural changes.

2: As long as some renderers don't support the GPU, you need more CPU power. That means: more cores, higher clock, more cache, better vector instructions, better memory bandwidth (faster RAM/QPI) — in that order of priority. I'd recommend a 4th-gen i7, as it has AVX2 (256-bit wide vector support). Also watch out: an xxxxU (ultrabook / ultra-low-power) CPU runs at half, or even less, the clock speed of an xxxxM (standard mobile) CPU.
Alternatively, you can simply hope that GPU/OpenCL support for renderers arrives in the near future, and instead buy a laptop with a good dedicated graphics card.

3: HT (Hyper-Threading, i.e. 4 cores / 8 threads) does NOT mean double performance. It just means one CPU core has two instruction pipelines: if one thread has to wait for data from RAM (300-500 CPU cycles), the other thread becomes active, so the CPU still has something to do.
An operating system with a graphical front end (Windows, OS X, Linux with X) already runs around 200-800 threads. They are mostly idle but still need some CPU time, so your application will be regularly interrupted to service them; each interruption costs from a few hundred to a few thousand CPU cycles and can happen hundreds of times per second.
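The latency-hiding idea behind Hyper-Threading can be mimicked in software: while one worker is stalled waiting, another makes progress. A toy sketch, where `time.sleep` stands in for a memory stall (this is only an analogy — HT does this in hardware within one core):

```python
import threading
import time

def stalled_task(results, idx):
    # time.sleep stands in for a thread stalled on a memory load.
    time.sleep(0.2)
    results[idx] = idx * idx

results = [None] * 2
threads = [threading.Thread(target=stalled_task, args=(results, i))
           for i in range(2)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The two stalls overlap, so total wall time is ~0.2 s, not ~0.4 s.
print(results)  # [0, 1]
```

The gain comes entirely from overlapping the waits; when both workers are compute-bound with no stalls, there is little to hide — which is why HT falls well short of doubling performance.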

Also, just to clarify: a 3 GHz processor executes roughly 1-3 billion CPU cycles per second, depending on idle states. Most CPU instructions take 1 or 2 cycles, but vector instructions can take 5-10 times more, so they aren't quite as "fast" as they seem...
A memory load takes roughly constant time regardless of the CPU clock, so on a low-clock CPU a memory load costs far fewer cycles than on a high-clock one. Still, in most cases the memory load is the bottleneck in today's computing. That's also one of the reasons GPU/OpenCL-based computation is getting more and more attention: GPUs have dedicated graphics RAM, which is faster, less fragmented, doesn't need memory-access protection, and doesn't pollute the CPU cache.
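The point about memory loads being constant in time (not in cycles) comes down to simple arithmetic; the 100 ns latency below is an assumed round figure for illustration:

```python
def stall_cycles(clock_ghz, mem_latency_ns):
    # Memory latency is roughly constant in *time*, so a faster clock
    # wastes more *cycles* per stall: cycles = GHz * ns.
    return clock_ghz * mem_latency_ns

print(stall_cycles(3.0, 100))  # 300.0 cycles lost per stall at 3 GHz
print(stall_cycles(1.5, 100))  # 150.0 cycles lost per stall at 1.5 GHz
```

So a higher clock buys nothing during a stall — it only makes the stall look longer in cycles, which is why memory bandwidth and latency, not clock speed, are often the real bottleneck.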

regards,
jan
 

SeriousNND

Prominent
Apr 15, 2017
It depends on the program you work in. Adobe Premiere Pro and After Effects are heavily GPU-optimized, so choose the GPU (an external one, or the best in-laptop GPU you can afford).
For Sony Vegas and other CPU-based renderers, choose the CPU.
If you have an Nvidia card with CUDA, get StaxRip: it uses all the CUDA cores of your GPU to compress/convert your video with maximum speed and compression.
 
