Question: Adobe Camera Raw and multiple cores

simbas

Distinguished
Oct 1, 2008
If comparing two CPUs, both the same speed, one 4 core and the other 8 core, how much faster would the 8 core convert the same number of RAW files? I’m guessing not twice as fast? Anyone ever done any tests?
Would an 8-core Ryzen 7 3800X or a 12-core beast like the 3900X be significantly faster for ACR conversions than an older 4-core 3.6GHz CPU?
Thanks!
 
Do you already have the software?
You can use Task Manager's CPU affinity setting to lock the task to only half your cores, which would answer your question.
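One way to run that comparison is to wall-clock the same batch export under different affinity masks. A minimal sketch in Python (the `export.bat` command in the comments is a placeholder for whatever batch conversion you script):

```python
import subprocess
import time

def time_run(cmd):
    """Wall-clock one external command, e.g. a scripted batch RAW export."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Hypothetical Windows usage: launch the same export pinned to half the
# cores vs. all of them, then compare the two timings. The /affinity
# switch of 'start' takes a hex core mask; /wait blocks until it exits.
#   time_run(["cmd", "/c", "start", "/wait", "/affinity", "3", "export.bat"])  # cores 0-1
#   time_run(["cmd", "/c", "start", "/wait", "/affinity", "F", "export.bat"])  # cores 0-3
```

If the second run isn't close to twice as fast, the workload isn't scaling linearly with cores.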
 

simbas

That’s a good idea! Yes, I already have Photoshop. I’ll restrict it to only two cores, convert 10 images, then try with all 4, time both runs, and see where I’m at. Fantastic, thanks! The only thing I’m not sure about is how well it can handle 8 or 12 cores. Anyone have any experience with that?
 
Raw processing is a task that lends itself extremely well to parallelization. It takes the 4 color samples (R, G, G, and B) the Bayer filter produces for each 2x2 group of pixels and combines them into 3 color values (R, G, and B) for each of those 4 pixels, over and over, millions of times. So more cores will normally speed this up roughly in proportion to the number of cores (assuming the cores are the same speed).
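As a rough illustration of why this parallelizes so well, here's a toy sketch assuming an RGGB mosaic and plain averaging (real converters use much smarter per-pixel interpolation, but the independence of the arithmetic is the point):

```python
import numpy as np

def naive_demosaic_rggb(raw):
    """Toy demosaic: collapse each 2x2 RGGB quad of a Bayer mosaic
    into a single RGB triple. Every quad is processed independently
    of its neighbors, which is exactly the kind of work that scales
    across many CPU cores or GPU shaders."""
    out = np.empty((raw.shape[0] // 2, raw.shape[1] // 2, 3), dtype=raw.dtype)
    out[..., 0] = raw[0::2, 0::2]                           # red sites
    out[..., 1] = (raw[0::2, 1::2] + raw[1::2, 0::2]) // 2  # average the two green sites
    out[..., 2] = raw[1::2, 1::2]                           # blue sites
    return out
```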

However, it parallelizes so well that Adobe actually supports doing it on the GPU instead of the CPU. Modern GPUs are essentially CPUs with very limited instruction sets but several hundred, or even thousands, of cores.

https://helpx.adobe.com/photoshop/kb/acr-gpu-faq.html

Put another way: if your installation of Photoshop is already using the GPU to process RAW files, then upgrading the CPU will make little difference in RAW processing speed.
 

simbas

Man, this isn’t as simple as I expected. Thanks for the link; I read it, and they say CPU speed also plays a role, but nothing about cores. Which makes sense: 4 cores added to the thousand already in the GPU don’t make much difference. If I remember correctly, I also read somewhere that the difference between a relatively low-budget GPU and a top one is no more than about 5-10%. And yes, I do have GPU acceleration turned on, which, as I understand it now, is kind of a bummer...

So, am I right to assume that any software programmed to use GPU acceleration can’t really benefit from more CPU cores thrown at it? On the other hand, would it help to go from a 3.6GHz to a 5.0GHz CPU?

And btw, thanks for the very clear explanation on how the raw data is being processed!
 

simbas

RAM, GPU, and which CPU also matter.
It is not a simple matter of 'more cores', nor of the raw GHz number.
That also makes sense. To be more specific, my current system is:

Asus P8P67 mobo, Intel i7 2600 CPU, 16GB of 1666MHz RAM, Samsung SATA SSD.

I am thinking about:
Asus X570 Prime (or similar) mobo (I’d like a front USB-C connector), AMD Ryzen 7 3800X, 16GB G.Skill Trident Z 3600MHz, Corsair MP600 1TB SSD.

Would this be a significant improvement over my current setup? I don’t expect a night-and-day difference, but the one thing I was really hoping for was the raw conversion speed increase which, as I’m finding out, won’t be happening.
 

USAFRet

Titan
Moderator
Mar 16, 2013
That would be a massive upgrade.

On my system upgrade a few years ago:
Going from an i5-3570k + 16GB RAM
to
i7-4790k + 32GB RAM (and better GPU)
Importing 100 RAW photos into Adobe Lightroom saw a 40% benefit. Actual timed results.

Going from your 2600 to a new Ryzen 7 3800x will probably see an even greater benefit.
 

simbas

Hehe, I badly needed to hear that. Thanks, man! I forgot to mention I’ll be keeping my current GTX 1060 GPU, but that shouldn’t make much of a difference. Anyway, I need to upgrade; if nothing else, my case is starting to fall apart.
 
Man, this isn’t as simple as I expected. Thanks for the link; I read it, and they say CPU speed also plays a role, but nothing about cores. Which makes sense: 4 cores added to the thousand already in the GPU don’t make much difference.
The bulk of the real work is creating full color data from the Bayer filter's partial color data. But the companies making raw processors also like to apply other tweaks (white balance and color space adjustment, luminosity curves, noise reduction, etc.). They're very secretive about exactly what they do, to try to maintain an edge over the competition. And the resulting full-color bitmap will need to be compressed before it's written to disk. Some of these tasks don't parallelize as well, so they're better off done on the CPU. So yes, it will be a combination of GPU and CPU performance (and memory and storage speed). I should have limited my statement about CPU speed not mattering to just the Bayer demosaicing stage.
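The "some stages don't parallelize" point is essentially Amdahl's law: the serial fraction of the pipeline caps the overall speedup no matter how many cores (or GPU shaders) you add. A quick back-of-the-envelope calculation, with made-up example fractions:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Overall speedup when only part of the job scales with core count."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# If, say, 90% of a raw conversion parallelizes (demosaicing) and 10%
# is effectively serial (encoding, disk I/O), 8 cores give ~4.7x, not 8x:
print(round(amdahl_speedup(0.9, 8), 2))  # 4.71
```

This is why doubling the cores rarely halves the export time.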

Anyway, bottom line is, you want a simple answer to your question, but it doesn't have a simple answer. :)
 

simbas

Yep, pretty complicated stuff. o_O

Well, the only simple solution is to upgrade, run one final test on my old system, and then do the same on the new one. I’ll be sure to post the results. Thank you all, I appreciate the help!
 

simbas

So, I have now upgraded my system from the 4-core Intel 2600 to the 8-core Ryzen 7 3800X. RAM was 16GB in both cases (DDR4 in the new PC), with the same GTX 1060 GPU in both, and the difference really is significant!

For example, I exported the same 20 RAW images on both. The old PC took 50 seconds and the new one roughly 15 seconds! What’s interesting is that on the old PC I also tried an ancient 8800 GT GPU and the export took 51 seconds, just one second more than with the GTX 1060! I also tried surface blur: the old PC took almost 15 seconds, while the new one did it almost instantly.

Now, I guess we have to take into account that the old PC wasn’t a fresh, clean install like the new one, so those results should be taken with a grain of salt, but still, it makes a big difference!
 
