In 2025, DDR5 RAM has largely taken over new consumer and enterprise systems, with DDR6 on the horizon. But many midrange workstations, especially in research labs (including mine), still rely on DDR4.
We use an HP Z-series workstation with an Intel Core i5-12500 (Alder Lake) paired with 32 GB of Corsair Vengeance DDR4-3200 CL16. The workloads are heavy and diverse: compiling scientific code, running physics simulations, and training and running small to mid-sized machine learning models.
Our question: is DDR4 still sufficient for these demanding tasks in 2025, or are we leaving performance on the table by not upgrading? For a midrange research workstation today, is DDR4-3200 CL16 memory still adequate for heavy real-world tasks, and how does it compare to DDR5 and projected DDR6 platforms on benchmarks alone?
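One wrinkle that muddies the frequency comparison: first-word latency in nanoseconds is often nearly identical across generations once you account for clock speed. Here is the back-of-the-envelope arithmetic (the DDR5-6000 CL30 kit is just an assumed representative example, not something we have tested):

```python
# First-word CAS latency in ns: the data rate in MT/s is twice the
# I/O clock, so latency_ns = CL / (MT/s / 2) * 1000 = 2000 * CL / MT/s.
def cas_latency_ns(cl: int, mt_per_s: int) -> float:
    return 2000 * cl / mt_per_s

print(cas_latency_ns(16, 3200))  # our DDR4-3200 CL16     -> 10.0 ns
print(cas_latency_ns(30, 6000))  # assumed DDR5-6000 CL30 -> 10.0 ns
```

If that arithmetic holds, DDR5's advantage for us would come from bandwidth rather than latency, which is partly why we suspect the answer differs per workload.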
Does memory actually affect performance and efficiency that much? And if so, how do RAM speed and latency affect each of these workloads?
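To make answers comparable, here is the sort of quick check we could run ourselves: a minimal STREAM-style "add" kernel sketched with NumPy (array size and repeat count are assumptions; a proper STREAM or Intel MLC run would be more rigorous):

```python
# Rough sustained-bandwidth check: stream three large arrays through memory.
# Arrays must dwarf the last-level cache, otherwise you measure cache speed.
import time
import numpy as np

N = 25_000_000                 # three ~200 MB float64 arrays (assumed size)
a = np.random.rand(N)
b = np.random.rand(N)
c = np.empty_like(a)

best = float("inf")
for _ in range(5):             # keep the best of a few runs
    t0 = time.perf_counter()
    np.add(a, b, out=c)        # reads a and b, writes c
    best = min(best, time.perf_counter() - t0)

gb_moved = 3 * N * 8 / 1e9     # two reads + one write, 8 bytes per float64
print(f"~{gb_moved / best:.1f} GB/s sustained")
```

Comparing that figure against dual-channel DDR4-3200's theoretical ~51.2 GB/s peak would at least show how much headroom (or lack of it) our workloads have.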
We've heard that "faster RAM helps you compile code more quickly." Is that a reasonable summary, or does it only apply in large, memory-bound build environments?
Lastly, for research and development work (not bleeding-edge gaming or high-end rendering), does DDR4 still hold up? What would you recommend as a future-proof upgrade path?
Add more DDR4 now and wait? Switch to DDR5? Or hold out for DDR6?
Would love to hear your thoughts, benchmarks, or upgrade experiences from similar workloads.