[SOLVED] i9 10900Kf, RTX3080, RoG Maximus XII Formula.....which RAM kit?!

Here's the system I'm building (most pieces are already in my hands!!):
CPU: Intel i9 10900Kf (yep, there's an "f" there)
GPU: ASUS TUF RTX3080
MoBo: ASUS RoG Maximus XII Formula
PSU: FSP Hydro G Pro 1000W

Of course there's going to be an AIO water-cooling system for the CPU and a couple of M.2 drives for the OS and games.

Now, though, my main issue is with RAM modules.

My plan was to get the maximum possible, 128GB (4x32GB), starting with a two-module 64GB kit and expanding to 128GB later with a second one.
But.
Apparently, 4000MHz+ 32GB modules are not a thing, unless one has time and money to waste.
I got the money, actually, but I'm starting to run out of patience, since it's proving hard to get a hold of them.

THIS was the kit I had in mind.
But, I don't know, it doesn't convince me. Especially because of that "Optimized for AMD Ryzen" stamp at the bottom of the description.
Thus I've started wandering the net looking for reviews mentioning good 64 or 128GB @4000MHz+ kits, but each one of them stated that anything above 3600MHz is basically a waste of money.

Ok, let's start over and let's say I'm looking for at least 64GB (preferably 2x32GB) @3600MHz.
More reasonable prices and, seemingly, also easier to get.
Doubts keep raining, though.

Here are my questions:
- Is expanding my RAM at a later date a good idea, or is it better to get the kit as a whole?
- With a system like the one listed, is it better to push my budget a bit more and get a 4000MHz+ kit, or is 3600MHz more than reasonable?
- I'm gonna use this system for a bit of gaming and a fair amount of rendering/procedural calcs, especially with Blender, ZBrush, World Machine and UE4 (and possibly some VMware virtual machines): would 64GB be enough, or should I go for 128GB?

I'm moving from a 10-year-old system (i7 920 @ 3.8GHz, 12GB tri-channel DDR3, GTX 980Ti), so I've lost touch with technology.
After all, my current setup can run Witcher 3 and GTA V at nearly all-Ultra without noticeable FPS drops, so I can't even begin to imagine how much the performance will improve, or how relevant those extra 400MHz would be to my new system.

Please elaborate on your suggestions/opinions as much as possible, thank you.
 
Solution
Only you know how much RAM you need, depending on the size of your objects and scenes and the number of VMs.
As for speed, that 4000MHz CL18 is actually the same speed as 3600MHz CL16, or at least quite close, and while you could tighten the timings on 4000MHz CL18 to maybe CL16 or 17, that is still very similar and would not give a meaningful upgrade, 2-4% at best.
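To put rough numbers on that: first-word latency is approximately CL × 2000 / data rate (in MT/s). A quick sketch of the comparison (the kit labels are just example speed/timing pairs, not specific products):

```python
# Back-of-the-envelope check of the latency comparison above.
# First-word latency (ns) ≈ CAS latency * 2000 / data rate (MT/s).
def first_word_latency_ns(cas_latency: int, data_rate_mts: int) -> float:
    return cas_latency * 2000 / data_rate_mts

kits = {
    "DDR4-3600 CL16": (16, 3600),
    "DDR4-4000 CL18": (18, 4000),
    "DDR4-4000 CL17": (17, 4000),  # hypothetical tightened timings
}

for name, (cl, rate) in kits.items():
    print(f"{name}: {first_word_latency_ns(cl, rate):.2f} ns")
# DDR4-3600 CL16: 8.89 ns
# DDR4-4000 CL18: 9.00 ns
# DDR4-4000 CL17: 8.50 ns
```

By that measure 3600 CL16 (~8.9 ns) and 4000 CL18 (9.0 ns) are effectively tied, which is why the extra frequency on its own isn't a meaningful jump.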
"Optimized for Ryzen" means nothing; it's a habit RAM manufacturers picked up because first-gen Ryzen was picky about RAM. Nowadays Ryzen can handle almost all sticks.

Getting all the RAM together is generally better. While RAM compatibility is not that big an issue nowadays, especially on AMD, and especially if you get the same kit, I've seen 2 sticks that are literally the same and bought a week...
Deleted member 2838871
Anyway, if the performance gap between 3600 and 4000MHz is minimal (as I understood it is), I doubt MSFS2020 will behave any differently.
Or maybe I'm missing something there? =o

The reviews I read, including the one here, said MSFS2020 benefits from the fastest RAM possible... so that's what I went with. The fact they even mentioned it in the review got me thinking... why not? Even if it's only 1-2% faster, as I said already, it was literally a $30 difference between 64GB 3600 and 4000... so I went with the 4000.

I'm one of those guys that puts value at the top... hence the RAM choice... and the CPU choice. I only considered 3600 and 4000, and the price difference really made it a no-brainer. Same for the CPU.


It's true that the "silicon lottery" is a thing when overclocking, but I believe 5.3GHz is perfectly within the capabilities of any i9 10900K. Unless you lost the lottery and got the worst possible chip, that is.

The temperature issue becomes a thing when you stress the CPU with Prime95 tests, and even then I doubt a good 360mm AIO liquid cooling system would fail to handle the heat and avoid thermal throttling.

At least, this is what I've read around the Internet, browsing through several overclocking articles featuring stability tests.
By all means feel free to correct any possible misconception.

You're pretty much spot on. I've had mine at 5.3 but temps were a bit much for my 360mm AIO... pushing 90C. Running it at 5.2 gives 80-85C on stress tests, so that's where I run it. Everyday temps are much lower. I also run a -2 AVX offset. With apps like Handbrake, the difference between 5.2GHz and 5GHz when encoding 2-hour videos is literally 2-3 minutes... and temps are lower.

Either way, I'm happy with how I did with the silicon lottery.

A 5.1GHz all-core i9 10900K is worth $800...

:giggle:

Dunno why that guy talks about how much of an AMD fanboy I am. I actually prefer Intel, and am super pumped for 11th gen, but AMD is just better right now, no contest aside from the low end, where AMD doesn't have a Zen 3 CPU.

Value is a contest Intel wins hands down... at least in the USA.

My 10900K/3090 PC benchmarks in the top 1% on 3DMark... and that would be the upper half of the top 1%. I say that because I've seen a lot of PCs that scored 3000-4000 pts lower than mine and were also in the top 1%. Links are in my sig... all for a $500 CPU... and it got 79th in Unigine's Superposition 4K Optimized. What would I have gained by getting a 5950X for $1300 from a scalper? Something like a 0.5% improvement and a few extra fps that you and I both would never notice?

Yeah, that sounds totally worth an extra $800. :rolleyes: Anyway, I rest my case.

Anyway, this discussion is done.
Thread closed.

Good idea. (y)
 
"Value at the top"
Just... what? I said that there is a small upgrade from 3600 to 4000, it's a bit better, and you need to decide for yourself whether the upgrade is worth your money. I never said there was no upgrade at all.

Literally the exact same thing I said about the clocks.

"A 5.1ghz all core i9 10900k is worth 800$.."
I should rephrase this:
It costs 800$, it's not worth that much.

3DMark is a good benchmark for comparing two systems, but it is not a good benchmark for seeing how much better one system actually is.
Why are you again comparing the 5950X to a 10900K? They are not in the same class.
Also, the 5950X is much faster than the 10900K, not 0.5% faster.
It has 6 more cores and a real-world single-core improvement of about 15-20%.
https://www.guru3d.com/articles-pages/amd-ryzen-9-5900x-and-5950x-review,10.html

But again... you shouldn't compare these 2 chips.
What is the price of a 5800X in the USA? That is the chip that most closely resembles the 10900K in multi-core performance (since it has 2 fewer cores, but faster cores overall).
If its price is more in line with the 10900K's, then it's really a tie in price-performance at those same scalper prices.
But again, at USA MSRP (and in most other places in the world the pricing is higher than MSRP but keeps the same ratio), Ryzen 5000 is the better value.
 
Deleted member 2838871
Just... what?

I really don't care, man... All your babbling lost credibility with me when you made the comment that the most a 10900K can overclock to is 5.1... :LOL::ROFLMAO: Value and availability are contests Intel wins hands down... at least in the USA. It doesn't matter how good AMD is when it's not in stock.

... and the 0.5% comment was in relation to the top 1%... am I gonna see this 15-20% improvement WHEN I'M ALREADY IN THE UPPER 1%? Am I gonna notice it? Am I gonna get $800 worth of improvement? Hell to the no, and that brings us back to my point... VALUE.

Now I'm done debating with you. Go buy yourself an AMD system so you can actually own the product you are quite obviously in love with.

Anyway, this discussion is done.
Thread closed.

Take your own advice. I'm out and won't be responding further. (y)
 
Deleted member 2838871
I've read what AVX instructions are, but I have no idea how/why to tweak anything related to them.
How does it work, why that value, and how should I choose which value to set?

Ok... I'll reply to you. :)

You can change the AVX offset in the BIOS settings. Your CPU will downclock when running those instructions. Mine is at -2 (200MHz), so my 5200MHz OC drops to 5000MHz while running AVX... like in Handbrake.

As I said, the result is lower temperatures while not really affecting performance all that much. My encodes take 2-3 minutes longer, but the CPU runs at a lower temp. AVX is brutal on a CPU.
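For the arithmetic behind that offset: it's specified in multiplier bins, and each bin corresponds to the base clock (normally 100MHz), so -2 takes 200MHz off the all-core clock whenever AVX code runs. A minimal sketch, assuming the standard 100MHz BCLK, with figures that just mirror the example above:

```python
# Rough sketch of what a negative AVX offset does to the effective clock.
# Assumes the standard 100 MHz base clock (BCLK): each offset step = one 100 MHz bin.
BCLK_MHZ = 100

def clock_under_avx(all_core_oc_mhz: int, avx_offset: int) -> int:
    """Effective core clock while AVX instructions are executing."""
    return all_core_oc_mhz + avx_offset * BCLK_MHZ

print(clock_under_avx(5200, -2))  # 5000 -> a 5.2 GHz OC drops to 5.0 GHz under AVX
print(clock_under_avx(5000, -2))  # 4800 -> the same -2 offset applied to a 5.0 GHz OC
```

So with a 5GHz all-core overclock, the same -2 offset would put you at 4.8GHz under AVX loads; whether you need it at all comes down to your temperatures.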

More info here.... good luck.

 

Thank you for the link, it was enlightening!!
So, since you stepped back by 2 with a 5.2GHz overclock, would you say I don't need an offset with my 5GHz?
 
Deleted member 2838871

It really just depends on what temps you are getting. If you're running hot, set an offset. Mine was encoding at 85C or so on all cores, so I added the offset and now it's 70-80C on all cores... at the cost of only 2-3 additional minutes of encoding time.

Easy tradeoff.
 