I also did the same upgrade and noticed better performance in games. The same can be said about going from an Intel DDR4-based system to an Intel DDR5-based system, as far as RAM is concerned.
Just did a quick price check with a 7950X, Asus Strix X670E-E Gaming, 13900K, Asus Strix Z790-E Gaming, and the same Corsair 5600 MHz CL36 RAM (could use faster RAM, but still).
RAM: same price. Face it, very few would buy a new system and use DDR4 on the Intel side. Boards: the AM5 board was the SAME price as the Intel version, so that argument is FALSE. CPUs: the 7950X is $150 more, so that part is true. So based on regular prices, you are HALF right.
BUT currently everything I have listed is on sale, and taking that into account, the AMD system is less expensive by 10 bucks. This isn't factoring in anything else in the system, like cooling, for example.
And that's half of his argument, it seems. I doubt many would buy a 13900K and pair it with the DDR4 Intel platform; that is just asking for another upgrade in 1-2 years of just the board, and at that point you may as well upgrade the CPU as well.
Heh, I went from a 3900X to a 5900X about a year after getting the 3900X, and saw quite the performance bump in transcoding with HandBrake: a Blu-ray went from about 60 minutes, I think it was, to about 30 minutes.
You are forgetting the cost of PCIe Gen 5, on top of the ridiculous inflation we are experiencing.
I assume you're talking about little endian (i.e. lowest-valued bytes at lowest addresses). Although I think ARM is bi-endian, Android on ARM runs it in little-endian mode. Even iOS runs in little endian, and Apple had its roots in the big-endian Motorola 68k. Pretty much every Linux distro, even on IBM POWER processors (which are also bi-endian), runs in little endian.

But in the end, the only way I found to make fast code was to assume x86 integer encoding, which probably doesn't run on ARM (most phones).
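Since both posts hinge on byte order, here is a minimal Python sketch (just an illustration, not tied to anyone's code above) showing how to check the host's endianness and what the byte-level difference actually looks like:

```python
import struct
import sys

# sys.byteorder reports the native byte order of the host machine.
print(sys.byteorder)  # 'little' on x86, and on ARM under Android/iOS

value = 0x01020304

# Explicit little-endian vs big-endian packing of the same 32-bit value.
le = struct.pack("<I", value)
be = struct.pack(">I", value)
print(le.hex())  # 04030201 -- lowest-valued byte at the lowest address
print(be.hex())  # 01020304 -- highest-valued byte first
```

On any of the little-endian platforms mentioned above, `sys.byteorder` prints `little` and native (`=I`) packing matches the `<I` output; portable code uses the explicit `<`/`>` prefixes instead of assuming either order.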
I've heard that GPU shaders get optimized with use, which suggests there's recompilation or iterative refinement going on. Otherwise, what I'd suggest is doing the quickest possible compilation upon initial loading and then a backgrounded optimization pass.

1) The shader compile stuttering on PC needs to be solved. There is no point in a game running at 200 fps if it drops to 10 fps each time a shader needs to be compiled. The proposed solution from Digital Foundry, to do the shader compiles during loading and let people wait 15 minutes, is, imo, ridiculous. My guess is that we will see a hardware solution to this problem, unless Nvidia and AMD agree on a unified driver architecture, which is unlikely.
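The "quick compile now, optimize in the background" idea can be sketched generically. This is a toy Python illustration of the pattern only; the names (`ShaderCache`, `quick_compile`, `optimized_compile`) are hypothetical and not any real driver's or engine's API:

```python
import threading
import time

class Shader:
    """Toy stand-in for a compiled shader program."""
    def __init__(self, level):
        self.level = level  # "quick" or "optimized"

def quick_compile(src):
    # Fast, unoptimized compile so the frame can render immediately.
    return Shader("quick")

def optimized_compile(src):
    # Slow, fully optimized compile; runs off the render thread.
    time.sleep(0.01)  # simulate expensive optimization work
    return Shader("optimized")

class ShaderCache:
    def __init__(self):
        self._lock = threading.Lock()
        self._cache = {}

    def get(self, src):
        with self._lock:
            if src in self._cache:
                return self._cache[src]
            shader = quick_compile(src)  # never stall the frame
            self._cache[src] = shader
        # Kick off the backgrounded optimization pass.
        threading.Thread(target=self._optimize, args=(src,), daemon=True).start()
        return shader

    def _optimize(self, src):
        shader = optimized_compile(src)
        with self._lock:
            self._cache[src] = shader  # hot-swap in the better binary
```

The first `get()` returns the quick build immediately; once the background thread finishes, later lookups transparently pick up the optimized one, so the game never has to pause for the full compile.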
FWIW, @JarredWaltonGPU recently mentioned that this involves a semi-manual benchmark. Maybe that's why it doesn't feature in more reviews?

Can we get some 4K (UHD) gaming benchmarks for MSFS, which showed massive FPS improvements over Intel with this CPU at HD (1080p) and QHD (1440p)?
Perhaps they could lobby Microsoft to make it easier to benchmark?

I know some in the MSFS sim community who are looking at upgrading their older hardware.
I think that's going to be almost 100% GPU-limited. Maybe there are some CPU layers or some CPU-intensive preprocessing or postprocessing, but AI is usually 100% GPU-limited.

I would like to see some new tests included using AI engines. You could run something like Real-ESRGAN to upscale some anime videos. Granted, if you have a 4090 you are using the GPU, but there are CPU options as well.
Adding PCIe 5.0 support to a motherboard adds cost due to the signal integrity requirements. Satisfying those involves more PCB layers, more expensive materials, retimers, etc. I think that's what the point was about.

No, I don't see that as relevant. Don't people know that PCIe is backwards compatible? Just because a board supports PCIe 5.0 doesn't mean you are required to run PCIe 5.0 NVMe drives, etc.
Just to add: in their simulated 7800X3D, they found it used 44 W to play games. That is absolutely laptop territory.

More efficiency data, this time from TechPowerUp's review, where they tried disabling each CCD; it also includes similar scheduler tweaks. Note that CCD 0 is the 3D cache die, so look for "CCD1 Disabled" (the brown bars) to see their simulated 7800X3D results.
I'm not sure what point you're trying to make, considering that you're not upgrading an Intel build anyway. Their sockets are NEVER compatible. You'd be doing a brand-new Intel build from scratch and spending $1500, but you're decrying an extra $60 on RAM that will work in any system for the next six years?

The cons just weigh too heavily for me to even consider the CPU or the AM5 platform.
I already have DDR4. AMD expects me to just throw this into the trash and buy expensive new DDR5 for a 1% difference in performance. It even bothers me from an e-waste perspective, let alone the financial cost.
The fact that the performance of the X3D CPU is all over the place bothers me too. Yes, it's fast in -some- games, but then you get less overall performance in several important applications. If I just used my PC for gaming, I would have bought a console; I don't like the idea of having to make a trade-off with an X3D CPU.
Another thing, and this is not AMD-specific but PC-specific: the biggest issue with gaming on PC has been stuttering, because shaders need to be compiled at runtime on PC. This CPU will not solve that. When PC gaming becomes a meme, #stutterstruggle, I don't think many people are going to be willing to invest in a $600+ CPU. Solve this problem; PC gaming is currently in shambles.
Buy before inflation price hikes!

I'll be going with AM5 after the 7800X3D is released in another month. I haven't decided which one to buy, however. The advantage of AM5 is that it has years of CPU, system RAM, and BIOS development ahead of it, whereas the AM4 platform I'm currently using is at an end in terms of further development. I'm not in a huge hurry, though. I may wait a year for AM5 BIOS maturity and some new motherboard offerings.
What I don't care for with AM5 is the current crop of motherboards. For instance, my X570 Aorus Master is a much better motherboard than the X670E Aorus Master, which doesn't support nearly as many features as the X570 Master: no back-panel CMOS-clear button (!), no hardware DAC or headphone amp, and no dual, manually switched BIOSes! Adding insult to injury, it costs over $140 more while delivering much less! The X670E supports more NVMe drives, and it's 8 layers instead of 6, but that doesn't make up for the other deficits. To get what I had in my current motherboard I'd have to go to a $699 Xtreme! Ridiculous. No way am I reverting to opening up the case to set a Clear CMOS jumper, and I'm not spending ~$700+ on a motherboard! The $360 for the X570 Master was a lot, too, but it was justified by the hardware that appealed to me. So I'll see what happens on the motherboard front.
That's why I said they could use the CPU profile, which doesn't use the GPU at all. From my experience, when you use the CPU profile, it pretty much maxes out all your cores. Since CPUs will start coming with AI accelerators built in, I think it would be a valid test.

I think that's going to be almost 100% GPU-limited. Maybe there are some CPU layers or some CPU-intensive preprocessing or postprocessing, but AI is usually 100% GPU-limited.
FWIW, @JarredWaltonGPU recently mentioned that this involves a semi-manual benchmark. Maybe that's why it doesn't feature in more reviews?
Sorry, I missed that.

That's why I said they could use the CPU profile, which doesn't use the GPU at all. From my experience, when you use the CPU profile, it pretty much maxes out all your cores. Since CPUs will start coming with AI accelerators built in, I think it would be a valid test.
Paul and I use different test scenes for MSFS, but these days I use the landing challenge for the Iceland airport. The plane pulls to the right, so I start by turning to the left, drop the engine power to 20%, and then just let the plane "coast" for about 30 seconds before I stop and restart the test sequence. I also have the online (Bing maps) integration on, but with live weather and traffic off, as that seems to improve the fidelity and also makes things a bit more demanding. But MSFS does take longer to load than a lot of other games, and there's no specific built-in benchmark. Also, installing the game initially takes an eternity, relatively speaking. Even with gigabit internet, it only averages maybe 200 Mbps download speeds and takes a few hours to do a clean install, because MS puts most of the data separate from the executable: you get a ~2GB game download, and then 100+ GB of updates within the game.

FWIW, @JarredWaltonGPU recently mentioned that this involves a semi-manual benchmark. Maybe that's why it doesn't feature in more reviews?