"wonders if Mandark is ever happy with any thing"
Vote with your $$$, just say no. They will relent on price. They are GOUGING the market.

Not at these prices lol 😂
"wonders if Mandark is ever happy with any thing"
"I believe it was Ryan Smith at Anandtech who talked about them changing the CU design to using dual-issue instructions, which is a problematic choice that AMD never explained as far as I'm aware. I imagine this likely accounts for a large chunk of the lost performance potential rather than the chiplet decision. I certainly agree AMD still has a lot to prove with their design choices, and I hope their lower price point cards stand out better."
I don't think so. I mean, new architecture with 67% more raw bandwidth, supposedly even more if you look at the total Infinity Cache bandwidth, and theoretically 160% more compute. And in practice it performs as if a lot of that bandwidth and compute isn't realized in the real world. Having to go over the extra Infinity Fabric to get to the L3 cache could be a big part of this.
Put another way:
RX 6950 XT has 59% of the theoretical compute of the RTX 3090 Ti and 57% of the raw bandwidth. At 4K (rasterization) on the updated testbed, it delivers 88% of the performance.
RX 7900 XTX has 26% more theoretical compute than the RTX 4080 and 34% more raw bandwidth. At 4K (rasterization), it delivers 4% more performance.
So due to architectural changes plus chiplets, AMD has gone from delivering close to Nvidia performance with substantially lower paper specs, to now needing more paper specs to deliver comparable performance.
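If you want to sanity check those percentages, here's a quick back-of-the-envelope script. The TFLOPS and bandwidth figures are the commonly listed paper specs (boost-clock FP32 and GB/s) pulled from public spec sheets, not anything measured on the testbed, so treat the output as illustrative only.

```python
# Rough sanity check of the paper-spec ratios quoted above.
# The specs are assumed boost-clock FP32 TFLOPS and memory bandwidth (GB/s)
# from public spec sheets, not numbers from this review's test data.
specs = {
    "RX 6950 XT":  {"tflops": 23.65, "bw": 576.0},
    "RTX 3090 Ti": {"tflops": 40.0,  "bw": 1008.0},
    "RX 7900 XTX": {"tflops": 61.4,  "bw": 960.0},
    "RTX 4080":    {"tflops": 48.7,  "bw": 716.8},
}

def compare(amd, nvidia):
    a, n = specs[amd], specs[nvidia]
    print(f"{amd} vs {nvidia}: "
          f"{a['tflops'] / n['tflops']:.0%} of the compute, "
          f"{a['bw'] / n['bw']:.0%} of the bandwidth (paper specs)")

compare("RX 6950 XT", "RTX 3090 Ti")   # ~59% compute, ~57% bandwidth
compare("RX 7900 XTX", "RTX 4080")     # ~126% compute, ~134% bandwidth
```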
It's also worth looking at relative chip size and performance. AMD has a 300mm^2 GCD plus 220mm^2 of MCDs. Some of that size is due to the Infinity Fabric linking the chiplets together. Nvidia meanwhile has a 379mm^2 die that has a lot of extra features (DLSS and DXR stuff). I'd wager the RTX 4080 actually costs less to manufacture than Navi 31, factoring in everything.
AMD is going to need to prove with RDNA 4 that chiplet GPUs can continue to scale to even higher levels of performance without sacrificing features. They certainly haven't done that with RDNA 3. A monolithic Navi 31 without chiplets probably would have been in the 400mm^2 range and offered even higher performance. That's just my rough estimate, and we can't know for certain, but I'd love to know how much AMD actually saved at the end of the day by doing chiplets.
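As for how much the chiplets actually saved, here's a rough dies-per-wafer sketch. The wafer prices and defect density are placeholder assumptions (the real figures aren't public), and packaging and substrate costs are ignored entirely, which is exactly where a chiplet design gives some of the savings back. Purely an illustration of the math, not a real cost estimate.

```python
# Rough silicon-cost sketch for the chiplet vs. monolithic question above.
# Wafer prices and defect density are placeholder assumptions, not disclosed figures,
# and packaging/substrate costs are deliberately left out.
import math

def dies_per_wafer(die_mm2, wafer_d_mm=300.0):
    """Classic dies-per-wafer approximation: usable wafer area minus edge losses."""
    r = wafer_d_mm / 2
    return int(math.pi * r**2 / die_mm2 - math.pi * wafer_d_mm / math.sqrt(2 * die_mm2))

def yield_rate(die_mm2, d0_per_cm2=0.07):
    """Poisson yield model: bigger dies lose more candidates to defects."""
    return math.exp(-d0_per_cm2 * die_mm2 / 100)

def cost_per_good_die(die_mm2, wafer_cost):
    return wafer_cost / (dies_per_wafer(die_mm2) * yield_rate(die_mm2))

# Assumed wafer prices: N5-class for the GCD and AD103, cheaper N6-class for the MCDs.
gcd   = cost_per_good_die(300, wafer_cost=17000)
mcd   = cost_per_good_die(37,  wafer_cost=9000)   # ~220 mm^2 of MCDs split six ways
ad103 = cost_per_good_die(379, wafer_cost=17000)

print(f"Navi 31 silicon: ~${gcd + 6 * mcd:.0f} (GCD ${gcd:.0f} + six MCDs ${6 * mcd:.0f})")
print(f"AD103 silicon:   ~${ad103:.0f}")
```

With these made-up inputs the raw silicon comes out fairly close, which is why packaging cost could easily tip the total in the monolithic die's favor.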
Folks are voting though:
"Affordable gaming is dead. Long live overpriced gaming hardware! 😂"
Vote with your $$$...
"Thank you for the hard work."
I don't do a ton of power testing scenarios, so I'd have to look into that more... and I really need to go sleep. As for the "being competitive," AMD is pretty much on par with Nvidia's best in rasterization (similar to the 6000-series), and it's at least narrowed the gap in ray tracing. Or maybe that's just my perception? Anyway, since basically Pascal, it's felt like AMD GPUs have been well behind Nvidia. Nvidia offers more performance and more features, at an admittedly higher price.
This is actually a bug that was already reported to AMD and they're already working on a fix.
This is a long technical discussion. But cheaper manufacture is just one reason for chiplets. Another is uniform binning across the cores.
Possibly because "value" is a subjective term and the reviewer didn't want to venture into anything other than pure factual numbers at this time. What do you value in a card over what I value in a card? Pure performance? Energy efficiency? Ray Tracing? Sure you can come up with a formula to derive a number for "card value" that appears strictly mathematical and non-subjective; but there is still some subjectivity in that formula that hides behind the number.
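To make that concrete, here's a toy "value score" in Python. Every metric and weight in it is invented for illustration; that's the point: pick different weights and the supposedly objective ranking flips.

```python
# Toy "card value" score: looks objective, but the weights are pure opinion.
# All card numbers here are invented placeholders, not data from any review.
def value_score(card, w_raster=1.0, w_rt=0.5, w_watts=0.2):
    # Weighted performance per dollar, with a small penalty for power draw.
    perf = w_raster * card["raster_fps"] + w_rt * card["rt_fps"]
    return (perf - w_watts * card["watts"] / 10) / card["price"] * 1000

cards = {
    "Card A": {"raster_fps": 100, "rt_fps": 45, "watts": 355, "price": 1000},
    "Card B": {"raster_fps": 95,  "rt_fps": 70, "watts": 320, "price": 1200},
}

# Same two cards, two sets of weights: the "winner" flips depending on how much
# you happen to care about ray tracing.
for name, card in cards.items():
    print(name,
          round(value_score(card), 1),            # default weights: Card A wins
          round(value_score(card, w_rt=1.5), 1))  # RT-heavy weights: Card B wins
```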
"Chiplets don't actually improve performance (and may hurt it)"
How on earth is this even a con? Who said chiplets are for performance? Chiplets are for cost saving.
But what did y'all expect? You just can't complain at this price point. There you go, chiplets saved you $200 and gave you 24GB of VRAM as a bonus (versus 16GB on the 4080).
It is meant for GAMING, so don't expect productivity performance, for many reasons including Nvidia's CUDA cores having every major software package optimized for them.
RDNA is going in the right direction with chiplets in an industry of increasing costs year after year.
Chiplet design is a solution, not a new groundbreaking feature that's meant to boost performance.
Sadly, instead of working on the problem, Nvidia decided to give excuses such as "Moore's Law is dead."
Stupid question here: Why don't we have dedicated ray tracing cards, like 15 years ago when we had the PhysX add-in card, and get that workload off the GPU?
Too much info and latency to shuffle between CPU/GPU?
wonders if Mandark is ever happy with any thing
One thing I have accepted is that, yes, PC gaming has gotten exponentially more expensive. Sure, you can get a budget or midrange PC to play games, but I want eye candy and that costs serious dough. I was pricing a new Alienware PC with a 20% discount coupon and it was still $3700 with tax out the door, and it was spec'd with a 13900K, 4090, 1TB Gen 4 NVMe, 32GB of DDR5 @ 5200MHz, Wi-Fi 6E, and Alienware's Cryo-Tech cooler.
But here's the thing. This PC would be for Microsoft Flight Simulator 2020 so yeah I could probably get 120FPS in 4K with DLSS 3 with every detail dialed to the MAX.
I also have an Xbox Series X that looks so close to my current rig running an RTX 3080 @ 30FPS with MAX detail that it's really hard to tell them apart. My Series X cost $499 and my Omen 30L set me back $2000. The other thing is that this is no longer the 90's or early 2000's, when games were designed first around the PC and then ported to the consoles unless it was a console exclusive.
Nowadays it's just the opposite: games are built and designed for the consoles first, then get ported to the PC. I was thinking of jumping on the Alienware deal because of the 4090, but then I saw how smooth and nice MSFS 2020 plays on the Series X, and I felt $3700 for a gaming PC is better spent elsewhere.
You obviously missed the point.
1. Chiplets being cost saving isn't the issue, as long as they don't impede performance. Remember, AMD is the one that said chiplets increased performance. Not Jarred.
2. We can complain. AMD hyped these cards up, and they are worse than the 4090 and barely keeping up with the 4080, and that's the XTX; the performance difference for the XT is not worth the $100 savings.
3. As for "It is meant for GAMING, so don't expect productivity performance, for many reasons including Nvidia's CUDA cores having every major software package optimized for them": you contradict yourself here, so I will leave this alone lol.
Obviously you are loyal to AMD. Try being neutral; you will save money and always come out ahead with what you get for your money.
You said the 120 fps PC looks as good as the 30 fps console, yet costs 7 times more. Well, that's the price for 120 fps, right? And because you want fps to go over 60, you hit the diminishing returns on hardware.
What on EARTH are you talking about? All the reviews I've seen put the 7900 XTX's ray tracing at 3090 Ti performance, and about 10% down from the 4080, which is consistent with Nvidia's naming scheme (the x090 in one generation becomes the x080 in the next).
If you really want eye candy, especially for such a great-looking game like MSFS 2020, you should not look at DLSS 3 for now. Nvidia will probably make it better with time, but it's not there yet for me.
On the other hand, if you are happy with the graphics detail the Series X shows in MSFS 2020, then I guess some sort of DLSS will be OK too.
You could get a somewhat cheaper price for a PC like the one you linked, but you will have to do the hard work of putting it together, setting BIOS options to keep the Core i9 on a leash, and installing Windows, drivers, and everything else.
Just an example:
PCPartPicker Part List
CPU: Intel Core i9-13900K 3 GHz 24-Core Processor ($599.99 @ B&H)
CPU Cooler: ARCTIC Liquid Freezer II 280 72.8 CFM Liquid CPU Cooler ($119.99 @ Amazon)
Motherboard: MSI PRO Z790-P WIFI ATX LGA1700 Motherboard ($239.99 @ B&H)
Memory: G.Skill Trident Z5 RGB 32 GB (2 x 16 GB) DDR5-6000 CL40 Memory ($159.99 @ Newegg)
Storage: Samsung 980 Pro 1 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive ($99.99 @ Amazon)
Video Card: Asus TUF GAMING OC GeForce RTX 4090 24 GB Video Card ($1799.99 @ ASUS)
Case: Corsair 4000D Airflow ATX Mid Tower Case ($104.99 @ Amazon)
Power Supply: Corsair HX1200 Platinum 1200 W 80+ Platinum Certified Fully Modular ATX Power Supply ($263.98 @ Newegg)
Total: $3388.91
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2022-12-13 07:15 EST-0500

The 4090 is still going to be tricky to get. I just clicked on the ASUS link, and even though it said in stock, the link turns into "Notify Me". It will be a while before they are readily available. The other reason the Alienware works better for me is the 12-month no-interest financing I qualify for. That said, I am staying with what I've got.
When launched, the PS5 and Series X were a better deal than any PC. Now they can run the games, but, as you said, at 4K 30 fps. More than that and you either need a refreshed console (a Pro model, etc.) or a stronger PC. You can't run away from the fact that consoles are fixed hardware, and PCs evolve over time.
Also, you could just swap the 3080 with the 4090 and save a lot of money (but wait for prices to get more reasonable).
No, no, what I was trying to say was that MSFS 2020 looks just as good on the Series X which runs @ 30FPS as it does on my Omen 30L which also runs at 30FPS with every detail MAXED out on the PC. It's hard for me to tell them apart whether I do low level or high level flying.
The Omen 30L can produce roughly 60FPS by enabling DLSS. I was throwing out the 120FPS as really the only difference maker by going with a 4090 setup.
"I fly a lot in MSFS 2020 (fancy 3rd-party airliners). The video card isn't the problem with that game; it's mostly CPU bottlenecked. I got a lot of micro stutters with the 8700K on High End settings, especially low to the ground when approaching and landing (the most annoying time for micro stutters). It improved a lot when I upgraded to the 13700K, with almost no micro stutter. But DLSS 3.0 frame generation helps a lot. Doubling FPS with a mouse click is nice to have. Input lag is not an issue in that game."
Agreed on the CPU part, unless you do 4K, which then requires a good GPU, and if the CPU is bottlenecking the GPU, you will get micro stutters because the CPU is not fast enough. Anything running at 1080p is going to depend more on the CPU. Going to 1440p will require some of both, and 4K will primarily require a strong GPU, with a CPU fast enough to keep up with its demands. My i9-10850K is still good enough to handle 4K, but going from 30 to 60FPS requires at least a 3090 or 6900 XT.
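A crude way to picture that CPU/GPU split: treat the CPU as a fixed frames-per-second ceiling and the GPU as a ceiling that drops as resolution rises, then take the lower of the two. The numbers below are made-up placeholders, not benchmark results.

```python
# Simple bottleneck model for the resolution discussion above.
# The CPU can only prepare so many frames per second (roughly resolution-independent),
# while the GPU's ceiling falls as resolution rises. All numbers are illustrative.
CPU_FPS_CAP = 100
GPU_FPS_AT = {"1080p": 180, "1440p": 95, "4K": 55}

for res, gpu_fps in GPU_FPS_AT.items():
    fps = min(CPU_FPS_CAP, gpu_fps)
    limiter = "CPU" if CPU_FPS_CAP < gpu_fps else "GPU"
    print(f"{res}: ~{fps} fps, {limiter}-limited")
# 1080p ends up CPU-limited, 4K GPU-limited, and 1440p sits near the crossover
# where both sides matter, which matches the rough rule of thumb above.
```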
I used the links provided, which were ASUS and Newegg. Both links showed out of stock.