News AMD Intros Zen 4 Ryzen 7000 CPUs and 600-Series Chipset: Up to 5.5 GHz, 15%+ Performance, RDNA 2 iGPUs, PCIe 5, DDR5


escksu

Because single-core =/= multi-core and single-core is what people are talking about here, jfc...

It doesn't matter how the 5950X performs here. Single-core performance does not 1:1 translate to multi-core performance. Besides, according to the official table, the 12900K does NOT beat the 5950X; it's the other way round:

The 12900K is better in Cinebench R23 than the 5950X, yes. However, the clock speed of the 12900K is also higher, which, as stated above, also plays a role in performance. You seem to miss that the new Ryzen 7000 got a significant clock-rate uplift, which helps considerably with performance, since IPC is instructions per cycle and more cycles per second means more work at the same IPC. That's why people say the IPC gain is small: a lot of the improvement comes from the clock-rate boost.
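As a rough sanity check on that argument, here is a back-of-the-envelope sketch assuming single-thread performance scales roughly as IPC times clock. The 5.5 GHz and ">15%" figures are from AMD's announcement, the 4.9 GHz boost clock is the 5950X's rated spec, and the whole split is only illustrative:

```python
# Rough single-thread decomposition: performance ~ IPC * clock (illustrative only).
zen3_clock_ghz = 4.9      # 5950X rated max boost
zen4_clock_ghz = 5.5      # Ryzen 7000 "up to" figure from AMD's announcement
total_st_uplift = 1.15    # the ">15%" single-thread claim, taken at face value

clock_uplift = zen4_clock_ghz / zen3_clock_ghz        # gain from frequency alone
implied_ipc_uplift = total_st_uplift / clock_uplift   # whatever is left over for IPC

print(f"Clock uplift: {clock_uplift - 1:.1%}")              # ~12%
print(f"Implied IPC uplift: {implied_ipc_uplift - 1:.1%}")  # ~2-3%
# If the 15% figure were the whole story, most of it would indeed come from clocks.
```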

EDIT:
Actually, reading the article again, it does mention a possible reason for the 30% rendering difference:

If that really is true, a dramatically higher IPC is even less likely to be the reason.

Another very likely reason: TDP.

When all cores are under full load, the clock speed will be a lot lower than with just 1-2 cores loaded. By default the 5950X's TDP is 105W. Even with PBO, it maxes out at ~142W (AMD's socket limit). Now AMD has raised that limit to 170W. With the extra 28W plus the benefits of 5nm, Zen 4 should be able to run at much higher clock speeds under full load than the 5950X.

I don't have any results for Blender, but I did read that the 5950X runs at around 3.85-4GHz in Cinebench, which is around 1GHz lower than its max boost clock.

So, I reckon Zen4 should be running much higher clocks (maybe 4.5-5GHz).
 
Wow 12% more performance but needing 28% more power to do so...that really sounds efficient.
Why do you think that intel chose 125W as the base power? You think it's a random number?
Using PBO generated 2.21% higher performance for 1.91% more power on the 5950X. That is efficiency.
Yeah, there is an efficiency curve, and the closer you are to the sweet spot, the closer the performance increase will be to the power increase.
This is why, when you look at the score per watt, the 5950X is 12.84% better than the 125W Intel. The Intel needs 160W to be 3% faster but uses 33% more power than the Ryzen. Converted to efficiency ratings (score per watt), the 12900K @ 160W gets 159.94, which puts it at 28.87% less efficient than the 5950X.
My point is that the difference isn't 142W to 241W, which is 70%; it changes depending on settings.
30% is still far better than 70%, and the 13% difference in efficiency, at 125W for the Intel CPU, is even better.
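For anyone wanting to check those numbers themselves, this is the basic score-per-watt arithmetic being used above. The benchmark scores below are placeholders; only the 160W figure for the 12900K configuration and the ~142W PPT for the 5950X come from the discussion:

```python
# Efficiency = benchmark score / package power (points per watt).
# Scores are hypothetical placeholders; only the power figures come from the thread.
def score_per_watt(score: float, watts: float) -> float:
    return score / watts

intel_eff = score_per_watt(25_600, 160)  # hypothetical Cinebench R23 multi-core score
amd_eff = score_per_watt(26_000, 142)    # hypothetical Cinebench R23 multi-core score

print(f"12900K @ 160 W: {intel_eff:.1f} pts/W")
print(f"5950X  @ 142 W: {amd_eff:.1f} pts/W")
print(f"12900K efficiency deficit: {1 - intel_eff / amd_eff:.1%}")
```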
 
Wait, so you're one of the people who, after AMD actually announces a plethora of extra connectivity over Z690, is just going to shrug it off and say "meh, Z690 has enough"? Am I reading your comment correctly?

If so, then I'm sorry to say, but that's BS of the highest caliber. Z690 is a step up over X570, but not a massive one if you read the fine print. Sure, it supports a PCIe 5 GPU (only) and allows the GPU link to be split so a PCIe 5 NVMe card can be used at the expense of the GPU link, but AMD's X670(E) won't need to do that: on top of giving you full PCIe 5 for the GPU, it'll also give it to the NVMe slot (maybe 2 or just 1; dunno, as it wasn't explicit), and the rest of the platform uses PCIe 5 as well. Z690 uses the PCIe 4 equivalent for the DMI link and everything else. Whether you want to admit it or not, X670(E) is a proper upgrade over X570 on all accounts that matter for any consumer. If you don't need all that connectivity, then B650 is for you, which, if you read the fine print as well, sits slightly below Z690 in terms of connectivity by trading PCIe 5 on the GPU for PCIe 5 on the NVMe. Again, and this is IMPORTANT: if Z690 wants to use a PCIe 5 NVMe, the motherboard maker needs to include an additional chip that splits the GPU's PCIe 5 link in two (dual x8) to accommodate it.

Do I agree with the overall picture of "most users don't need this?". Yes, kind of. Have most outlets and people made it a HUGE FRIGGEN DEAL that X570 doesn't have PCIe5 (with a huge asterisk, again)? Absolutely they have.

Regards.
The interesting point will be whether all of these connections will be mandatory on all boards, because that will increase the cost of entry by a lot.
DDR5 is already a must, so there's increased cost from that; the iGPU won't be free; TDP went up, so cooling and the VRMs on boards have to be better. It's all going to add up to quite a sum.
 

escksu

In the past it had been observed that Intel spent so much time on the 14nm process that they were running out of space for all of those "+"s, yet all of the experience they gained meant that it ran at about the equivalent of a 10nm process. Just imagine if they do develop in-house 3nm dies in the next two years, then spend a couple more years perfecting it and bringing it to desktop CPUs, not just server or HPC uses.

I am glad to see that AMD has not conceded much to Intel, and is jumping back in with both feet. It was sad to see Intel take the lead with PCIe 5, yet they are not the only game in town - just not enough people have taken advantage of PCIe 4 yet to justify the move to the next gen unless you are in a high-end business that needs it.

I welcome the ARM, Apple, and RISC-V chips, it means that chip development is moving forward, and if you are not careful, you can be left behind in the dust. Not everything is based on the x86 instruction set, which helps make for good competition.

One of my cats spilled water into my computer earlier this year, so I ended up having to replace so much that more than half of it looks new. There goes any budget I may have had for a new computer this year (it came early as a rebuilt unit). Well, I was planning to wait for at least 13th Gen Intel or Zen 4/5 8000 series before building a new PC, and it seems that might be a good idea. Things are progressing nicely, even if it is not a 15-20% IPC gain over the prior Zen generation. I do like the fact that it will now have RDNA 2, and not a reheated Vega iGPU (which has been better than Intel's iGPU) for graphics - my biggest complaint with the current APUs.

I would put process aside, because AMD is using TSMC. Intel has also contracted TSMC, so any process advantage AMD has now no longer exists.
 

escksu

The interesting point will be whether all of these connections will be mandatory on all boards, because that will increase the cost of entry by a lot.
DDR5 is already a must, so there's increased cost from that; the iGPU won't be free; TDP went up, so cooling and the VRMs on boards have to be better. It's all going to add up to quite a sum.

Yes, all these will definitely add up.
 
Why do you think that intel chose 125W as the base power? You think it's a random number?
Never said it was random. Intel just uses that number to make their CPUs look more efficient than they actually are.

Yeah, there is an efficiency curve, and the closer you are to the sweet spot, the closer the performance increase will be to the power increase.
The 12900K's sweet spot is 125W. However, that is not what it runs at in every situation. If you need all the cores, then it runs closer to 240W, since the motherboard default setting is unlimited boost. In reality, Intel should state that their CPU is a 240W CPU if they aren't going to make sure that motherboards don't default to unlimited boost. However, doing that would make their CPUs not look as good.

My point is that the difference isn't 142W to 241W, which is 70%; it changes depending on settings.
30% is still far better than 70%, and the 13% difference in efficiency, at 125W for the Intel CPU, is even better.
You need to reread what was stated as this makes no sense compared to what I had posted.
 
So is it DisplayPort 2.0 or 1.4? I definitely want 2.0 across the board, so I hope this is just a typo in the article. I'd hate to think that the m/b makers were just continuing with 1.4 outputs, even if 2.0 capability is now available.
I think they're saying that if you are using four displays the DP will only be running at 1.4 spec. Probably full DP 2 spec if only one or two screens.
 
The interesting point will be whether all of these connections will be mandatory on all boards, because that will increase the cost of entry by a lot.
DDR5 is already a must, so there's increased cost from that; the iGPU won't be free; TDP went up, so cooling and the VRMs on boards have to be better. It's all going to add up to quite a sum.
As I understand it, AMD made it so AIBs can choose (except DDR5, obviously). The actual chipset used (south bridge) is very simple and the high end is going to use 2 of them, so lower end can do with one. B650 will use one and a potential lower end version will also use one. As for what they'll do in terms of connectivity, I have no idea, but the chipset does support a lot and the CPU as well. I'd imagine they'll keep it to PCIe4 standard for the low end and that'll be fine? I guess?

I am also sure, just like with Zen3 and before, the efficient parts will be at or under 95W and that's going to be fine for 99% of power delivery systems. There will most definitely be 65W parts. That segment is important for OEMs, so AMD can't ignore it.

Regards.
 

King_V

Later in the article it states "To that effect, AMD also boosted the maximum power delivery of the AM5 socket that will house the Ryzen 7000 chips to 170W, a 28W increase over the previous-gen AM4 socket's 142W peak." That to me sounds like the Total Package Power for AM5 is 170W.
Yes, but that's Tom's interpretation, which doesn't have to be correct; the wording is not clear, and that's my point in the first place.

How is this "interpretation," exactly? This is an apples to apples comparison - maximum power delivery of AM4 socket vs maximum power delivery of AM5 socket.

Also, saying it's "not clear" doesn't at all mean the same thing as "that's toms interpretation."
 
How is this "interpretation," exactly? This is an apples to apples comparison - maximum power delivery of AM4 socket vs maximum power delivery of AM5 socket.

Also, saying it's "not clear" doesn't at all mean the same thing as "that's toms interpretation."
The only thing we have from AMD themselves is this:
AMD also revealed that the AM5 socket would support up to 170W CPUs,
if you look at mobos today, they are advertised as supporting up to 105W CPUs, not up to 140W.
So common sense would be that 170 is the new 105.

The thing from Tom's saying that 170 is the new 140 is out of the blue, unless they have a quote from AMD that they didn't care to show us.

But until clarified both could be true.
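Both readings can at least be sanity-checked against AMD's usual AM4 relationship between TDP and the socket power limit (PPT ≈ 1.35 × TDP, e.g. 105W → ~142W). A quick sketch of what each interpretation of the 170W figure would imply, assuming that ratio carries over to AM5:

```python
# AM4 rule of thumb: PPT (socket power limit) ~= 1.35 * TDP (105 W -> ~142 W).
# Apply both readings of AMD's "170 W" figure, assuming the ratio carries over to AM5.
PPT_RATIO = 1.35

tdp_reading = 170   # "170 is the new 105": 170 W is the new top TDP
ppt_reading = 170   # "170 is the new 142": 170 W is the new socket/PPT limit

print(f"If 170 W is TDP, implied PPT ~= {tdp_reading * PPT_RATIO:.0f} W")  # ~230 W
print(f"If 170 W is PPT, implied TDP ~= {ppt_reading / PPT_RATIO:.0f} W")  # ~126 W
```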
 

SunMaster

How is this "interpretation," exactly? This is an apples to apples comparison - maximum power delivery of AM4 socket vs maximum power delivery of AM5 socket.

It's not exactly apples to apples. My AM4 processor, specced at a 105W TDP (AMD TDP, not Intel TDP), can draw 210+ watts sustained on a socket that's "specced" considerably lower. It's not as simple as comparing AMD wattage vs Intel wattage, simply because they have different ways of describing the processor's behaviour and power draw.

The only thing we have from AMD themselves is this:
AMD also revealed that the AM5 socket would support up to 170W CPUs,
if you look at mobos today, they are advertised as supporting up to 105W CPUs, not up to 140W.
So common sense would be that 170 is the new 105.

I agree to this.
 

InvalidError

AMD dictating how many of the CPU's PCIe lanes are allowed to do 5.0 depending on which chipset it is paired with is kind of sad. I don't give a damn about premium overclocking or even just overclocking at all in the first place but still would like to have 5.0 possible on all CPU-powered lanes without paying for a premium chipset I have no use for.

AMD is getting as greedy as Intel, just using different features for its version of artificial market segmentation.
 

InvalidError

if you look at mobos today, they are advertised as supporting up to 105W CPUs, not up to 140W.
The VRMs on most modern retail boards are grossly overbuilt for stock CPU specs and all that raising the bar to 170W would do is reduce the overkill headroom on existing designs by 65W. That is a side effect of motherboard manufacturers discovering they can use $5-10 worth of extra parts on additional VRM phases most people don't need as an excuse for jacking up mid-range board prices by $30-50.

Prices won't necessarily go up by much, except for the worst VRM designs that will require major upgrades. For the mid-range, it depends on how lucky board manufacturers will feel about jacking up prices even more for extra phases most customers have no need for just to restore unnecessary overkill headroom. Each unnecessary extra component is one more potential point of failure on top of cost and design effort. At some point, it isn't worth it anymore.
 
The only thing we have from AMD themselves is this:

if you look at mobos today, they are advertised as supporting up to 105W CPUs, not up to 140W.
So common sense would be that 170 is the new 105.

The thing from Tom's saying that 170 is the new 140 is out of the blue, unless they have a quote from AMD that they didn't care to show us.

But until clarified both could be true.
https://youtu.be/SRY5ZWNmwpo?t=561


There's your confirmation: 142W -> 170W.

Regards.
 

escksu

AMD dictating how many of the CPU's PCIe lanes are allowed to do 5.0 depending on which chipset it is paired with is kind of sad. I don't give a damn about premium overclocking or even just overclocking at all in the first place but still would like to have 5.0 possible on all CPU-powered lanes without paying for a premium chipset I have no use for.

AMD is getting as greedy as Intel, just using different features for its version of artificial market segmentation.

It's not a deal breaker. This is because we have yet to even fully utilise PCIe 4.0...
 
AMD dictating how many of the CPU's PCIe lanes are allowed to do 5.0 depending on which chipset it is paired with is kind of sad. I don't give a damn about premium overclocking or even just overclocking at all in the first place but still would like to have 5.0 possible on all CPU-powered lanes without paying for a premium chipset I have no use for.

AMD is getting as greedy as Intel, just using different features for its version of artificial market segmentation.
All CPU PCIe lanes are PCIe5. I doubt motherboard vendors will link them and/or use them as PCIe4; that would make no sense. The chipset links are PCIe4 for uplink and downlink, but I'd imagine they'll still support PCIe5 from the CPU, just not all of them. I'd imagine that's what you were talking about?

Regards.
 

InvalidError

It's not a deal breaker. This is because we have yet to even fully utilise PCIe 4.0...
It is a deal breaker when companies like AMD decide to launch things like the RX6500 that suffer massive performance bottlenecking from having excessively limited bus width which is almost certainly going to get worse as entry-level GPUs get faster.

If you are in the "I wish CPU manufacturers kept the same sockets for 100 years" club, then you should also want to have the most future-proof IOs available from components installed on the board so you don't end up having to upgrade the board just because the IO suddenly got old.

Most people I know including myself keep their PCs for 10+ years when we include backup/secondary use life. Pretty sure most of the IO bandwidth they may not need today will be very handy at some point down the line. Most people couldn't imagine needing SATA3 until SSDs came along, couldn't be bothered with USB3 until speedy USB3 thumb-drives and SATA3 enclosures for SSDs became affordable, and PCIe 3.0 in x16 flavor is a significant contributor to how the 4GB RX580 and GTX1650 Super can hold their ground against newer GPUs with truncated PCIe interfaces. I didn't need any of the new IO available on my i5-3470/h77 at the time I put it together but needed all of it by the 5th year out of 9 years as my primary PC. Having reasonably future-proof IO for things I was in no hurry to get into contributed to doubling its useful life.
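To put some numbers on why link width and PCIe generation matter so much in that example, here is a small bandwidth sketch using the approximate per-lane throughput of each generation (~0.985 GB/s for PCIe 3.0, doubling each generation); the card pairings in the comments are just the ones mentioned above:

```python
# Approximate usable bandwidth per PCIe lane (GB/s); each generation doubles it.
PER_LANE_GBPS = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Theoretical link bandwidth in GB/s for a given generation and lane count."""
    return PER_LANE_GBPS[gen] * lanes

print(f"PCIe 3.0 x16: {link_bandwidth('3.0', 16):.1f} GB/s")  # e.g. RX 580 / GTX 1650 Super
print(f"PCIe 3.0 x4 : {link_bandwidth('3.0', 4):.1f} GB/s")   # an x4 card in an older board
print(f"PCIe 4.0 x4 : {link_bandwidth('4.0', 4):.1f} GB/s")   # the same x4 card on PCIe 4.0
```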
 
It is a deal breaker when companies like AMD decide to launch things like the RX6500 that suffer massive performance bottlenecking from having excessively limited bus width which is almost certainly going to get worse as entry-level GPUs get faster.

If you are in the "I wish CPU manufacturers kept the same sockets for 100 years" club, then you should also want to have the most future-proof IOs available from components installed on the board so you don't end up having to upgrade the board just because the IO suddenly got old.

Most people I know including myself keep their PCs for 10+ years when we include backup/secondary use life. Pretty sure most of the IO bandwidth they may not need today will be very handy at some point down the line. Most people couldn't imagine needing SATA3 until SSDs came along, couldn't be bothered with USB3 until speedy USB3 thumb-drives and SATA3 enclosures for SSDs became affordable, and PCIe 3.0 in x16 flavor is a significant contributor to how the 4GB RX580 and GTX1650 Super can hold their ground against newer GPUs with truncated PCIe interfaces. I didn't need any of the new IO available on my i5-3470/h77 at the time I put it together but needed all of it by the 5th year out of 9 years as my primary PC. Having reasonably future-proof IO for things I was in no hurry to get into contributed to doubling its useful life.
Sorry to say, but most OEM life-cycles with big Corp are around 2 years for laptops/PCs and 5 years for Servers. Also, keep in mind PCIe3 was an anomaly as it was the longest lived spec in a long while and PCIe4 was the shortest lived (maybe next to PCIe1.1 and the other x.1s). Also, funny you mention 10 years of platform use. A computer from 2012 did not have NVMe. This is to say, who knows what we'll be using for connectivity in 10 years from now? I get your point, but I also think you need to put what you're saying in a reasonable perspective. If you maxed out your build and just upgrade later, then you can't just expect the new stuff to work at 100% in the old system. And as for the GPU, well... If the PC is for games, then no one will get a 6500XT if they can avoid it; or to put it differently in your context: if you buy a top of the line CPU+motherboard and you end up getting a bottom of the barrel GPU, then chances you actually need the GPU grunt are very low. GPUs are things that can be carried over to new builds as well, as long as there is a PCIe x16, so you will always upgrade to what you need and not to the bottom of the barrel card of the moment, right? I just can't think of a reasonable scenario that fits your reasoning. And just to be clear: this is not me trying to defend the 6500XT's shortcomings.

All in all, keeping the same socket is less of a shortcoming for the platform as long as you plan for it. AMD used around 940 pins for 5 generations? Socket 939 all the way to AM3+ were all around 940 pins. They supported DDR1, 2, and 3. Intel had about 1155 pins for a lot of generations. You think 4 more pins actually made a huge difference? The point here is that both AMD and Intel just move on when they don't want to go through the hassle of backwards support or the socket just doesn't do what they need anymore. And one silver lining: AMD did keep the new socket's HSF compatibility, so point there for AMD. They could have just gone the Intel route and said "oops, they don't fit anymore". Intel, with all their R&D budget, you'd imagine they could just make the socket compatible with older mounting systems, but I guess they physically needed more space. They were preparing for 10nm+++++++++ for sure, heh.

Regards.
 

KyaraM

Sorry to say, but most OEM life-cycles with big Corp are around 2 years for laptops/PCs and 5 years for Servers. Also, keep in mind PCIe3 was an anomaly as it was the longest lived spec in a long while and PCIe4 was the shortest lived (maybe next to PCIe1.1 and the other x.1s). Also, funny you mention 10 years of platform use. A computer from 2012 did not have NVMe. This is to say, who knows what we'll be using for connectivity in 10 years from now? I get your point, but I also think you need to put what you're saying in a reasonable perspective. If you maxed out your build and just upgrade later, then you can't just expect the new stuff to work at 100% in the old system. And as for the GPU, well... If the PC is for games, then no one will get a 6500XT if they can avoid it; or to put it differently in your context: if you buy a top of the line CPU+motherboard and you end up getting a bottom of the barrel GPU, then chances you actually need the GPU grunt are very low. GPUs are things that can be carried over to new builds as well, as long as there is a PCIe x16, so you will always upgrade to what you need and not to the bottom of the barrel card of the moment, right? I just can't think of a reasonable scenario that fits your reasoning. And just to be clear: this is not me trying to defend the 6500XT's shortcomings.

All in all, keeping the same socket is less of a shortcoming for the platform as long as you plan for it. AMD used around 940 pins for 5 generations? Socket 939 all the way to AM3+ were all around 940 pins. They supported DDR1, 2, and 3. Intel had about 1155 pins for a lot of generations. You think 4 more pins actually made a huge difference? The point here is that both AMD and Intel just move on when they don't want to go through the hassle of backwards support or the socket just doesn't do what they need anymore. And one silver lining: AMD did keep the new socket's HSF compatibility, so point there for AMD. They could have just gone the Intel route and said "oops, they don't fit anymore". Intel, with all their R&D budget, you'd imagine they could just make the socket compatible with older mounting systems, but I guess they physically needed more space. They were preparing for 10nm+++++++++ for sure, heh.

Regards.
My boyfriend actually dealt with people asking if the 6500XT would fit into their old prebuilt as an upgrade quite frequently for a while after it came out. Not everyone is as PC literate as people are on this website, and to those people, who usually have no clue what any of that means, and that the card is severely hamstrung in their old PCIe 3.0 system, it looks like a cheap upgrade. They don't look at tests and benchmarks, and they don't read up on what the card can and cannot do. They just see "card is 5 years younger than the mid-range card I have now, and fits right in with my old PSU" (if they even think that far...), and then think "it must be better than what I have now!". So they go to the helpdesk of an online retailer, if they are smart enough, just to hopefully be told that this is a bad idea. If they aren't smart enough, they ask after they got disappointed.

So, bottom line: while I agree that people here on this website are likely to upgrade to a better card than a 6500XT, that by no means applies to the general public.

I mean, the other day I tried building someone a more or less affordable gaming PC, with instructions on how to order it prebuilt for them, just to be told they don't understand anything I wrote them. And literally all I did was give them a parts list they could choose from a freaking filtered list, with full names and everything, and an instruction on what box to check to make it prebuilt. They will now most likely get some crappy pre-configured prebuilt you cannot upgrade due to the OEM parts and that isn't worth the money the retailer asks for. But that's on them... still, there are many people like that out there is my point.
 
My boyfriend actually dealt with people asking if the 6500XT would fit into their old prebuilt as an upgrade quite frequently for a while after it came out. Not everyone is as PC literate as people are on this website, and to those people, who usually have no clue what any of that means, and that the card is severely hamstrung in their old PCIe 3.0 system, it looks like a cheap upgrade. They don't look at tests and benchmarks, and they don't read up on what the card can and cannot do. They just see "card is 5 years younger than the mid-range card I have now, and fits right in with my old PSU" (if they even think that far...), and then think "it must be better than what I have now!". So they go to the helpdesk of an online retailer, if they are smart enough, just to hopefully be told that this is a bad idea. If they aren't smart enough, they ask after they got disappointed.

So, bottom line: while I agree that people here on this website are likely to upgrade to a better card than a 6500XT, that by no means applies to the general public.

I mean, the other day I tried building someone a more or less affordable gaming PC, with instructions on how to order it prebuilt for them, just to be told they don't understand anything I wrote them. And literally all I did was give them a parts list they could choose from a freaking filtered list, with full names and everything, and an instruction on what box to check to make it prebuilt. They will now most likely get some crappy pre-configured prebuilt you cannot upgrade due to the OEM parts and that isn't worth the money the retailer asks for. But that's on them... still, there are many people like that out there is my point.
I don't disagree with that premise, but that's a different issue you're talking about: "not knowing". If you know, then you won't get a "bottom of the barrel" GPU if you want to game with a certain level of quality; and if you don't care about quality, then... while it may sound a bit heartless, it's a non-issue for the buyer? Also, if you're assembling your own PC, then you at the very least need to ask someone else when you don't know. If you don't, then that's on you and you alone. We can all agree the 6500XT is a bad product because the specs are there for anyone to read, so the issue you're talking about has a lot of potential avenues of solution. If you're in the unfortunate position in which the 6500XT is your only option, then nothing can be done unless you buy a new PC or upgrade other things like the PSU and/or case, etc.

The reason why the 6500XT was used in that context was the PCIe x4 restriction. If you buy a PC now and 10 years later the bottom of the barrel GPU blows your 10 year old GPU out of the water, but it's just x4 and PCIe10 (humour me), then your choices, as stated above, are quite simple: upgrade everything, deal with the x4 restriction, or just buy the best you can which is x16 and carry it over when you can upgrade the rest. In terms of what the future holds, no one can be 100% certain. Who knows, maybe PCIe5 will be used for another ~7 years like PCIe3 even when PCIe6 spec is out there because it's not economically viable for the consumer market. We could even move to a completely different standard and it would matter even less. Maybe you're old enough to remember ISA -> PCI -> AGP -> PCIe.

Regards.
 

KyaraM

I don't disagree with that premise, but that's a different issue you're talking about: "not knowing". If you know, then you won't get a "bottom of the barrel" GPU if you want to game with a certain level of quality; and if you don't care about quality, then... while it may sound a bit heartless, it's a non-issue for the buyer? Also, if you're assembling your own PC, then you at the very least need to ask someone else when you don't know. If you don't, then that's on you and you alone. We can all agree the 6500XT is a bad product because the specs are there for anyone to read, so the issue you're talking about has a lot of potential avenues of solution. If you're in the unfortunate position in which the 6500XT is your only option, then nothing can be done unless you buy a new PC or upgrade other things like the PSU and/or case, etc.

The reason why the 6500XT was used in that context was the PCIe x4 restriction. If you buy a PC now and 10 years later the bottom of the barrel GPU blows your 10 year old GPU out of the water, but it's just x4 and PCIe10 (humour me), then your choices, as stated above, are quite simple: upgrade everything, deal with the x4 restriction, or just buy the best you can which is x16 and carry it over when you can upgrade the rest. In terms of what the future holds, no one can be 100% certain. Who knows, maybe PCIe5 will be used for another ~7 years like PCIe3 even when PCIe6 spec is out there because it's not economically viable for the consumer market. We could even move to a completely different standard and it would matter even less. Maybe you're old enough to remember ISA -> PCI -> AGP -> PCIe.

Regards.
I'm explicitly not talking about 10-year-old systems, though, but GTX 900 or 1000 era ones. People are already looking for replacements for those.

And I mean, those people were looking at that card because they thought it'd be a good, cheap, frugal upgrade for their systems to keep them running a bit longer, and with PCIe 4.0 that might even be somewhat the case for many if that bs 3.0 restriction didn't exist. Also, the next step of the story then is that he advises them on how to proceed, just for them to come back with another plan that's barely better, because they then read up on things but did it wrong, or listened more to someone with a fraction of the experience of the one who gave the first advice and who has 20 years of experience in the business. Heck, I know people who know enough to build a PC, but then believe an SSD will die on them within 2 years if they install anything on it that is not the OS, or that it would significantly slow down the OS to do so, but then use their SSD as cache for their HDD to improve access times, which they never figured could do the very thing they were afraid of, namely, increase write activity on the SSD. And no matter what you do, even if you showed them proof that they are talking nonsense, the next time it comes up you are back to square one.

You can neither expect anyone to know everything, nor can you expect them to actually listen to your advice, unfortunately.
 
I'm explicitly not talking about 10-year-old systems, though, but GTX 900 or 1000 era ones. People are already looking for replacements for those.

And I mean, those people were looking at that card because they thought it'd be a good, cheap, frugal upgrade for their systems to keep them running a bit longer, and with PCIe 4.0 that might even be somewhat the case for many if that bs 3.0 restriction didn't exist. Also, the next step of the story then is that he advises them on how to proceed, just for them to come back with another plan that's barely better, because they then read up on things but did it wrong, or listened more to someone with a fraction of the experience of the one who gave the first advice and who has 20 years of experience in the business. Heck, I know people who know enough to build a PC, but then believe an SSD will die on them within 2 years if they install anything on it that is not the OS, or that it would significantly slow down the OS to do so, but then use their SSD as cache for their HDD to improve access times, which they never figured could do the very thing they were afraid of, namely, increase write activity on the SSD. And no matter what you do, even if you showed them proof that they are talking nonsense, the next time it comes up you are back to square one.

You can neither expect anyone to know everything, nor can you expect them to actually listen to your advice, unfortunately.
This is not me being or playing "devil's advocate", but there is a point at which you just can't fight "stupid". AMD, Intel and nVidia can only do so much, and they won't ever say publicly "this is a bad product, don't buy it". That's, well, a stupid expectation, and it won't happen in any way. You can only read the fine print and do your best to help people understand what it means. After that, there's nothing anyone can do.

We can debate for hours to no end what each Company has done and how egregious it has been for the world in terms of misleading or dubious marketing, but the TL;DR will always be the same: if someone is convinced they know better then there's little to nothing you can do and whatever the outcome, it'll be on them and their "advisors".

Again, the point of bringing the 6500XT into the discussion wasn't whether it was a good or bad video card, but the I/O support and its longevity. The arbitrary 10-year number comes from the original post I quoted.

Regards.