News AMD Big Navi and RDNA 2 GPUs: Release Date, Specs, Everything We Know

Unless console prices go way above what everyone thinks they will ($500/$550 for the PS5 and $600/$650 for the Series X), they'll essentially be equivalent to a mid/high-tier PC, with GPUs comparable to $300-500 current-day cards and more powerful than most PCs at this point.

It should be interesting to see how this plays out in terms of pricing, since if that's the case, the consoles will be a lot more price efficient than PCs, rather than just a bit more.
So realistically, if we're talking actual manufacturing costs, I'd break down the Xbox Series X as follows:

8-core Zen CPU: $100
52 CU Navi 2x: $175
16GB GDDR6: $100
1TB SSD: $75
Case, power, etc: $100
Total estimate: $550
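Summing that hypothetical bill of materials (all figures are the guesses above, not confirmed manufacturing costs):

```python
# Hypothetical Xbox Series X bill-of-materials estimate; every figure is a
# guess from the post above, not a confirmed cost.
bom = {
    "8-core Zen CPU": 100,
    "52 CU Navi 2x": 175,
    "16GB GDDR6": 100,
    "1TB SSD": 75,
    "Case, power, etc.": 100,
}
total = sum(bom.values())
print(total)  # 550
```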

Performance of the GPU should be at least at the level of RTX 2080 Super, probably even 2080 Ti. So yeah, next-gen consoles are going to be extremely impressive in terms of specs for a while. The big question is what they'll cost. $600 is viable, but I could see MS and maybe Sony pushing as high as $800 at launch. Or they'll sell at a loss and make up the difference on games and subscription services. Guess we'll find out in about six or seven months.
 
That would have been lovely, except part of getting ready for Navi was probably the work on Vega 20. Certainly Navi 2x was nowhere near ready to go last year. AMD likely chose to limit die size (Navi 1x) to get a better handle on the new architecture and 7nm. Maybe Vega 20 helped them realize making larger chips at the time was going to be difficult. But really, Polaris 30 and Vega 20 were both stopgap solutions just to pass time while finishing up Navi 1x.

I think it's more that AMD didn't have the resources to devote to GPU development. If Zen 3 is a success, the income streams will keep flowing and AMD will start putting the monetary resources to good use. But finding good driver and hardware teams is difficult. The kind of people who can do this work are rare, so Brooks's law applies when bringing in new resources. But I'm getting off track.

The 50% might be a 22% power reduction from 7nm+ plus a 22% architectural efficiency boost. Multiply 1.22 × 1.22 and you get roughly 1.5. Node improvements AND architecture improvements in a single release have been rare since Sandy Bridge.
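A quick sketch of that compounding, treating the two gains as independent multipliers (the 22% figures are just a guess that happens to fit the headline number):

```python
# Two independent perf-per-watt improvements compound multiplicatively.
node_gain = 1.22  # hypothetical gain from the 7nm+ node
arch_gain = 1.22  # hypothetical gain from RDNA 2 architecture tweaks
combined = node_gain * arch_gain
print(round(combined, 2))  # 1.49, i.e. roughly the claimed +50% perf/watt
```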

Early leaks suggested 20% faster than the 2080 Ti, but that was in January, which would have been first-spin silicon.
 
Navi 1x / RDNA 1 was supposed to be a 50% improvement in perf per watt, which ended up being about a 24% drop in power use and a 25% improvement in performance. (That's RX 5700 XT vs. Vega 64.) So AMD definitely delivered on its promise of a 50% improvement last time. This time, AMD is saying RDNA 2 / Navi 2x will be an additional 50% improvement in perf per watt over Navi 1x. How will it get there?
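Plugging in those RX 5700 XT vs. Vega 64 figures (roughly 25% more performance at roughly 24% less power) shows the delivered perf-per-watt gain cleared the 50% bar:

```python
# Perf/watt gain derived from relative performance and relative power;
# the ~25% and ~24% figures are the estimates quoted above.
rel_perf = 1.25         # ~25% faster than Vega 64
rel_power = 1.0 - 0.24  # ~24% lower power draw
perf_per_watt_gain = rel_perf / rel_power
print(round(perf_per_watt_gain, 2))  # 1.64, comfortably above the promised 1.5x
```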

Lithography is obviously out -- it's still 7nm FinFET. Presumably it's all coming from architectural updates, then. The RX 5700 XT pushed power use higher (relative to 5700), so it's possible to run at a lower clock for the "sweet spot" in efficiency. Could Navi 2x have a 200W part that has 64 CUs or 80 CUs, running at more modest clocks? Sure. We'll have to wait and see.

As for the lack of resources, that's always the problem. When I say Navi 1x wasn't ready, it's a lack of resources -- manpower, funding, or both -- plus probably the newness of 7nm and other factors as well.
 
My terminology might be off, but why is lithography out? It's clear that Nvidia waits for a node to mature, and even then they squeeze every optimization out of it, with excellent results. Hasn't AMD mentioned something like that somewhere? I don't follow it that closely. Can't they optimize different parts of the 7nm process for efficiency instead of performance, or vice versa, and achieve quite a lot simply through the fact that their GPUs would overheat at 2.2GHz rather than at 1.65GHz, for example?
 
Lithography is out because they aren't going from 14nm to 7nm like GCN > RDNA did. Navi 2x is staying on 7nm, so it won't get any boost from a smaller process to become more power efficient. All the savings will have to come from the architecture design.
 
Exactly. Though of course a process does mature some over time, so lithography improvements aren't totally out of the question. But it's not going to be 10-20% improvement from lithography this round for RDNA 2. Nvidia Ampere gets the full lithography bump of course, which is why many are expecting Ampere GPUs to be incredibly fast.
 
I think lithography efficiency gains will be found in TSMC's N7FF+. Its use of EUV patterning on four layers should allow for higher yields on larger dies.

On a side note, the SEUS PTGI ray-tracing shaders have found a way to avoid geometry shaders entirely, which is one of the things that made RT shading so incredibly slow on AMD hardware. He precalculates the volumes for the shaders ahead of time. This is ingenious, but it really cranks up memory usage if he's doing what I think he's doing by creating a volumetric shading file for each voxel.
 
Yes, the N7FF+ EUV node should show gains over the existing N7 node, but from what we can tell, AMD stopped talking about N7+ for both Zen 3 and Navi 2x, and both are now expected on the performance-optimized N7P node. I'm not sure AMD has officially said that, however, but that's the current expectation.

I'm still not certain exactly what the difference is between N7+ and N6, though. Both are EUV, and both seem to have similar characteristics. N7+ was the "trial run" for EUV, I think, and N6 is more for volume or something. It feels like N6 is really just a new name for N7+ because it's more marketable (i.e., similar to the 16FF vs. 12FF naming change), except N7+ is still sort of a thing, so no one can say for certain that N6 is just a rebadge.
 
That's because AMD dragged their feet with this. They should have launched Navi much sooner.

and never released the RX 590 and Radeon VII GPUs.

RX 5700XT and RX 5700 were cards meant to combat the RTX 2060 and RTX 2070.

AMD still hasn't offered anything to compete with the RTX 2080, now the RTX 2080 Super, and the RTX 2080 Ti.
Well, that's the issue with being a small company - they aren't large enough to support two main product lines; they need to be either a CPU company or a GPU company. Both of those areas have 900-lb-gorilla competitors, and for Nvidia, GPUs are really their only product - not much R&D went into the Tegra systems, so 90% of their R&D budget is GPU focused.

With Intel coming into the GPU market (and don't get it twisted, Intel learned from the i740 era and is not making the same mistakes - it will be a major player, and the only real competition Nvidia will have in the compute GPU market) and with its 10nm issues well behind it, this is Netburst moving into Core all over again - and we know how that worked out for AMD - oh yeah, FINALLY, two years ago, they delivered something remotely competitive...
I don't find Su that impressive - basic-level stuff - their previous CEO was a dumpster fire - but they had nothing to begin with after selling off what became GF, plus IP to China, and who knows what else, simply to keep the lights on. Two people were largely responsible for what AMD is today: Jim Keller and, to a lesser degree, Raja. Keller's designs are all in the market - they have nothing big left. Each generation of Ryzen is just a small iterative change. The move to multi-chip modules ("chiplets" is an AMD marketing term for something that has existed since the early 80s) was more a point of necessity - Ms. Su is not great with contracts, and with a simple contingency clause in the GF wafer agreements she would not have had to make lemonade out of the lemons she had. They were paying for the GF silicon whether it was used or not - that is where the I/O die came from.

What AMD delivers is amazing marketing materials and superfluous cores that 1-2% of the user base will use, plus another 1-2% who do nothing productive and just run benchmarks. Kinda like having a 3000HP car when you only drive in NYC... looks cool, bragging rights, but it doesn't translate into sales.

If anyone thinks Intel is suffering - they are not. They would rather have a strong AMD - it tends to stave off the "but but but monopoly" charges... Intel knows the datacenter market, they know their customers, and they are delivering an ecosystem that includes CPUs, GPUs, FPGAs, Optane SSDs, and Optane DIMMs (that one should scare the living hell out of AMD), not to mention a leading role in copper and optical networking and technology. AMD sells a CPU and a GPU... how impressive.

The Radeon VII was a panic move - they had just sent Navi back to the drawing board and needed something to fill that keynote address... hence the rebadged, limited-run VII - a repurposed lower-end Instinct card (sold in limited numbers because they were losing money on every single unit).

No, they don't have anything to compete with the 2080s - their competition is Pascal, a previous-gen design, just like Navi - no next-gen features (funny how everyone thought RT was a joke, and now you all think AMD will put out a card that can do what the RTX 2080 Ti can't...). And now it's on everyone's mind - the APUs going into the next-gen consoles. You all missed the slide that said full-scene real-time RT will be done IN THE CLOUD... so why do you think it will be any better on the mythical "Big Navi"?

So AMD needs to make up its mind whether it is a GPU or a CPU company - it does not and will not have the resources to do both. Ms. Su expected Ryzen 1 to do OK - and I think she achieved that - but expected Ryzen 2 (the Ryzen 1 refresh, the 2000 series) to do much, much better, as she did with Ryzen 3 (the 3000 series) - and neither has been the resounding success AMD needs to fund a respectable R&D department... So now the next great hope, the nail in the coffins of Intel and Nvidia - the 4000-series Ryzen and "Big Navi" - are likely to follow the same trajectory: fanfare from the marketing-consuming masses and little to no uptake in the market. That is the story of Ms. Su's AMD - great press, not so great sales.

TSMC, after getting so much wrong, finally got something right and is just as likely to blow it again - as is their track record - but that is their success, not AMDs. Intel after getting it almost all right, skipped a beat and stumbled a bit (and AMD still was unable to capitalize on that once in a generation opportunity), and looks to be well back on track. If you think they are not, you are clueless.

FWIW - I built a 1700X when they came out, built a 2700X when they came out, and currently have a 3950X base system... the 1700X was given away; I still have the 2700X (about to be donated to my brother-in-law). I have a 5700 XT in the 2700X and the VII in the 3950X. Good systems - with support from higher-end motherboard vendors (unlike the Athlon XP and the Biostar motherboards) they are competitive - but not nearly enough to dislodge the i9-9900K / dual 2080 Ti in our gaming systems. Fact is, you should build what you want / can afford and be happy with it - at any rate you are talking about a scant few percentage points of difference. But don't fall into the "evil corporation" BS - all corporations exist for one single solitary purpose: revenue/profits for the shareholders. AMD is not evil, Intel is not evil. Facebook IS evil. And Nvidia is not evil.

Enjoy the fleeting time in the sun; it will be coming to an end soon.
 
First, on EUV: TSMC is talking about using it on a few of the 15+ layers needed to fab a modern CPU/GPU. Intel's 7nm will be full EUV for all layers - not just a couple... And where do Samsung and TSMC stand on cobalt integration? Not very far at all - while Intel has been perfecting it.

TSMC uses different numbers, while the armchair lithography pundits assign a + to Intel's processes... By TSMC's method it would be 14, 13, 12, 11 instead of 14+++. This is one of the reasons 10nm will be short-lived (and it has nothing to do with YIELD) - it will be a short node - and regardless, it will be a miracle if any node is ever as productive as 14nm; even Intel's 22nm is nowhere near the productivity of 14nm. Neither Intel nor TSMC nor GF has ever developed a node as much as Intel developed 14nm - like it or not, that "ancient" node still outsells all the magical TSMC nodes...

Process is irrelevant - I don't care that Toyota uses a JIT system rather than a large warehouse; I don't care whether the side rails of a Jag are hydroformed or use superplasticity... Doesn't matter. And the market is saying loud and clear that it doesn't care about the lithography used to make its computers... Buyers have one question: will it do what I want it to do? That's it - not complex. We in this forum are the exceptions, because this is what we love. We like getting into the details; we like seeing what performs better. But in the end our purchasing decisions are only a little more complicated: "Will it do what I want it to do?" along with "What components do I want in this new PC?" Regardless of whether you choose AMD/AMD, AMD/Nvidia, Intel/Nvidia, or Intel/AMD, the first criterion is a given - yes, it will do what you want it to do. Buy what you want and can afford, and enjoy it.
 
Not higher yields - just easier than a pure traditional multi-patterning play.

Not sure if any of you remember the Sega Dreamcast - one of the things its PowerVR GPU did was not render things that aren't visible. Sounds simple - but the Dreamcast maintained a rock-solid 60 fps at 480p without fail - and that was with what was essentially a high-end laptop GPU - in 1998.

The thing about EUV - regardless of how many layers you use EUV on, unless it's the entire process (15+ layers), you are still at the mercy of the errors from multi-patterning with DUV... So maybe it's easier to fabricate, but yields would remain about the same (yield is the % of usable dies out of the total number of dies per wafer - say you have 100 dies per wafer and 20 of them turn out to be duds: you have an 80% yield)... Easier to manufacture plays for the producer, not the consumer.
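That yield definition also shows why die size matters so much. One standard first-order approximation is the Poisson defect model, yield ≈ exp(−D·A); the defect density below is purely illustrative, not TSMC's actual figure:

```python
import math

def die_yield(defect_density_per_cm2: float, die_area_mm2: float) -> float:
    """Poisson yield model: fraction of dies with zero defects."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defect_density_per_cm2 * area_cm2)

# Same hypothetical process (0.2 defects/cm^2), two die sizes:
print(round(die_yield(0.2, 150), 2))  # 0.74 for a 150 mm^2 midrange die
print(round(die_yield(0.2, 500), 2))  # 0.37 for a 500 mm^2 "big" die
```

Bigger dies collect more defects per die, so yield falls off exponentially with area - which is why anything that reduces patterning errors matters most for large chips.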

Ray tracing is ultra GPU-compute intensive - there are no shortcuts. It has been slow on both AMD and Nvidia, and the potential is not even being scratched. The goal of ray tracing is to be able to play a Pixar-movie-level game in real time - all of those movies use ray tracing, and some frames of the original Toy Story took 8 hours of CPU time minimum (per frame, at 24 or 30 fps). Neither AMD nor Nvidia has a lock on ray tracing - and there are no shortcuts to full-scene ray tracing - it is raw GPU power.
 

I think we found an Intel employee.
 
Yeah, I am choosing not to engage. His posts are full of half-truths and opinions, sprinkled with a few facts. Lots of red herrings and invitations to argue. Case in point:

Fact: TSMC 7nm isn't the same as whatever Intel will do for 7nm.
Fact: Intel has named its recent refinements of 14nm with multiple plus signs.
Opinion: Intel has been perfecting [the use of cobalt at 7nm].
False: Intel 14nm+++ is as good as TSMC 7nm.
 
There is so much false information in there that I swear I was listening to Donald Trump speak. I would love to know what orifice you used to come up with "Each generation of Ryzen is just a small iterative change" and "the APUs going into the next gen consoles - you all missed the slide that said the full screen real time RT will be done IN THE CLOUD." I personally have looked through all the slide decks from AMD in the last two months and have not seen anything about in-the-cloud ray tracing.

Zen 2 is far from a small iterative change. Yes, Zen to Zen+ was small, but it still increased IPC by about 3%. What has Intel changed about its microarchitecture since Skylake? Absolutely nothing. Take a 9000-series part with the same core count and clock speed as a 6000-series part and you will find they perform almost identically. Just a refresher: Skylake was released in August 2015! Going back even further, to the beginning of the Core architecture, there was a big jump from Conroe to Nehalem and then from Nehalem to Sandy Bridge. Since Sandy Bridge, it has been all iterative changes with 3-5% IPC gains each generation. Yes, over enough generations those 3-5% gains make a large difference in performance; however, it took from Sandy Bridge to Skylake to get the same jump as we got from Zen to Zen 2.
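The compounding arithmetic behind that comparison is easy to check (the per-generation percentage is illustrative, not a measured value):

```python
# Compounding small per-generation IPC gains.
gain_per_gen = 1.04  # ~4% per generation, illustrative
generations = 4      # e.g. Sandy Bridge -> Ivy -> Haswell -> Broadwell -> Skylake
compounded = gain_per_gen ** generations
print(round(compounded, 2))  # 1.17, i.e. about +17% IPC overall
```

Four generations of ~4% land in the same ballpark as a single ~15-18% generational jump, which is the point being made about Zen to Zen 2.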

Your whole ordeal about your i9900k with 2080Ti SLI being better at gaming than a 3950X with a 5700XT is the most NO $H!T SHERLOCK thing that could be said. We understand that you are nothing more than an nVidia & Intel troll so please go back under your bridge and don't come out.
 

https://i.redd.it/yn88m58epym31.jpg

And each generation of Ryzen was just small iterative changes - the whole "chiplet" thing is meaningless - the main architecture, not the packaging, has undergone small iterative changes. That was Keller's style: radical redesign -> iterate -> iterate -> iterate.

Yeah, pretty sure everyone here knows all this - I bought almost every single processor you named. A history lesson is not needed to discuss IPC gains.

Ice Lake with Sunny Cove cores increases IPC by over 30% - and Willow Cove increases IPC over Sunny Cove. I haven't seen much on Golden Cove, which is the third part of the overhauled architecture. Ice Lake/Sunny Cove and Tiger Lake/Willow Cove are the new architecture - not based on Skylake. Massive IPC uplifts - which allowed Ice Lake to perform on par with much-higher-frequency previous designs. I have a Dell 13 2-in-1 - I've had the Ice Lake version since October, and it replaced a similarly configured Dell 13 2-in-1 that is two years old - the difference is night and day.

Not sure what ordeal you are talking about - Bought them, assembled them, no ordeal, no expectations.
I never expected the systems with either of the AMD GPUs to compete with a dual-2080 Ti system - I don't read or care about reviews; I buy them myself and try them the way I would use them. I have quite a few complete systems lying around here - I should donate them or give them away.

So the remark about Trump was truly below the belt - I am sorry that you don't agree with facts, and think that the only facts are your own - but nothing I said was incorrect.

I enjoyed our exchange; your response was lackluster and really didn't make any of your points, but I am glad we had this exchange. I hope you and yours are well and not being affected financially or health-wise by COVID-19, and that is sincere.
 
FWIY - I built a 1700x when they came out, built a 2700x when they came out, and currently have a 3950 base system... 1700 was given away, still have the 2700x (about to be donated to my bro in law). I have a 5700XT in the 2700X and VII inthe 3950. Good Systems - with support from higher end motherboard vendors (unlike the Athlon XP and the Biostar motherboards) they are competitive - but not nearly enough to dislodge the i9900K / dual 2080Ti in our gaming systems.
That is the ordeal I was getting at.

Ice Lake with Sunny Cove cores increases IPC by over 30%
Wrong. Intel claimed an 18% IPC improvement over Skylake. This IPC improvement does help the lower-clocked Ice Lake parts keep up with higher-clocked Coffee Lake Refresh, but not enough to dislodge Coffee Lake Refresh from the top of the performance stack.

and the Willow Cove increases IPC over Sunny Cove. Haven't seem much on Golden Cove - which is the 3rd part of the overhauled architecture. Ice Lake and Sunny Cove and Tiger Lake and Willow Cove are the new architecture - not based on Skylake. Massive IPC uplifts - which allowed the Ice Lake perform on par with much higher frequency previous designs.
That is nothing but speculation on the next core's IPC. There are also rumors that, due to the 10nm issues, Intel will back-port Willow Cove to 14nm. Until we are given any information from Intel - even marketing would be good - we have no idea what will happen with Willow Cove. We don't even know when it is going to be released.

So the remark about Trump was truly below the belt - I am sorry that you don't agree with facts, and think that the only facts are your own - but nothing I said was incorrect.
Don't spread FUD and people won't call you out. Granted it was in a negative way that I called you out, but the results are still the same. Just because you believe that you are right, doesn't mean you are. Your "facts" were equivalent to "alternative facts" and had no actual backing of any hard evidence.
 
I guess reading slides is too hard - on the new consoles the full-scene ray tracing will be done in the cloud, not on the hardware... Would imagine "Big Navi" will do much of the same.
[...]
https://i.redd.it/yn88m58epym31.jpg
What is being discussed in this thread is RDNA2 and associated products, namely XSX/PS5 and "Big Navi". These correspond to the 2nd step in the progression of RT shown in that slide, i.e. "Hardware: Select Lighting Effects for Real Time Gaming (Next Gen RDNA)". Cloud-based full-scene RT is shown as the later, 3rd step, so presumably not for the upcoming consoles.
 
Those are some pretty impressive theoretical compute numbers. Excited to see how that translates to real-world performance versus some pointless synthetic benchmark.
AMD has long held a GFLOPS advantage vs. Nvidia, in spite of lagging gaming performance. Navi's new RDNA architecture did a lot to close the gap, but not completely.
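For reference, the theoretical compute figures being compared here come straight from the shader count and clock speed. A minimal sketch of that arithmetic, assuming the standard 64 shaders per CU for GCN/RDNA and an illustrative game clock (the specific clock values are assumptions for the example, not confirmed specs):

```python
def theoretical_tflops(compute_units, clock_ghz, shaders_per_cu=64, ops_per_clock=2):
    """Peak FP32 TFLOPS: shaders x clock x 2 ops/clock (fused multiply-add)."""
    return compute_units * shaders_per_cu * ops_per_clock * clock_ghz / 1000.0

# Example: a 52 CU RDNA 2 part at an assumed 1.825 GHz clock
print(round(theoretical_tflops(52, 1.825), 1))  # → 12.1

# Example: a 40 CU Navi 10 part at an assumed 1.905 GHz boost clock
print(round(theoretical_tflops(40, 1.905), 1))  # → 9.8
```

The point of the posts above is that this peak number says nothing about how efficiently the architecture actually keeps those shaders fed, which is where RDNA closed ground on Nvidia despite similar raw TFLOPS.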
 
You should be able to get relatively close to the theoretical compute performance. Or at least, both GCN and Navi should end up at a relatively similar percentage of their theoretical compute.
One big improvement in RDNA vs. GCN is the rate of retiring scalar instructions. That doesn't show up in the gross number of TFLOPS, but it makes a difference in real-world performance.
 
and never released RX 590 and Radeon VII GPUs.

RX 5700XT and RX 5700 were cards meant to combat the RTX 2060 and RTX 2070.

AMD still hasn't offered anything to compete with the RTX 2080, now RTX 2080 Super, and RTX 2080 Ti.
Agreed on the RX 590, but I think AMD was just trying to cash in on cryptomining.

With regard to the Radeon VII, in fact it does basically match the RTX 2080! And it was virtually free to release, because it's a slightly crippled datacenter GPU that they just sold to consumers as a bonus. Originally, they weren't planning to, but then they saw a market opportunity.
 
Well, that's the issue with being a small company - they are not large enough to support 2 main product lines; they either need to be a CPU company or a GPU company. Both of those areas have major 900 lb. gorilla competitors, and with Nvidia, GPUs are really their only product - not much R&D went into the Tegra systems, so 90% of their R&D budget is GPU-focused.
This is weird and unhinged.

AMD bought ATI, which was a separate GPU company. So, the idea that they can't continue to work on CPUs and GPUs comes from ???

As for Tegra not taking much R&D, where did you get that? Were you aware that they designed custom ARMv8 CPU cores for it? It's not just a repackaging of their GPU with some off-the-shelf ARM cores. Furthermore, their self-driving software also required RTOS support (i.e. it doesn't use Linux) for regulatory approval.

If anyone thinks that Intel is suffering - they are not.
Oh dear. Sounds like someone sat on some Intel stocks for too long. Pity about the 5% drop.

But don't fall into the "evil corporation" BS - all corporations exist for 1 single solitary purpose: revenue/profits for the shareholders. AMD is not evil, Intel is not evil, Facebook IS evil, and Nvidia is not evil.
LOL.