AMD's Future Chips & SoC's: News, Info & Rumours.


InvalidError

Titan
Moderator
Need to start somewhere I suppose, but really, as we all know, the true intended purpose of RTX is to be ahead of its time
RTX isn't ahead of its time; it is an under-powered gimmick at the moment. Ray-tracing performance needs to at least double before it becomes truly viable.

RTX's true purpose is to look bad for the price so people will finish buying out all of the GTX 10xx leftovers. Similar thing with Navi, which is barely competitive with older cards costing $100-200 less.
 
  • Like
Reactions: rigg42
Yeah, the current Navi offerings are just eh. I think AMD's GFX division just missed the mark.

If they could deliver 1080-or-better performance for $300 with no ray tracing, they could easily win. Just a Vega revamp would be all it takes.
 
  • Like
Reactions: rigg42

boju

Titan
Ambassador
RTX isn't ahead of its time; it is an under-powered gimmick at the moment. Ray-tracing performance needs to at least double before it becomes truly viable.

RTX's true purpose is to look bad for the price so people will finish buying out all of the GTX 10xx leftovers. Similar thing with Navi, which is barely competitive with older cards costing $100-200 less.

Sorry, I meant "trying too hard to be ahead of its time".

I agree with what has been said on that.

My apologies
 

boju

Titan
Ambassador
And I was kidding about effects and mirrors; I have high hopes for AMD, for those waiting. The FX line was ahead of its time, AFAIK. Microsoft let AMD down back then with how scheduling was coded, no?
 
And I was kidding about effects and mirrors; I have high hopes for AMD, for those waiting. The FX line was ahead of its time, AFAIK. Microsoft let AMD down back then with how scheduling was coded, no?

No, FX was a piss-poor design no matter how well you scheduled for it. Its performance hasn't increased with scheduling improvements. It's a power-hungry monster that is good for little more than number crunching in select benchmarks.

Everyone remembers how down I was on FX as news came out, but even I was floored by how bad the architecture was. I still remember when the cache latency numbers came out about a week before launch, and I said "those are fake; there's no possible way they're that bad." They were, and that alone highlights how horrible the Bulldozer architecture was.
 

jdwii

Splendid
No, FX was a piss-poor design no matter how well you scheduled for it. Its performance hasn't increased with scheduling improvements. It's a power-hungry monster that is good for little more than number crunching in select benchmarks.

Everyone remembers how down I was on FX as news came out, but even I was floored by how bad the architecture was. I still remember when the cache latency numbers came out about a week before launch, and I said "those are fake; there's no possible way they're that bad." They were, and that alone highlights how horrible the Bulldozer architecture was.

I was so disappointed; I even bought my Sabertooth 990FX board like a month before Bulldozer came out (remember "we were promised up to 50% higher performance thanks to the 33% more cores"?). Bulldozer was easily AMD's biggest disaster; to this very day I have no idea what they were thinking. They ruined their FX branding too: it used to mean something and now it's a joke (the FX-60 and FX-53 were crazy powerful for their time).

I also notice how Sempron isn't used anymore; instead they are using Athlon for their low-end parts.

AMD is much closer to Intel's IPC now than they were in the Phenom II vs. Bloomfield timeframe.

Three months before Bulldozer came out, AMD and a few of their reps kept downplaying IPC and FP performance and kept talking about the future. I was so badly on the hype train, but I should have noticed them doing that.

With Zen 2, however, they talked more about IPC at Computex and E3 instead of ignoring it, and that gives me faith that they got IPC up and that they will continue to focus on it.
 
Last edited:
  • Like
Reactions: NightHawkRMX
AMD is much closer to Intel's IPC now than they were in the Phenom II vs. Bloomfield timeframe.

To be fair, if PII wasn't so late and had competed against Yorkfield instead of Nehalem/Sandy Bridge, it would have been viewed a lot more favorably.

Three months before Bulldozer came out, AMD and a few of their reps kept downplaying IPC and FP performance and kept talking about the future. I was so badly on the hype train, but I should have noticed them doing that.

Some of us called this out, repeatedly. Having people like JF_AMD here certainly contributed to the over-hyping though; he basically got run off the forums after that debacle.

Hell, even recently, even with double the cores, AMD didn't start to outperform Intel until after they closed the IPC gap. That tells you exactly why AMD's thinking back then was backwards.
 
Some of us called this out, repeatedly. Having people like JF_AMD here certainly contributed to the over-hyping though; he basically got run off the forums after that debacle.

Hell, even recently, even with double the cores, AMD didn't start to outperform Intel until after they closed the IPC gap. That tells you exactly why AMD's thinking back then was backwards.
I'll admit I was also massively disappointed with the last FX generation (BD/PD/SR), but I will also admit the design was interesting and, although futile at the end of the day, did bring AMD some important lessons. Especially the lesson of betting so much on speed instead of IPC. The modular approach I still find fascinating, even with the shortcomings for real-world usage, and I can absolutely see why it can work, under a brainiac uArch, in server workloads. While Ryzen's current SMT incarnation is great, I believe "modules" are still better (when done correctly) for specific workloads.

In any case, I skipped the FX generation and only built one PC for a friend to experiment with it. While I wasn't massively disappointed with the performance, the shortcomings of the CPU were obvious. The only good thing about FX was that, at the bottom, they offered decent performance for a really low price. The FX-4200 was decent, but really unremarkable. Same with the APUs based on FX, which was kind of ironic... where you need to save power the most, you put a power hog next to the iGPU... some great thinking process there, hahaha. Anyway, that desperation brought AMD to the current power-saving techniques they applied to Ryzen, so that's what I meant by "lessons learned".

Cheers!
 
I'll admit I was also massively disappointed with the last FX generation (BD/PD/SR), but I will also admit the design was interesting and, although futile at the end of the day, did bring AMD some important lessons. Especially the lesson of betting so much on speed instead of IPC. The modular approach I still find fascinating, even with the shortcomings for real-world usage, and I can absolutely see why it can work, under a brainiac uArch, in server workloads. While Ryzen's current SMT incarnation is great, I believe "modules" are still better (when done correctly) for specific workloads.

In any case, I skipped the FX generation and only built one PC for a friend to experiment with it. While I wasn't massively disappointed with the performance, the shortcomings of the CPU were obvious. The only good thing about FX was that, at the bottom, they offered decent performance for a really low price. The FX-4200 was decent, but really unremarkable. Same with the APUs based on FX, which was kind of ironic... where you need to save power the most, you put a power hog next to the iGPU... some great thinking process there, hahaha. Anyway, that desperation brought AMD to the current power-saving techniques they applied to Ryzen, so that's what I meant by "lessons learned".

Cheers!

My biggest issue was that AMD should have known better; they made pretty much the exact mistakes Intel made with the Pentium 4: long pipeline, poor power characteristics, scheduling difficulties (HTT vs. modules). And it wasn't like the power wall at ~5GHz was going anywhere, so having an arch that required hitting those clocks to perform was dubious at best.
 
My biggest issue was that AMD should have known better; they made pretty much the exact mistakes Intel made with the Pentium 4: long pipeline, poor power characteristics, scheduling difficulties (HTT vs. modules). And it wasn't like the power wall at ~5GHz was going anywhere, so having an arch that required hitting those clocks to perform was dubious at best.
You could say AMD was strangely confident in GloFo getting their act together back then, so they were counting on having a process that could give them the characteristics they needed for BD. That clearly didn't happen, even though they were able to put out a behemoth with a 220W TDP anyway. I don't believe they were counting on the server space telling them they didn't want a hot and hard-to-cool CPU in their datacenters, after seeing how IBM can push their Power9 CPUs to (prison) customers :p

And the Pentium 4 comparison is a bit unfair, although not entirely incorrect, and I do agree. Both uArchs are really different beasts that just share two important similarities, but that's it.

I wonder if Intel or AMD would ever try another speed demon uArch at any point in time.

Cheers!
 

InvalidError

Titan
Moderator
You could say AMD was strangely confident in GloFo getting their act together back then, so they were counting on having a process that could give them the characteristics they needed for BD. That clearly didn't happen
It was never going to happen; a bad architecture is still bad even if the process gets 10X better, which is why PD clocked at 5GHz still gets crushed by a sub-4GHz 2C4T i3 in many games. Netburst didn't improve much from Willamette to Prescott despite the die shrinks in between either; the biggest improvement was turning HT on for at least some SKUs, starting with Northwood.
 

jdwii

Splendid
You could say AMD was strangely confident in GloFo getting their act together back then, so they were counting on having a process that could give them the characteristics they needed for BD. That clearly didn't happen, even though they were able to put out a behemoth with a 220W TDP anyway. I don't believe they were counting on the server space telling them they didn't want a hot and hard-to-cool CPU in their datacenters, after seeing how IBM can push their Power9 CPUs to (prison) customers :p

And the Pentium 4 comparison is a bit unfair, although not entirely incorrect, and I do agree. Both uArchs are really different beasts that just share two important similarities, but that's it.

I wonder if Intel or AMD would ever try another speed demon uArch at any point in time.

Cheers!
No way in heck did GlobalFoundries (as everyone here should know, I HATE that foundry; it has always kept AMD behind) have the resources to create a 5GHz Bulldozer in 2011. AMD should have known better back then, and if nerds like us on a forum knew better, then AMD's engineers knew better; something tells me Bulldozer was a management product. With Lisa being an engineer herself, it's nice to know that someone like that is in charge of AMD now.

As a side note, does anyone have any idea when preorders go up? Gonna snatch that 3700X.

I bought the 1700 for $329.99, then the 2700X for that price, and now the 3700X, lol.
 
  • Like
Reactions: goldstone77

goldstone77

Distinguished
  • Like
Reactions: NightHawkRMX

jdwii

Splendid
I'm thinking the same thing: $199, 4.2GHz. I'm interested to see if the new processors will overclock. 6 days to go!

Not to be a downer here, but given the leaks we've seen of the 16-core Zen 2 under ice and all, still only reaching what I would call lower frequencies (5.2GHz at crazy voltage), I think we will not see great overclocking potential from these parts under normal cooling like AIOs and standard heatsinks.
Let's run down the specs and put our thinking caps on (quick math check after the list):

Ryzen 5 3600: 65W, 3.6GHz base to 4.2GHz turbo
Ryzen 5 3600X: 95W, 3.8GHz base to 4.4GHz turbo (46% more power for 5.6% more base clock)
Ryzen 7 3700X: 65W, 3.6GHz base to 4.4GHz turbo
Ryzen 7 3800X: 105W, 3.9GHz base to 4.5GHz turbo (62% more power for 8.3% more base clock)
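
To sanity-check those ratios, here's a quick back-of-the-envelope sketch in Python using the figures listed above (TDP is only a rough proxy for actual power draw, so treat the percentages as ballpark numbers, not measurements):

```python
# Ratio check on the announced specs above: how much extra TDP buys
# how much extra base clock. TDP is not measured power draw, so these
# are ballpark figures only.
specs = {
    "Ryzen 5 3600":  {"tdp_w": 65,  "base_ghz": 3.6},
    "Ryzen 5 3600X": {"tdp_w": 95,  "base_ghz": 3.8},
    "Ryzen 7 3700X": {"tdp_w": 65,  "base_ghz": 3.6},
    "Ryzen 7 3800X": {"tdp_w": 105, "base_ghz": 3.9},
}

def pct_increase(new, old):
    return (new - old) / old * 100

pairs = [("Ryzen 5 3600", "Ryzen 5 3600X"), ("Ryzen 7 3700X", "Ryzen 7 3800X")]
for base, x in pairs:
    tdp = pct_increase(specs[x]["tdp_w"], specs[base]["tdp_w"])
    clk = pct_increase(specs[x]["base_ghz"], specs[base]["base_ghz"])
    print(f"{x} vs {base}: +{tdp:.0f}% TDP for +{clk:.1f}% base clock")

# Prints:
# Ryzen 5 3600X vs Ryzen 5 3600: +46% TDP for +5.6% base clock
# Ryzen 7 3800X vs Ryzen 7 3700X: +62% TDP for +8.3% base clock
```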

I'm pretty sure AMD is pushing these chips to their limits like they did with Zen 1 and Zen+, which is fine by me anyway.

I'm 100% certain we will not see 5GHz overclocks on air, with AIOs, or even with custom water-cooling setups on first-gen Zen 2 parts.


Edit: it appears that in terms of performance per watt these chips probably run best at around 3.5GHz before massively losing efficiency.

Edit again:

If I had to guess, the 3900X and 3950X are using the best of the "3800X and 3600X" chiplets to hit their higher frequencies under turbo.
 

InvalidError

Titan
Moderator
If I had to guess, the 3900X and 3950X are using the best of the "3800X and 3600X" chiplets to hit their higher frequencies under turbo.
There are no "3600X and 3800X"-specific chiplets; all Ryzen 3000, Threadripper 3000 and EPYC 2 CPUs use the same ones. EPYC gets the best, Threadripper will get the second best once AMD can spare enough to launch those, the 3900X and up get the third best, and the 3800X on down gets the rest.
 

jdwii

Splendid
There are no "3600X and 3800X"-specific chiplets; all Ryzen 3000, Threadripper 3000 and EPYC 2 CPUs use the same ones. EPYC gets the best, Threadripper will get the second best once AMD can spare enough to launch those, the 3900X and up get the third best, and the 3800X on down gets the rest.

Keep in mind the 8-, 6-, and 4-core Zen 2 chips have one chiplet, which can be binned separately from another.
 

InvalidError

Titan
Moderator
Keep in mind the 8-, 6-, and 4-core Zen 2 chips have one chiplet, which can be binned separately from another.
You've got this backwards. Binning is the process of dividing items based on which criteria they meet. By the time chiplets are divided into 8-core, 6-core, 4-core and whatever else, binning is already over: identifying the number of viable cores in each chiplet, along with their max frequencies, power-frequency-voltage curves, etc., is all part of binning.
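
To make that ordering concrete, here's a toy sketch; the `Chiplet` fields, thresholds and tier names are invented purely for illustration and are not AMD's actual binning criteria, only the idea that each chiplet is characterized first and then assigned to a product tier:

```python
# Toy model of binning: each chiplet is characterized first (viable cores,
# max stable frequency), and only afterwards assigned to a product tier.
# Thresholds and tier names below are made up for illustration.
from dataclasses import dataclass

@dataclass
class Chiplet:
    viable_cores: int
    max_ghz: float  # best stable frequency found during characterization

def assign_tier(c: Chiplet) -> str:
    if c.viable_cores == 8 and c.max_ghz >= 4.7:
        return "EPYC (best bins)"
    if c.viable_cores == 8 and c.max_ghz >= 4.6:
        return "3900X and up"
    if c.viable_cores >= 6:
        return "3800X on down"
    return "salvage / lower SKUs"

for chiplet in [Chiplet(8, 4.8), Chiplet(8, 4.6), Chiplet(6, 4.4), Chiplet(4, 4.2)]:
    print(chiplet, "->", assign_tier(chiplet))
```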
 

goldstone77

Distinguished
We do not know what the critical points on the V/F curve are for TSMC's 7nm HPC process. I'm sure we will find out sometime after release. What I want to know is why there are huge jumps in TDP for some chips, but relatively small jumps in boost (~100 MHz). It makes me think a lot of power is required for a relatively small gain, which would point to the chips reaching their limits, similar to Zen 1.
The overclocking headroom for the higher-end Ryzen models is rather slim. This was expected due to the relatively high stock frequencies, high-density orientation of the design and the low power targeted manufacturing process used for the Zeppelin die (Samsung 14nm LPP).
The hope is that the HPC (High Performance Computing) process will provide higher frequencies/better leakage, but shrinking to the extent we are now may not make higher frequencies possible. At least not without some refinement of the node and/or uarch. Just my 2 cents.
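
For a rough sense of why the last few hundred MHz are so expensive, dynamic power scales roughly as P ≈ C·V²·f, and the top of the V/F curve usually needs extra voltage. A toy calculation with invented voltage/frequency points (not measured values for any real chip):

```python
# Toy illustration of the V/F curve problem: dynamic power ~ C * V^2 * f,
# and higher clocks generally need higher voltage, so the last bit of
# frequency costs disproportionate power. All numbers below are invented.
points = [
    (3.5, 1.00),  # (GHz, volts) - hypothetical "efficient" point
    (4.2, 1.20),
    (4.5, 1.35),
]

base_f, base_v = points[0]
for f, v in points:
    rel_power = (v ** 2 * f) / (base_v ** 2 * base_f)
    rel_freq = f / base_f
    print(f"{f:.1f} GHz @ {v:.2f} V: ~{rel_power:.2f}x power for {rel_freq:.2f}x frequency")
```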

 
I think the easy explanation for the TDPs is the base clock and turbo curve for PB2. As for OC potential, I would imagine the CPUs (especially the 3700X and 3800X) would be within striking range of each other, if not have the exact same potential (whatever that is). The 3800X will just have a higher baseline speed, but similar-ish PB2 behaviour, I'm sure.

Cheers!
 
  • Like
Reactions: goldstone77
Overclocking is getting outmoded fast; turbo is getting better. Ryzen can be OC'd on all cores to just about the same frequency as its turbo mode; above that it just wastes energy. OC will soon cease to be a factor. Once upon a time you could OC some CPUs by close to 1GHz above base frequency, and now what? Just some more process tweaking and even the X or K models will not be needed.
For now, an X model saves a lot of OC-ing trouble, and even that it does better. A non-X Ryzen is just a second-grade product I don't want to go for.
 

jdwii

Splendid
Overclocking is getting outmoded fast; turbo is getting better. Ryzen can be OC'd on all cores to just about the same frequency as its turbo mode; above that it just wastes energy. OC will soon cease to be a factor. Once upon a time you could OC some CPUs by close to 1GHz above base frequency, and now what? Just some more process tweaking and even the X or K models will not be needed.
For now, an X model saves a lot of OC-ing trouble, and even that it does better. A non-X Ryzen is just a second-grade product I don't want to go for.


I agree. For example, with the 2700X you are better off just getting the fastest RAM you can with the lowest latency and leaving the 2700X alone; give it a good cooler and that thing will always be at 4050MHz+ during gaming with spikes to 4350MHz. I expect the same with the newer chips. AMD is getting better at pushing their chips to their limits out of the box, which is good and bad.

Overclocking will probably matter less on higher-end GPUs and CPUs in the future and matter more in the lower-end spectrum. Guessing max overclocks on all first-gen Zen 2 parts (8-core models and below), 100% stable on all cores with air coolers and AIOs, I expect 4.3-4.5GHz before needing 1.45V+ to remain stable, which isn't practical for long-term use.