AMD's Future Chips & SoC's: News, Info & Rumours.

Page 116 of the Tom's Hardware Community forum thread.

MasterMadBones

Distinguished
Pretty sweet comparison of IPC between the CPUs. Like a broken record: the FX Bulldozer was such a pile of crap!

https://i.redd.it/9kt1t1gws4h41.png
Which benchmarks are used to calculate IPC? I have a few problems with the numbers posted here.

I am pretty sure that Haswell's IPC improved on Ivy Bridge by less than 10%, let alone the 22.5% improvement shown here. On top of that, as far as I know, Nehalem was a significant step forward compared to Penryn, which isn't reflected here at all. All in all that should mean that Nehalem through Ivy Bridge should all have higher numbers.

Also, how does K8 regress by nearly 10% going from gen 2 to gen 3?
 

InvalidError

Titan
Moderator
Which benchmarks are used to calculate IPC? I have a few problems with the numbers posted here.
IPC is heavily application-dependent. The newest Intel and AMD CPUs get their 20-40% IPC bumps mainly from widening AVX units, so only SSE/AVX-intensive workloads that specifically stress those widened resources will show gains anywhere near that large. The same goes for most other architectures with significant improvements: you see the largest gains only in algorithms optimized specifically for them; the rest will be a somewhat mixed bag.
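To make the point concrete, here is a toy sketch of how these charts are usually built: benchmark score divided by clock, normalized to a baseline CPU. Every number below is made up for illustration and is not taken from the chart under discussion.

```python
def relative_ipc(score, clock_ghz, baseline_score, baseline_clock_ghz):
    """Score-per-clock, normalized so the baseline CPU = 1.0."""
    return (score / clock_ghz) / (baseline_score / baseline_clock_ghz)

# Same pair of CPUs, two different workloads: an AVX-heavy benchmark
# rewards the widened FP units far more than a scalar integer one does.
avx_gain = relative_ipc(score=150, clock_ghz=4.0, baseline_score=100, baseline_clock_ghz=4.0)
int_gain = relative_ipc(score=108, clock_ghz=4.0, baseline_score=100, baseline_clock_ghz=4.0)

print(f"AVX-heavy workload: {avx_gain:.0%} of baseline IPC")  # 150%
print(f"scalar workload:    {int_gain:.0%} of baseline IPC")  # 108%
```

Both lines describe the same silicon; only the workload changed, which is why "the IPC" of a CPU depends entirely on which benchmarks went into the composite.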
 
Which benchmarks are used to calculate IPC? I have a few problems with the numbers posted here.

I am pretty sure that Haswell's IPC improved on Ivy Bridge by less than 10%, let alone the 22.5% improvement shown here. On top of that, as far as I know, Nehalem was a significant step forward compared to Penryn, which isn't reflected here at all. All in all that should mean that Nehalem through Ivy Bridge should all have higher numbers.

Also, how does K8 regress by nearly 10% going from gen 2 to gen 3?

I would say this test is looking at (or at least heavily influenced by) floating-point performance. It also depends on whether this test is looking at single- or multi-thread performance.

In single thread, Bulldozer was faster than K10 (albeit only slightly) as it could execute more int and FP operations per clock. The issue was the module penalty, so at 2 threads K10 had a good lead. One of the biggest failures of the original BD module was the fact that, despite being '2' cores, it could only issue instructions to one or the other per clock, halving its max throughput. That is something that didn't get fixed until Steamroller (which you can see has higher IPC than K10). It's not really clear what this chart is measuring, though; it looks like it must be some composite of INT and FP, possibly looking at more than one thread as well (maybe 2 threads to account for HT?).
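The shared-front-end penalty described above can be sketched with a toy model. The decode widths below are illustrative placeholders, not real Bulldozer or K10 figures; the point is only the halving effect when two threads share one decoder.

```python
def per_thread_issue(decode_width, active_threads, shared_frontend=True):
    """Average instructions fed to each thread per clock (toy model)."""
    if shared_frontend and active_threads > 1:
        # Only one core of the module is fed per cycle, so two active
        # threads round-robin the decoder and each sees half the rate.
        return decode_width / active_threads
    return decode_width

bd_1t  = per_thread_issue(4, 1)                         # full width for one thread
bd_2t  = per_thread_issue(4, 2)                         # halved: the module penalty
k10_2t = per_thread_issue(3, 2, shared_frontend=False)  # per-core decoders, no penalty

print(bd_1t, bd_2t, k10_2t)
```

A narrower core with its own decoder (the K10 case) can out-issue a wider one per thread once both module threads are active, which matches the "good lead at 2 threads" described above.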
 
Pretty sweet comparison of IPC between the CPUs. Like a broken record: the FX Bulldozer was such a pile of crap!

https://i.redd.it/9kt1t1gws4h41.png

If I were to guess, I would say this is single-core, floating-point arithmetic, but we need context here. This graph is nothing more than pretty wallpaper without it.
Yeah, I had the FX-8350. The only thing it really did was push AMD's high multi-core (or pseudo-multi-core) solution into the spotlight.
 
As you can see down below, Haswell was a pretty big jump for AVX performance, which Cinebench uses. That's the main gain here. I also showed this a few pages ago with more tests comparing IPC; as you can tell from his limited testing, he even got a 20% improvement from Ivy Bridge to Haswell. Of course, this is again mainly due to the FP operations, more specifically AVX.

View: https://youtu.be/psKEiWXDR28?t=546


https://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/9
 
I would say this test is looking at (or at least heavily influenced by) floating-point performance. It also depends on whether this test is looking at single- or multi-thread performance.

In single thread, Bulldozer was faster than K10 (albeit only slightly) as it could execute more int and FP operations per clock. The issue was the module penalty, so at 2 threads K10 had a good lead. One of the biggest failures of the original BD module was the fact that, despite being '2' cores, it could only issue instructions to one or the other per clock, halving its max throughput. That is something that didn't get fixed until Steamroller (which you can see has higher IPC than K10). It's not really clear what this chart is measuring, though; it looks like it must be some composite of INT and FP, possibly looking at more than one thread as well (maybe 2 threads to account for HT?).

In no way was Bulldozer more powerful clock for clock than the Phenom like you said. It wasn't until Steamroller/Excavator that AMD finally overtook the K10 architecture.

View: https://youtu.be/psKEiWXDR28?t=546



It's been so boring on the Intel side of things. I can't wait for Ice Lake to land on desktops; from what I'm reading, maybe even by the end of this year! Not that I'm switching over, no way am I buying a new board every time a new CPU generation comes out, but I do love testing the newer parts and tweaking them out.
 

Eximo

Titan
Ambassador
A PlayStation/Xbox chip on a board, what I've always wanted. With GDDR5 or DDR3/DDR4, it would make a fine general-purpose PC. I think there was a Chinese clone at one point; it came with Windows Enterprise and a 'console' application as I recall, but it did have GDDR5 for system memory.

Then again, I still wanted Intel/Vega to be reasonably priced. Oh well.
 

That is good, although keep in mind the 2080 Ti has been around a while now; AMD's new GPU needs to be significantly faster if it's going to be competitive with nVidia's 7nm range due out soon.

At 50% faster than the 2080 Ti (I imagine that is a best case, so probably a bit less than that on average), it should be able to keep up with a theoretical 3080 Ti, which is rumoured to be 30% faster than the 2080 Ti.
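Spelling out the arithmetic behind that claim (both percentages are rumours, so treat the result accordingly):

```python
# Rumoured figures only: "big Navi" +50% over the 2080 Ti (likely a
# best case), hypothetical 3080 Ti +30% over the 2080 Ti.
rtx_2080ti = 1.00                 # baseline
big_navi   = rtx_2080ti * 1.50    # rumoured +50%
rtx_3080ti = rtx_2080ti * 1.30    # rumoured +30%

print(f"big Navi vs 3080 Ti: {big_navi / rtx_3080ti:.2f}x")  # 1.15x
```

So even if the +50% shrinks a bit in real-world averages, the two cards would plausibly land in the same class.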
 

jaymc

Distinguished
Dec 7, 2007
614
9
18,985
That is good, although keep in mind the 2080 Ti has been around a while now; AMD's new GPU needs to be significantly faster if it's going to be competitive with nVidia's 7nm range due out soon.

At 50% faster than the 2080 Ti (I imagine that is a best case, so probably a bit less than that on average), it should be able to keep up with a theoretical 3080 Ti, which is rumoured to be 30% faster than the 2080 Ti.
But does it "need" to be? It's on a smaller node as well, and AMD is now best buds with TSMC, getting preference in the pecking order on new nodes. APU-wise, holy jumpin' catfish... There's also a lot to be said for controlling the whole landscape, consoles and all: game development will now be focused on 8c/16t CPUs and AMD GPUs with asynchronous compute, tweaked for AMD's APUs across the board (Xboxes, PlayStations) and then, for the most part, ported to PCs. Taking the market from the ground up and climbing. Intel's dead; Nvidia is the next target in the sights, a difficult target at that, but next on the hit list for sure. Win or lose, it's interesting times for sure.

View: https://www.youtube.com/watch?v=ZYqG31V4qtA
 

Karadjgne

Titan
Ambassador
Intel dead? Hah, not even close. They just took a step back to regroup, releasing the KFs and the KS as an almost afterthought, just to maintain a presence. Not a bad idea after the disaster of Broadwell when it was rushed out. If they continue the above ridiculous pricing with the 10 series and slash the 9 series to be competitive with AMD, mainstream colors will be Blue...
 

InvalidError

Titan
Moderator
If they continue the above ridiculous pricing with the 10 series and slash the 9 series to be competitive with AMD, mainstream colors will be Blue...
I wouldn't expect any meaningful price cuts in the mainstream while Intel is still struggling to meet demand for higher-margin products, where there is a whole lot more profit flab to cut into when and where necessary. There is no point in aggressively competing on price while the fabs are booked for the foreseeable future.
 
But does it "need" to be? It's on a smaller node as well, and AMD is now best buds with TSMC, getting preference in the pecking order on new nodes. APU-wise, holy jumpin' catfish... There's also a lot to be said for controlling the whole landscape, consoles and all: game development will now be focused on 8c/16t CPUs and AMD GPUs with asynchronous compute, tweaked for AMD's APUs across the board (Xboxes, PlayStations) and then, for the most part, ported to PCs. Taking the market from the ground up and climbing. Intel's dead; Nvidia is the next target in the sights, a difficult target at that, but next on the hit list for sure. Win or lose, it's interesting times for sure.

View: https://www.youtube.com/watch?v=ZYqG31V4qtA

"Need" is a strong word - no it doesn't need to beat a 3080ti in order to be good. What will make it good or bad for consumers is the price and performance (efficiency is in there as well to a lesser extent). I actually expect nVidia to hold the lead again once all the next gen cards are out - although what would be nice is AMD offering a card that truly competes in the same class as Nvidia's best (and at the time of release, it's notable that AMD's older cards are aging notably better than Nvidia's equivalent but sadly it's taken way too long to come to fruition).

I think from AMD's position they do need to be able to compete with nVidia on a level footing in terms of silicone area and power consumption. The 5700 series are good area wise but pretty power hungry given they have a full node shrink advantage and still only match nVidias 14nm parts. AMD are predicting a 50% perf/w improvement with RDNA 2.0, which I think should put them in the same ballpark as nVidia on 7nm which should make for an interesting contest.

With respect to AMD having the consoles - they have had this advantage for a full generation now and honestly the optimization in the PC space is still tends to favor nVidia out of the gate. Also keep in mind, nVidia are in the Switch so they have a presence in the console space as well. What we really want is both companies doing well and competing, I'd also like to see a decent and competitive GPU from Intel to mix things ups. Graphics card prices have been pushed up to silly levels in recent years, a good price war should reset the scales.
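For what that 50% perf/W claim would mean in practice, here is the arithmetic; note the 50% figure is AMD's own projection, not a measurement.

```python
# AMD's projected +50% performance-per-watt for RDNA 2 vs RDNA 1.
perf_per_watt_gain = 1.50

# Two equivalent readings of the same claim:
perf_at_same_power = perf_per_watt_gain       # 1.5x the performance in the same power budget
power_at_same_perf = 1 / perf_per_watt_gain   # ~2/3 the power for the same performance

print(f"{perf_at_same_power:.2f}x performance at equal power")
print(f"{power_at_same_perf:.0%} of the power at equal performance")
```

Either reading would roughly close the efficiency gap described above, assuming the projection holds across real workloads rather than a cherry-picked comparison point.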
 

Karadjgne

Titan
Ambassador
A lot depends on where you look. AMD might be having a field day in the consumer mainstream with PCs and consoles, but traditionally it's been Intel and nVidia in the professional areas like servers and pro graphics. AMD has a serious handicap there; FirePro and Threadripper just haven't been up to the same ability as the Quadro and Xeon components. And that's where the big money is.

I don't see Intel making a big splash into graphics, not unless they come up with something new. nVidia got a major bonus with its acquisitions of 3dfx and Ageia, which brought in 3D, SLI and PhysX. Now there's DLSS and ray tracing. Intel will have a hard time competing against that pedigree, just as AMD is now.
 

jaymc

Distinguished
Dec 7, 2007
"Need" is a strong word - no it doesn't need to beat a 3080ti in order to be good. What will make it good or bad for consumers is the price and performance (efficiency is in there as well to a lesser extent). I actually expect nVidia to hold the lead again once all the next gen cards are out - although what would be nice is AMD offering a card that truly competes in the same class as Nvidia's best (and at the time of release, it's notable that AMD's older cards are aging notably better than Nvidia's equivalent but sadly it's taken way too long to come to fruition).

I think from AMD's position they do need to be able to compete with nVidia on a level footing in terms of silicone area and power consumption. The 5700 series are good area wise but pretty power hungry given they have a full node shrink advantage and still only match nVidias 14nm parts. AMD are predicting a 50% perf/w improvement with RDNA 2.0, which I think should put them in the same ballpark as nVidia on 7nm which should make for an interesting contest.

With respect to AMD having the consoles - they have had this advantage for a full generation now and honestly the optimization in the PC space is still tends to favor nVidia out of the gate. Also keep in mind, nVidia are in the Switch so they have a presence in the console space as well. What we really want is both companies doing well and competing, I'd also like to see a decent and competitive GPU from Intel to mix things ups. Graphics card prices have been pushed up to silly levels in recent years, a good price war should reset the scales.

Have a look at what they are doing with the PS5's custom SSD controller (it reminds me of the HBCC angle in the past): PCIe 4.0, I believe, with a 12-channel DMA controller to boost performance, and with fewer CUs than the Xbox. It's a very interesting approach; it seems this custom SSD chip's access speeds are far more important to Sony than CU count.
This leads us to "instantaneous asset streaming", which will not be possible on PC and may lead to whole new types of games that cannot be played on or ported to PC, most likely only available on PS5 at the beginning anyway: very fast-paced (maybe racing) games in massive open worlds that simply cannot be done on any of nVidia's GPUs (or on any PC discrete GPU at all at the time of writing; we may eventually have a solution, I guess). Check it out here:
View: https://youtu.be/PW-7Y7GbsiY?t=1211
 
Have a look at what they are doing with the PS5's custom SSD controller (it reminds me of the HBCC angle in the past): PCIe 4.0, I believe, with a 12-channel DMA controller to boost performance, and with fewer CUs than the Xbox. It's a very interesting approach; it seems this custom SSD chip's access speeds are far more important to Sony than CU count.
This leads us to "instantaneous asset streaming", which will not be possible on PC and may lead to whole new types of games that cannot be played on or ported to PC, most likely only available on PS5 at the beginning anyway: very fast-paced (maybe racing) games in massive open worlds that simply cannot be done on any of nVidia's GPUs (or on any PC discrete GPU at all at the time of writing; we may eventually have a solution, I guess). Check it out here:
View: https://youtu.be/PW-7Y7GbsiY?t=1211

I note we already have open-world racing games. And don't automatically buy the PR hype; improving IO performance is nice, but you're still bound by read access speeds and memory write speeds, so the same limitations apply. Really, faster SSD access speed is scraping the bottom of the barrel (though obviously a HUGE improvement over mechanical drives).
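To put rough numbers on the "still bound by read speeds" point: the drive rates below are ballpark sequential figures, and the PS5 number is Sony's claimed raw (uncompressed) throughput, so treat the comparison as indicative only.

```python
# Ballpark sequential read rates in MB/s; only the PS5 figure is a
# vendor claim, the rest are typical drives of the era.
drives_mb_s = {
    "7200rpm HDD":   150,
    "SATA SSD":      550,
    "PCIe 3 NVMe":  3500,
    "PS5 SSD (raw)": 5500,  # Sony's claimed raw throughput
}

asset_mb = 2048  # hypothetical 2 GB chunk of streamed world data

for name, rate in drives_mb_s.items():
    print(f"{name:>13}: {asset_mb / rate:6.2f} s to read 2 GB")
```

The big step is HDD to any SSD; past that, the console's advantage over a fast PC NVMe drive is a factor of a couple, not an order of magnitude, which is why the "impossible on PC" framing deserves skepticism.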