AMD Ryzen 3 1300X Review



The question about how the 8MB comes to be applies to the 1200 and 1300X parts.

That is why, if they decide to add L3 to the APU Zen cores, it would be *up to* 8MB inside the single CCX (and I agree there: having two CCXes would be weird, unless they do packaged/MCM APUs; fat chance).

Look at the die shots. The way they organized the L3, they can get away with 0MB, 4MB, or 8MB of L3 in a single CCX.

EDIT:


I have always been more excited about the APUs than the CPUs, to be honest, haha.

Cheers!
 

Not quite: the 8MB of L3 appears to consist of eight 1MB macros, each with its own tag RAM, so in principle AMD should be able to achieve 1MB granularity if it wanted to and included the necessary logic to support it. This may, however, give rise to L3 port-access contention between cores all trying to hit the reduced number of L3 slices.
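
For illustration, here is a toy sketch of that slice argument (the hashing below is purely hypothetical, not AMD's actual slice-selection logic): with fewer 1MB slices enabled, the same access stream concentrates on fewer ports.

    CACHE_LINE = 64  # bytes per cache line

    def l3_slice(phys_addr, active_slices):
        # Toy modulo hash: pick a slice from the cache-line index.
        return (phys_addr // CACHE_LINE) % active_slices

    # Same access stream against 8 slices vs. 3: fewer slices means each
    # one fields more requests, i.e. more port contention between cores.
    addrs = range(0, 64 * 1024, CACHE_LINE)
    for n in (8, 3):
        hits = [0] * n
        for a in addrs:
            hits[l3_slice(a, n)] += 1
        print(n, "slices ->", hits)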

Another problem with scrapping the L3 from the die altogether to reclaim area for the IGP is that Ryzen's cores have 512KB of L2 per core instead of the previous architectures' 1MB, so you could expect Ryzen to suffer a larger performance hit from losing its L3, on top of having to share memory bandwidth with the IGP when it is in use.
 


I thought about that, but would you justify a victim cache that is only 1:2 with your L2? Good size parity should be at least 1:3, and given symmetry, 4MB seems like the bare minimum they should use as L3. So for me it's 0, 4, or 8 to keep symmetry in the CCX, unless they are forced to disable uneven regions... That would be weird, but not out of the realm of possibility, as you point out.
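
To make the arithmetic explicit, a quick sketch using the sizes from this thread (4 cores per CCX, 512KB of L2 each):

    # 4 cores/CCX x 512KB L2 = 2MB of L2 per CCX
    l2_total_mb = 4 * 512 / 1024
    for l3_mb in (0, 4, 8):
        print(f"{l3_mb}MB L3 -> L2:L3 = 1:{l3_mb / l2_total_mb:g}")
    # prints 1:0, 1:2 (below the 1:3 parity floor) and 1:4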

Also, they could indeed bump the L2 to 1MB, just like they did with Llano, to somewhat make up for the lack of L3. So it can swing both ways, really.

Which one do you like best? 😛

Cheers!
 

As long as the caches are exclusive, any amount of L3 reduces the read/write load on the memory controller for cached data passing between cores, or for algorithms that need a little more cache than individual cores have immediate access to. As for the "oddness" of the cache ratio, that doesn't seem to be a problem for Intel's Skylake-X, which has 11MB of L3 for 10 cores: a 1:1.1 L2:L3 ratio.
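
A back-of-the-envelope sketch of why exclusivity matters here (idealized, ignoring tags and replacement policy): in an exclusive hierarchy the L3 only holds lines evicted from L2, so the capacities add up, while an inclusive L3 duplicates the L2's contents.

    def effective_mb(l2_mb, l3_mb, exclusive):
        # Idealized usable capacity per CCX.
        return l2_mb + l3_mb if exclusive else max(l2_mb, l3_mb)

    print(effective_mb(2, 4, exclusive=True))   # 6MB usable: 2MB L2 + 4MB L3
    print(effective_mb(2, 4, exclusive=False))  # 4MB usable: L3 mirrors L2
    # Skylake-X per-core check from above: 11MB L3 / 10 cores vs 1MB L2
    print(11 / 10)  # 1.1 -> the 1:1.1 L2:L3 ratio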

Bumping the L2 to 1MB may sound easy, but it could also have deeper performance implications if it comes at the expense of an extra clock cycle of latency on L2 accesses, along with other non-obvious repercussions from having to rearrange the rest of the CPU core attached to it. Using building blocks like the CCX is supposed to reduce redesign work by carrying all of the testing, qualifying, and characterizing of the building blocks through the product stack. There is no point in having a modular architecture if the modules never actually get reused.
 


Leaving aside what might be best in terms of size ratios, the L2 could be bumped to 1MB per core and the L3 removed; I have no clue whether that is easy or hard to do, but they have done it before. You make a fair point that it would defeat the purpose of the CCX, since it would no longer be the "smallest piece" of the puzzle, but I wouldn't put it past them, since removing the L3 might help improve efficiency a bit. I'm sure the APUs want to be as efficient as possible on the node. Plus, they were going to be manufactured by GloFo as well, right?

Cheers!
 

Do you know what else AMD has never done before? Modular CPUs, where AMD is supposed to be reusing building blocks across multiple designs; the CCX is supposed to be one of those blocks.

As for AMD aiming for absolute efficiency, I think it is safe to conclude that AMD threw that idea out the window a long time ago, with the Zeppelin die carrying over 10% of die-area overhead dedicated to supporting TR and EPYC all the way down to the R3-1200. Based on der8auer's Threadripper de-lidding, it turns out that TR has four Zeppelin dies but, according to AMD, only two of those are actually active. From that, it appears that Threadripper could be an outlet for EPYC CPUs that fail to make the grade post-assembly.

AMD definitely does not look like it has any time or effort to spare for minimizing die area wasted on unused resources in a given market segment this generation.

Edit: another thing to keep in mind is that every man-hour spent altering the already completed, tested, and characterized Zeppelin/Ryzen building blocks is that many more man-hours taken away from Ryzen+/2/3 development. Making sure that next year's Ryzen refresh, or whatever it is, does better where first-gen Ryzen came up short is likely higher on AMD's priority list than saving maybe $5 per chip by messing around with the CCX.
 


That sounds incredibly condescending, especially since I do acknowledge that in the next part, which you didn't quote.

Fine, I'll stop here.

Cheers!

EDIT: Typo.
 
When AMD cut the L3 cache, they took a dramatic performance hit. The cache plays an important role, and cutting its size without making some significant changes to offset the impact is a bad idea in general, especially if it costs money to do so.

Basically, you were saying "why doesn't AMD just make these slower, like they used to?". You could have argued that it would reduce die area (price) or power consumption, or make room for other improvements, but you didn't. You just suggested they make it slower.

Naturally, your suggestion was taken to be poorly formulated, especially since you didn't make any arguments in favor of such a change. That said, it did strike me as condescending as well.
 


"Dramatic performance hit"... I had a Phenom II 965 and still have the A8 3500 as my HTPC. I already went to the trouble of benching them, but you can still go and check it out. The L3 removal did at best a 5% penalty when running at the same speed in my benchies.

Just in case: http://www.anandtech.com/bench/product/399?vs=102

They have a 500MHz difference, and Llano, thanks to the iGPU, runs very hot, so it throttles a lot. You can see in the scores that, taking the speed difference into account, the "dramatic performance hit" is a huge overstatement. As for the big gaps in the MP tests, they can be attributed to throttling: I see it all the time in my HTPC nowadays, even running at 2.5GHz.
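
If you want to redo that normalization yourself, here is the rough method (clocks assume the A8-3850 at 2.9GHz against the 965 at 3.4GHz, the 500MHz gap above; the scores are placeholders, so plug in the real ones from the link):

    def perf_per_ghz(score, clock_ghz):
        # Normalize a benchmark score by clock speed.
        return score / clock_ghz

    phenom = perf_per_ghz(100.0, 3.4)  # placeholder score, 3.4GHz
    llano = perf_per_ghz(81.0, 2.9)    # placeholder score, 2.9GHz
    print(f"iso-clock L3 penalty: {(1 - llano / phenom) * 100:.1f}%")  # ~5%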

Cheers!
 
I would have liked to see an i5 in there, just to put the quad-core against another quad-core and not a dual-core. While I realize an i5 would be $80 or more, at least we would get to see what that brings to the table.
 

I was merely pointing out the fact that although AMD may have experimented with different L2 cache sizes between product generations before, AMD had never committed to reusable blocks until Ryzen's CCX. Between that and how AMD could not be bothered to design SKU-specific dies anywhere between the R3-1200 and the top EPYC, AMD appears far more interested in minimizing engineering effort and die inventory liabilities than chip manufacturing cost. If any major CCX restructuring like moving cache from L3 to L2 is going to happen, Ryzen 2 is where I would expect it to get done.


With Ryzen having ~50% better IPC and half the L2 cache per core of AMD's older architectures, the performance hit from having its L3 cut off without compensation would almost assuredly be worse in most cases.
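
The per-core cache budget behind that claim, assuming a 4-core CCX sharing its 8MB of L3 evenly:

    old_l2_kb = 1024                 # 1MB L2/core on the older architectures
    zen_l2_kb = 512                  # 512KB L2/core on Ryzen
    zen_l3_share_kb = 8 * 1024 // 4  # 8MB L3 / 4 cores = 2MB per core

    print("old core, no L3:  ", old_l2_kb, "KB")
    print("Zen core with L3: ", zen_l2_kb + zen_l3_share_kb, "KB")
    print("Zen core, L3 cut: ", zen_l2_kb, "KB")  # half the old L2, nothing behind it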

If AMD put 8MB of L3 cache in its CCX, it would be because that is where it determined that the die area and transistor budget would deliver the most cost-effective performance over all other options AMD's engineers could foresee at the time the decisions were made.
 
^ Yes, but it's still a hyper-threaded dual-core, & there are instances where a true quad will shine.
The performance of the Pentium doesn't make the Ryzen 3 look inherently bad; it just makes the Pentium look incredibly good (which it is for $70 - it's arguably Intel's best current CPU, full stop).

Your argument also means it makes the i3s, the i5s & maybe even the locked i7s look bad too (& that's a $300 chip, not a $120 one).
 
I'd say the price/performance ratio of the Pentium (i.e., the value) makes a lot of other chips look 'bad', up to a point. That point being whether its performance (a more objective measure) can fulfill your needs. Would you rather buy a pair of pants that fit, or a pair for 1/5 the cost that's two sizes too small? Would it really make any difference if it was 1/10 the price rather than 1/5?
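
Putting rough numbers on that (prices from this thread: the $70 Pentium versus a ~$120 Ryzen 3; the frame rates are made-up placeholders, purely to illustrate the two measures):

    def fps_per_dollar(fps, price):
        return fps / price

    pentium = fps_per_dollar(60, 70)   # placeholder fps at $70
    ryzen3 = fps_per_dollar(70, 120)   # placeholder fps at ~$120
    print(f"Pentium {pentium:.2f} fps/$ vs Ryzen 3 {ryzen3:.2f} fps/$")
    # Value wins on paper, but only up to the point where the cheaper
    # chip still "fits", i.e. actually hits the frame rate you need.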
 


Between here, AnandTech, and Jarred Walton on PC Gamer, it's more like: do you want the pair of pants that fits you now, or the pair that might fit you later, or that is flame retardant (depending on the correlation)? For a budget-minded PC gamer, I'd argue the former is far more sensible and acceptable. Because what I'm seeing across all those gaming benchmarks is a dual-core with hyper-threading that can come close to (and in some cases even beat) Ryzen 3. I was just hoping for a home run here as far as PC gaming, but it's clearly not, unfortunately.
 


A Pentium. Thought that was pretty clear.
 
^ Yes, but the Pentium pretty much matches the i3s too, & they're at the same price point as the Ryzen 3s.

The Ryzens are overclockable on the stock cooler, they have a good upgrade path, & they're on a socket that will stay current for a minimum of 3 years.

Those are all benefits that the Pentium doesn't really have.

 


When it comes to gaming, pretty much any socket that has come out since LGA 1366 has been good up to the present. And that's what I'm talking about: gaming. Even overclocked, the difference will be negligible on average for gaming. With 2 cores and 4 threads, the new Pentiums will still be valid for years when we're talking PC gaming. I think Walton said it best in his review: "If all you're doing is playing games, for most games (Hitman being a clear exception) you won't benefit a lot from more cores and higher clockspeeds beyond a certain point."

 
I can understand your points a little when it comes to the 1300X.

But still, in multi-threaded & optimised games it beats the Pentium at stock; at 3.9GHz it pretty much decimates it.

Look around at reviews of the 1200 now, bearing in mind it costs 20% more than the Pentium in the US & 15% more in the UK.

Every single reviewer has hit 4GHz with the 1200 easily, with temps below 70°C.

In essence, the 1300X at its price point can seem a little underwhelming; the 1200 definitely is not.

The 1200 is the budget chip people should be looking at IMO.

The 1200 makes the 1300X look bad value, just as the 1600 does the 1500X.
That said, the 1600 makes every current CPU on the market look bad value, full stop.

Ghost Recon Wildlands, BF1, GTA V: all titles where the Ryzen 3 beats the Pentium easily.

When the actual fps are close, look at the frame times: the Pentium struggles massively.
 
If we weren't past 2c/4t for games before, I think we definitely will be within 2 years. Game developers always lag way behind, or run way ahead of, the hardware curve. Usually they've been behind CPUs and ahead of GPUs, but with the staggering pace of GPU improvement and the glacial pace of CPU improvement, they've mostly caught up. And given that even a rock-bottom budget build can afford an R3 right now, by this time in 2 years they'll be able to assume 4 physical cores. And dual-core chips haven't maintained their traditional clockspeed advantage for a couple of generations now.
 


CPUs haven't improved much when it comes to gaming since 2009. They have improved some, but in custom benchmark runs even Bloomfield is routinely plenty fine for PC gaming in 2017. I wouldn't call GPU improvement staggering either; moderate would be more apt. A GTX 1060 gives ~GTX 780 Ti performance, so you're really looking at 3-year-old performance there; the 1070 is around 980 Ti performance, so 2 years old; and the 1080 is not a bunch better than a 980 Ti. Over 42% of PC gamers are on dual-core CPUs for a reason, and that reason isn't going away with Ryzen 3, unless they drop to $70.
 