AMD Ryzen 5 1500X CPU Review


PaulAlcorn

Managing Editor: News and Emerging Technology
Editor


Oops, correcting now.

 

joex444

Distinguished
What good does the unlocked multiplier do when your own review shows that the 1500X doesn't overclock as well as the more expensive parts, and your own benchmarks show that even when overclocked the 1500X isn't as powerful as Intel's i5-7600K?
 
It strikes me as outright misleading to compare the i5-7600K at 5GHz to the 1500X in price/performance charts without including the cost of the cooler required to hit that OC.

A $30-$60 cooler has a significant impact on the value of the i5, whereas the stock 1500X doesn't need an additional cooler.
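Even a quick back-of-the-envelope calculation makes the point; the prices and frame rates below are placeholders for illustration, not the review's actual numbers:

```python
# Rough sketch of price/performance with and without the cooler's cost.
# All prices and frame rates are illustrative placeholders, not review data.

def fps_per_dollar(fps, cpu_price, cooler_price=0):
    """Average frames per second per dollar spent on the CPU (plus cooler, if needed)."""
    return fps / (cpu_price + cooler_price)

# Hypothetical numbers for the sake of the math:
r5_1500x = fps_per_dollar(fps=100, cpu_price=190)             # stock Wraith Spire included
i5_7600k_chip_only = fps_per_dollar(fps=115, cpu_price=240)   # chip alone, as charted
i5_7600k_with_cooler = fps_per_dollar(fps=115, cpu_price=240, cooler_price=45)

print(f"1500X:                {r5_1500x:.3f} fps/$")
print(f"7600K (chip only):    {i5_7600k_chip_only:.3f} fps/$")
print(f"7600K (+ $45 cooler): {i5_7600k_with_cooler:.3f} fps/$")
```

With those placeholder numbers, the 7600K drops from about 0.48 to 0.40 fps/$ once the cooler is counted, while the 1500X sits at roughly 0.53 fps/$ out of the box.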
 

InvalidError

Titan
Moderator

Yes, pricing at the entry-level is made somewhat stupid by having a die ~80% larger than necessary. This could turn into a lead zeppelin for AMD's profitability if Intel decided to do some strategic price cuts.
 

s4fun

Distinguished
Too expensive at $190. AMD needs these R5s at $100-$130, basically priced like an i3, and then the value proposition would be clear and obvious. Otherwise you might as well just stick with an i5-7600K for $190 and it would be risk-free.
 

PaulAlcorn

Managing Editor: News and Emerging Technology
Editor


We also present the 1600X and 1500X OC'd, and both of those would require a better cooler to reach those clocks, too. We're trying to strike the right mix of information in the chart, but if we go down that rabbit hole we'd also have to factor in a 3200 MT/s-capable memory kit, then motherboard pricing, and before you know it we're spec'ing out entire rigs. I think it's best to confine it to the cost of the chip for simplicity and clarity. I could've done a better job explaining that in the text/chart.

You make a relevant point, which we also call out in the respective articles.
 

PaulAlcorn

Managing Editor: News and Emerging Technology
Editor


Well said, I may steal that line :p
 
^ You can do that yourself by cross-referencing the results here with the Ryzen 7 review against the i7s.

There's both a 1600X and a 1700 in the above reviews' results.

I'm sure these reviews take a helluva lot of time as it is without benching every possible CPU.

 

InvalidError

Titan
Moderator

It is mainly the last 100-200MHz of Ryzen's nearly nonexistent overclocking margin that really hurts power consumption. Below that, the stock coolers are still adequate most of the time. I personally wouldn't bother with spending over $30 on an aftermarket cooler when I'm unlikely to gain more than 200MHz (6%) extra performance at the expense of 30W (~50%) more power.
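To put rough numbers on that trade-off (the figures below are assumptions for illustration, not measured values):

```python
# Back-of-the-envelope math for the overclocking trade-off above.
# All figures are illustrative assumptions, not measurements.
base_clock_mhz = 3500      # assumed effective stock clock
oc_gain_mhz = 200          # extra headroom a better cooler might unlock
base_power_w = 65          # assumed package power at stock
extra_power_w = 30         # extra draw at the higher clock and voltage
cooler_cost_usd = 30       # aftermarket cooler price

perf_gain = oc_gain_mhz / base_clock_mhz        # ~6% in clock-bound workloads
power_increase = extra_power_w / base_power_w   # ~46%, i.e. roughly half again

print(f"Performance gain: {perf_gain:.1%}")
print(f"Power increase:   {power_increase:.1%}")
print(f"Cooler cost per percentage point of performance: ${cooler_cost_usd / (perf_gain * 100):.2f}")
```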
 

psiboy

Distinguished
@Paul Alcorn: I question the validity of the Civ VI AI test, which is supposedly DX12 but fails to show a per-core boost while clearly showing a clock-speed boost. I suspect it's running on one core and still using DX9/10.
 

psiboy

Distinguished
@Paul Alcorn: The Civ VI AI test under DX12 doesn't show the per-core gains one would expect from a DX12-coded AI. Are we sure it isn't using DX9/10? It seems to respond more to clock speed than to a higher core count, which seems counterintuitive for something claiming to use DX12, and for any AI coding for that matter.
 

InvalidError

Titan
Moderator

There is no such thing as a "DX12 coded AI" for a CPU benchmark since the GPU is not involved in the AI's turn calculations. If Civ VI used GPGPU, it would use DirectCompute, not DirectX, and that would make it pointless as a CPU benchmark.
 

alextheblue

Distinguished
What AGESA and/or BIOS revisions are you testing with?

80% larger than necessary? So, if the die is 192mm², you're saying their die should be roughly 107mm². I'd like to see the math on that. How big is a single CCX? They could spend a lot of time and money on a second Ryzen design (they can't just poop out a new design and email it to GF), and the die size wouldn't shrink as much as you think. Plus, if they end up with a defective core, that's a tri- or dual-core instead of a quad+. Also, look at Intel die sizes: the IGP mostly goes unused on $200+ desktop CPUs, but there it is. Even with Intel's superior process, their designs have some caveats too.
 

InvalidError

Titan
Moderator

Really?
http://images.anandtech.com/doci/11170/AMD%20Ryzen%20Tech%20Day%20-%20Lisa%20Su%20Keynote-11_575px.jpg
Each CCX accounts for about 35% of the die space by pixel (area) count. The area near the middle of the bottom edge appears to be a wide die-to-die interconnect (for the Ryzen HEDT dual-die MCP), which would be unnecessary in a budget-oriented chip, cutting another 5%. Removing or simplifying the Infinity Fabric after axing the extra CCX, external interconnect, and support logic would likely cut another 5%. So a single-CCX Ryzen would be around 55% of the size of the dual-CCX one, which means the dual-CCX die is roughly 182% the size of, or 82% bigger than, what a single-CCX Ryzen might be.
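Spelling the arithmetic out (the shares are eyeballed from the die shot, so treat this as an estimate, not a measurement):

```python
# Area estimate for a hypothetical single-CCX Ryzen die.
# Shares are eyeballed from the die shot linked above, not official figures.
full_die_mm2 = 192       # Zeppelin (dual-CCX Ryzen) die size
ccx_share = 0.35         # one CCX, by pixel count of the die shot
die_link_share = 0.05    # die-to-die interconnect used by dual-die MCPs
fabric_share = 0.05      # fabric/support logic freed by dropping a CCX

single_ccx_share = 1.0 - ccx_share - die_link_share - fabric_share   # ~0.55
single_ccx_mm2 = full_die_mm2 * single_ccx_share

print(f"Hypothetical single-CCX die: ~{single_ccx_mm2:.0f} mm^2 "
      f"({single_ccx_share:.0%} of Zeppelin)")
print(f"Zeppelin is ~{1 / single_ccx_share - 1:.0%} larger than that")
```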

As for defects in the smaller dies, AMD could very well sell those as 2C4T, 3C6T and reduced-cache variants; absolutely nothing lost there. AMD currently has nothing to compete at the $100 level; even the R3 will be $120+ and a total waste of an 8C16T die. (At least I suspect that's what AMD is going to do based on the prices.)

As I've written many times before, if Ryzen's defect rate were so high that the bulk of R5-1600 chips weren't fully functional dies, AMD would need to cut cache long before hitting the R5-1400, as about 50% of the CCX surface area is L3 cache. That would be a heck of a lot of chips to get rid of as R5-1400s or worse due to bad L3 blocks.
 

thekingprawn

Prominent
Thank you for the review and for gearing it toward the value calculation as well as the technical performance. Many of us really are looking for the best bang for the buck rather than the most bang irrespective of the buck.
 

Retrogame

Distinguished
I understand the theory of reviewing a CPU with an overkill GPU, and of reviewing GPUs with an overkill CPU. But for the i5 and Ryzen 5, I think we need more realistic builds to test gaming value.

Just before the 1600X and 1500X launched, Steve Walton at TechSpot did a fun experiment. He turned off cores on an 1800X sample in pairs via the BIOS to simulate the Ryzen 5s, then ran a few games with different GPUs. With more affordable GPUs, the Ryzen 5s were not seriously CPU-bottlenecked; it took big Titan-level GPUs at low resolutions to create bottlenecks on the CPU.

We've established that the real chips should behave similarly based on power and other readings, albeit with varying clock speeds. But in the same vein as recent articles here testing the effect of Kaby Lake or Skylake core count on game performance, or the recent Battlefield 1 comparison with a cross-section of GPUs young and old on an old vs. a top-of-the-line platform, I think some more digging is necessary for a value recommendation.

If you're building a budget or mid-range gaming rig, you're not putting a 1080 Ti or Titan Xp card into it; you're putting a GTX 1060 or RX 480 into it. A GTX 1070 at the outside if you've got VR or a super-expensive monitor going on.

Not that long ago, Full HD was still a GPU bottleneck on ultra game settings for anything less than a GTX 980, but the latest top-end (and very expensive) GPUs have blown that level out of the water. That's why I fully agree with the GPU recommendations article listing hardware based on what resolution you want to run. I suggest the same idea for gaming CPU recommendations: test the i5 and Ryzen 5 with the same GPU you just recommended for my monitor or TV, and then evaluate which is the better deal.
 
The die size argument is a loser. Big Whoop.

Zeppelin at 192mm² (and new re-spins/steppings) is a single, 'simple' design that covers everything from 16c/32t enterprise ('Snowy Owl' cherry-picked 'MCM' chips reportedly from 35W to <100W TDP) to 4c/4t desktop. And, as Paul has reported, we will soon see 2P 'Naples' on the enterprise side ...
Things look promising for Team AMD.

Next up for Zen cores are the Raven Ridge APUs which, presumably, will cover desktop, office & mobile with designs that likely include HBM.

I don't think The World cares if AMD 'harvests' their initial Zen chips :) We want to see a Raven Ridge mobile with HBM !!
Now, that's a design :ouch:



 
I think testing a range of GPUs would be nice, if people had more patience. Five minutes after the NDA expires there are a dozen posts complaining that the review is taking forever and that such-and-such site already has theirs posted. AnandTech has taken tremendous flak for that over the last year.

If there's only time to test one GPU, then logically they should test with the fastest they can and go back and add others later. How many GPU upgrades does a typical system see vs. CPU upgrades? Most people have hung onto their systems for at least 3-5 years ever since Sandy Bridge came out. They upgrade their CPU 0-1 times, then change platforms and jump a generation or even two. In contrast, the GPU isn't platform-bound, and it's quite common for people to upgrade every other year.

Here's Tom's Sandy Bridge review from back in 2011:

http://www.tomshardware.com/reviews/sandy-bridge-core-i7-2600k-core-i5-2500k,2833.html

What struck me as sounding very familiar was this:

We used the fastest single-GPU graphics card available in order to expose any platform-oriented bottlenecks in Metro 2033. With that said, it’s hard to imagine anyone buying a GeForce GTX 580 and gaming at 1680x1050. If they did, they’d see performance start to drop off in a noticeable way starting with AMD’s Phenom II X4 970, continuing on through Intel’s dual-core offerings, and ending with an older Core 2 Quad Q9550.

The moral of the story here seems to be that, as you step up to higher-end graphics, a dual-core processor simply isn’t fast enough.

Also interesting is that the six-core Phenom II X6 1100T, though not the fastest offering, opens up enough headroom to enable the highest minimum frame rate in our Metro 2033 benchmark. That advantage shrinks as you crank resolution up, though, shifting more demand onto the GPU. By the time you hit 2560x1600, eight of 10 platforms fall within one frame per second of each other.

If they'd tested with the more mainstream GTX 460 (current at the time), it couldn't have told people anything about how their 2600K would perform today with a 1080 Ti, or even an RX 480, let alone how the 2600K would fare at 2560x1600 against a Phenom II X6. Now admittedly, testing with the GTX 580 wouldn't have told them that either, but it's a lot closer and the best that could be done at the time.
 

goldstone77

Distinguished
An overclocked 1400 offers almost the same performance as the 1500X, and you can buy a $29 Hyper 212 EVO, which drops temps by 20°C compared to the Wraith Spire cooler when overclocking to 3.9-4.1GHz.
 

logainofhades

Titan
Moderator


The 1400 has less cache, IIRC, so I am not sure that is accurate. Also, I wouldn't get a 212 these days; you can get better coolers for a similar price, or just a bit more.
 
^ Also, the 212 will drop 10-12°C compared to the 1500X's cooler, not 20°C. Yes, it'll drop 20°C compared to the stock 1400 cooler, which is way smaller.

Looking at prices, a 1400 plus aftermarket cooler will cost $5-6 more than the 1500X - hardly tempting for a lot of buyers.

 