AMD Ryzen 7 1800X CPU Review

Ah, fanboys. About six months after Socket 478 was released, someone sold me a four-month-old Northwood 1.8 clocked to 2.4 GHz, with 512MB of OCZ PC-1066 RDRAM and a motherboard, as a complete platform kit, for $150. All the AMD fanboys came out to slam me for buying an $800 platform when their $600 platform could match it... ignoring the fact that I paid only $150 for a platform that could match whatever they were buying for $600.
 


I don't think the CCX will allow for easy die harvesting, and we'll most likely see a penalty in heavily threaded situations... maybe. Depending on how threads bounce between CCXes, the design might hurt scaling with odd numbers of heavy threads and frequent context switches. But that's just me!

On the other hand, can you explain why they would need a new die for the 4C version? Or are you talking about the APU coming next? One CCX would be a 4-core CPU on its own, but in terms of effective die space that's like... 20%? I mean, the effective die area you "free up" is like 20%? So you should get (blowing-in-the-wind prediction) at best 10% higher clocks? Uhm... I don't even know how I came up with that number, lol.

Also, nice reading you again; where have you been? T_T

Cheers!
 


You can easily disable individual cores and their associated L2 cache, which makes die harvesting easy. You can even disable regions of the shared L3 cache, which takes up a lot of space and is prone to defects.

This is an eight-core die with a large amount of L3 cache that takes up a lot of die space. If you look at the die you can see the individual cores divided into two separate clusters of four, with the various SoC ancillary components surrounding those core clusters.

[Die shot: AMD Ryzen Tech Day, Lisa Su keynote]


If you want to be competitive in the lower-cost markets where the i5 sits, you need to get the cost per die down, which means getting the number of dies per wafer up. So you remove one of the two four-core clusters and half the shared L3, leaving a four-core / eight-thread CPU that can be sold for less than current i5 offerings. And because there is less physical hardware drawing power, you get a lower TDP or higher clocks.

As designed this is a great workstation / server CPU: lots of potential parallelism for running labs, a ton of VMs, or rendering work. As a gaming CPU (the part of the PC market that's booming) it's mediocre, because the Intel i5s end up being better choices. Scale AMD's design down to half the cores and half the L3 and it becomes a lot more competitive.
 


Ah, but of course! The 4C dies could also come from APUs with the iGPU disabled. I completely forgot that. The plot thickens.

So they effectively have at least two product lines on 14FF nodes (whether at GF alone or somewhere else), so having a third one producing something in between isn't that far-fetched. Do you think consumer-market volumes make it a feasible option to pursue? Since it's basically a simplification of the existing design, it shouldn't be hard to do in parallel, I guess?

Cheers!
 

Almost half the die size means twice as many dies per wafer and half the chance that any given die has a defect. That works out to over twice as much potential revenue per wafer (a little more, in fact, since smaller dies waste less space around the wafer's edge): half the wafer cost per CPU and more than twice as many sellable CPUs per wafer. In other words: much cheaper to mass-produce.
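Just to put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. To be clear, the die areas (~213 mm^2 is in the ballpark of what's been reported for the 8-core die; the halved figure is my guess) and the defect density are illustrative assumptions of mine, not AMD or GlobalFoundries data, and the dies-per-wafer expression is the usual textbook approximation:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """Gross dies per wafer (standard approximation, ignores scribe lines)."""
    r = wafer_diameter_mm / 2.0
    return (math.pi * r ** 2) / die_area_mm2 \
        - (math.pi * wafer_diameter_mm) / math.sqrt(2.0 * die_area_mm2)

def poisson_yield(die_area_mm2, defect_density_per_cm2=0.2):
    """Fraction of dies with zero defects under a simple Poisson defect model."""
    return math.exp(-defect_density_per_cm2 * die_area_mm2 / 100.0)

# Illustrative areas: a full 8-core die vs. a hypothetical halved 4-core die.
for label, area in (("8-core die", 213.0), ("hypothetical 4-core die", 110.0)):
    gross = dies_per_wafer(area)
    good = gross * poisson_yield(area)
    print(f"{label} ({area:.0f} mm^2): {gross:.0f} gross, ~{good:.0f} good dies per wafer")
```

With those made-up inputs the halved die works out to roughly 2.5x as many good dies per wafer, which is exactly that "a little bit more than twice" effect.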

Intel has different dies for the Pentium/i3 and mainstream i5/i7/Xeon: the Pentium/i3 dies are dual-core while the i5/i7/Xeon one is quad-core. It is all about maximizing revenue/profit per wafer.
 

It seems strangely familiar to me; I feel like I've seen it somewhere many years ago.
 
Ryzen isn't the problem - I think you need the 4th iteration of mainboards 😛

One recommendation:
If you're running a Ryzen 7 1700 at 3.8 GHz (yes, it's rock-stable), don't use AMD's so-called Master tool; tune it by hand and you'll do better. I've tested two CPUs now and it works. Together with a 100-euro Asus B350 board it's a good way to enter the real 8-core world 😉

 




...just because I enjoy mixing metaphors...that horse sure died right out of the gate.




Not even Scientology?
 



I am talking facts; you are talking in generalities... "what if", gamble, etc.

When you can talk facts and in detail, come back to me.

And you are mistaken about the CPU design. When some pins are not connected, the CPU will still work on the lower-end motherboard, just not with everything enabled. It's the same as building an ITX motherboard for a socket 2011 CPU with only two DIMM slots and a single x16 PCIe slot.

There the CPU runs in dual-channel mode instead of quad-channel, and uses only 16 of its 40 lanes... because they are not all connected.

When you design a universal socket, you can build them this way... from the beginning.

The higher-end CPU will work on the lower-end motherboard, just without all options enabled: not 4 or 8 memory channels, not all lanes used. As SIMPLE as THAT.

It's not that you drop the CPU in and it doesn't function at all; it fits and runs with 2 channels instead of 8, and x16 lanes instead of 64 or 32 or whatever.

And when you design and use a universal socket, this is easy to do... it's not gambling, and it's not your general, non-specific talk...

You are the one who lacks the understanding, and I am the one explaining in detail how it works. You are just talking in generalities, and I will address each point you raise.

I explain in detail and you just say "gambling", "risk", "other designs".

This is the end of this conversation. I will not bother anymore.

Oh, and whoop-de-do to you too!!!
 
I don't understand... What happened to all the engineers you were going to have backing you up? What happened to being professional?

I see from your response that you don't get it, so sure, I won't bother either. You've clearly proven my point.
 
Uh yeah, well all that aside, everyone's favourite Scotsman at AdoredTV pointed out something that completely blew my mind about low-resolution gaming benchmarks. Although they do seem to make logical sense to me and to pretty much everyone else, he points out a long-running trend that will really have you scratching your head.

I'm a long-time and well-decorated Tom's expert. I built my first PC 28 years ago (a 286-16) when I was 12 years old (yes, I'm 40, so 😛~~~). I've seen the CPU wars between not only Intel and AMD but also Cyrix and VIA. I had an ATi EGA Wonder, and I've seen video cards made not only by ATi and nVidia but also by Oak, Orchid, 3dfx, Matrox, S3 and Cirrus Logic. I saw the 84-key keyboard of my IBM PC model 5150 replaced by the 101-key keyboard from the IBM PC/AT. I saw bus standards like 8-bit and 16-bit ISA, 32-bit EISA and VESA, 64-bit PCI, AGP and PCI-Express. Hell, I've seen MFM, RLL, IDE and SATA. But I've NEVER seen ANYTHING like what he describes here, and to be honest, the fact that I've seen so much yet never noticed what he talks about embarrasses me. You really need to see this, because THIS is some of the greatest investigative tech journalism I've seen since Charlie Demerjian exposed nVidia:
https://www.youtube.com/watch?v=ylvdSnEbL50
 


This is what you said:

Feel free to contact whomever you like. You have proven here that you do not understand what goes into bringing a product like this to market, so continuing this argument will only lead us around in circles.

Why should I bother after that? You understand very well, and I am not wasting my time here any more.

By the way, you did not even ask about my qualifications before saying "you don't know what you are talking about".

You don't see anything and you are not a judge... you are just trying to make me angry. As I said, I am a moderator on other sites; I don't fall for this kind of talk and I don't fall for downvotes, they don't even exist in my eyes.

I will give you some advice: next time you want to say "you don't know what you are talking about", ask for the person's qualifications first...

See ya in other threads...

 

That video you linked is indeed interesting, and at first glance I can't find anything wrong with his claims, so I'd say it warrants more investigation and could mean something. I'd really like to see more testing on this; he might be onto something. I see some of the inconsistencies he points out, and if I had the time right now I'd dig into the issue further.

Still, I believe that 1080p Ultra runs are genuinely useful benchmarks: quite a lot of people run 1080p144 monitors, and they're a good way to compare pure gaming performance, probably the only feasible one on a reasonable timeline. They might be far from perfect, but they're what we have. Asking to "skip them" is just asking to be given less information, which I don't find right in any situation.

But what actually has me curious is that nVidia thing you mentioned. Can you explain a bit more? I wasn't in the PC-building community when that happened, so I'd like to learn about it.
 


You gave us your qualifications: you told us you took some electrical engineering classes in college.

If you have more than that, please share it with the group; it will lend weight to your observations.
 


I'm sure he's positively heartbroken that he doesn't have the immense privilege of continuing to receive your insights.
 

And how much do mini-ITX X99 motherboards cost? Even with all that stuff cut out, they are still $180+, hardly any cheaper than no-frills ATX boards. Having fewer electrical connections under the socket and across the board does not reduce the platform's cost anywhere near enough to make it budget-friendly.
 


Keep in mind that he's comparing the FX-8350, which was released in October 2012, to an i5-2500K released a year and nine months earlier, in January 2011. If you look at the other benchmarks in that video, you'll see the Ivy Bridge chips (released April 2012, six months before the 8350) maintain their leads and in some cases extend them. The gains only materialize against a chip nearly two years older. The closest contemporary to the 2500K (since it wouldn't be fair to compare against Phenom) would be the FX-8150, released in October 2011. That's still nine months after Sandy Bridge, and it does not fare nearly as well despite the higher core count.
 


This is a very skewed way of thinking, because those chips are four cores while this is eight. Extra cores aren't free from an engineering point of view: they draw power, generate heat, and take up die space, all of which could otherwise be spent on single-thread "gaming" performance. That's why we don't see "gaming" benchmarks on Intel server CPUs even though they're quite expensive. Would you argue that Intel server CPUs are poor and that those users should drop i5s and i7s into their infrastructure servers instead? This CPU is designed for the very wide workloads found in higher-end applications, just like its predecessor. The big benefit is that it doesn't make monstrous sacrifices to achieve that wider workload, which makes a cut-down version quite appealing.

This CPU should be compared to Intel's 6C/12T offerings on a cost-for-cost basis. I fully expect AMD to release cut-down four-core versions that will have the same or better (clock speed) gaming performance at a much lower cost, making them competitive with Intel's i5s and i7s.
 


I watched the whole thing. Great information. Everyone should see this video. Thanks for posting this.
 