AMD Demonstrates Ryzen Performance At New Horizon Event, 95W TDP Confirmed

  • Thread starter: Guest
Status
Not open for further replies.
I would be interested in a high-end mobile AMD CPU, hopefully a 45 W part, not just up to 35 W TDP, so it can be clocked decently high. Otherwise, a high-end 45 W Kaby Lake for me in an MSI GS73VR, please.
For many generations AMD has stuck with 35 W and less for mobile chips, which is disappointing.
 
Not sure why every website mentions that the i7 was running at 3.2-3.7 GHz.
Unless the BIOS settings are on auto and it's liquid-cooled, running encoding/rendering on all 8 cores will prevent boost clocks.

So to me, this actually shows that AMD only beats the 6900K when it's clocked (slightly) higher (3.4 vs 3.2 GHz),
even if it's only by a little.

If the IPC is so good, why not run both CPUs fixed at 3.2 GHz on all cores?
Right, because then we would see a difference.

I don't expect a sub-$500 CPU to beat the 6900K (and it doesn't need to anyway), but I would like AMD to be a little more conservative on performance claims,
rather than showing numbers that won't hold up once it's on the market.

@metathias
Would it matter? Since IPC is close to Intel's and the 8C/16T part is sub-$800, it's still a win.
I expect it to be around $500, maybe $350 for the lower-clocked ones.
And even if they are that expensive, I can still save by going 6-core.
A 5820K will still cost $300-400, and I doubt a 6C/12T Ryzen will cost that much.
 


They did that test already to show IPC, and people complained. Also, the stock 6900K runs at 3.2 GHz and turbos to 3.7 GHz by default. Essentially, AMD didn't touch any settings on the 6900K, so it is by far the most valid test, albeit AMD did not have "turbo" enabled on their CPU. The conclusion, if you were to draw one, is that the stock AMD chip and the stock Intel chip are fairly on par with each other. I wouldn't read anything else into it until independent tests are done.

 


http://www.tomshardware.com/reviews/intel-core-i7-broadwell-e-6950x-6900k-6850k-6800k,4587-9.html

Edit: Emphasis added.
 
@JamesSneed / TJ Hooker

Exactly: under LIGHT load.
So it might boost during a game benchmark, etc., but not when doing something like rendering/encoding,
which to me is a little better for comparing performance between "similar" CPUs, since there are fewer fluctuations in things like clocks.

At least on previous i7s, you would normally see a drop of around 200-500 MHz as more cores are loaded, so 3.2 GHz when all cores are under full load. But people tend to remember the last number given (3.7), which might make them believe the AMD chip beats it while running at only 3.4.

And again, no matter who tested it, any clocks higher than 3.2 GHz (on the 6900K, with all cores at full load) mean the BIOS is not set to regular/normal but to auto (for VRM/CPU clock/voltage/power/temperature limits, etc.).

It's actually pretty common for the big board makers to apply some OC with auto settings so they get better bench results in reviews.

My 3770K ran at 4.3 GHz on all cores with liquid cooling, all settings on auto.
Once I switched to normal/regular/manual with "stock" Intel limits set, it would run all cores at 3.5 GHz max, up to 3.9 GHz for a single core.
 


3.7 GHz across all cores in AIDA64's integrated stress test. Is that a "light load"?
 
Very interesting stuff (and a very mediocre presentation). IPC seems almost identical to Broadwell. That old figure AMD threw out, 2% faster than Broadwell, seems pretty accurate. Their messaging has been consistent about matching Intel, and I believe them. The surprising thing is the efficiency claims. They might beat Intel at their own game here.

This eight-core beast is of little interest to me, as I am primarily a gamer and I already have an overkill i7-6700, but a 4-core/8-thread or 8-core/8-thread part (will that exist?) at say $250-$350 could be very compelling. Again, as a gamer, I am not looking to upgrade for a little while, but I kinda want to see AMD kick Intel's <language edit> in the high end.

It has been rumored that Intel is finally going to increase core counts in the 2018/2019 release, one step after Coffee Lake, two steps after Kaby Lake. I sincerely hope this happens and AMD is able to compete.

I view this launch as AMD's equivalent of the start of Intel's Core series. I expect incremental improvements to this basic CPU architecture for the next couple of years. Intel has essentially been releasing the exact same CPU with a few tweaks every year, and they seem to be running out of room for more performance, while AMD can probably optimize another 20% IPC out of the architecture. If the coming Zen iterations can substantially beat Intel in IPC, I would love to buy one of their chips. For me it is kind of a side step, not an upgrade, right now.

Bring on the prices and reviews.
 

But as @JamesSneed said above, they locked both processors at the same clocks and ran a Blender test a while back to show similar IPC.

Anyway, at the end of the day, IPC is actually less important than actual performance and power consumption. If FX processors could have run at 7 GHz with acceptable power draw, no one would have cared about the low IPC, because the performance would have been competitive.

If AMD are able to clock Ryzen slightly higher than competing Broadwell-E CPUs while keeping thermals in check, as it looks like they're able to do, then who cares about IPC? Intel and AMD architectures are going to show slightly different IPC depending on the workload. Maybe for Handbrake encoding Intel's IPC is slightly better. But if Ryzen is clocked a little higher and they perform the same... great!

The reason AMD keep talking about IPC is that they already have an "8 core" CPU running at 4.7-5 GHz. Now they're touting a new 8-core CPU running at 3.4 GHz or higher. You can understand in that context why they'd be so eager to underscore the improved IPC. But at the end of the day, it's performance and power draw that really matter. And if the few tests we've seen turn out to be good indicators of overall performance, then next year might give us the first genuinely exciting desktop CPU launch since Sandy Bridge.
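The tradeoff described here can be sketched with a toy model, performance ≈ IPC × clock. This is a simplification (real workloads don't scale this cleanly), and the numbers below are purely illustrative, not measurements of any real CPU:

```python
# Toy model: throughput ~ IPC * clock (GHz). Illustrative numbers only,
# not measured values for any real CPU.

def relative_perf(ipc_a, clock_a, ipc_b, clock_b):
    """Throughput of chip A relative to chip B under the toy model."""
    return (ipc_a * clock_a) / (ipc_b * clock_b)

# Chip A has 5% lower IPC but is clocked at 3.6 vs 3.4 GHz:
print(round(relative_perf(0.95, 3.6, 1.00, 3.4), 3))  # 1.006
```

Under this simplification, a modest clock advantage fully offsets a small IPC deficit, which is the point being made: final performance and power draw matter more than IPC in isolation.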
 


Just wait for the actual independent benches to come out and you will have your answers. Honestly, they have shown their hand far more than I can recall AMD or Intel doing in the past, at least this far from release. They have shown us clock-for-clock tests with Blender (you may argue it was not fair) and now this preview. I think it's fair to say it just shows AMD has a decent CPU; for actual real-world comparisons, we all wait.


 
@TJ Hooker
No, but again, that would not be within Intel's "limits" for stock voltage/temp/power, and it was only possible because of better cooling and because the board could push higher voltages, which resulted in higher clocks.
As long as I don't see 6900Ks on a cooler rated for 140 W TDP clocking higher than 3.2 GHz (with 100% load on all cores) with stock voltage/power/TDP limits set in the BIOS, I have to assume it's (auto) "OCed"..

@rhysiam
And that's why I would have liked them to repeat that, just with a 3.4 GHz clock on the Intel.
Heck, they could have just done both tests (limited to 3.4 / with boost).

I was just interested in how much they:
A) gained on Intel
B) gained vs the FX-8xxx/9xxx
C) gained vs Zen clocked at 3.2 GHz


@JamesSneed
Pffft, if AMD has an 8C/16T or even a 6C/12T CPU for under $350, I'm game ;-)
I hope they'll have them out in early Jan; I'm selling mine in 3 weeks...

 
I can't wait for the generation after the next <language edit> Lake, when Intel decides to actually do something innovative with their CPUs.
 

Do you have any evidence for anything you've said here? Because I've already provided evidence showing that a 6900k will boost to 3.7 GHz on all cores. You mention "better cooling" and "higher voltages"; better/higher than what? There's no stock cooler or stock motherboard to compare against. Yes, if you have insufficient cooling, it will throttle. That's how turbo boost works as a rule. "(auto) 'oced'" is basically what Turbo boost is. The fact is a 6900k can/will hit 3.7 GHz on all cores without having to overclock or change any BIOS settings. If you can point to a specific example where a 6900k with a cooler rated for strictly 140W wasn't able to hit those boost speeds or something like that, please share it. And even then, that's only relevant if we assume AMD skimped on their CPU coolers for the demo.

Do we know for a fact that the 6900k was boosting (to 3.7 GHz) in the demo? No. But if we believe AMD's statement that the CPU was running at stock settings, then it's reasonable to think it was. And if we don't believe them, well, it's sort of a moot discussion, because then we would have no reason to believe the results.
 

They could have done that, I suppose, but it still wouldn't really have told us anything. It seems like you want to know the exact IPC relative to Broadwell-E... but IPC is totally workload dependent. All matching the clocks would have done is told us the relative IPC of Ryzen to Broadwell-E for Handbrake, running that particular encoding profile (of many, many different potential profiles). That doesn't necessarily represent the exact IPC for any other encoding tasks, let alone the myriad of other workloads we want to know about.

I'm looking forward to launch reviews which compare relative IPC and performance across loads of different benchmarks and workloads. No doubt we'll see bright spots and relative weak spots in both architectures, and then we can start to generalise about overall performance, and the sorts of clock speeds required to match or beat competitors on either side.

It doesn't matter how many different combinations of clock speeds AMD used, no single benchmark run is ever going to answer the question you want with any degree of accuracy.

A) -> they've at least closed the gap dramatically, and (if the benchmarks we've seen can be generalised) are now in the ballpark for Broadwell IPC
B) -> they've demolished FX IPC
C) -> Not sure I understand... if IPC is similar, then clock-for-clock it'll be similar

Until we get a full suite of benchmarks we're going to be left with, at best, a rough approximation of relative IPC... and that's exactly what AMD have given us. Tweaking the clock speeds up or down a couple of hundred MHz isn't going to change the picture in any real way.
 
I think everyone is expecting too much of Intel. Everyone is saying, "now, thanks to AMD, Intel will give us a nice performance boost in the next xxx generations!" I sincerely believe that we are just reaching the limit of (economical) performance improvements with current tech.

I actually would love to see AMD be really competitive. It's nice to be able to choose. I will be building a new system in about 6 months, so I have some high hopes here.
 
@TJ Hooker
That is my main problem: they ran it with "stock" settings, which in almost all cases means auto settings, which allow higher clocks than "normal" on Asus/MSI/ASRock boards (I haven't used/built any other brands in the last few years).

http://www.intel.com/content/www/us/en/support/processors/000006652.html

As you can see, not one CPU boosts to its full turbo clock once all cores are running, and unless Intel has completely redesigned its 6xxx CPUs (vs the 5xxx) or Turbo Boost itself, this is still valid.
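The per-active-core turbo behaviour described in that Intel support page can be sketched as a simple lookup table. The bin values below are hypothetical, chosen only to match the rough pattern being argued (full turbo with few active cores, near-base clocks with all cores loaded); real bins vary per SKU:

```python
# Hypothetical turbo-bin table: active core count -> max turbo (GHz).
# Values are made up for illustration; real bins vary per SKU.
TURBO_BINS = {1: 3.7, 2: 3.6, 4: 3.4, 8: 3.2}

def max_turbo(active_cores):
    """Highest turbo clock allowed for the given number of active cores."""
    # Use the smallest bin that covers this many active cores.
    eligible = [cores for cores in TURBO_BINS if cores >= active_cores]
    return TURBO_BINS[min(eligible)]

print(max_turbo(1))  # 3.7 -> a single-core load hits full turbo
print(max_turbo(8))  # 3.2 -> an all-core load drops to the lowest bin
```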


I don't remember exactly when I first read about "auto" settings OCing the system, but I know it's been happening since Ivy Bridge.
It might just be on boards that allow OCing (like the Z series), but it now appears to happen with smaller chipsets too (a couple of MHz more on BCLK/chipset).

AFAIK it started with bench results varying too much, and after looking into it, they found the CPU was clocking beyond its boost frequency.
I know I read about it on more than one site.


The best example is a good friend of mine who couldn't play a game without crashing a driver (sensitive to OCing). I asked if he was OCing his 4790K, and he said "no, everything on auto".
After changing everything in the BIOS to fixed/normal (Intel's limits for that CPU), it wouldn't OC beyond the boost clock anymore, nor would it run the boost frequency on all cores at 100% load (Prime95).
But that changed again after he swapped the air cooler for an H100i, and it now runs boost on all cores again.

And so far, every i5/i7 K (non-Extreme, though) rig I worked on or helped troubleshoot was clocking higher than the max boost frequency under full load, and then didn't, once I set everything to normal/regular/manual.
 

That will be really interesting to see. The next few Intel releases are relatively locked in, but in a few years time we may well see what Intel can do... and you may well be right, with one clear exception.

Intel could absolutely, and without a huge amount of effort or cost, increase the core count further down their product stack. Skylake i7s have the smallest die size of a high-end CPU in over a decade. And it's likely that more than half of that is the iGPU anyway, which is useless for some of their target market. As a point of comparison, an i7 6700K is around half the die size of an RX 480, which ships with expensive RAM on a full PCIe card with a higher-end cooling solution for substantially less money. Now you can argue whether we really need more than four cores, or whether or not Intel as a business should increase core counts just because they can, but there's no doubt Intel have been milking the quad-core desktop market with minimal improvements for years now.

Even if engineers really are hitting the wall in terms of per core performance, adding more cores is absolutely possible and not particularly expensive. Whether we have the software to make use of the additional cores is another question.
 
I wouldn't call it milking.
You can only get milk if there's a cow.
And no one had a gun put to their head and was forced to buy that i7 chip...

And who can blame them?
I mean, what business, here to make a profit, would NOT keep the improvements small when there is no real competition?

E.g., Starbucks won't lower its prices just because the other place around the corner has not-so-good coffee.
 

Well, that's what I meant when I said you can argue whether Intel as a business should or should not increase core counts just because they can.

Have a look at the die size chart in this article: http://www.anandtech.com/show/9505/skylake-cpu-package-analysis
Since 2009, Intel have been able to get more and more CPUs out of each silicon wafer, but outside of integrated graphics, they haven't provided consumers with any significant benefits, and have continued to charge the same prices.
As a point of comparison, the $1700 6950X has only a slightly larger die than an i5-2300, which sold (presumably at a profit) on what was then Intel's brand-new 32nm node for $177 when it was released six years ago.
That's hardly progress!!
Yes, I realise that newer nodes are becoming increasingly difficult and more expensive, but not that much more. And it's not as if the 14nm process of Broadwell-E was particularly new by the time it came around anyway: Broadwell-E launched more than 18 months after Intel's first 14nm chips hit the market.
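The wafer-economics argument can be made concrete with a crude dies-per-wafer estimate. This ignores edge loss, scribe lines and yield, and the die areas are ballpark assumptions, not official figures:

```python
import math

WAFER_DIAMETER_MM = 300  # standard 300 mm wafer

def dies_per_wafer(die_area_mm2):
    """Crude upper bound: wafer area / die area (ignores edge loss and yield)."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area // die_area_mm2)

# Ballpark die areas in mm^2 (assumptions for illustration):
print(dies_per_wafer(122))  # ~579 candidate dies for a small quad-core-class die
print(dies_per_wafer(246))  # ~287 for a die twice that size
```

Halving the die area roughly doubles the candidate dies per wafer (and improves yield in practice), which is why shrinking dies without passing on cost or cores supports the "milking" argument above.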

I'm not saying it's bad business.
I'm not saying that Intel should necessarily have given us more (that's ultimately a business decision).
I am defining "milking" as gaining significant advantage/profits simply because you can, and I think it's pretty clear that this is precisely what they've done.
 
The exact performance and price might not yet be known, but the apparent efficiency seems like a good sign. They could definitely price the chips competitively too, since Intel's pricing has largely stagnated in recent years. Even if Intel's chips happened to perform slightly better at a given core count, if AMD could significantly beat them in terms of price and efficiency, I'd consider their chip a winner.

 
As much as this does look promising, until real-world tests are performed we actually know very little.

I like the idea of an 8-core CPU, especially if it does compete with Intel (that will drive prices down).
My problem is that AMD is showing very little (if you have a winner, there's no point in holding back). I think there may still be a few issues they are trying to solve.

One good question will be: how well does it OC?
Another: what price?
Another: how hot does it run when OCed?
Another: will the instruction set work well with all OSes? (More importantly, Windows versions older than 10.)
Another: will it even work with older Windows systems?
Another: how well will the chip keep frame times consistent?

So, still a lot of questions 😀.
 