AMD CES 2019 Keynote Live Coverage



The way I read it is, yeah, there may be a slight speed improvement, but that improvement is negligible. It isn't by a large enough margin to be worthwhile.



What was the last AMD processor you used? Was it Bulldozer/Piledriver? Was it Phenom II, Phenom?
 
 


Let's get a few things straight here ...

When I made that comment, you were arguing that only OC headroom counts. No mention of single-threaded performance or absolute clocks had been made by anyone in the conversation, including you. So how can you say that I was arguing against single-threaded performance when that topic had not even arisen yet? Damn, you're creative.

"Overall performance" vs "performance" - same damn thing. Don't you think? Or do you think there is a difference?

I said "buy a damn Pentium" because at that point, you were saying that you need at least "1 ghz OC" and only Intel can do that - again no talk on single threaded performance or clocks. Then I merely pointed out an 8 core Ryzen that has 1 OC of headroom by the standards that you yourself put forth claiming AMD couldn't achieve, and you got all defensive and huffy and started arguing random points all the way along till we got here, to which I obliged because you still needed correction on most of what you were saying. Example: "All flight sims only use one thread" <-- not true and so I corrected you. They way that flights sims can use up to 32 cores isn't part of that argument, now is it?


Now here you are conflating all the things I guess you meant to say but didn't, and trying to use the things you didn't say in the argument at that time as "proof" I was wrong. That's weak. You're literally the worst debater in the world.

Don't say things you don't mean, and if you mean something other than what you said, correct yourself instead of conflating your afterthoughts with the original argument.

Do you think I care that, as a general rule, Intel has higher OC headroom? I have nothing against Intel. Here again you are conflating me correcting all your inaccuracies and truth-bending with arguments that I am not making at all. I have nothing against Intel - I am responsible for all hardware purchases at work and guess what, they are all Intel, so please stop thinking that me correcting your untruths at every turn means I am attacking Intel - I have not attacked Intel at all in any post and have nothing against them. (Except that the people who buy their processors are seen by them as just "revenue" and nothing more, and they cater all their efforts to trying to keep investors happy - even if that means gouging, ripping off, and lying to the very people who give them their revenue - 5 GHz 28 cores anyone? That lie was to keep investors happy. I have that against Intel, for transparency.)

You on the other hand ...

 
*steps up on soapbox*

Specter0420, you need to understand that the burden of proof is on the CLAIMANT, not the respondent. It is and will always be so. As such, you've passed off an awful lot of talk without much (or any) actual substantiation. It's your job to provide links to support the claims you're making and not just use anecdotal "evidence" (your personal experience) as a steadfast fact. Every time you do that, you undermine the very notion of what you're trying to say. Either provide the information to go along with your claim or don't make the statement at all. Much of what you've done to this point is purely argumentative for the sake of it, sprinkled with some ad hominems and other logical fallacies.

Don't do any of that if you wish to be truly taken seriously.

*steps off of soapbox*

Having said all that....

This year is surely going to be something for AMD. I'm looking forward to it with anticipation, as I'm still chugging along with my FX-8350.
 


What claims have I made that need to be substantiated?

That Intel has historically had more room to OC? (When compared apples to apples, I can't believe I need to note this but people around here...). Or is it the one where I said that Intel OCs a full GHz higher?
15% of i7-8086Ks hit 5.3 GHz within safe voltages: http://lmgtfy.com/?q=i7-8086K+max+overclock
That is a 1.3 GHz OC on the high end (not a high-end chip downclocked and sold as a cheaper model that is then able to OC 1 GHz back up... that is a deceitful AMD fanboy's loser argument).

The 2700X can overclock to 4.2 GHz within safe voltages, a 500 MHz OC... https://www.tomshardware.com/reviews/amd-ryzen-7-2700x-review,5571-11.html

A little simple math shows that 5.3 GHz - 4.2 GHz = 1.1 GHz, an over-1 GHz higher max OC... https://calculator-1.com/simple/
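To lay that arithmetic out explicitly, here is a quick sketch (the 4.0 GHz and 3.7 GHz base clocks are the stock figures implied by the "1.3 GHz OC" and "500 MHz OC" numbers above, not something stated directly in the post):

```python
# Rough sketch of the comparison above, using only the figures quoted in this post.
intel_base, intel_max_oc = 4.0, 5.3   # i7-8086K: stock base clock, reported max safe OC (GHz)
amd_base, amd_max_oc = 3.7, 4.2       # Ryzen 7 2700X: stock base clock, reported max safe OC (GHz)

intel_headroom = intel_max_oc - intel_base   # 1.3 GHz of OC headroom
amd_headroom = amd_max_oc - amd_base         # 0.5 GHz of OC headroom

print(f"Intel OC headroom: {intel_headroom:.1f} GHz, AMD OC headroom: {amd_headroom:.1f} GHz")
print(f"Difference in max OC clock: {intel_max_oc - amd_max_oc:.1f} GHz")       # 1.1 GHz
print(f"Difference in OC headroom:  {intel_headroom - amd_headroom:.1f} GHz")   # 0.8 GHz
```

Note that "how much higher the max OC clock is" (1.1 GHz) and "how much more headroom there is over stock" (0.8 GHz) are two different numbers, which is exactly the distinction the rest of this thread ends up arguing about.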

So yeah, butthurt AMD fanboys just need to deal with it. None of this needs to be proven by me; it is common knowledge outside of the red reality distortion field. It isn't a claim, it is a verifiable fact, and I DO hope it changes with Ryzen 3.

I really do hope AMD stomps all over Intel, it will just mean better things for my next build. The brand in my next build will be determined by my needs and what each side offers at the time.

If something happens and I am poor, maybe I'll look to the price/performance compromise option but I hope not.

My other claim is that modern VR flight sims in multiplayer require an overclocked Intel CPU for acceptable performance, and that in situations like these, the 5-year-old Intel beats the highest-end Ryzen with a max OC.

I don't want to waste any more time on this so I got a quick one from: https://www.notebookcheck.net/X-Plane-11-Benchmarks.287550.0.html#toc-benchmark

"What is interesting is how the Ryzen 7 1800X gets beaten by the Kaby Lake i7-7740X and the much older (2014) Haswell i7-4790K. This is because the rendering process is all loaded on a single core (OpenGL shows threading and object handling problems here), and these two Intel CPUs both have high clock speeds on architectures with good IPC levels. There isn't much separating the clock speeds of the two Intel chips, so we are attributing most of the difference in frame rates to the generational IPC improvements made over time."

If you need further convincing I'd recommend Google. There are good FB groups like "VR Aviators", "X-Plane 11", and "DCS Multiplayer Addicts" you can go ask around in.

Are you satisfied?
 
If you make claims of the sort you've made, yes, they absolutely need to be substantiated. ANY claim needs support, especially if you're passing it off as fact and not opinion. It's called "burden of proof". Look it up. Congrats for finally doing what you should have done from the start. It took long enough.

Sprinkling juvenile and unnecessary ad hominem attacks doesn't help your cause; on the contrary, it rather hurts it. Don't do that. You seem to feel it necessary though and have continued to do so in your reply to me. Why? Does it feel to you like it punctuates your point or opinions? Do you feel better about yourself in doing so? If you're going to refute a claim, leave the personal nonsense out and focus solely on the points at hand. That's just using sound logic and practicing good discussion skills.

You've also made baseless generalizations relating to those who choose AMD by alluding to them being "poor". Don't do that. You don't know what someone's personal finances are nor do you know if they are choosing AMD for any other reason than because they want to do so.

Finally, and perhaps most importantly, this whole thing could have been approached better than the angle you took, but that's my opinion. Your hubris serves nothing more than to turn people off. We're here for the same reason and that's to discuss computing as enthusiasts and nothing more. Remember that. I'm just suggesting you exercise a bit more logic, civility, and tact is all. Take what I have said here as some light critique or however you see fit. I'm good either way.

Oh, and it was never about me being "satisfied" in as much as it was getting you to do your part. That's it.
 


I apologize; reading back through this, my "deceitful little turds" comment was out of line around here. A little context: I am a member of many VR Flight Sim groups and deal with deceitful little AMD turds on a daily basis. They use deceptive benchmarks showing a clear GPU bottleneck in some unrelated game (Elite Dangerous) to encourage people to buy Ryzen for X-Plane 11 VR...
Ryzen can't do X-Plane 11 VR (unless you enjoy 22.5 FPS-induced motion sickness; hopefully this changes via Ryzen 3 and/or optimizations)...

"Look it is just as good as Intel" *Benchmark shows ED running on a 1050ti in VR with max quality settings and both machines getting 12FPS, the bars are very long though, as if they are getting hundreds of FPS*

They talk about how "AMD's hyperthreading is some new technology, and it is better than GHz in X-Plane VR", not even knowing that "Hyper-Threading" is an Intel term, that it has been around for ages, and that it hurts performance in most flight sims.

The same on DCS VR groups, a bunch of fanboys spreading misinformation about how great Ryzen is because of all the threads... How you don't need a great CPU, as long as it is good enough to power the GPU, everything is fine.
DCS uses one core and is CPU bottlenecked, just like X-Plane 11...
*Technically two, but they just split the audio off, like that helps*
Ryzen fails hard in DCS VR...

But that is Facebook; low-information users and confusion are expected there.

Then I come here and state a fact: historically Intel has OCed more than AMD, and I hope this is no longer the case. The butthurt deceptive little AMD turds jump out and start their same old BS... Claiming that a 1 GHz all-core overclock is really only a 300 MHz OC because normally, if the load is just right, and your motherboard is good enough, and your cooling is good enough, 1 or 2 of your cores can reach 4.7 GHz for a short time...
Or things like: my AMD chip that is downclocked 800 MHz from the factory overclocks 1 GHz (so 200 MHz total, lol), that is just as good as Intel's top end adding 1.3 GHz! Your argument is so stupid it defeats itself, hahahahah!

I may have started with the "deceptive turds" comment but there has been plenty of passive aggressiveness coming from all sides and, frankly, they earned it by being deceptive little turds. If they didn't agree with the fact, THEY should have googled it and educated themselves.

It is exhausting, just look here. I've spent two days arguing a simple fact with delusional people that want to do mental gymnastics and argue false equivalencies to protect the mental image they've built for their favorite hardware. I get it, this article attracts AMD fanboys.

These same people lie about what they said, they lie about what you said... They even use little quotation marks, pretending we can't just scroll up and read what they really said...

None of the "attacks" in my reply to you were directed at you. They aren't even attacks, they were merely there to preemptively fend off the same old deceptive responses that I had already received to these valid points, and highlight their ridiculousness and deception. Nipping them in the bud this time, if you will.

I didn't know I needed to source such basic things so thoroughly around here. JoeBlowsMyNose said something I didn't agree with: an ancient obsolete flight "sim" uses more than a couple cores. I googled it and found out he is correct, it does, although poorly.

Is it really that hard for Toms readers to verify on their own? Asking me to prove that Intel has been OCing farther than AMD, AROUND HERE, is pathetic. I'd say if you go against common knowledge, the burden of proof is on you. Galileo would vehemently agree, although in his case he WAS actually correct and "common knowledge" was wrong. Not true here.

I told you what "I" would do if something happens and "I" become poor. Claiming that poor people can't budget for a high end rig, or that wealthy people don't get mid-range rigs is on you. I never claimed anything like that because it would be stupid. I speak for myself.

This entire thing could have been handled much better, I agree. A simple Google search turns up this; notice how they agree with me, history, and reality? Specifically the part titled "Overclocking Potential": https://www.tomshardware.com/news/ryzen_2-vs-intel-9th_gen-core,38000.html

What do you think? Is that a reputable site? 😀

If I need to "hand hold" everyone with every fact I state, I am done wasting my time around there. I'll leave this here for anyone that wants to argue further;

http://bfy.tw/1tXF
 


That's gonna be a big yikes from me dawg :)

 
I've had so many claims and words forced into my mouth on this thread, just so some jackass can argue against them, that I won't need to eat for a week! :)


Here are a few of Specter's "claims" ...

"Only OC headroom is important!" - for 1% for people yes ...

"Only Intel can give you 1ghz of OC headroom!"
- except some AMD parts like the R7 1700

"8 core ryzen is a low end part!" - I guess that's your opinion ...

"My $200 video card is faster than your Ryzen!" - I assume you're next going to claim your dad can beat up my dad?

"I've been overclocking for 15 years!" - so what, my first OCd chip was a Thunderbird 1000 - 19 years ago - is that supposed to make people believe everything I say? lol.

"All flight sims use only one core!" - Not true at all and I don't think any flight sim uses "only" one core.

"Joe claims that ghz and IPC means nothing!" X 2 - I NEVER did anywhere at all ... I only argued that "OC headroom" (no mention of clocks) isn't what 99% of people care about - if I have a 1ghz chip and I can clock it to 2.5, who's going to care? But, but, it has 1.5 ghz of headroom! Ghz of headroom is not IPC nor is it clock speed but perhaps you weren't aware of that when you made the argument that only OC headroom is important .. ? I don't know ..

"Joe's way of overclocking ..." -- WTF? I never told anyone ever how overclock was supposed to be done - and I only used YOUR method (which is debatable at best), to compare the R7 1700 OC headroom, so as to correct, objectively.

"MY way is the RIGHT way of overclocking because I did it for 15 years!"
-- lol, so I guess that's sound logic ...

"Joe doesn't like that Intel has OC headroom!" -- Really? Did I imply that somewhere or is that you projecting your bs onto me?

"See! here's an article that says Intel has better OC potential!"
-- okaaay, did anyone, once, on this entire thread make that argument? Or have you now succumbed to finding facts that you agree with that no one argued against so you finally aren't wrong about something ... you really did that didn't you? lol ...


Hilarious ... the poor guy ... I think he's spelled out the value of his claims well enough ...

Kids, when you are trying to make a point or debate one, say what you mean and mean what you say, own up to your mistakes and correct yourself where you are wrong or misunderstood, and be able to substantiate every single claim you make. Most importantly, don't respond with logical fallacies like ad hominems, deflection, or putting words in other people's mouths and then arguing against what you projected, and of course at all costs refrain from gaslighting ... else you risk just looking like an ass in public ...

(Edited for grammar and spelling and format and a few adds)
 
I'm reluctant to go reopening this whole debate @Specter0420, but your last post seems to indicate a willingness to engage in a sensible discussion, so I'd like to attempt that one more time. Given that I responded to your post citing Intel Turbo clocks, I assume I'm one of the posters you're accusing of being a "butthurt deceptive little AMD turd"? I'd like a chance to explain where I'm coming from. You might disagree - that's fine - but I do think I have a reasonable position.


No question that Intel CPUs over the last few years have OC'd better than AMD. Anyone disputing that deserves a "fanboy" label!
But what you actually stated was: "Intel has been releasing their chips with 1Ghz+ room for overclocking for over a decade now."
You then expressed concern about the Ryzen 3 benchmark because the potentially maxed Ryzen 3 CPU could only match the stock Intel. So if I understand your logic, you were suggesting:
IF Ryzen 3 is running close to max (a reasonable assumption)
THEN Intel will continue to lead in outright performance because their CPUs have >1 GHz of OCing headroom.

You accuse me and others of being "deceptive turds" because we bring boost clocks into the discussion, but the logic of your argument above (if I've understood it correctly) is flawed precisely because you are ignoring boost clocks. Here's some independent testing to support what I'm saying:

AMD were comparing Ryzen 3 to an i9-9900K in Cinebench R15 multithreaded - they told us that. We know the 9900K has a base clock of 3.6 GHz and can overclock to 5 GHz. That sounds incredible and by your logic is a staggering 1.4 GHz (~40%) overclock! I think we can all agree there's no way AMD is going to leave 40% OC headroom on the table for Ryzen 3, so therefore, again following your logic, once both the 9900K and Ryzen 3 are OC'd, Intel's ~40% advantage will absolutely wipe the floor with AMD.

That logic is all well and good until we look at how the 9900K actually performs and overclocks in the real world.
- The 9900K review here on Toms had a Cinebench R15 score of 2038 (AMD's keynote listed 2040 - effectively the same).
- The 9900K OC'd to 5 GHz (here on TH) scored 2214 - so just under 9% faster.

So what's the reason for this paltry performance improvement despite what seems like a massive overclock? It's because the 9900K, like every modern Intel CPU, basically never drops anywhere near its base clock under load.
To quote Anandtech's Ian Cutress from the 8086K review I linked earlier: "the idea is that the processor never has to use that base frequency" (https://www.anandtech.com/show/12945/the-intel-core-i7-8086k-review)

The 9900K has an all-core turbo of 4.7 GHz. That is a far, far more accurate measure of the actual clock speeds under load than the base clock of 3.6 GHz. All-core turbo is hardly a perfect measure either. I agree with some of the issues you've raised around AVX workloads, cooling, and power delivery. But it's still vastly more accurate than base clock, which is becoming increasingly meaningless on modern CPUs.

We can argue back and forth as long as we like, but here's the reality of the situation: according to Tom's Hardware, OCing the 9900K to max stable 24/7 clocks results in less than a 9% gain in Cinebench.
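Putting those figures side by side (a quick sketch using only the numbers already quoted here: 3.6 GHz base, 4.7 GHz all-core turbo, 5 GHz OC, and the 2038 vs 2214 Cinebench R15 scores):

```python
# Sketch of why OC headroom measured from base clock overstates real-world gains.
# All figures are the i9-9900K numbers quoted above.
base_clock = 3.6        # GHz, rated base clock
all_core_turbo = 4.7    # GHz, typical all-core turbo under load
oc_clock = 5.0          # GHz, max stable 24/7 overclock

cb_stock, cb_oc = 2038, 2214   # Cinebench R15 multithreaded scores (Tom's Hardware review)

gain_vs_base  = (oc_clock - base_clock) / base_clock * 100          # ~38.9% implied headroom
gain_vs_turbo = (oc_clock - all_core_turbo) / all_core_turbo * 100  # ~6.4% implied headroom
actual_gain   = (cb_oc - cb_stock) / cb_stock * 100                 # ~8.6% measured gain

print(f"Implied gain from base clock:     {gain_vs_base:.1f}%")
print(f"Implied gain from all-core turbo: {gain_vs_turbo:.1f}%")
print(f"Actual Cinebench gain:            {actual_gain:.1f}%")
```

The all-core-turbo estimate (~6%) lands far closer to the measured ~9% gain than the base-clock estimate (~39%) does, which is the whole point being made here.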

At the end of the day, measure overclocks however you like. My personal view is that we should consider all-core boost clocks, because that results in much more accurate estimates of the sort of performance improvements we should expect. You don't like that metric? Fine, use another one. But please don't go calling me and others "fanboy" and "deceptive turds". As I've demonstrated clearly from the very example you raised in your first post, making performance estimates based on OCing from base clock results in very inaccurate predictions.

I understand why you get frustrated at AMD advocates coming onto your VR Flight Sim forums and spouting things that you know from your experience and extensive research are misleading and deceptive. FWIW, there's no question in my mind that a poorly threaded, CPU intensive title would generally run better on an Intel system, particularly in VR where extra performance can mean the difference between an immersive experience and throwing up.

Hopefully you can understand too why those of us here who see something like "Intel CPUs OC >1 GHz" in a thread about the 9900K and Cinebench scores will refute the claim. OK, it may be technically true by some metrics, but it's woefully unrepresentative of real-world performance gains and would certainly be misleading to someone who wasn't particularly informed about modern CPUs and boost/turbo technologies.

You disagree? Fine - let's have a reasoned discussion about it and leave the schoolyard insults out of it.
 


Allow me to quote myself... A real quote... Not a JoeBlowsMyNose-change-the-words-around-and-hope-nobody-reads-what-was-really-said quote... This is the EXACT comment that has triggered your AMD fanboy butthurt lasting into its third day now... The comment YOU replied to...

"I am only interested in tuned performance (overclocking). Intel has been releasing their chips with 1Ghz+ room for overclocking for over a decade now.
AMD has been releasing chips that are already trying as hard as they can, with room for 200-300Mhz overclocking at best.
If this trend continued then Ryzen 3 is trying as hard as it can and scores numbers just under Intel at stock (from Anandtech), that isn't very promising. Hopefully I am wrong."

I didn't think this would trigger fanboys, I hope AMD does well! I really didn't think it needed hand-holding around here but oh boy was I wrong...

"only interested in tuned performance" means maximum performance, after tuning (overclocking)... If you can't see that, I'll give you a hint: I did NOT say "tuning performance", I said "tuned performance".

Reading comprehension FTW!

I don't care what your 65W, $200, "lowest end of three [old] new Ryzen 7 CPUs" can gain over factory. You are aware that comparing the OC potential of that to Intel's highest end is stupid at best, deceitful at worst, right?
Why not try a real comparison, high end from both, like the 2700X vs the 8086K? Just because it is completely fair, destroys your argument, and upholds mine isn't cause to ignore it.

I admitted I was wrong about FSX's thread usage and you didn't even have to prove it, I looked it up myself. I've proven you wrong over and over, you haven't admitted anything. Funny how you just said "Kid's, when you are trying to make a point or debate one, say what you mean and mean what you say, own up to your mistakes and correct yourself where you are wrong..."

Hahaha. Kids...

I don't care how YOU want to count an OC. My point stands. This site, the professional one we are on right now... It agrees with me. Can YOU be a man and admit you were wrong about anything?
 
To steer this thread hopefully back to topic ... and away from ridiculousness ...



I think it would be 4600G and 4400G -- considering the way they are labeling their APUs.

I just read an article on Anand that claims that AMD gave a fairly clear indication no GPU was going into these ... if that's the case then the only thing left is more cores ...
 


From the article on Anand, it could just be that potential new-gen APUs will be in a different TDP tier.
 


I'll have to read that article again, but if you are right, that would be a very weird thing to assert ... since every APU on Zen has been in a different TDP tier ... I'll check that out again when I get home.
 


Here's what Ian from Anand claimed AMD had said:

AMD stated that, at this time, there will be no version of the current Matisse chiplet layout where one of those chiplets will be graphics. We were told that there will be Zen 2 processors with integrated graphics, presumably coming out much later after the desktop processors, but built in a different design.


Lisa Su, without coming right out and saying it (in a conversation shortly after the presentation), has also seemed to indicate that more cores were going there for Matisse ... it seems almost certain now.

Another site claimed that the sample AMD was running was clocked fairly high (sorry, I can't recall where I read that) - faster than what Zen+ is remotely capable of and likely not far off the 9900K. So it seems that the seemingly over-optimistic rumours (around cores and clocks at least, plus Lisa confirmed that TDP would remain largely unchanged from the current gen) still hold at least some water ...

 


I see; perhaps then newer-generation APUs will release later than their Ryzen 3 counterparts, similar to how the current 2200G and 2400G released after the 1000-series Ryzen processors but were based on the same process.

In fact, the Zen APUs released closer to the Zen+ CPUs than to their Zen counterparts, so perhaps the new APUs will release closer to the next gen (Zen 2+? Ryzen 4?).
 


With a Navi GPU?

Thanks

 

Agreed. Sorry for my part in dragging this on too long!

Back to the Ryzen demo: has anyone else watched the latest @AdoredTV video? He's standing by his leaks despite no sign of Navi at the keynote. He also made an interesting argument that the Ryzen demoed was, by his calculation, likely a 65W Ryzen 5 part. By that logic he claimed that a 65W midrange Ryzen 5 was therefore matching the top-of-the-line Intel desktop CPU (not including HEDT, of course).

It seems questionable to me that AMD would deliberately show a midrange, 65W part for their first ever CES keynote. But having said that, the massive power difference between the parts does lend some credence to the theory.

Of course, we're still only looking at Cinebench, which has always been a best-case scenario for Ryzen.

I'd be curious what others think?
 


I would assume, if so, that it's planned as a backup in case Intel has something good planned. I see no reason why they wouldn't show it off now other than that.

That or they just didn't need the same space with a much smaller die.