AMD CES 2019 Keynote Live Coverage


joeblowsmynose

Distinguished


You are still being generous -- most reviewers couldn't get their 9900K stable past 5GHz on all cores. A few with very expensive custom loops could do 5.1, and I doubt that anyone who claims 5.2 can run an hour of wPrime without issue.

Intel no longer has any OC headroom at all on their top parts, considering the two-core boost is already 5GHz. It may also come down to the socket or motherboard itself, as the 9900K draws more power than a 16-core Threadripper when MCE is enabled under a full stress load.

Looks like someone is still a few years behind the times ... a 1GHz OC hasn't applied to Intel for the last two generations ... If all you want is an OC GHz number, then buy a damn Pentium, lol.

The things that matter to 99% of people are overall performance, performance per watt (where Intel no longer has a lead in most cases), and performance per dollar (where Intel fails miserably); the other 1% are either fanboys or just ignorant.

 

joeblowsmynose

Distinguished


Well, by that logic an R7 1700 with a 3.0GHz base can also OC by 1GHz (the very thing you are praising Intel for) ... and that is previous gen ... you just defeated your own argument, lol. That's funny right there ...
 


As I said, there's no confirmation yet as to what the top or mid-range parts will be. It could change, or it may not. Pushing 16 cores into the mainstream so soon is, in my opinion, not a good idea. If they push 16 cores into the mainstream, they will cut into their TR market, where they can charge more for a better platform, especially in the prosumer market where TR makes more sense (more prosumer apps utilize multiple cores than normal consumer apps do).

It would literally have no place in the mainstream market, as no normal consumer or gamer would be able to utilize 16 cores for many more years. Eight higher-clocked cores would be the smarter route to stay competitive and leave TR to the prosumer market. It's why Intel didn't push 8 cores into the mainstream market too fast: to keep it from eating up the prosumer market.
 

Specter0420

Distinguished
Apr 8, 2010
111
28
18,710
Overall performance is the kicker. Depending on the application, IPC and GHz are what matter. Further, almost anything where Ryzen's low-IPC, low-clock-rate thread advantage would help is done many times faster on a video card anyway.

I fly VR flight simulators, and Ryzen is not an option here. I am not ignorant, nor a fanboy. You need 4.5GHz+ on 1-2 cores MINIMUM in DCS, X-Plane 11, FSX, P3D, IL-2, and any other flight sim in VR, especially in multiplayer. If AMD were the wiser option, I would've chosen it, like I did back in the Pentium 4 days.

I do hope things change, but Intel has over double the OC potential at the moment. Deal with it.
 

joeblowsmynose

Distinguished


"After three years" -- you mean vs the just released Intel flagship that it outperformed in the demo and uses 30% less power than?

You have to be some fanboy to argue that slower means faster. Did you not watch the demo? A 30% reduction in power consumption is reasonably significant. Also, the test was run at "not final clock specs" - which means they can potentially turn that 30% thermal advantage into even higher clocks with refinement and tuning, widening the performance gap by a fair margin. That is, if they want to take the same route as Intel by allowing their 8-core to pull up to 250W through the socket even though they claim a 95W TDP. Wink, wink.
 


Man, it's fun to watch, for sure. However, on-stage demos by any company are to be taken with a grain of salt the size of a mountain, as they are always cherry-picked. Intel, AMD, Nvidia, etc. always put the best show on stage to get people excited. The truth comes out after the product is released and independent third-party sites get their hands on it to truly test it out.
 

joeblowsmynose

Distinguished


FSX (I also do flight sims) will use all the cores you throw at it. If what you say is true, it sounds like a TR 2950X would be the best CPU for that scenario, at a fraction of the cost of an Intel equivalent. The IPC difference between the Ryzen 2000 series and Intel is only ~3% - everything else is clock speed.

I don't need to "deal with it" - my Ryzen 1700 has almost a 1GHz OC and renders far faster than a 9700K OC'd as far as you can possibly take it (5.3GHz).

How about you deal with reality for a moment?
 

Specter0420

Distinguished
Apr 8, 2010
111
28
18,710
Come on JoeBlowsMyNose,
We are comparing the high end here; please try to keep it apples-to-apples. When you have to be deceitful in your argument (boost clock vs. all-core OC, high end vs. low end, etc., etc., etc.) it is because you have a losing argument. Intel's lower end stretches its legs even farther than AMD's, upholding my argument.
 

joeblowsmynose

Distinguished


Er ... it was you who first said AMD can't overclock at all, and that Intel always has at least 1GHz of headroom.
Someone pointed out that Intel's flagship can't really OC beyond boost except with custom water cooling (just like AMD's 2000 series).

You claimed that OC needs to be measured between base and OC.
I then gave you an example of an AMD 8-core part clocking 1GHz over base - using your own argument.

Now you say I'm being deceitful because you believe that an 8-core Ryzen is a "low-end" part?

Then you argue that only Intel processors count because your application needs at least "4.5" GHz, but I bet you forget that the TR 2950X can do that and has all the cores any flight sim would ever need. I've never played FSX on a TR, but I bet it would be an awesome experience.

Anyway, I'm done talking to you; your points aren't very good at all.
 

Specter0420

Distinguished
Apr 8, 2010
111
28
18,710
Yeah Joe,
My old $200 video card will render exponentially faster than your Ryzen; you'd need to be a fool to render on CPU these days. You are wrong about FSX - a simple Google search shows that was proven a decade ago. Enjoy waiting 4-8 years for the performance I enjoy now. It shouldn't be too hard for you; you do still render on CPU, after all.
 

joeblowsmynose

Distinguished


Agreed. But in the demonstration the AMD CPU beat the Intel one - somehow everyone is saying the opposite ...

What I liked about that demo was that Lisa was sweating bullets ... their new CPUs are still very unrefined, and I bet she was keeping her fingers crossed that it wouldn't crash ... now that would have been funny.
 

joeblowsmynose

Distinguished


I guess you've never used the Corona renderer. (I am also not memory-limited, due to it being a CPU renderer.)
You need to get up to speed with things. I guarantee my "low-end" Ryzen can render far better than your $200 video card.

I personally don't have $1,000+ to put into a GPU powerful enough to make my renders faster than what Corona can do.

Stop just throwing random arguments my way hoping you'll be right on one of them.
 

joeblowsmynose

Distinguished


I'd argue a couple of things ... not adding more performance per dollar for fear of eating into expensive higher-margin parts is what Intel does; AMD doesn't seem to have that issue. My bet is that if they do decide to go past 8 cores on AM4, then TR3 (non-WX) will get more than 16 cores. That would easily solve the issue you describe - with Epyc now at 64 cores, this is easily possible for TR3 (non-WX). And since AMD is working on Epyc first this time around, TR3 may well land at the same time as Ryzen 3, negating the issue.

I would easily welcome a 12- or 16-core CPU if the price was right. I don't like to use my computer for just one thing at a time. When I'm rendering or encoding, it's a nice bonus that I can play a AAA game at the same time - try that on a four- or six-core. This is why Intel isn't even an option for me with their paltry 4 and 6 cores ... great for pure gaming, though.

But ... that said, I think AMD is leaving that void for now and will wait to see how/if Intel responds later this year. That would be a good strategy.

Adding a 7nm Vega II die in there might also be interesting ...
 

InvalidError

Titan
Moderator

I'd say pushing more cores into the mainstream is perfectly fine - nothing more than a correction for ~10 years of stagnation, in my opinion; we should have been there years ago. I wouldn't worry at all about hurting TR sales, since most workloads that scale to 16 cores and beyond also benefit from quad-channel memory. Most people who genuinely need 16C/32T will still want TR instead, unless they can't afford the platform. AMD gets $500-600 for the CPU either way; the motherboard doesn't matter, since most of that money goes into the board vendor's pockets, minus ~$30 for the chipset.
 

Giroro

Splendid
"the Zen 2 microarchitecture that provides a higher level of instruction level parallelism (IPC) than the first-gen Zen design, meaning the chips can process more cycles per clock. "

IPC does not mean 'instruction level parallelism'; it usually means 'instructions per clock', or sometimes 'instructions per cycle'.
Cycle and clock are synonymous in this context, so the phrase "process more cycles per clock" doesn't make sense. The ratio of cycles to clocks is always going to be 1:1, no matter how much the processor improves.
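
To spell out the usual definition (a toy illustration with made-up counter values, not measurements from any real chip):

# Hypothetical sketch: IPC as conventionally defined.
instructions_retired = 8_000_000_000  # instructions completed in some window
clock_cycles = 5_000_000_000          # clock cycles elapsed in the same window

ipc = instructions_retired / clock_cycles
print(f"IPC = {ipc:.2f}")  # 1.60 -> an average of 1.6 instructions finished per cycle

So a higher-IPC core finishes more instructions each cycle; the cycles themselves don't multiply.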

Is this an editing mistake, or is AMD being confusing/misleading by trying to redefine a commonly used acronym?
 

Specter0420

Distinguished
Apr 8, 2010
111
28
18,710


From Corona themselves;
"We are proudly CPU-based and we do not have any intention of making GPU or hybrid version in the foreseeable future. The reason is that our CPU version is fast *enough*, and there would not be nearly *enough speed improvement* to justify the amount of changes and work required to develop a usable GPU version. Also, reduced flexibility and hardware limitations are the reasons why we do not want to go the GPU way."

They admit themselves it would be faster on a GPU, so why not use "real" software that doesn't have lazy developers? I thought you were done? Go look up V-Ray; it does what you need on the GPU, MUCH faster. I tire of your deceit - or do you just need to get with the times?

Even if my old $200 video card doesn't compete, my 1080 sure will.

I mentioned VR flight sims in multiplayer; AMD can't do it, even when overclocked to the max. In all fairness, Intel can't really do it well without overclocking either. Stop arguing irrelevant, and incorrect, points about your 13-year-old 2D simulator. It is just a pathetic attempt to win a losing argument. I don't care about your "1GHz OC"; in reality it brings you up to "not even almost good enough" when competing with Intel in virtually all CPU tasks. I don't care if it is better at GPU tasks - I have a GPU.

I get it: there are a very few select cases where lower IPC, lower clock rate, and more threads make sense (especially if you have a garbage video card). If you have one of those needs, good for you. I won't call you a fanboy or ignorant.

I have a need that AMD simply cannot fill, and I don't do the one or two super-rare things that Ryzen excels at vs. Intel CPUs or video cards in general. I'm not a fanboy or ignorant; I understand my needs and what each side provides.
 


I don't think we would have been at more cores a few years ago. No other company has gone past 4 full cores in mainstream products until recently; even if you count ARM with their big.LITTLE design, it's still 4 big full cores with 4 smaller, lighter cores.

AMD was the first to push 8 cores to the mainstream, but honestly I am not sure 22nm or larger would have handled that many cores well.

And maybe it won't, but I still don't see AMD putting out a product that has the potential to eat into TR at all. AMD is still a business, and in the end the decisions will be made to benefit the bottom line, satisfy the stockholders, and boost profits. TR provides a larger revenue stream, especially if they can match Intel in IPC, as they will easily be able to demand higher prices for their parts - and I would not be surprised if they do.

Look at the Radeon VII. It's priced the same as the RTX 2080, which means it will need to compete with it. I took the performance of a Vega 64 and an RTX 2080 in 4K-only games and averaged the percent differences: the RTX 2080 is on average 41% faster than a Vega 64 in the games used by Anandtech. Now, the average gain stated by Lisa Su is 30%, which puts it still under the RTX 2080, yet it's priced as high as one.
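
To spell out that arithmetic (a rough sketch using the percentages above, not new benchmark data):

# Rough relative-performance math from the figures quoted above.
vega64 = 1.00               # baseline
rtx2080 = vega64 * 1.41     # ~41% faster than Vega 64 (Anandtech 4K average, per the post)
radeon7 = vega64 * 1.30     # ~30% faster than Vega 64 (AMD's stated average)

print(f"Radeon VII vs RTX 2080: {radeon7 / rtx2080:.0%}")  # ~92%, i.e. still behind at the same price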

That said, I don't see why AMD would ever put out a product that could cut into any other revenue stream.
 

joeblowsmynose

Distinguished


Lol - you didn't read that right ... note that I followed Corona all through its development, was part of their community, and know the exact post you are quoting.

They are saying that moving to GPU wouldn't give enough of a performance gain over their CPU renderer - that's how fast their CPU renderer is. No need and no desire, because the improvement of going to GPU over current performance would be negligible. I do find your interpretation interestingly creative, though ...

Two more things ...
1) If I throw 64GB of RAM into my system, I can use all that memory with consistent performance. With a GPU renderer, you are limited to what the GPU has on board. I do realize that GPU renderer devs have tweaked their products to be able to use system RAM if GPU RAM runs out, at a great performance loss though. I regularly have mapping that exceeds 8GB of RAM - and I can't afford a 16GB card for rendering.

2) I can play my flight sim while rendering animations (8 cores are handy) - try that on a GPU renderer. :)

I am very familiar with V-Ray - which is now a partner of Corona's, BTW. I chose Corona over V-Ray, and I am very pleased with the performance and my choice.

I haven't tested V-Ray head-to-head with Corona (others have - there are reviews on the web), but I have tested Corona vs. ProRender (a GPU renderer), and for complex scenes, Corona cleans up noise in the shadows far faster in complex lighting scenarios - quite a lot faster, actually (on a mid-range-ish video card, mind you).

You may be happy with your choice of hardware/software - I'm happy with mine. No need to argue whose is "better".
(But you may want to consider TR and Corona for your next purchases (wink, wink).) :)
 

Calm down, mate - we're just trying to have a conversation about tech.

"My 8086K runs at 4.0GHz, ACM, stock."
No, it doesn't. You're confusing base clock and all-core turbo. Intel doesn't publish the latter anymore, but they still abide by it, and most motherboards will allow the CPU to run at the all-core turbo frequency indefinitely out of the box (for non-AVX workloads). For reference, your 8086K will run at an all-core turbo of 4.3GHz indefinitely (turn off your OC and try it!). Source: https://www.anandtech.com/show/12945/the-intel-core-i7-8086k-review
If an Intel CPU sits at its base clock under load, it's either an intense AVX workload or something is wrong with your system. That has been the case for years now.

There's still a good deal of OCing headroom on an 8086K though, I grant you that!

Where your argument falls apart is Intel's 9th-gen parts. They, like the Ryzen CPUs, are pushed basically to the limit out of the box. The 9900K has an all-core turbo of 4.7GHz, with just a few hundred MHz left for OCing. A Noctua NH-D15 can barely handle the 9900K with a sustained AVX workload out of the box. The 9700K runs at 4.6GHz on all 8 cores: https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9900k-i7-9700k-i5-9600k-review

"Apples to apples, what I said holds true. Intel is jogging at stock, AMD is already gasping for air at stock."
Before Intel had competition, that was true for sure. 9th gen shows us it's absolutely not true anymore. Neither the 2700X nor the 9900K has much OCing headroom. At least the 2700X is relatively straightforward to cool, however.

While there's solid competition in the CPU market, I wouldn't expect either company to leave performance on the table. The fact that both AMD and Intel are now pushing things close to the limit tells me we've got a good competitive landscape - long may it stay that way!
 

Specter0420

Distinguished
Apr 8, 2010
111
28
18,710


Flight sims use one thread; the really good ones use two (DCS splits the audio off to a second thread - this is common knowledge). I don't care what the developers claim; I care about real-world testing and benchmarks. Tom's readers should be able to appreciate that. Flight sims love high IPC and GHz. 4.5GHz is the MINIMUM for VR, while 5.0GHz is much better, and a quick Google search shows that Threadripper struggles to hit 4.1GHz ON WATER, and all those extra threads are worthless.

Your 1700 is a "low-end part" because even after a 1GHz overclock it runs at a "good for 2004" 4GHz, probably on water cooling too. My 8086K started there; with a $25 air cooler she hit 5.2GHz. Last I checked, a 1.2GHz OC on the high end for the last decade beats 300-400MHz on the high end for the last 2 years. That is 3-4 TIMES better for 8 TIMES longer....
Intel reliably maxes out a full GHz higher than Ryzen so far. Hopefully this changes.
 

Z1NONLY

Distinguished
I have been "out of the game" for a while, happily chugging along with my lightly-over-clocked 3770k and a few GPU upgrades over the years. (running a 1070 now)
I get that AMD would demo the best case scenario on stage, but they beat Intel's flagship, core-for-core, in the benchmark they chose to demo.

The 9900K looks like Intel brought in a team of hard-core overclockers to pull every last bit of performance they could milk out of their silicon (and still last through the warranty period) and then mass-produced it ... complete with voltages, power draw, and heat to match an aggressive OC. Yes, it's currently the fastest chip on the market, but Intel is showing the proverbial "drop of sweat" whilst trying to look like everything's cool.

I don't know what the landscape will look like in 6 months, but if AMD can translate that demo into real-world, mass-production performance, my next build will be AMD.

AMD's demo was compelling enough for me to hold off until the summer for my next build.
 

XMEN_2012

Distinguished
Oct 13, 2012
107
0
18,690
I don't understand why the Intel boys are crying so much at an early showing like this. It will be available by mid-2019, so by then it will be at this level or above - who knows - but the domination is going to end soon.
 

PapaCrazy

Distinguished
Dec 28, 2011
311
95
18,890


I am on FSX, and will add XP11 on my upcoming build ... which will be a Ryzen 3700X. I refuse to support Intel anymore, and have been hanging on with a 2600K for many years. They gave me very little reason to upgrade, and though the 9900K is a monster, it is also heavily flawed in power/heat, and overpriced. Judging by the Cinebench demo, which is a very linear indicator of IPC/clock, this looks to stand neck and neck with the 9900K, and I believe it will become a viable alternative for those who do flight simming and also need 8+ cores for work apps.

Regarding FSX core usage, I have seen FSX use up to 4 cores, but in an extremely inefficient manner to the point where you are better off using an affinity mask. I get much better performance by keeping FSX off CPU 0 or any hyperthreaded cores. For a while I ran HT off for smoothness, but got tired of taking the performance hit when rendering.
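
For anyone curious, that affinity mask is just a bit field over logical CPUs. Here's a sketch of the arithmetic, assuming the common Windows layout where hyperthreaded siblings are enumerated in adjacent pairs (0/1 on core 0, 2/3 on core 1, and so on - the exact numbering varies by system):

# Hypothetical sketch: build a mask that skips core 0 (logical CPUs 0/1)
# and every odd-numbered (HT sibling) logical CPU on a 4C/8T chip.
logical_cpus = 8
allowed = range(2, logical_cpus, 2)  # logical CPUs 2, 4, 6

mask = 0
for cpu in allowed:
    mask |= 1 << cpu  # set the bit for each allowed logical CPU

print(mask, bin(mask))  # 84 0b1010100

That 84 is the value you'd drop into the [JOBSCHEDULER] AffinityMask setting in fsx.cfg - the classic tweak for exactly the setup described above.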

It would be so nice if Tom's Hardware added FSX, P3D, or XP11 benchmarks to CPU tests. But we're a small crowd. One can only dream.