AMD CPU speculation... and expert conjecture



Woah, someone else who understands what an API is. My mind is blown. I am going outside for a walk, I simply can't deal with seeing someone else understand the differences between API and implementation of an API. My head is spinning.

BUT I THOUGHT MANTLE ONLY BEING IMPLEMENTED ON GCN CARDS MEANT THAT IT WAS GCN ONLY FOREVER!!!!!
 


AMD really had no choice but to spin off GF. They would have gone bankrupt otherwise.
 


Another bet that you lose, because I have THE 2014 desktop roadmap, and there is one big CPU mentioned in it...
 
And the world has been accused of being flat by Nobel Prize-winning physicists. That doesn't make it true. Doesn't make it false either.

*shrug*

There is a big problem here: chips can no longer scale in the ways they previously have, and we have to do something else to make a difference.

The path you are suggesting, sticking to single-core rendering and not looking for alternatives, would create a world where there's no longer a reason to upgrade your CPU, because Intel can't make anything faster and AMD is still catching up in single thread.

But as I'm trying to point out, this is the exact same argument made back in the late '70s in favor of distributed computing (really, multi-CPU mainframes). OSes were developed that allowed cooperatively scheduled threading to give the developer fine control over application performance, and significant amounts of time and money were spent developing hand-coded assembly to take advantage of those features.

FYI, it didn't work, because it was found that after a handful of physical CPUs, performance gains fell off a cliff. And this is more of an issue now since, in a preemptively scheduled OS, you have even less control over threading, so you'd expect less scaling than on a cooperatively scheduled OS.
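
That cliff is basically Amdahl's law in action. A quick back-of-the-envelope sketch in C++, assuming a 10% serial fraction (my illustrative number, not anything measured in this thread):

```cpp
#include <cstdio>

// Amdahl's law: speedup(n) = 1 / (serial + (1 - serial) / n).
// The 10% serial fraction below is an assumption for illustration.
int main() {
    const double serial = 0.10; // fraction of work that cannot be parallelized
    for (int cores : {1, 2, 4, 8, 16, 64}) {
        double speedup = 1.0 / (serial + (1.0 - serial) / cores);
        std::printf("%3d cores -> %.2fx speedup\n", cores, speedup);
    }
}
```

Even with 64 cores the speedup stalls below 8x; the serial fraction dominates long before you run out of cores.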

As I've said, there are things that can scale reasonably well. Rendering, obviously, which has been done for about 15 or so years now. Physics, which we see being offloaded from the CPU. You could do AI too. But then you have to manage those threads, keeping the right ones running and keeping the state sane (remember: you don't control what threads run when), so you need some type of locking scheme (locks, mutexes, semaphores, etc.) OR perform optimistic processing with a VERY high worst-case cost.
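
To make the locking point concrete, here's a minimal C++ sketch of the simplest such scheme, a mutex guarding shared state; the `GameState` type and workload are purely illustrative:

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Minimal illustration of why shared state needs a locking scheme:
// without the mutex, concurrent updates to `score` would be a data race.
struct GameState {
    std::mutex m;
    long score = 0;
};

void worker(GameState& gs, int updates) {
    for (int i = 0; i < updates; ++i) {
        std::lock_guard<std::mutex> lock(gs.m); // serialize the critical section
        ++gs.score;
    }
}

int main() {
    GameState gs;
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)
        threads.emplace_back([&gs] { worker(gs, 100000); });
    for (auto& t : threads) t.join();
    // With the lock, the result is deterministic: 400000.
    // The lock is also pure serialization -- exactly the kind of
    // overhead that eats into multi-core scaling.
}
```

Note that the lock itself is serialized execution, which is precisely where the non-perfect scaling comes from.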

The issue I have with AMD isn't that they are going to more cores, it's that they went to more cores while reducing per-core performance. Doubling the cores is pointless if its competitor's cores end up being 50% more powerful [remember: non-perfect scaling + overhead].

The situation with the BD arch has not changed: it lags behind current mid-tier i5 chips in the BEST case. But I still see plenty of people (wrongly) claiming that a 6350 beats an i5-3570K in gaming (it doesn't), or making the old BD argument of "it will get better over time" (it hasn't).

And I was one of two or three people here who called this, back in 2009, based SOLELY on my knowledge of how the OS/software works, without even bothering to look at the details of the BD arch. Remember, I said BD would be a benchmark monster, but in games it would be low-tier i5 level performance? And I nearly got laughed out of the forums.

My primary point is that you aren't going to see continual scaling as you add more cores, so that approach, much like increasing IPC and clocks, is going to come to an end. There's only so much you can do with the way CPUs are currently designed. Maybe quantum computing will change things, but I really think x86 is nearing its limit as a processor arch.
 


Well, I have it as well; I was given a link:

http://phx.corporate-ir.net/phoenix.zhtml?c=74093&p=irol-newsArticle&ID=1876132&highlight=

Scroll to the bottom and click on product roadmaps, open the .pdf, and you will see the one that has "PILEDRIVER 2nd GENERATION FX SERIES" listed under the "PERFORMANCE PROCESSOR" category.

Ironically, Kaveri is not listed as a performance processor...I wonder why...maybe because, as my source intimated, AMD understands Kaveri is not for HEDT, as you seem to want to assert it is.
 


Did you watch the Mantle keynote? Mantle and HSA are not about using "MOAR COARS"; they're about creating a set of tools that make it easier to use the right component for the right task.

Rendering is distributed across many cores because it can be, and GPGPU tasks like calculating global illumination will be passed off to another GPU while a main one renders the scene.

Had you paid attention to APU13, you would have seen Oxide's presentation where they showed a ton of on-screen objects and then ran a benchmark in which the FX 8350 at stock, the FX 8350 at 2 GHz, and the i7 4770K all performed the same, with the 290X as the performance bottleneck.

I don't know what else you want. At APU13 AMD announced Mantle will make it possible to shift:

1. Rendering across multiple GPUs and CPU cores
2. Physics to GPU
3. AI to GPU
4. Global Illumination across GPUs

If you'd like to tell me which part of the game is going to need strong single-thread performance when you use Mantle, please enlighten me, because I have no idea what it is right now.
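
For reference, the headline feature behind point 1 above is multithreaded command-buffer recording: each CPU core builds its own slice of the frame's draw calls. Mantle's actual API isn't public here, so this is only a generic C++ sketch of the pattern, with every type and function name invented for illustration, not real Mantle calls:

```cpp
#include <cstdint>
#include <thread>
#include <vector>

// Hypothetical stand-ins for a Mantle/D3D-style API -- purely illustrative.
struct Command { std::uint32_t drawCallId; };
using CommandBuffer = std::vector<Command>;

// Each thread records its slice of the frame's draw calls into a
// private command buffer, so no locking is needed during recording.
CommandBuffer recordRange(std::uint32_t first, std::uint32_t last) {
    CommandBuffer cb;
    for (std::uint32_t id = first; id < last; ++id)
        cb.push_back(Command{id});
    return cb;
}

int main() {
    const std::uint32_t drawCalls = 100000;
    const unsigned nThreads = 4;
    std::vector<CommandBuffer> buffers(nThreads);
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nThreads; ++t) {
        std::uint32_t first = t * drawCalls / nThreads;
        std::uint32_t last  = (t + 1) * drawCalls / nThreads;
        pool.emplace_back([&buffers, t, first, last] {
            buffers[t] = recordRange(first, last);
        });
    }
    for (auto& th : pool) th.join();
    // A single submission thread would then hand the buffers to the GPU
    // in order; the expensive recording work was done in parallel.
}
```

That's why a "mid-range CPU" can keep a big GPU fed: the per-frame CPU cost is spread across all cores instead of bottlenecking on one.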
 
So far Steamroller is a disappointment: a pointless three-day conference where they unveiled one APU, and they didn't even demo a game running Mantle.

Maybe Steamroller is just one product and AMD are making cutbacks.
 


It says Mainstream / performance all-in-one.

Frankly...there is nothing "performance" about AIOs. My wife works on one at home...it's a 1.5 or 1.8 GHz dual-core with a 500 GB HDD and 4 GB of RAM. AIOs are bargain-bin "Walmart" computers at $500 retail, and you're honestly overpaying at that price point...

Kaveri is not a performance CPU/APU/GPU. AMD knows this, and so does the rest of the world...except juan it seems.

 


The only reason you keep talking about the BSN article is this:

This article was originally published by Juan Ramón González Álvarez at http://juanrga.com/en/AMD-kaveri-benchmark.html

I have already stated my theory of why your claims are flawed. You refuse to respond to that, saying only "you never listen to me". This is not a BSN article; it's your article published on their website.

How can Kaveri catch the i5 when it's 80% slower in the workloads I have shown, without resorting to HSA?

How can Kaveri catch the 4320 in the same workloads, starting 40% behind, without resorting to HSA?

How is an APU that's slower than the previous-gen CPU "not low end"?

As I stated, an APU is a mix of both CPU and GPU, and a master of none. An APU will not be HEDT. APUs are great for laptops, AIOs, and small-form-factor PCs/HTPCs.
 
Sorry about jumping in here, but I just saw this flowchart and thought it would really help people's understanding, as well as make for a fair debate about our new console friends. ^_^ Enjoy

[image: 6187_kick_post.jpg]
 
At long last we finally have the answer we have been waiting for: there will not be a BD FX successor in 2014. We will have to keep pushing our 8350s to higher clocks. I personally will be shooting for 5.0 GHz base, 5.2 GHz turbo, with Cool'n'Quiet and other power-saving features enabled.


[image: 10861225406_8dbbd0dc46_o.png]
 




The company that is most pissed off about Mantle is MS.

MS now has to buy APUs from AMD while AMD has created a direct competitor to DirectX. The kicker is that AMD's APU is capable of Mantle, which would run better than DirectX does on the DirectXBone.

My guess is that MS is going to want to separate themselves from AMD. Either that or whoever made this infographic is an MS fanboy who wants to make the APU in the MS system look superior. The XBone has no hardware advantage other than Kinect.

Things are going to be interesting if there's another Xbox.

Nvidia completely pissed Microsoft off with the original Xbox, and now AMD is pissing MS off by releasing a product that competes with one of MS's flagship technologies.

XBox Two powered by Intel.

And yes, kansasboy, I am disappointed too, but the FX is still at least a good chip. I would have really loved to drop my render times by 30% just by dropping in a new CPU for $200, but this FX has already saved me a ton of time over my i7 920 and made me a lot more than I've spent on the chip.

EDIT: Wow, mixing up AMD and MS so much =[
 


I believe juan understands that Kaveri is a mid-range product, neither high end nor low end.
 


Every year people predict the end of Moore's law, and yet it goes on. There are so many facets of the compute equation that can be tweaked: new metamaterials, new algorithms, cheaper computers. Every year more people are exposed to computers, and whether it's x86/ARM/PowerPC/MIPS, the ISA is just one means to an end.
 


Yes, I know. I have the roadmap, remember? And it shows AMD will continue selling old FXs.

Do you see it now? All your speculations about an SR FX, a Phenom return, AM4, an FX replacement... vanished into thin air.

The word "performance" also appears in the Kaveri row. And don't be confused but Kaveri is much more powerful than a FX-8350

FX-8350: 256 GFLOP
Kaveri: 118+738 = 856 GFLOP

It will be funny to do the first HSA benchmarks and show how Kaveri destroy a FX-8350 in a number of workloads.
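
For what it's worth, those figures line up with the usual theoretical-peak formula (units x clock x FLOPs per cycle). A quick C++ reconstruction; the clocks, core counts, and shader counts below are the commonly quoted FX-8350 and Kaveri specs, my assumption rather than anything taken from the roadmap:

```cpp
#include <cstdio>

// Back-of-the-envelope peak-GFLOPS check for the figures quoted above.
// GFLOPS = units * GHz * FLOPs per cycle per unit (theoretical peak only).
double peak(double units, double ghz, double flopsPerCycle) {
    return units * ghz * flopsPerCycle;
}

int main() {
    double fx8350    = peak(8,   4.0,  8); // 8 PD cores @ 4.0 GHz  -> 256
    double kaveriCpu = peak(4,   3.7,  8); // 4 SR cores @ 3.7 GHz  -> ~118
    double kaveriGpu = peak(512, 0.72, 2); // 512 shaders, FMA = 2  -> ~737
    std::printf("FX-8350: %.0f GFLOPS\n", fx8350);
    std::printf("Kaveri:  %.0f + %.0f = %.0f GFLOPS\n",
                kaveriCpu, kaveriGpu, kaveriCpu + kaveriGpu);
}
```

These are theoretical peaks, of course, not measured throughput (and the GPU term rounds to 737 here versus the 738 quoted above).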
 


Right!!! They mentioned how an 8350 @ 2 GHz was able to compete with both the i7-4770K and the 8350 at stock frequency.

We also know that

8350 @ 2 GHz ~ 4 SR cores @ 3.7 GHz

which means the CPU in Kaveri will perform like an FX-8350. As I said before, slide #13 from AMD's October talk, showing an APU in close competition with the FX-8350, was the prototype of the benchmark AMD was circulating to OEMs.

This is the relevant slide from the Oxide presentation at APU13:

[image: IMG0043517_1.jpg]

http://www.hardware.fr/medias/photos_news/00/43/IMG0043517_1.jpg?

The relevant bullet: "Blazing fast CPU not necessary, a mid-range CPU will be sufficiently fast to drive the best GPU"

This confirms my previous claim that a Kaveri APU will be able to drive an R9 290X.

I also find it fascinating how Oxide showed a concrete example where Mantle improved performance by 3x (a 200% gain), and gamerk still pretends that the DX overhead is at best 10%. If the overhead really were only 10% of frame time, removing it entirely would buy at most ~1.1x. :ROFL:
 
The FX-8350 to Kaveri comparisons are getting tired. They're different products, made on different processes, with significant TDP differences, for different market segments.
 


Kaveri has up to 12 compute units (4 CPU cores plus 8 GPU CUs). If you disable 8 of them, then chances are the remaining 4 lose to the eight cores of the 8350, except in workloads that are only single-, dual-, triple-, or quad-threaded.

Previously I showed a leaked integer benchmark with four Kaveri cores outperforming four Piledriver FX cores. I attach it again:

[image: 177_201310291249222ocFT.png]
 