AMD CPU speculation... and expert conjecture


wh3resmycar

Distinguished
imho if:

fx6300/i3 + 280x with mantle >= i5 + 290/780 with dx, we should consider it a win.

i'm just disappointed that being "closer to the metal" didn't extract huge numbers on enthusiast-class systems.

the argument before was that we all have the processing power in our hardware, but it's all being held back by software.
 

Why would Mantle do that? It's about providing a more efficient and streamlined interface between the software and the GPU. That translates into letting your CPU feed more to your GPU without bottlenecking itself in the process. If your GPU was the primary bottleneck (aka a "balanced" system), then a more efficient submission mechanism won't really help. If instead the primary render thread was choking trying to feed your GPU, then you can expect large increases.

This actually helps Kaveri quite a bit. All the reviews basically said the same thing: big GPU performance increase, but CPU performance stayed flat even with the improvements, due to lower clocks. Introduce a new technology that lets that CPU feed its own GPU faster and you've got a decent product. Of course, being a new technology, it's more a demonstration of what's possible than a mainstream product. We'll see how things change over the next year or two. Personally I'm glad that AMD didn't find some way to limit it to AMD CPUs only; that makes for wider adoption.
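
To put that in concrete terms, here's a toy C++ model of the trade-off (not Mantle's or DX's actual behavior; the per-call costs and GPU times are invented purely for illustration):

```cpp
// Toy model: a frame takes as long as the slower of the two sides,
// the CPU submitting draw calls or the GPU rendering them.
// All numbers below are made up for illustration.
#include <algorithm>
#include <cstdio>

double frame_ms(int draw_calls, double cpu_us_per_call, double gpu_ms) {
    double cpu_ms = draw_calls * cpu_us_per_call / 1000.0;
    return std::max(cpu_ms, gpu_ms);  // whoever finishes last sets frame time
}

int main() {
    const int calls = 20000;
    const double gpu_ms = 25.0;  // assumed GPU render time per frame

    // "Balanced" rig: the GPU is already the bottleneck,
    // so a thinner API barely moves frame time.
    std::printf("GPU-bound: %.1f ms -> %.1f ms\n",
                frame_ms(calls, 1.0, gpu_ms),    // fat API: 1.0 us per call
                frame_ms(calls, 0.2, gpu_ms));   // thin API: 0.2 us per call

    // Weak CPU feeding a big GPU: the render thread is the bottleneck,
    // so cutting per-call overhead gives a large win.
    std::printf("CPU-bound: %.1f ms -> %.1f ms\n",
                frame_ms(calls, 3.0, gpu_ms),
                frame_ms(calls, 0.6, gpu_ms));
}
```

Run it and the GPU-bound case stays at 25 ms either way, while the CPU-bound case drops from 60 ms to 25 ms, which is the whole argument in miniature.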
 

Steve Faugue

Honorable
Sep 30, 2013
8
0
10,520


I'm eventually going to pair my 7850K with a dGPU but I'm waiting on the next round of AMD graphics cards as I have a very modest budget for building my rig (need time to save cash for a GPU).

But this is probably because I snagged the 7850K for the appropriate price of $137 at Micro Center ;D, otherwise I wouldn't have considered it. I also wanted the option to upgrade to Carrizo later on (I feel this was a sound economic choice).
 

As of now there is no real benefit to pairing an APU with a dGPU. Nearly half of the space on an APU is devoted to the giant vector coprocessor we call an iGPU. That's space that could be taken up with components that enhance your scalar performance, so there is an opportunity cost involved. As long as you're actually using the vector coprocessor for something, it's not wasted, but the moment it becomes redundant it's no longer viable. Pairing up iGPUs and dGPUs rarely works out well because the memory configurations are so different (latency / bandwidth / size) that it can screw with performance and cause micro-stuttering.

Now, all that being said, it's possible that future software will be capable of using the iGPU as a vector coprocessor to handle tasks that would otherwise be done by the dGPU or by brute-force scalar processing. I can't predict when this will happen, not even a rough timeline, so that's why I don't recommend putting an APU in a system with a dGPU.

For the price you got it at, you might as well play around and see what you can do with it. Personally I've found ~100W to be too hot for APUs inside SFF enclosures. It results in the fans having to spin at full speed to keep everything cool, creating ambient noise, which I've grown to hate. I'll be aiming at a 65W-or-less solution for silent living-room casual experiences.
 

juggernautxtr

Honorable
Dec 21, 2013
101
0
10,680
off topic,


you all probably have awesome computers at the moment; me, i am buying rope and 2x4s for a hanging platform party...... water, electricity, motherboards and cpus don't mix. if you see the news about some guy hanging 4 kids in his back yard, it was me.
may its recyclable parts become someone's friend in the future.

just a bigger excuse for a major upgrade this year.
 

why not offer something logical for a change?

mantle is so new and so niche that it's not worth bothering with yet. the fact that amd and ea/dice are going out of their way to inflate mantle's improvement on unbalanced gaming pcs speaks volumes, loud and clear. it's not that balanced gaming pcs don't gain with mantle, but those gains are utterly small - almost nothing a minor o.c. and/or a driver update won't deliver - for everything, instead of just one game. take, for example, a core i3 or a10 7850k with a radeon r9 290x (widely advertised and tested for 1600p and 4k) - that's good for benching gaming cpus at 1080p... but the card is so powerful that even 4x antialiasing doesn't seem to tax it much at that resolution. it's hard to deny that mantle offers big performance gains for that kind of combo (from the early previews). but without more in-depth analyses or independent reviews (not previews) it's hard to draw conclusions. and the overall perf/price for that kind of combo is not favorable right now.

and the bottleneck you're talking about isn't gone. it's lessened significantly, but not gone at all. check the previews again... or test it with your pc - you have an fx as well as a gcn 1.0 gfx card. better if you have both intel and amd cpus.

mantle has a lot of potential, no doubt about that. but it's up to amd and the game developers to realize it. the way amd and dice launched mantle (to the public) is revelatory: there was little coordination, and there were repeated delays. optimizations for widely used, sweet-spot gfx cards were ignored in favor of overpriced new gcn 1.1 gpus (the older cards gain less than the new ones, if anything). this is from a company widely derided for its software support, and apparently they have not learned their lesson. if this continues, it'll affect mantle's future adoption and user satisfaction (outside of passionate supporters... but all tech products have a few of those).

amd needs at least a year of continuous, consistent mantle development, pushing it into more and more of the engines that power popular games.
 

truegenius

Distinguished
BANNED
but the billion-dollar question is: can it play minecraft?
http://www.reddit.com/r/Minecraft/comments/1wuaiv/amd_141_beta_drivers/

and a question that's worth a fortune: can it show double-digit improvements in GTA5 :love:
if it shows improvements in GTA5 then heil mantle :pt1cable:
 
gta 5 with mantle eh.. suddenly those radeon r7 260x cards look mighty cheap... :whistle:

the reddit link says catalyst 14.1 breaks minecraft.

mantle is actually far more suitable for carrizo. by then, there will be far fewer rough edges, only optimizations for new games/engines, hsa will be more integrated, and more gpus will be supported (hopefully).

edit:
fudzilla previews mantle with amd a8 7600
http://www.fudzilla.com/home/item/33829-catalyst-141-beta-lands-with-mantle-support-in-tow
 

That's why I said:

When hardware-assisted rasterizers were first introduced to the consumer landscape, the exact same comments were made: the programs that took advantage of them were too few, and software rendering would work with far more programs. A few decades later, hardware-assisted rasterization is in everything, and software rendering is unheard of outside of ray tracing for CGI environments. And even that is being accelerated via large vector coprocessors.

I expect the current iteration of Mantle to be like MMX / 3DNow! / Glide: useful in some situations, especially the low-cost ones, but not widely adopted. Over the next two years we'll see several updates to the API and further refinements as developers start working with it and provide more feedback on what they want and what they need. Eventually it'll be replaced by another version of itself, or by another API that does the same thing but better.

When the first accelerator cards came out, very few games used them and many considered them a novelty at best; HW-assisted 3D rendering was for CAD/CAM workstations only. That was eventually proven false when 3dfx released the first widely successful card, the Voodoo, and paved the way for eventual standardization.

https://en.wikipedia.org/wiki/Graphics_processing_unit#1980s

We are seeing the same thing now. A new technology has been released and its potential isn't remotely realized yet. It will have bugs, glitches and all sorts of issues, as all new technologies do. Those will be ironed out and fixed, revision 2 and then revision 3 will come out, and suddenly nobody will remember what it was like without that technology. I challenge everyone to go load up Doom, Quake or Duke Nukem in DOSBox and compare it to what HW-assisted rasterization did for the gaming world.
 

But it's worth noting: while initially niche, the rendering process itself scales naturally. So once you got the API standardized, you had explosive growth and mass acceptance.

The issue here is one of standardization. If Intel and NVIDIA don't jump on board, or if MSFT improves DX [even if it isn't quite as much of a gain], Mantle dies a quiet death.

Beyond that, AMD made this API without input from the rest of the players in the market. Traditionally, APIs like that don't get adopted, and some OTHER API comes along later with industry-wide support and takes over. That's what I expect we'll see at the end of the day: Intel and NVIDIA forcing MSFT to improve DX.
 

Master-flaw

Honorable
Dec 15, 2013
297
0
10,860
Meh, Mantle will be fine... at the least, it will follow AMD-supported games for some time, given what was shown in BF4.
Yeah, DirectX may get improvements, but who's to say Mantle won't receive improvements itself? DirectX has had years of optimization on its API already, so its improvement really isn't going to go much further, whereas Mantle has room for much more. It's still in beta, and we haven't seen a good representation of how effective it truly is outside of Star Swarm.

BF4 is just not the best game to showcase Mantle's purpose, especially for hardware enthusiasts. We need a game that's heavy on draw calls, and even though BF4 can have a large number of objects on some maps, it still doesn't do the API justice.

Really, we don't have enough information on Mantle's true performance gains to make assumptions about it. But for an opening API, AMD went above and beyond what I expected.
 
BF4, and all games before it, get around the draw-call bottleneck by sending a lot of data at a time rather than submitting a call every time you want to render a texture. Factor in the PCI-E latency, and you realize this is the CORRECT approach to take anyway, regardless of the overhead involved in draw calls.

As for DX, it wouldn't be that hard to introduce a new set of extensions that remove a glut of the abstraction layer, at the cost of universal code support.
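
A rough sketch of that batching idea in C++, with an invented fixed cost per submission (the absolute number is arbitrary; only the ratio matters):

```cpp
// Sketch: one submission per object vs. one per group of objects
// sharing a texture. kCallOverheadUs is an assumed, illustrative cost.
#include <cstdio>
#include <map>
#include <vector>

struct Object { int texture_id; };

constexpr double kCallOverheadUs = 10.0;  // assumed fixed cost per call

double submit_naive(const std::vector<Object>& objs) {
    return objs.size() * kCallOverheadUs;          // one call per object
}

double submit_batched(const std::vector<Object>& objs) {
    std::map<int, std::vector<const Object*>> batches;
    for (const auto& o : objs) batches[o.texture_id].push_back(&o);
    return batches.size() * kCallOverheadUs;       // one call per batch
}

int main() {
    std::vector<Object> scene;
    for (int i = 0; i < 10000; ++i) scene.push_back({i % 8});  // 8 textures
    std::printf("naive:   %.0f us of call overhead\n", submit_naive(scene));
    std::printf("batched: %.0f us of call overhead\n", submit_batched(scene));
}
```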
 
Once Star Citizen gets its official release, being a great space sim (think Freelancer, EVE Online and X2/X3), it will tax the CPU quite a lot. They're on board with Mantle (AFAIK), so we'll see how well the API really does when used in a scenario that favors it.

Any RTS will do the trick as well, but Star Citizen has the added bonus of immersion, hehe.

Cheers!
 

truegenius

Distinguished
BANNED

:na:
can we compare it to nvidia's PhysX?
i mean, it offloads physics from the cpu to the gpu, thus improving quality/performance on "the way it's meant to be played blah blah" titles
and maybe mantle is doing the same kind of thing, which is why it reduces frame latency and improves frame rate ( i didn't read much about mantle ( because i can't find any article pairing mantle with phenom ii x4 and x6 cpus ) so i could be way off :p )
if that's true, then mantle is the answer to physx, isn't it?
 

Master-flaw

Honorable
Dec 15, 2013
297
0
10,860

I believe they are too, and it's also running CryEngine 3... brings the nerd outta me :bounce:
The simulation elements should help showcase the API nicely. I'm completely disappointed in the quality of current RTSes and simulations.
 

jdwii

Splendid


Ask yourself why they didn't do that already and maybe you will understand why we need something else. There is a reason SteamOS is being created: people are sick of Microsoft.
 


Because that abstraction allows DirectX to work with EVERY GPU. Remove the abstraction layer, and you have to start writing different code for different GPUs. That's the downside. OpenGL is no different in this regard.

FYI, MSFT already removed a bit of the glut in the latest version of DX11; compare BF4 on Win 7 versus Win 8 to see the effects of this.
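
For anyone wondering what that abstraction trade-off looks like in code, here's a minimal C++ sketch; the types and methods are hypothetical, not any real graphics API:

```cpp
// The portable route: one interface, each vendor's driver implements it.
// Engine code written against GpuDevice runs on any GPU; stripping the
// interface away means writing (and maintaining) one path per vendor.
#include <cstdio>
#include <memory>

struct GpuDevice {
    virtual ~GpuDevice() = default;
    virtual void draw(int vertex_count) = 0;  // driver translates for the HW
};

struct VendorADevice : GpuDevice {
    void draw(int n) override { std::printf("vendor A path: %d verts\n", n); }
};
struct VendorBDevice : GpuDevice {
    void draw(int n) override { std::printf("vendor B path: %d verts\n", n); }
};

// Portable engine code: never knows which GPU it's actually driving.
void render(GpuDevice& dev) { dev.draw(3); }

int main() {
    std::unique_ptr<GpuDevice> dev = std::make_unique<VendorADevice>();
    render(*dev);
}
```

The indirection costs something on every call, which is exactly the overhead a thinner API trades portability away to remove.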



Which MSFT shot down. That was AMD further trying to push their API by spreading FUD about the future of DX.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Game developers have been complaining for years about the DX overhead because it restricts games' visual quality. Current games are limited by Microsoft's API bloat. Developers' worries were mentioned again during Oxide's APU13 presentation.

The Oxide demo uses 100,000 draw calls, which is a higher number than what you find in current PC games (the so-called 'top' games). Thanks to MANTLE, developers can now make visually improved games. BF4's 30-60% MANTLE performance boost is only a shadow of what we could see in future games if MANTLE becomes a standard. Oxide predicts 300,000 draw calls for 2015 and 1 million for 2018.
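
Some back-of-envelope math on those draw call counts (the per-call CPU costs below are assumptions for illustration, not figures from Oxide or AMD):

```cpp
// How much single-threaded CPU time per frame goes to just issuing calls,
// under two assumed per-call costs. A 60 fps frame budget is ~16.7 ms.
#include <cstdio>

int main() {
    const int counts[] = {100000, 300000, 1000000};  // today / 2015 / 2018
    const double fat_us = 0.5;    // assumed cost per call, high-overhead API
    const double thin_us = 0.05;  // assumed cost per call, thin API

    for (int c : counts)
        std::printf("%7d calls: %6.1f ms vs %5.1f ms\n",
                    c, c * fat_us / 1000.0, c * thin_us / 1000.0);
}
```

Under those assumptions, even 100,000 calls blow through a 60 fps budget on the high-overhead path, which is why the call counts and the API overhead have to move together.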

But as blackkstar said, ironically some people here think AMD is evil for helping developers get the best from the hardware.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790




Not to mention what Nvidia and Intel are saying about Microsoft and Windows. Both are migrating to linux/android. During its 2013 Annual Investor Day, Nvidia showed a graphic with the evolution of OS market share. I will quote two data points:

2003: Windows (95%) Apple (5%) Linux/Android (0%)
2013: Windows (30%) Apple (20%) Linux/Android (50%)

data which the CEO then used to explain Nvidia's future plans. Nvidia's partnership with Valve is so strong that they are opening their documentation for linux/android, something Nvidia never did in the past. And during its last annual conference Intel praised Android/linux and mentioned its intention to scale Android up to replace Windows. Intel is already promoting dual-OS configurations now:

http://www.engadget.com/2014/01/06/intel-unveils-dual-os-platform-that-runs-android-and-windows-o/

Microsoft's improvement of DX will be 'cosmetic' and will not provide MANTLE-like gains. Moreover, DX is associated with a falling OS with a shrinking market share. AMD has already announced its intention to port MANTLE to linux/android.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


The thing is that EA and DICE are horrible. A better example is something like Star Swarm. You're looking at a game where they're working to give you a different gaming experience that isn't possible on DirectX.

What DICE and EA are doing with Mantle is not impressive. They're basically just tacking it onto an existing engine, one that is a complete disaster.

I am thinking back to the Oxide Mantle presentation where they were talking about how Mantle allowed them to do new things they couldn't before, like allowing them to have a massive amount of units on screen.

We haven't really seen if Mantle gives potential to open up unit counts like Oxide is claiming. Everything has been centered around EA and BF4.

From reading up on the Star Swarm benchmark, it seems really difficult to compare FPS numbers because the benchmark dynamically adds units on screen depending on draw call performance and stuff.

And yes, I'm well aware that it's just lessening a bottleneck, but regardless, it's a step in the right direction. As was mentioned earlier, there was quite some resistance to hardware rendering of games instead of going software, and software lost that battle. I can guarantee you that the big engines are doing everything they can to get rid of reliance on single threaded performance.

Game developers (at least the good ones) have more than likely come to the conclusion that single-thread performance isn't going to get any better, ever, on x86, and that if they keep depending on single-thread performance, they'll never be able to push CPU-bound tasks in games any further. If they can get around that style of programming (and yes, I know it's extremely hard) and get engines to scale evenly across multiple threads, then we're suddenly back in a situation where game developers have new hardware to look forward to, with more cores, instead of "wow, this new CPU is the same speed but it uses less power, so I guess let's just not change anything".
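
As a sketch of what "scaling across multiple threads" means for submission (a generic C++ illustration, not Mantle's actual API):

```cpp
// Each worker records its own private command list in parallel, so there
// is no single render thread everyone has to funnel through; the lists
// are then handed over for submission in a fixed order.
#include <cstddef>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Command { int object_id; };
using CommandList = std::vector<Command>;

void record(CommandList& list, int first, int count) {
    for (int i = 0; i < count; ++i) list.push_back({first + i});
}

int main() {
    const int workers = 4, per_worker = 2500;
    std::vector<CommandList> lists(workers);
    std::vector<std::thread> pool;

    for (int w = 0; w < workers; ++w)
        pool.emplace_back(record, std::ref(lists[w]),
                          w * per_worker, per_worker);
    for (auto& t : pool) t.join();

    std::size_t total = 0;
    for (const auto& l : lists) total += l.size();  // "submit" in order
    std::printf("recorded %zu commands on %d threads\n", total, workers);
}
```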

However, with regards to the launch, I expected things to go a lot worse. BF4 has been a complete disaster every step of the way. I realize it's being pushed as the primary Mantle game, but Star Swarm looks to me like a much more proper implementation of Mantle than BF4.

Even balanced systems can see huge gains from Mantle
http://www.overclock.net/t/1463351/steam-star-swarm-benchmark/100#post_21704707
http://www.overclock.net/t/1463351/steam-star-swarm-benchmark/100#post_21704889
http://www.overclock.net/t/1463351/steam-star-swarm-benchmark/120#post_21705912

Seems Mantle likes HT, but here's one without HT
http://www.overclock.net/t/1463351/steam-star-swarm-benchmark/120#post_21705980 (still a 40%+ increase)

There are more in the thread.

But my question is: why is everyone assuming it's AMD trying to skirt balanced-system benchmarks, when it's perhaps EA and DICE trying to hide the fact that they did a lazy job of implementing Mantle in BF4?

But I've given you evidence that Mantle can offer lots of improvement on a "balanced" system. The problem with BF4 Mantle benchmarks isn't Mantle; it's BF4. If Mantle were a disappointment in general, we'd see the same style of results out of the Star Swarm benchmark, with balanced systems and Intel quads not gaining anything.
 

con635

Honorable
Oct 3, 2013
644
0
11,010
The only reason AMD have a shot with Mantle is the market share the consoles will give them, imo; some seem to forget that there will be a lot more x86/GCN APUs around the world soon.
edit: I read nvidia 'snubbed' the consoles because the margins were too low. it makes sense now: AMD completely undercut them.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
Important update of the Oxide demo

http://oxidegames.com/2014/02/04/new-star-swarm-build-better-mantle-performance/

We just deployed a new build of our Star Swarm stress test that significantly improves the demo’s performance using AMD’s Mantle API. We suggest that those of you interested in benchmarking and performance numbers re-run your Mantle scenarios; you may be surprised at the results.

Did we crack some secret code or find a crazy new optimization? No, nothing so spectacular. The truth is that we made a mistake in our haste to deploy the build that stripped out the activation process. We didn’t follow our normal release process, and missed how a minor change in that build had disabled some of the Nitrous engine’s multi-threading features when using Mantle. Unfortunately we didn’t notice that at first, as nobody was running Mantle last week due to the beta driver being delayed.
 