AMD CPU speculation... and expert conjecture


juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
MANTLE's first by-products:

For nearly 20 years, DirectX has been the platform used by game developers to create the fastest, most visually impressive games on the planet.

However, you asked us to do more. You asked us to bring you even closer to the metal and to do so on an unparalleled assortment of hardware. You also asked us for better tools so that you can squeeze every last drop of performance out of your PC, tablet, phone and console.

Come learn our plans to deliver.

http://schedule.gdconf.com/session-id/828184

Driver overhead has been a frustrating reality for game developers for the entire life of the PC game industry. On desktop systems, driver overhead can decrease frame rate, while on mobile devices driver overhead is more insidious--robbing both battery life and frame rate. In this unprecedented sponsored session, Graham Sellers (AMD), Tim Foley (Intel), Cass Everitt (NVIDIA) and John McDonald (NVIDIA) will present high-level concepts available in today's OpenGL implementations that radically reduce driver overhead--by up to 10x or more. The techniques presented will apply to all major vendors and are suitable for use across multiple platforms. Additionally, they will demonstrate practical demos of the techniques in action in an extensible, open source comparison framework.

http://schedule.gdconf.com/session-id/828316
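For anyone wondering what "reducing driver overhead" actually looks like in code, here is a rough sketch of my own (not taken from the session) of two techniques commonly cited for this: a persistently mapped buffer plus multi-draw indirect, both core features in GL 4.3/4.4. It assumes a GL 4.4 context and GLEW are already set up, and it omits VAO/shader setup and the per-frame fencing a real renderer would need:

[code]
/* Sketch only: assumes a GL 4.4 core context and GLEW are already initialized;
   VAO/shader setup and per-frame fencing are omitted for brevity. */
#include <GL/glew.h>
#include <string.h>

typedef struct {
    GLuint count;          /* indices per draw   */
    GLuint instanceCount;  /* instances per draw */
    GLuint firstIndex;
    GLuint baseVertex;
    GLuint baseInstance;
} DrawCmd;                 /* matches GL's indirect draw command layout */

static GLuint   cmdBuf;
static DrawCmd *cmds;      /* CPU-visible pointer into the GPU buffer */

/* Create one persistently mapped command buffer: map it once and keep writing
   into it every frame, instead of issuing map/unmap driver calls per draw. */
void init_indirect_buffer(GLsizei maxDraws)
{
    GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
    glGenBuffers(1, &cmdBuf);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, cmdBuf);
    glBufferStorage(GL_DRAW_INDIRECT_BUFFER, maxDraws * sizeof(DrawCmd), NULL, flags);
    cmds = glMapBufferRange(GL_DRAW_INDIRECT_BUFFER, 0, maxDraws * sizeof(DrawCmd), flags);
}

/* Submit n objects with one API call instead of n glDrawElements calls.
   A real renderer would fence so the GPU isn't still reading last frame's commands. */
void draw_scene(const DrawCmd *perObject, GLsizei n)
{
    memcpy(cmds, perObject, n * sizeof(DrawCmd));
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, cmdBuf);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, 0, n, 0);
}
[/code]

The point is that the per-object CPU work (map/unmap, validation, one draw call each) collapses into filling an array and making a single submission.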

Funny that first we were told that 'Mantle' was unneeded, that overhead on DX was zero, and that nobody was asking for it. And now 'Mantle' is claimed to be dead because DX/OGL are doing the same. Close to the metal was evil when AMD mentioned it; now that Microsoft pretends to bring close-to-the-metal capabilities to DX, I guess we will be told that close to the metal is a fantastic thing...
 


I fully expect MANTLE, or something very similar to it, to be implemented in the other manufacturers' designs. DX and OGL cannot, by definition, be as good as Mantle. DX works by fully abstracting the GPU and therefore must make assumptions when attempting to allow access to semi-low-level hardware. Mantle, on the other hand, assumes a GCN architecture and thus should always be faster. In order for DX to offer the same level of functionality, it would need to make the same assumption: that the GPU is based on a scalar GCN design vs nVidia's vector design. Because of how radically different those designs are, there is no single set of assumptions DX can make that would enhance performance on both. Intel's design is actually closer to AMD's due to their use of scalar clusters that can act like vector units when necessary; it would be easy to get Mantle to work on an Intel design. You should know all this.

Anyhow, I expect a standards war is brewing and we're just seeing the first volley of shots. Eventually there is going to be a minimum set of hardware "requirements" to support this new design, regardless of the manufacturer. It will most likely center around clustering of scalar units and how memory access is handled. Once that's semi-standardized across the manufacturers, then you can have a general-purpose implementation. Mantle support, or whatever takes its place in the next ~5 years, will most likely get subsumed into the standard list of "features" that GPUs come with. Kind of like what happened with Nvidia's T&L support.
 
Well, MANTLE has 1 big advantage over OGL and DX that is not-so-technical: It's working in games right now.

Plus, they'll add all of this to OGL as extensions and DX will be tied to Windows forever; Devs don't want to be tied to Windows, that's the trend now, thanks to Android. So, like I said, MANTLE has inherent advantages to it that OGL and DX don't have or can't have.

From the consumer point of view, I really hope OGL wins this battle, but from the technical one, MANTLE should. Intel and nVidia shouldn't be such asshats about adopting it, and AMD shouldn't be an asshat either and should offer it to them with no fees or restrictions. They're the big players in the graphics industry, so they should play nice on standards and backstab each other on hardware, haha.

Cheers!
 
AMD reference 28nm tablet runs 64-bit OS
http://www.fudzilla.com/home/item/34067-amd-reference-28nm-tablet-runs-64-bit
running temash's [strike]arm replacement[/strike] successor mullins apu with a 4.5w sdp.

amd should port mantle to linux and android for the sake of tablets asap. that move alone would hugely boost amd's apu performance in "closed" un-upgradable devices. on the desktop, whatever mantle leads to (regardless of vendor) would be the one that matters, not mantle itself.
 


From a Developers standpoint, I hope DX wins. OGL is HORRID right now; it desperately needs to be re-written. That leaves DX as the only API that works for all vendors across the board.

Mantle's problem is simple: It won't work for Intel and NVIDIA, the two largest GPU makers on the market. I can not stress that point enough. No matter how much extra performance you gain, if only 20% of the cards on the market benefit, and DX improves "enough", it will win by default.

But yeah, we're now heading down the road I was afraid we were going to walk down: if you want more performance, go Mantle/AMD; if you want PhysX and (insert new mode of AA/post-processing here), go DX/NVIDIA. I think we're going to see the end of "apples to apples" gaming, where different manufacturers start boasting different features. Wouldn't be shocked if you get the occasional game that simply dumps one or the other in the name of performance...
 

ColinAP

Honorable
Jan 7, 2014
18
0
10,510


Rubbish. DX doesn't work for:

PS3
PS4
Linux
Mac
Android
iOS

And future versions probably won't even work for earlier versions of Windows.
 

etayorius

Honorable
Jan 17, 2013
331
1
10,780
Guys, guys, MANTLE is a Windows-only API; for SteamOS or Linux it is unnecessary... MANTLE was designed to get rid of CPU bottlenecks, which means it is more CPU-oriented and does very, very little for the GPU itself. It could effectively be used by Intel and nVidia (on Windows) as soon as MANTLE gets out of beta and AMD starts releasing all the documentation. MANTLE will be 100% open source (when they finish with it), and it could take a couple of years before that happens.

Neither Linux nor SteamOS will need MANTLE; it is an exclusively Windows thing... Besides, OpenGL can provide both AMD and nVidia with the necessary extra paths to give them a boost in low-level access. AMD is already working on bringing OpenGL performance on Linux/Windows to the same level as, or close to, MANTLE on Windows.

http://www.dsogaming.com/news/amd-aims-to-give-opengl-a-big-boost-api-wont-be-the-bottleneck/

There is nothing in MANTLE that would stop nVidia from implementing it on their own arch (if they so wish); it is low level, but not so low level as to be dependent on this or that arch. Johan Andersson already explained this. But yeah, AS OF NOW (because it is a beta) MANTLE is only available on the HD8000 and R7 and R9 series. Or did you expect AMD to serve the cake to nVidia too while it's in beta? Besides, they are also using MANTLE as a selling feature... which means they really want people to buy a recent Radeon card. MANTLE may be supported on older Radeon cards if they so wish.

http://www.maximumpc.com/AMD_Mantle_Interview_2014

Now let me explain the true purpose of MANTLE: it was created to expose the CPU dependency of Microsoft's D3D (DirectX), which is the main bottleneck nowadays. It was a way to force MS into fixing DirectX and giving it some low-level paths. Now, if MANTLE becomes the standard, that is great for AMD; if not, it was mission accomplished anyway. They already said that they don't mind if MANTLE dies along the way as long as something SIMILAR takes its place, and we already know that DX12 and future OpenGL will both have low-level paths.

http://www.chw.net/2014/02/directx-12-y-opengl-5-accederan-al-gpu-a-bajo-nivel/
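To make the "CPU dependency" point concrete, here is a tiny illustration of my own (assuming a GL 3.3+ context with a VAO, index buffer and shader already bound; set_per_object_uniforms is a hypothetical helper) of per-draw CPU work versus submitting the same geometry once with instancing:

[code]
/* Sketch only: a GL 3.3+ context with a VAO, index buffer and shader already
   bound is assumed; set_per_object_uniforms() is a hypothetical helper. */
#include <GL/glew.h>

void set_per_object_uniforms(int i);   /* hypothetical: uploads object i's transform */

/* One driver round-trip per object: validation and command-buffer writes
   happen on the CPU 'objects' times. This is where the CPU bottleneck lives. */
void draw_naive(GLsizei objects, GLsizei indicesPerObject)
{
    for (GLsizei i = 0; i < objects; ++i) {
        set_per_object_uniforms((int)i);
        glDrawElements(GL_TRIANGLES, indicesPerObject, GL_UNSIGNED_INT, 0);
    }
}

/* Same geometry submitted once; per-object data lives in a buffer the shader
   indexes with gl_InstanceID, so the CPU-side cost is a single call. */
void draw_instanced(GLsizei objects, GLsizei indicesPerObject)
{
    glDrawElementsInstanced(GL_TRIANGLES, indicesPerObject, GL_UNSIGNED_INT, 0, objects);
}
[/code]

Neither call makes the GPU itself any faster; the difference is purely how much work the CPU and driver do to feed it, which is exactly the overhead Mantle targets.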

Back in the 90s, D3D was designed for an era when CPUs were making HUGE strides and dGPUs were barely starting to kick off, so MS made D3D wholly dependent on the CPU, with high-level access, in order to avoid hardware incompatibilities.

So either way AMD has already WON, with or without MANTLE... DirectX, as of now, is very, very obsolete in regards to hardware access.
 




I said "Vendors" not "OSs". These days, if OGL was even a remotely decent API, it would make a comeback. The fact hasn't should tell you all you need to know about it.
 
And if it were something made by Nvidia, it would be the next amazing thing that AMD just won't be able to compete with. If it were made by Intel, then it'd be the salvation of the computer industry that would herald a revolution. Yet it's made by AMD and thus isn't needed, will hurt the market, won't perform, won't be supported, and generally needs to die in a fire.

Really getting tired of people with that mindset. Judge technology by the actual technology, not by the label on it.
 
Etayorius, I believe MANTLE can be used on Linux and other OSes with no extra effort, since it's basically driver-driven and game-driven; there's no middle man like in DX (that middle man being Windows, haha).

Also, in Linux you have the great additional advantage (which Valve is using) of being able to tweak the kernel to your liking. So you have a bigger margin of improvement in Linux than in any other OS, but DX doesn't work on it; OGL does, and Valve is using it right now. For whatever reason, AMD doesn't like Linux (their efforts are half-baked at best) and won't actually go into it fully like nVidia and Intel do. Also, MANTLE is in beta; I don't think they have the time (for now) to port it to their Linux drivers, especially in the crappy situation they are in at the moment. They would need a lot more time to port it there, so testing the theory on Windows first makes a lot more sense to me. Plus, devs keep using Windows as the target platform for big titles that could make good use of MANTLE.

Star Citizen. Once that baby comes out, we'll see.

Cheers!

EDIT: Typo
EDIT2: More typos, lol.
 

ColinAP

Honorable
Jan 7, 2014
18
0
10,510
As a Linux user, until Mantle is officially announced for Linux (I'm not holding my breath), I couldn't care less about it as an API, per se.

But... if it serves as a catalyst (geddit?) to improve OpenGL, then great!
 

etayorius

Honorable
Jan 17, 2013
331
1
10,780



It can easily be ported to Linux, but it will not be... AMD wanted to use it only to slap MS in the face and force them to give the next version some low-level calls. For all AMD cares, MANTLE can die or become the standard; either way, AMD has already won.

AMD have huge plans for OpenGL in Linux/SteamOS:

http://www.dsogaming.com/news/amd-aims-to-give-opengl-a-big-boost-api-wont-be-the-bottleneck/

So MANTLE on Linux/SteamOS is a waste of effort; this is why AMD wants MANTLE to be a Windows-only thing.
 


That makes sense, yes. Still, since MANTLE is in beta, it also makes sense to focus in the meantime on one platform with a big gaming user base while they polish everything. Especially given how horrible Catalyst is on Linux currently. Well, not horrible (I don't have problems with it), but with less than half the features supported compared to Windows, and crappy OGL performance.

Anyway, the only thing about making it OGL extensions is that they'll still be tied to a vendor, right? Or am I not remembering OGL extensions correctly? haha. Still, making OGL better doesn't mean they will stop MANTLE development; at least, not at first glance.
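On the extensions point: yes, extensions are advertised per driver, so a renderer has to probe for them at runtime and keep a fallback path for hardware that doesn't expose them. A minimal sketch of that check (the extension name is only an example):

[code]
/* Sketch only: runtime check for a vendor OpenGL extension (GL 3.0+ context
   assumed; the extension name used below is just an example). */
#include <GL/glew.h>
#include <stdbool.h>
#include <string.h>

bool has_extension(const char *name)
{
    GLint n = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
    for (GLint i = 0; i < n; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

/* Usage idea: pick a fast path only where the vendor actually exposes it, e.g.
   if (has_extension("GL_AMD_pinned_memory")) { ... }  -- fallback logic is up to the app */
[/code]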

Cheers!
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


I have to agree with you on this. I don't see these improvements for DX coming to anything other than Windows 9. This kind of thing is exactly what Microsoft needs to push enthusiasts into a Metro environment that uses the cloud extensively.

Right now, that's what most enthusiasts (from my experience) absolutely hate. But when you offer 50% higher frame rates, I have a feeling a lot of people will be abandoning Windows 7 for Windows 9.

The greatest irony of the Mantle thing is that all the anti-Mantle folks are screaming against vendor lock-in, because Mantle only works on GCN, while hoping Mantle's features come to DirectX, which only ends up in vendor lock-in per OS, since DirectX only works on Windows and MS is free to limit it to whichever Windows products it deems will yield the highest profits and upgrade conversions.

My problem with this is that it completely ignores the fact that a significant portion of game developers no longer want to rely on Microsoft to do the right thing with Windows. Windows 8 was essentially MS giving game developers and desktop users a giant middle finger.

It bothers me because we're finally starting to see significant migration away from reliance on Windows for gaming, and if DirectX comes out with a Windows-exclusive feature that gives Mantle-like performance increases, it would squash things like SteamOS very quickly.

It's too bad. I try to speak out for free software as much as possible. A lot of Windows users don't seem to realize that if their desktop environment pulled a Metro, they could just replace it with something really simple like "sudo apt-get install kde4-meta" (I'm just guessing the package name; I'm a Gentoo user myself).

But they seem compelled to stay with what is familiar and to be at the whims of MS.

It really irks me to no end. I've seen huge performance gains in Linux on AMD hardware, not only on my FX 8350 but on my A4-5000 as well. But I guess if people want to keep buying $300+ hardware that's 15% faster when they could use a different OS for free and see 50%+ increases in performance in some applications, they're free to do so. Mantle and the other lower-level efforts are the same story; people are stubborn and do not like change. Some go as far as to retroactively fight in forums for things to remain the status quo.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790



I still find it interesting that you took Hazra's marketing material (against heterogeneity) as gospel, but now treat Intel's roadmap and sales plans as mere marketing.

You also ignore that AMD and Nvidia have the same plans as Intel, because all three are bound by the same laws of physics. Intel is not differentiating itself from AMD's/Nvidia's current products, but accelerating the transition to the future of computation before AMD/Nvidia have their respective new products ready.

Cray still works with AMD because it offers updates to legacy users of former Opteron-based supercomputers and because AMD will offer new HPC products in the future. But the point is that Cray's engineers know the laws of physics, which is why Cray is collaborating with Nvidia on their APU-based exascale supercomputer, whereas Cray is not collaborating on any similar CPU+dGPU-based project.

Nvidia joined IBM because it does not yet have high-performance ARM cores ready to compete in HPC/servers. What part of "we aren't living in the year 2018" is still not understood? I don't know why Samsung joined the OpenPower consortium. Do you? If you believe IBM's people are nervous, just check Intel's recent cancellation of a 14nm fab and their plans to go directly to 10nm due to pressure from ARM players...

I tried to explain to you that taking a current interconnect and scaling the bandwidth up to 150GB/s is not enough, because you need the 150GB/s plus extra properties. What part was not understood?

What I am saying is that the locality principle prohibits designing a CPU+dGPU system that provides the same performance as the APUs that Intel, AMD, and Nvidia are designing for exascale supercomputers. Current supercomputers are based on CPU+dGPU. Exascale-level supercomputers won't be. I already explained why...




I explained to you that the laws of physics say otherwise.

I gave you a link to AMD's chief engineer mentioning they will use only APUs for exascale supercomputers.

I mentioned to you that Nvidia has the same plans to use only APUs for exascale supercomputers.

Both AMD and Nvidia agree on the order of magnitude: APU ~ 10x GPU. Everyone in the HPC community agrees.

I already gave you a link to an Intel slide at Supercomputing 13 saying that discrete cards are outdated and that future supercomputers will not use them.

You can believe that everyone is wrong and you are right. You can delete the links and slides when I give them to you, but it is not going to change anything.
 

8350rocks

Distinguished


Ubuntu 14.10 will have full mantle support via catalyst drivers...which means the drivers will be available in the debian tree at that time or sooner depending on what repositories you use.

14.2 beta is already out for Linux... you might take a look at it. It is not full Mantle support yet, but many of the features are already in the driver; it is just not polished enough to activate all of it.
 

8350rocks

Distinguished


+111111111111111111111111111111111111111111111111111

We do not often agree... but we are 100% on the same page with this one. APUs will not be in HPC at any large scale within my lifetime. That is to say, there may be some experimental HPC project that says, "See, we could do it with APUs," but it will be a novelty at best, and it will be less efficient than the dCPU + dGPU machines that outperform it in raw FLOPS as well as other areas.

@juanrga:

Intel says that discrete cards are outdated? Outdated for what? They are making Xeon Phi coprocessor discrete cards...clearly they are not outdated or Intel themselves would not be making them.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Well, I've been following these products and roadmaps for 10+ years, so it's no surprise I have a different opinion on them. The basic difference is that I don't see Phi as a heterogeneous product; it is a homogeneous CPU. And Xeon, of course, is a homogeneous CPU. Now, when you combine two homogeneous CPUs in an HPC deployment, that becomes a heterogeneous solution.

Marketing gets messier now for Intel because they still want to sell Xeons and Phis, and both can work standalone or together. And you can bet they want the rich owners of Tianhe-2 to upgrade all 48,000 Phis in that system to the new Phi. That's over 200 million bucks right there.

Also, if they were to build a Tianhe-3 with just Phi, they would probably still use a similar front end, which just happens to be SPARC-based. In the end it's still heterogeneous even with Phi, because they also use SPARC. That was one of the things I was trying to explain to you: HPC deployments are far from simple, and compute is just one aspect of them.


As for why Samsung joined the OpenPower consortium, I think that's a sign they realize IBM has developed some rather clever "stuff" in their CPUs with regard to reliability. They dub this "reliability-aware microarchitecture". To describe it briefly: they have the ability to send a work queue to two CPUs and perform error correction in hardware if the results mismatch. Consider it ECC for the compute cores rather than just the DRAM DIMMs. You'd be surprised how many errors per day a computer with seven nines (99.99999%) of reliability will make.

If that error is in your Flappy Bird game, I think no one will care, but if it's in the billions of potential microtransactions a day, then it's a different story. Likewise for compute jobs that take weeks to run; it's costly to have to restart them.
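For anyone unfamiliar with the idea, here is a toy software sketch of that dual-execution scheme; IBM does the equivalent in hardware, and the checksum loop below is just a stand-in for a real job. Run the work twice, accept the result only if both runs agree, otherwise retry:

[code]
/* Sketch only: a toy software version of dual execution with compare-and-retry.
   The checksum loop is just a stand-in for a real compute job. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static uint64_t do_work(const uint32_t *data, size_t n)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < n; ++i)
        sum += (uint64_t)data[i] * 2654435761u;
    return sum;
}

/* Run the same work twice ("core 0" and "core 1"); accept the result only if
   both runs match, otherwise assume a transient fault and retry. Hardware
   schemes do the comparison transparently and far more cheaply than this. */
uint64_t run_redundant(const uint32_t *data, size_t n)
{
    for (;;) {
        uint64_t a = do_work(data, n);
        uint64_t b = do_work(data, n);
        if (a == b)
            return a;
        fprintf(stderr, "result mismatch, retrying\n");
    }
}
[/code]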
 

jdwii

Splendid


I'd say remember the word you used: "some". I installed a decent amount of software and the performance was only 5% better on Linux, for things such as Handbrake, which was a pain to even install. Some things on Linux are easy to do, but it's a LOT easier to be a power user on Windows than on Linux; this is coming from someone with a two-year computer networking degree.
 

jdwii

Splendid


Haven't you heard?

Intel, a company that always talks big when it comes to graphics, is going to kill the discrete video card market. I can't wait to play Crysis 5 on ultra (at 4K) with just an Intel processor; it's going to be epic.
 

etayorius

Honorable
Jan 17, 2013
331
1
10,780
It seems AMD just made a statement regarding OpenGL 5 and DirectX 12, and it seems they welcome them with arms wide open... but both will take 2-3 years to arrive, which means that right now MANTLE offers the same features DX12 and OGL 5 will only be getting over the next two years.

http://www.tomshardware.com/news/directx-direct3d-opengl-mantle,26167.html

There are going to be some sort of architecture standards from Microsoft and the Khronos Group for the next couple of GPU generations in order to really make all hardware accessible and compatible with future DX and OGL versions with low-level access, which pretty much means that current GPUs will not be supported. I think it's a good move nonetheless; most hardcore gamers change GPUs every year or at least once every two years, so this is yet another great excuse to dump DX9.0c, kick it right in the balls, and abandon it once and for all.

http://www.techradar.com/news/computing/pc/amd-on-mantle-we-want-our-gaming-api-to-become-the-industry-standard-1218560
 

ColinAP

Honorable
Jan 7, 2014
18
0
10,510


If that were true, it would be awesome, but until I see that news on Phoronix, I remain skeptical.
 