AMD CPU speculation... and expert conjecture


juanrga

Distinguished
BANNED


Valve has promised that many AAA games are coming with native SteamOS support in 2014:

http://arstechnica.com/gaming/2013/09/valve-announces-linux-based-steamos-as-basis-for-living-room-gaming/
 

8350rocks

Distinguished


AMD is designing a proprietary new API to replace DX on their GPUs, and it will be cross-platform. Support for Windows will come first, but Linux will be coming shortly afterward. BF4 will be using this new API, called MANTLE.

It's supposed to be set up so that a great deal of the tooling is similar to Glide, which is a blessing for anyone trying to develop a game on Linux. It's also a great reason to use an AMD GPU on a Linux machine now...as this is basically AMD's way of locking NVIDIA out on Linux.

Since this API can be used for AMD co-developed games, it looks like running an AMD-supported game on NVIDIA hardware will carry a LARGE disadvantage moving forward.

This new API will give PC developers console-like low-level access to the hardware, which is going to be extremely innovative and allow MUCH greater performance from PC hardware than was previously possible.
 

8350rocks

Distinguished
So, near as I can tell, based entirely on the 3Dmark FS scores they provided...

R7-250X ~HD 7790 - $89
R7-260X ~HD 7870 GHz (non XT) - $139 (Has TrueAudio)

R9-270X ~HD 7950 Boost (My FS score is ~5600 @ 1100 MHz OC w/7870XT) - $199
R9-280X ~HD 7970 GHz - $299
R9-290 ~GTX 780??? - $399??? (Has TrueAudio)
R9-290X ~GTX Titan - $499-549??? (Has TrueAudio)
 

juanrga

Distinguished
BANNED


I don't understand why you repeat stuff that I said to you before, while at the same time you ignore my arguments. I will try again.

CISC cannot "do more things than RISC"; each simply tries to solve essentially the same problem with a different tactic.

About 256-bit floating point:

To put this improvement into perspective, SSE (found on modern x86 chips) provides only 16 128-bit registers. AVX, introduced in 2008, extends them to 256 bits. This means that ARMv8 and x86 chips have the same-sized vector register files, but with different layouts.

Which of these is more useful in practice? In theory, the x86 approach should give better throughput for the same number of instructions, because it lets you operate on twice as much data at a time, but the cost is limited flexibility. There's a reason that we don't have 1024-bit vector coprocessors in desktop CPUs: As the size of the vector increases, the number of problems that can make use of it decreases. 128 bits was popular because it's very useful for 3D graphics. Color and vertex values fit nicely into those registers.

If your code only makes use of 128-bit vectors, half of the register space in AVX is wasted. A lot of code uses SSE for purely scalar operations, because the source code is not amenable to vectorization. For code like this, AVX looks like a bank of 16 floating point registers and NEON like a bank of 32 floating-point registers. This makes NEON much easier for compilers to target, because the register allocator has to do a lot less work to find a spare register.
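To make the register-width trade-off concrete, here is a small illustrative C++ sketch (my own example, not taken from any AMD/ARM material): the same float-array addition written scalar, with 128-bit SSE intrinsics, and with 256-bit AVX intrinsics. Only loops like this, which actually vectorize, benefit from the wider registers; scalar code sees none of it.

```cpp
// Illustrative only: the same addition at three widths.
// Build (GCC/Clang): g++ -O2 -mavx vec_widths.cpp   (loop tails omitted for brevity)
#include <immintrin.h>
#include <cstddef>

void add_scalar(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];                       // one float per iteration
}

void add_sse(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);            // 4 floats = 128 bits
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
}

void add_avx(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);         // 8 floats = 256 bits
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
}
```

Code that never gets past the 128-bit (or scalar) form uses at most half of each 256-bit AVX register, which is the "wasted register space" mentioned above.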

In any case I don't want to see further improvements in the FPU. I want FP to be moved to a GPGPU/accelerator using HSA or similar technology. In case you didn't know, heavy floating-point scientific computations are done on GPGPUs/accelerators. The chip that Nvidia will release for supercomputers is not only ARM, but ARM+CUDA.

Contrary to your claims, maintaining simple instructions is not an "inherent limitation of RISC" but an advantage.

I expect about two-thirds the TDP of a similar-performance x86 chip.

Nobody said ARM will be replacing x86 soon.

Nobody told you that Android is a desktop OS.

Right, but when you eliminate tablets and phones, who wins?

LOL, yes, if you start eliminating things you can distort reality and fit it to your own view, as hapiduuur often does.

Who told you that phone/tablet = DT PC? In fact I am rather sure that I am saying the contrary.

You don't seem to understand that Microsoft is no longer driving the industry.

I have shown you data for two or three generations. Your insistence on ignoring it doesn't make it vanish.

Nobody is extrapolating DT performance from mobile chips.

If you're already burning that much power, why wouldn't you take the backwards compatibility, and the capability to run more complex instructions? There's no argument against x86 at that point...at all!

Maybe a >35% improvement in efficiency doesn't mean anything to you, but it is important to others: all those people who are so interested in ARM, including AMD.

I am sorry to say this, but your anti-AMD, anti-ARM rant is not going to change roadmaps...
 

BeastLeeX

Distinguished


I knew this would happen, not to mention the pricing is ridiculous. I was at least expecting 7970-level performance for $250-$275, but it seems there have been no changes in 2 years for GPUs other than drivers, more cores, and slightly better power consumption (Hafidurp should be happy). The R7 series might have good pricing though, and that's probably the best thing about the refresh. Now I'm extremely happy that I bought a 7970, because price/perf at the high end has not changed.
 

juanrga

Distinguished
BANNED


The idea of killing that inefficient/bloated monster called DirectX and its draw-call overhead, like the idea of offering close-to-the-metal access, is not new. Also, low-level access to GPU hardware was already available on Linux, but this MANTLE looks interesting.

 

jdwii

Splendid
I'm a HUGE audio fan. I bought the Polk Monitor 70s (with a Polk CS20 center and Monitor 30s in the back) and a BIC F12 sub, and I'm going to buy the Polk UltraFocus 8000.

Anyway, I'm more excited about TrueAudio than the extra performance. But of course both are welcome on their new video cards.

Since my 6950 lasted me quite some time and it's still doing well on new games, I'll most likely spend $250-350 again on a new video card (and keep my current CPU/board setup).
 
People, please understand there is no such thing as "CISC". RISC is not a processor design, it's a set of principles and a philosophy for simplifying processor designs to maximize space usage and efficiency. So there is really "RISC" and "not-RISC", though the latter is often labeled "CISC".

This is important to know because RISC itself offers some serious advantages in hardware design, namely that instructions have uniform execution times and memory-access instructions are separate from logic and execution instructions. The downside is that the software compiler must be good at extracting maximum usage out of those resources.

Anyhow, the DT world is dominated by Microsoft and the DT gaming world is dominated by DirectX, with consoles using various APIs chosen by their creators. With Nvidia open-sourcing their drivers and Gallium3D being functional, we can expect to start seeing gaming on Linux become viable.

Contrary to what some people believe, Linux is not superior to NT; they are merely different. Linux allows far greater control over its internal workings than NT and thus can be optimized for many different things; NT, on the other hand, is generic and optimized for a large variety of workloads.

Also, "Linux" is not an operating system; it's a kernel and a set of standards to interface with that kernel. Android, SteamOS, RHEL, CentOS, Slackware, SUSE, Debian - those are all operating systems. Because they all share a common kernel and common standards, software is fairly compatible between them. You can take that kernel + standards and optimize them for fast graphics processing and "gaming", and this will perform better than the same software running on an NT OS that is not optimized for gaming.
 


I don't really like closed, proprietary standards from any camp. I won't celebrate MANTLE until AMD extends it as an open standard. Since it's an ISA, nVidia and even Intel should/must be able to support it.

I really hope AMD doesn't F it up on that front, since they would be on top of gamers (pseudo-monopoly). Any additional info on MANTLE?

Cheers!
 


It's my understanding that Mantle is an open standard (open source is for software) and that Nvidia could easily program their drivers to communicate in it. Now, whether they choose to do so or not is a different story.

 

juanrga

Distinguished
BANNED




Effectively it is open (unlike DirectX). Nvidia could choose to support it (by writing a Mantle driver?), but probably will not.

http://www.techspot.com/news/54134-amd-unveils-revolutionary-mantle-api-to-optimize-gpu-performance.html

As said before, not all of this is new. E.g., the low-level access provided by MANTLE seems to be a return to/derivative of AMD's CTM. The AMD presentation claimed MANTLE can manage 9x more draw calls than other APIs, but I think this is compared to an old DirectX version; I believe the overhead is reduced to ~2x in modern DirectX 11. Of course, ~2x is still a significant overhead, and eliminating it is welcome! Everyone wins except Microsoft :)
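To illustrate what "draw-call overhead" means in practice, here is a hypothetical Direct3D 11-flavored C++ sketch (my own illustration, not AMD's code). Submitting one small draw per object pays the API's fixed per-call CPU cost thousands of times per frame; an instanced submission pays it once. A thinner API lowers that fixed per-call cost, which is what the "9x more draw calls" claim is about.

```cpp
// Hypothetical sketch: the context, mesh data and instance buffer are assumed
// to be set up elsewhere; only the submission pattern is the point.
#include <d3d11.h>
#include <vector>

struct Object { UINT indexCount; UINT startIndex; INT baseVertex; };

// Naive path: one DrawIndexed per object. Every call pays the runtime's fixed
// CPU cost (validation, state tracking, command-buffer writes).
void draw_per_object(ID3D11DeviceContext* ctx, const std::vector<Object>& objs) {
    for (const Object& o : objs)
        ctx->DrawIndexed(o.indexCount, o.startIndex, o.baseVertex);
}

// Batched path: identical meshes submitted as instances in one call, so the
// per-call overhead is paid once instead of objs.size() times.
void draw_instanced(ID3D11DeviceContext* ctx, UINT indexCountPerMesh, UINT instanceCount) {
    ctx->DrawIndexedInstanced(indexCountPerMesh, instanceCount, 0, 0, 0);
}
```

Batching/instancing is the workaround developers already use under DirectX; a lower-overhead API simply makes the naive path less painful.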

There is one thing I don't understand. Traditionally consoles have competed against gaming PCs thanks to low-level APIs (and even non-API access to hardware) and fixed specs. A console can perform roughly 2x faster than a Windows PC with similar hardware.

If a low-level API is now available for PCs, this eliminates one of the traditional advantages of consoles.

I believe the most exciting part is that MANTLE is cross-platform: Windows/Linux/console. Game developers can now easily port the same game to different platforms, which should mean more ports, and higher-quality ones.
 

juanrga

Distinguished
BANNED
Cazalan: Valve has just promised that the ~3000 games on Steam will be available on SteamOS for the beta of the SteamBox:

http://store.steampowered.com/livingroom/SteamMachines/

Combine that with MANTLE and I think we can say that the main advantage of Windows (games) is gone.
 

juanrga

Distinguished
BANNED


This is why, as mentioned above, no modern x86 chip implements x86 directly in hardware; it first translates x86 instructions into RISC-like uops, which are then executed by the hardware. Of course the uop-translation 'trick' doesn't eliminate other deficiencies: the complexity of implementing an x86 decoder on the chip remains. I think this is why AMD needed three generations (Bulldozer/Piledriver/Steamroller) before implementing dual decoders per module.

RISC couldn't reach its full potential decades ago, when compilers were in their infancy, but I think it is just the contrary today. The CISC (or "non-RISC", as you prefer) approach of the x86 ISA, with variable-length instructions and a small register set, interferes with modern compiler optimizations.
 

BeastLeeX

Distinguished


We would always need a trusty copy of XP 32-bit, just for those older games like Baldur's Gate and Total War: Rome.
 
It's my understanding that Mantle is an open standard (open source is for software) and that Nvidia could easily program their drivers to communicate in it. Now, whether they choose to do so or not is a different story.

So in other words: Mantle is no different than PhysX for NVIDIA, right?

And yet, one is embraced while the other is shunned. For the same exact reasons...

In any case, I can basically guarantee Mantle is almost never going to get used. Three main reasons why:

1: Because Mantle is AMD only, you still need a DX/OGL render path. No dev is going to waste time and money maintaining multiple different rendering paths. For a title with XP support, for instance, you'd likely have THREE render paths: DX9, DX11, and Mantle (see the sketch after this list).

2: With Mantle, you run into SERIOUS problems as the hardware changes. Sure, you may gain 50% performance on one generation of cards; then two generations down the road the hardware design changes, and guess what? Your carefully optimized to-the-metal code suddenly runs like crap, because the assumptions you made about the hardware no longer hold true. That's why to-the-metal coding is ONLY used for embedded systems that have locked hardware AND need the performance. Neither of those conditions applies to PCs.

3: Developers like me HATED having to have separate device drivers for every single piece of hardware; that's why Glide, and later OGL and DX, became so widely adopted. Going back to needing optimized code for each family of GPU is not going to catch on with developers.
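To make point 1 concrete, here is a rough C++ sketch (hypothetical, not from any real engine) of what "multiple render paths" means for a game: each backend is a separate implementation of the same interface, and every feature, bug fix and test pass has to be repeated once per backend.

```cpp
// Hypothetical engine skeleton: one abstract interface, one implementation
// per render path the game has to ship and maintain.
#include <memory>
#include <stdexcept>
#include <string>

class RenderBackend {
public:
    virtual ~RenderBackend() = default;
    virtual void drawFrame() = 0;   // each backend issues its own API calls
};

class D3D9Backend   : public RenderBackend { public: void drawFrame() override { /* DX9 calls */ } };
class D3D11Backend  : public RenderBackend { public: void drawFrame() override { /* DX11 calls */ } };
class MantleBackend : public RenderBackend { public: void drawFrame() override { /* Mantle calls */ } };

// Chosen once at startup; everything behind this interface is duplicated work.
std::unique_ptr<RenderBackend> makeBackend(const std::string& name) {
    if (name == "dx9")    return std::make_unique<D3D9Backend>();
    if (name == "dx11")   return std::make_unique<D3D11Backend>();
    if (name == "mantle") return std::make_unique<MantleBackend>();
    throw std::runtime_error("unknown render path: " + name);
}
```

Every new GPU feature, driver quirk and regression test multiplies by the number of backends, which is exactly the cost argument above.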
 

8350rocks

Distinguished


From what I gathered, it will be A LOT like a really low-level Glide API...
 

8350rocks

Distinguished


We're talking past each other...you don't get it...clearly.

The LIMITATION of ARM is its inability to do EVERYTHING that x86 can do on the high end as efficiently.

I think that sums up, in layman's terms, the best explanation I can give you. No amount of marketing slides or other stuff is going to change the fact that ARM failed as Acorn and now licenses its cores.

They tried selling their processors as a business and failed. Yes, they're good for mobile...however, they're not good for dense computing with long instruction sets, and ARM will *NOT* surpass x86 in *ANY* market for a long time to come (excluding mobile). I won't say never, but it will be a long time from now *IF* it ever happens.
 

etayorius

Honorable


You paying attention? MANTLE is open, and it will work on non-AMD hardware; it just won't see a benefit there.

nVidia could, if they want to, implement it for their GPUs, since they recently promised to support any open standard. Besides, AMD is all over the consoles, and nVidia has little choice in the matter... unless they want to shoot themselves in the foot by ignoring AMD tech.
 




Thanks for the additional information on Mantle, guys.

I do share the same concern as you, gamerk, and have a really big doubt here... Since it's a new ISA for GPUs, wouldn't DX and OGL, through drivers, include it seamlessly? It's like having a binary for x86 and wanting another binary for, say, PPC: you need to recompile it or use a compatibility layer (emulator). There are several ways you can get around it without having to re-write the code, IMO. For old stuff, I'd say the overhead won't be bad.

Also, devs stop supporting their old games after 3 or 4 generations of video cards / CPU cycles. This is usually a 3-4 year cycle, especially with new Windows releases. Remember when DX9a came out? Win98 was left in the dust and we all HAD to move to XP.

Cheers!
 


You paying attention? [PhysX] is open, and it will work on non-[NVIDIA] hardware; it just won't see a benefit there.

[AMD] could, if they want to, implement it for their GPUs, since they recently promised to support any open standard. Besides, [PhysX] is all over the consoles, and [AMD] has little choice in the matter... unless they want to shoot themselves in the foot by ignoring [NVIDIA] tech.

I hope I made my point here. I really do.
 
I do share the same concern as you, gamerk, and have a really big doubt here... Since it's a new ISA for GPUs, wouldn't DX and OGL, through drivers, include it seamlessly? It's like having a binary for x86 and wanting another binary for, say, PPC: you need to recompile it or use a compatibility layer (emulator). There are several ways you can get around it without having to re-write the code, IMO. For old stuff, I'd say the overhead won't be bad.

Emulating low-level code is much more expensive than simply re-writing in a higher-level language. Not worth the effort. You aren't going to see MANTLE emulated in DX/OGL; you are instead going to have to code a separate DX/OGL render path.

At the end of the day, this is more likely to force MSFT to make significant changes to the DX API, rather than MANTLE taking off.

Also, devs stop supporting their old games after 3 or 4 generations of video cards / CPU cycles. This is usually a 3-4 year cycle, especially with new Windows releases. Remember when DX9a came out? Win98 was left in the dust and we all HAD to move to XP.

Not as true for DX10, or even DX11, though. And remember, game engines last a LONG time now; how long has Unreal 3 been around? You think there's going to be a lot of support for re-writing the game engine every time the GPU hardware changes?


Hence the major concern: while adding GCN support and also supporting a traditional DX render path, what ISN'T being done? Adding features? Testing? SLI/CF support?
 


Glide was a very well-designed API for its time, and very fast due to being tightly coupled with the hardware.

That being said, Glide was limited to just what Voodoo cards supported, so when other cards (NVIDIA's GeForce 256) started to move in other directions and took advantage of features DX/OGL supported that Voodoo cards didn't, 3dfx became non-competitive.

From what I've gathered, MANTLE is essentially the PS4/XB1 low-level render API, supporting all the goodies you NEVER see on PCs, such as direct access to the GPU framebuffer [which is a MAJOR performance booster if used right]. It's tightly coupled with the underlying hardware, but since AMD is supporting GCN on all three platforms, there's not much danger in porting MANTLE to the PC.

However, hardware has to change over time. Let me ask: what happens if DX starts moving in a direction that GCN doesn't support in HW? What happens to performance if AMD has to significantly change the HW design going forward? And so on.

Never mind that NVIDIA is as likely to support this as AMD is to support PhysX. [Both are free, open standards]. Never mind that the underlying hardware is totally different.

Also, FYI, NVIDIA has had its own low-level renderer for years now. No one uses it, though, for the same reasons I posted a few posts back.
 

etayorius

Honorable
Since when is PhysX open? What the hell are you on? PhysX is a closed tech; they just released the display-driver documentation on Linux as open source. They still have to confirm PhysX will be open too, which I doubt, and it won't be any time soon. PhysX cannot even run on AMD GPUs; it may run on AMD CPUs, but not on the GPUs. So I am not sure what sort of meds you are on.

Oh, by the way, can't you just edit your last comment like everyone else does instead of posting again? We do not need 10 of your comments one after another. I hope I made myself clear.

There is nothing about PhysX going open source at this moment, and IF that ever happens, expect AMD to support it; they have already said MANY TIMES they will support any open standard. There is a huge difference between open driver documentation and CUDA/PhysX going open source.

http://arstechnica.com/information-technology/2012/06/linus-torvalds-says-f-k-you-to-nvidia/

Torvalds has responded to Ars, saying he's optimistic but not quite ready to apologize to Nvidia. "We'll see," Torvalds wrote in an e-mail. "I'm cautiously optimistic that this is a real shift in how Nvidia perceives Linux. The actual docs released so far are fairly limited, and in themselves they wouldn't be a big thing, but if Nvidia really does follow up and start opening up more, that would certainly be great."
 

8350rocks

Distinguished
@gamerk

I agree about a lot of the limitations; however, it does provide some nice features for ports to PC. Additionally, AMD stated somewhere in the presentation that MANTLE would be launched AFTER the DX11 version of BF4 came out. I read this to mean that MANTLE is something AMD users can possibly download, something akin to Catalyst if you will, that would allow them to make better use of their GPUs.

I could be reading too much into it...however, if they had the software for developers to use, and AMD users specifically could DL the code-path updates to the game on PC, that might be the most amazing idea ever to come out for PC hardware.

They could totally botch this, or they could totally hit a home run. With Raja Koduri likely a driving force behind it, I think they'll succeed. However, and take this with a grain of salt, I think it will lead to a MASSIVE second look at DX and the latency issues involved in that API.

One of the things I liked most about MANTLE is that it accepts HLSL, which means the code paths may not have to be that different from a developer's perspective. Keep in mind, AMD has stated emphatically that they are giving developers what *they* wanted. If that's truly the case, I think MANTLE could end up being *FAR* more versatile than we initially expect/suspect.
 