AMD CPU speculation... and expert conjecture


etayorius



I don't think it's a 20 min conversion... it does save time and in the end it's worth it, but it's not even a 1 day port. I'm not sure, but I think I read that between 1,000 and 2,000 lines of code had to be changed manually as DX leftovers, which took them 1-2 months of work.


They also claim that it was a very hurried port to MANTLE; a good port built from the ground up to fully use MANTLE can take up to 3 months and 1/10 of the budget, which is quite fast/affordable in terms of game development cost.

Anyway, why the hell is no one talking about nVidia GameWorks? It can cripple performance by up to 40% on non-GeForce hardware and leaves AMD without the possibility to optimize the game themselves... no one, seriously NO ONE, is talking about that. WTH.

I find GameWorks a horrible SCHEME. I know this is an AMD thread, but seriously, people need to be aware of this crap. I told several of my Steam friends and they were like: "So what? AMD has always been crap with their drivers." I told them this was different because it will cripple performance on non-nVidia hardware, and they all replied: "I don't care, I'm on nVidia"... seriously, people, WTF!? I'm on a GTX 780 Ti with a GTX 480 for PhysX, but hell, GameWorks is just evil... and people still insist EA is the worst company in the US.

In regards to OpenGL, I think it will be the future of gaming, and I can't wait till AMD and nVidia start tapping the true power of OpenGL, which can be up to 15x faster than DX, basically faster than even MANTLE:

http://www.kdramastars.com/articles/17889/20140321/opengl-vs-directx-gdc-2014.htm


http://www.dsogaming.com/news/amd-aims-to-give-opengl-a-big-boost-api-wont-be-the-bottleneck/

DX12 is good news, but I'd rather see OpenGL go more mainstream, and I hope the game industry moves as far away from MS as it can.

DX12 will only offer half of what MANTLE does in terms of low-level access. I think devs should really support OpenGL, MANTLE and DX12 from now on... YES, all three of them in their titles.
 

8350rocks



It takes less than 1 month... I exaggerated slightly by saying 20-30 minutes... however, from what I have seen and heard from the Crytek guys and AMD, you are looking at roughly a week of adjustments. (I have not had hands-on yet... waiting on an updated render path from Crytek.) Now, if they were discussing QA and debug for polish, that seems like a reasonable time frame for the whole process...
 

colinp

There's a reason why OpenGL is the only thing that will ever get people from Intel, Nvidia and AMD all together up on stage outside of a cage-fighting arena. It's because they have a shared interest in ensuring its success, as only OpenGL will support their hardware on any operating system and nearly all form factors. DirectX supports one operating system on 2 form factors (if you count both Xbox and PCs).

And game makers are switching on to this fact as well. Wider support = more sales, simple as that. And don't forget that today's high-end PC title could be on the Android / iOS market in a few years' time, and it'll be a darn sight easier and cheaper to convert if it's already using OpenGL.
 

juanrga



I respect your opinion, but I want to remark that other developers disagree on some of your points. From the link given above:

At the Game Developers Conference 2014, in a panel including NVIDIA's Cass Everitt and John McDonald, AMD's Graham Sellers, and Intel's Tim Foley, explanations and demonstrations were given suggesting OpenGL could unlock as much as a 7X to 15X improvement in performance. Even without fine tuning, they note that in general OpenGL code is around 1.3X faster than DirectX. It almost makes you wonder why we ever settled for DirectX in the first place—particularly considering many developers felt DirectX code was always a bit more complex than OpenGL code. (Short summary: DX was able to push new features into the API and get them working faster than OpenGL in the DX8/9/10/11 days.) Anyway, if you have an interest in graphics programming (or happen to be a game developer), you can find a full set of 130 slides from the presentation on NVIDIA's blog. Not surprisingly, Valve is also promoting OpenGL in various ways; the same link also has a video from a couple weeks back at Steam Dev Days covering the same topic.
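The headline numbers in that talk come from techniques like persistent-mapped buffers, part of the "approaching zero driver overhead" approach the panel presented. A minimal C++ sketch of the idea, assuming an active OpenGL 4.4 context with loaded entry points (e.g. via glad); FRAME_BYTES is a made-up size:

    // Immutable storage, mapped once for its whole lifetime: no per-frame
    // glMapBuffer/glUnmapBuffer round trips through the driver.
    const GLsizeiptr FRAME_BYTES = 1 << 20;   // hypothetical per-frame data size
    const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                             GL_MAP_COHERENT_BIT;
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glBufferStorage(GL_ARRAY_BUFFER, 3 * FRAME_BYTES, nullptr, flags);
    char* ptr = static_cast<char*>(
        glMapBufferRange(GL_ARRAY_BUFFER, 0, 3 * FRAME_BYTES, flags));
    // Each frame: write vertex data into region (frame % 3) of ptr, draw,
    // and guard reuse of that region with glFenceSync/glClientWaitSync.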

Nvidia developers have also shown that OGL can do some things that are impossible to do on DirectX.



DX12 only offers a subset of the optimizations available on MANTLE. AMD has stated that it will support both DX12 and MANTLE, and will continue improving MANTLE.
 

juanrga

As shown before, DX12 is Microsoft's last attempt to save the Xbox One from complete disaster. But Brad Wardell considers that even with the performance boost provided by DX12, the PS4 is still faster:

http://www.cinemablend.com/games/PS4-Substantially-Better-Than-Xbox-One-Even-With-DirectX-12-Says-Stardock-CEO-63486.html

His plain characterization of DX11's inefficiency at using the AMD hardware in the Xbox One is very interesting (this also explains why AMD hardware has traditionally performed worse under Windows games than under Linux games):

One way to look at the XBox One with DirectX 11 is it has 8 cores but only 1 of them does dx work. With dx12, all 8 do.
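To make Wardell's point concrete: under DX11 essentially one thread builds and validates the command stream, while a DX12-style model lets every core record its own command list and then submits the pre-built lists cheaply. A toy C++ sketch of that model (not actual Direct3D code; Cmd and the counts are invented for illustration):

    #include <cstdio>
    #include <thread>
    #include <vector>

    struct Cmd { int draw_id; };            // stand-in for one draw call
    using CmdList = std::vector<Cmd>;

    int main() {
        const int cores = 8, draws = 8000;
        std::vector<CmdList> lists(cores);  // one command list per core
        std::vector<std::thread> workers;

        // "DX12 style": all 8 cores record commands in parallel, no global lock.
        for (int c = 0; c < cores; ++c)
            workers.emplace_back([&lists, c, cores, draws] {
                for (int i = c; i < draws; i += cores)
                    lists[c].push_back(Cmd{i});
            });
        for (auto& t : workers) t.join();

        // A single cheap submit of the pre-built lists (the GPU queue).
        std::size_t total = 0;
        for (const auto& l : lists) total += l.size();
        std::printf("submitted %zu draws recorded on %d threads\n", total, cores);
    }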
 

8350rocks

I'm not contesting that OGL can run better than D3D. I never said it wasn't less resource intensive.

However, making the API do it is quite convoluted and far more complicated.

As I said, OGL has a LOT of legacy features that really should have been written out long ago. As much as I like open source, unless they dial back all the legacy stuff in OGL, developers are going to lament using it. The API has some good stuff in it, but it is so user-combative that it really is not conducive to getting new developers to work in it.
 
Which raises the question: does preemption make sense in all cases anymore? Wouldn't it make sense, on multi-core systems, to allow the programmer to specify threads that can NEVER be preempted unless manually overridden, for the purpose of maximizing performance?

And that's exactly what consoles do. By guaranteeing the program has a static set of resources available, it allows the developer maximum flexibility in writing their code.
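For what it's worth, the closest a general-purpose OS gets to that today is pinning a thread to a core and giving it a real-time scheduling class. A minimal Linux/C++ sketch (needs root or CAP_SYS_NICE; the core number and priority are arbitrary choices, and SCHED_FIFO only stops ordinary threads from preempting it):

    // pthread_setaffinity_np is a GNU extension; compile with g++ -pthread.
    #include <pthread.h>
    #include <sched.h>

    void* render_loop(void*) {
        // latency-critical work would live here
        return nullptr;
    }

    int main() {
        pthread_t t;
        pthread_create(&t, nullptr, render_loop, nullptr);

        cpu_set_t set;                      // pin to core 2 so it never migrates
        CPU_ZERO(&set);
        CPU_SET(2, &set);
        pthread_setaffinity_np(t, sizeof(set), &set);

        sched_param sp{};                   // SCHED_FIFO: runs until it blocks or
        sp.sched_priority = 10;             // yields; SCHED_OTHER threads cannot
        pthread_setschedparam(t, SCHED_FIFO, &sp);  // preempt it
        pthread_join(t, nullptr);
        return 0;
    }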

Solaris has something kind of similar in how FSS is implemented. You can assign CPU shares to specific projects (collections of processes), to zones, or to individual processes. Project and zone assignments can be permanent; per-process shares need to be set every time the process is run. Thread homing is another feature that attempts to keep threads executing on the same processor they started on, though it will move them if there is contention. When it's time for a thread to get its run on the CPU (FSS controls how often it's allowed to run), it will be run on an idle CPU before any other process is evicted. This is the primary reason modern SPARC CPUs have massive core/thread counts: it prevents process eviction and allows critical processes to run uninterrupted.

When AMD designed the BD uArch, they were basically taking a page out of the SPARC book and going for that massively wide "many things going on at once" system design. They failed to take into account how the NT kernel schedules threads and how most users actually use their home PCs. And while I think that eventually we'll need that kind of environment at home, we aren't anywhere close to it yet.
 
Let me make it clear: OGL is a PITA to code in. It's stuck supporting 20+ year old rendering programs, so it will never get the major rewrite it desperately needs. I've had to support it once, and I pray I never have to support the thing again. Worst API I've EVER worked with. So OGL is not going to be the answer.
 

juanrga



Ok, but my quote included the next part:

particularly considering many developers felt DirectX code was always a bit more complex than OpenGL code.

And it is not the first time that I have read about OGL being simpler. I suppose we can agree that "simple" and "complex" are subjective words. What is complicated for you can be easy for me, and vice versa.
 

juanrga

Some time ago I shared AMD's plan to use APUs for supercomputers. The response I received was that the link was very old and that AMD had actually changed its mind...

I am proud that in a recent 2014 interview, AMD continues to state its plan to use APUs for supercomputers:

http://www.hpcwire.com/2014/02/13/amd-refreshes-vision-hpc-future/

Some potential users:

He notes that initial interest in their APUs is coming from a range of potential areas, including the oil and gas industry (where there’s some stiff competition since many oil and gas shops were early CUDA adopters) as well as for those seeking a platform for Hadoop framework and machine learning or pattern recognition workloads.

A hint at future plans:

While AMD has only released their 2014 roadmap thus far, Gopalakrishnan says that further down the line, they’re planning on rolling out a steady stream of GPU capability increases that follow the same trajectory of their Radeon discrete GPUs. They’re also bolstering efforts on the developer ecosystem around their APU, by bringing new compilers from PGI and others, working to support OpenACC directives, partnering with SUSE for a GCC compiler to provide OpenMP directives and perhaps most important for their broader goals, working with Oracle to bring Java to entire GPU/CPU side.

The history of what happened at SC13, and what real HPC experts demanded of AMD, is also interesting. They asked AMD for the same things I do :)
 

blackkstar


No one talks about GameWorks because it only goes as far as Tegra. If you're making a cross-platform game, why would you want to target only Nvidia PC users and cripple the Xbone and PS4 with their AMD GPUs and CPUs? By using GameWorks, you're basically guaranteeing that your console version is going to be far worse than it could be and that you'll be behind the competition.

This is what I was talking about long ago regarding the importance of AMD's console wins. It completely crushes The Way It's Meant to be Played, and lord knows what other crooked things Nvidia does, because it means that if you want to side with Nvidia, you have to turn your back on the new consoles.

The only big Nvidia title I see coming is Watch Dogs, and look at what happened there. Everyone is whining about how the graphics quality has gotten worse and how we were lied to. It just seems to coincide with Nvidia getting their fingers into Watch Dogs.

@Juanrga, yes, of course they will use APUs. HSA is too strong, but that doesn't mean they're going to settle for APU HSA forever. I am glad to see it though; it's a small step in the direction I would like to see. And it will give us some good HSA software, or at the very least some programmers with experience in HSA tools.
 
http://www.pcper.com/reviews/Graphics-Cards/AMD-Mantle-and-NVIDIA-33750-Scaling-Demonstrated-Star-Swarm-AM1

It compares the latest NVIDIA DX driver drop to Mantle:

[Chart: Star Swarm average FPS, Mantle vs. DX11, for the R7 260X and GTX 750 Ti on an Athlon 5350 and a Core i7-3960X]


Impressively, the Radeon R7 260X sees a performance improvement of 91% by enabling the Mantle version of Star Swarm on the lower end Athlon 5350 APU. On that same system, NVIDIA's GeForce GTX 750 Ti is able to achieve a 49% frame rate increase while continuing to use DirectX 11 and improving driver efficiency. On the high end platform with the Core i7-3960X, the R7 260X GPU improves by just 27% from enabling Mantle; the GTX 750 Ti scales by 21%.

So NVIDIA is able to squeeze out almost as much performance via DX driver-side improvements.
 
No one talks about GameWorks because it only goes as far as Tegra. If you're making a cross-platform game, why would you want to target only Nvidia PC users and cripple the Xbone and PS4 with their AMD GPUs and CPUs? By using GameWorks, you're basically guaranteeing that your console version is going to be far worse than it could be and that you'll be behind the competition.

No, really; you could always re-implement the APIs through OpenCL rather than CUDA. AMD is just as free to support these APIs as NVIDIA is to support Mantle.

Hence why I've said from day 1 that there's ZERO chance any company not named AMD supports it. Mantle = PhysX.
 

con635


Nice article, but once again there are no graphs showing GPU/CPU usage and no minimum frame rates...
 
Silly question: Why does CPU/GPU usage matter? Isn't the goal performance, not metrics?

The point here is that in a game DESIGNED around Mantle, NVIDIA squeezes out just as much performance via driver-side improvements. Star Swarm is supposed to be the "best case" for Mantle gains.
 

juanrga



Of course they will use them. I was 100% sure when I made the claim several pages ago, but then some posters pretended it was impossible...



Hmm, sorry, but 49% is nowhere close to 91%. Of course, with a $1000 i7 the CPU bottleneck shrinks a lot (for a single-card config), and a simple driver update can be effective for a single-card config.

That aside, you are misinterpreting their findings. Are you aware that the same link was given before and was commented on a couple of messages above yours?
 

8350rocks



Juan, you can sit back and quote others about the differences; however, I LIKE Linux, and I am telling you: no, it is not. What would I stand to gain by lying? Nothing...

Ok, trust me...OGL is absolutely *NOT* easier. Unless we are talking OGL 2.0 vs. D3D back in the 1990s...at that time it may have been true. However, in the last 8-10 years, it has absolutely not been the case.

EDIT: Is your quote referring to HLSL? If that is the case, the language is more complex to a degree, but not unnecessarily. Additionally, I am not referring to coding languages themselves, I am referring to implementation.

This is the best meat-and-potatoes explanation of the difference between the two for outsiders that I have ever seen: http://programmers.stackexchange.com/questions/60544/why-do-game-developers-prefer-windows
 
Hmm, sorry, but 49% is nowhere close to 91%. Of course, with a $1000 i7 the CPU bottleneck shrinks a lot (for a single-card config), and a simple driver update can be effective for a single-card config.

That aside, you are misinterpreting their findings. Are you aware that the same link was given before and was commented on a couple of messages above yours?

The larger % is due in part to the 750 Ti being 2 FPS faster under DX and 2 FPS behind the Mantle result, both of which fall into the "statistical noise" category. I tend to allow a 1-2 FPS fudge factor in benchmarking for that reason, so both are performing just about the same.
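A quick C++ illustration of why a 1-2 FPS fudge factor matters (the FPS pair is made up; only the arithmetic is the point): at low frame rates, the same 2 FPS of noise swings the "scaling" figure dramatically.

    #include <cstdio>

    int main() {
        double base = 11.0, boosted = 21.0;   // hypothetical before/after FPS
        std::printf("nominal scaling: %.0f%%\n",
                    (boosted / base - 1.0) * 100.0);                 // ~91%
        std::printf("with 2 FPS of noise against it: %.0f%%\n",
                    ((boosted - 2.0) / (base + 2.0) - 1.0) * 100.0); // ~46%
        return 0;
    }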

Ok, trust me...OGL is absolutely *NOT* easier. Unless we are talking OGL 2.0 vs. D3D back in the 1990s...at that time it may have been true. However, in the last 8-10 years, it has absolutely not been the case.

Anyone who thinks OGL is easier to use than D3D needs to have their head examined. It's not even close. OGL is still mired in its classic C-style interfaces, whereas most of D3D is fully object-oriented now. Imagine coding in nothing but Java, then having to go back and do something in C. And I mean standard C; none of those fancy extensions you get now. Not fun, is it?
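For anyone who hasn't touched either API, the style gap looks roughly like this when creating a vertex buffer. Two illustrative C++ fragments (each assumes its own headers plus an initialized context or device, so they won't build together as-is): OpenGL mutates hidden global state through a bind point, while D3D11 calls methods on objects.

    // OpenGL: C-style, bind-to-edit global state (assumes a current GL context).
    GLuint make_vbo_gl(const void* data, GLsizeiptr size) {
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);        // the edit point is implicit
        glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);
        return vbo;
    }

    // Direct3D 11: object-oriented, everything hangs off the device.
    ID3D11Buffer* make_vbo_d3d(ID3D11Device* dev, const void* data, UINT size) {
        D3D11_BUFFER_DESC desc = {};
        desc.ByteWidth = size;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        D3D11_SUBRESOURCE_DATA init = {};
        init.pSysMem = data;
        ID3D11Buffer* vbo = nullptr;
        dev->CreateBuffer(&desc, &init, &vbo);     // method on an object
        return vbo;
    }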
 

con635


The average fps doesn't really matter in this demo; its goal is to demonstrate how Mantle gets rid of the horrible GPU usage drop / CPU bottleneck under DX. If the DX11 version on the Nvidia card gets 10-20 more fps when there aren't many units on screen but still has the horrible stutter-fest minimum fps, it's not much use, is it? A simple screenshot showing 100% GPU usage right through the test, or even minimum fps, would say much more than average frame rate. I think you should try Mantle before rubbishing it. I'll try to get the screens I promised tomorrow. A good example would be Arma 2: if it had Mantle, I'm sure it would hold 60 fps and not have the horrible frame drops in towns.

 

Cazalan




Maybe the part after Carrizo. They would have had to tape out on Samsung 14nm prior to this announcement to have any chance of it being out in 2015. Tape-out to fab is like 16 months at best.
 

wh3resmycar



I have 2 Sapphire R7 260X OC cards... I didn't see any coupons inside the box for the Never Settle bundle. I contacted Sapphire and was told to go direct to AMD... does customer support at support@amd4u.com actually reply?
 