AMD CPU speculation... and expert conjecture

And in other news: Real-Time Ray Tracing is closer than I thought:

http://techreport.com/news/26178/powervr-wizard-brings-ray-tracing-to-real-time-graphics

If we were to assume an average of a single light ray per pixel, the hardware would be capable of driving a five-megapixel display at 60Hz. Realistically, though, you'll need more rays than that. Imagination Technologies' James McCombe estimates that movies average 16-32 rays per pixel in order to get a "clean" result. He notes that this first Wizard GPU is capable of tracing an average of 7-10 rays per pixel at 720p and 30Hz or 3-5 rays/pixel at 1080p and 30Hz.

We're getting there. Figure another 10 years before we start to see the switch. Would be interesting to see a subset of effects done via ray-tracing though...
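For anyone who wants to sanity-check those figures, here's a quick back-of-the-envelope calculation using only the numbers quoted in the article (the 1280x720 and 1920x1080 resolutions are assumed):

```python
# Rough check of the Wizard GPU's ray budget from the quoted figures.

def rays_per_second(width, height, rays_per_pixel, hz):
    """Total rays traced per second at a given resolution, ray count, and refresh rate."""
    return width * height * rays_per_pixel * hz

# 5-megapixel display at 60 Hz, 1 ray per pixel
budget = 5_000_000 * 1 * 60                        # ~300 Mrays/s

# 720p at 30 Hz, 7-10 rays per pixel
r720_low  = rays_per_second(1280, 720, 7, 30)      # ~194 Mrays/s
r720_high = rays_per_second(1280, 720, 10, 30)     # ~276 Mrays/s

# 1080p at 30 Hz, 3-5 rays per pixel
r1080_low  = rays_per_second(1920, 1080, 3, 30)    # ~187 Mrays/s
r1080_high = rays_per_second(1920, 1080, 5, 30)    # ~311 Mrays/s

print(budget, r720_low, r720_high, r1080_low, r1080_high)
```

All three scenarios land around the same ~300 Mrays/s ceiling, while a "clean" film-style image reportedly needs 16-32 rays per pixel, hence the long wait before any real switch.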
 

truegenius

Distinguished
BANNED


Smells like AMD :miam: or a :bug:


10x ! ಠ_ಠ


Meanwhile, the GTX 790 has been sighted:
http://wccftech.com/details-nvidia-geforce-gtx-790-finally-surface-4992-cuda-cores-10-gb-memory-5/
The funny thing to notice is this part:
Nvidia ”GTX 790” Will have 10 GB GDDR5 of Memory, 640Bit Bandwidth and 4992 Cuda Cores – Stated to Release sometime February.
640-bit bandwidth!
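For the record, 640-bit is a memory bus width, not a bandwidth. A rough conversion, assuming a typical ~7 Gbps GDDR5 data rate per pin (my assumption, not a leaked spec):

```python
# Bandwidth = bus width (bytes) x data rate per pin; the 7 Gbps figure is an assumption.
bus_width_bits = 640        # presumably two 320-bit GPUs on one board
data_rate_gbps = 7.0        # assumed effective GDDR5 rate per pin

bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(f"{bandwidth_gbs:.0f} GB/s")   # 560 GB/s under these assumptions
```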
 

jdwii

Splendid


Some claim the performance benefit is much higher than that. Again, times are different now; it's not like it was in 2000. We need more efficient programs that run faster even on weaker hardware, so it saves on battery life. If you ask me, programmers have been living in an easy time lately. I know from experience that higher-level languages take less time to learn, and programs made in higher-level languages also take less time to code. I'm not really into programming, but as a hardware + networking guy I see the future as being more efficient programs taking advantage of both the CPU and GPU in the most efficient way possible.
Back in 2008 the argument was "why do I need more powerful hardware, my desktop and laptop are fast enough, but I wish my tablet/cellphone/laptop battery would last longer" - so now we are making hardware more efficient, and yeah, software is next.
 


Higher-level programming languages are made for stupid people. That's a cold, hard (and sad) fact. Good for broadening the coding spectrum, bad for code quality in the long term.

Since most companies that develop software today need their stuff out fast, you need less-skilled programmers to pick up code, and that in turn produces what we all call "monkey coders": people delivering poor code quality and depending on QA (quality assurance) folks to detect even the stupidest issues with the code. Also, to business people, more monkey coders equals more code, meaning delivering the software faster, haha. That rule of thumb used by most business people I know of, besides being very stupid, is so inaccurate that it's hard to believe anyone would actually follow it.

Long story short, more than software needing to "scale up" or "step up" in quality, we need more software competition to make companies actually code better. Companies should start competing on software quality instead of on feature lists.

And yes, all this rant is related to MANTLE, haha.

Cheers!

EDIT: Typos.
 

jdwii

Splendid
http://www.forbes.com/sites/jasonevangelho/2014/03/19/exclusive-amds-matt-skynner-talks-new-crytek-partnership-mantles-beacon-of-leadership/

Seems like Mantle may make it for a little longer at least.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780
Everyone is assuming that because DICE is using Mantle in BF4 to lower CPU requirements, that's all Mantle is good for.

People seem to be forgetting that Star Swarm is taking advantage of the lower CPU requirements by making the game more complex.

If Mantle is making $60 Pentiums run current games as well as a higher-end CPU does, then I don't see why we won't be seeing Mantle games become much more complex to push those higher-end CPUs.

Some of you are being rather silly. DICE used Mantle in a certain way; that doesn't mean it's all Mantle is good for.

I mean, if I made a spinning 3D cube in DX12 and it was the only serious DX12 "game", you wouldn't run around going "OMG THIS IS ALL DX12 CAN DO!"
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


This has been my argument: as I mentioned above, the real performance of MANTLE will be observed in next-gen games with 10x more draw calls.

Current games are limited both in visual effects and in gameplay due to DX bloatware. Artists can design superb graphics, but their original designs are simplified or eliminated in the coding phase because of DX bloatware. Game developers have been asking for the elimination of DX bloatware for years... and still Microsoft ignored them. Now that a gaming company such as AMD has given developers what they asked for, we will see better games that couldn't have run on DX bloatware.

As I have mentioned plenty of times, the Oxide demo gives a hint of the number of draw calls that developers are targeting for next-gen games. MANTLE provides roughly 2-5x the framerate for such targets.
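As a rough illustration of why cheaper draw calls mainly pay off at those higher draw-call counts, here is a toy frame-time model (all numbers below are made-up assumptions for the sake of the sketch, not measurements of MANTLE or DX):

```python
# Toy model: the frame time is set by whichever of CPU submission or GPU rendering is slower.

def fps(draw_calls, cpu_us_per_call, gpu_frame_ms):
    """Frames per second when CPU submission and GPU rendering overlap."""
    cpu_ms = draw_calls * cpu_us_per_call / 1000.0
    return 1000.0 / max(cpu_ms, gpu_frame_ms)

# A "current" scene: 5,000 draw calls, GPU needs 16 ms per frame.
print(fps( 5_000, cpu_us_per_call=3.0, gpu_frame_ms=16))  # ~62 fps, GPU-bound
print(fps( 5_000, cpu_us_per_call=0.6, gpu_frame_ms=16))  # ~62 fps, cheaper calls barely matter

# A "next-gen" scene with 10x the draw calls:
print(fps(50_000, cpu_us_per_call=3.0, gpu_frame_ms=16))  # ~7 fps, completely CPU-bound
print(fps(50_000, cpu_us_per_call=0.6, gpu_frame_ms=16))  # ~33 fps, roughly 5x from the thinner API
```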

The good news is that Microsoft has been forced to change and is finally adopting MANTLE inside DX12.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
Crytek is also supporting MANTLE



"Crytek prides itself on enabling CRYENGINE with the latest and most impressive rendering capabilities,” said Cevat Yerli, Founder, CEO & President of Crytek. “By integrating AMD’s new Mantle API, CRYENGINE will gain a dimension of ‘lower level’ hardware access that enables extraordinary efficiency, performance and hardware control."

 

jdwii

Splendid




That information was already given before, on this same page.
 
That happened much faster than I expected. I figured they would conduct a standards war for a year or two, with one eventually subsuming the other. I didn't expect MS to just skip to the end and absorb Mantle into their DX API, though it makes sense in that it avoids a needless standards war.

To those thinking it will "auto-magically" work on Nvidia / Intel GPUs: you're wrong. Mantle itself is an open API; anyone anywhere can write drivers that support it for their own hardware, and AMD isn't limiting its distribution. MS can't (and won't) write drivers for Nvidia / Intel GPUs; those manufacturers need to implement the Mantle API in their own driver suites just like they would implement any other API feature. The good news is that since it'll have a different "name" on it, Nvidia and Intel won't be forced to plaster "AMD Mantle support" on the feature lists of their own products, which compete with AMD's. Instead they'll just put "Full DirectX 12 support" while implementing AMD's Mantle API inside their drivers. Gotta love marketing, folks.
 

colinp

Honorable
Jun 27, 2012
217
0
10,680
So, let's recap where Mantle is at in its short existence so far:


■ Adopted by DICE for use in Frostbite
■ Adopted by Eidos for use in Thief
■ On its way in Star Swarm
■ On its way in CryEngine
■ Allegedly to be adopted by Microsoft as part of DX12
■ Generally well received on review sites so far, while recognising that it is still early days
■ Well liked by developers, as it opens their games up to people with less powerful hardware (i.e. more sales for them)

Did I miss any?



 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Rebellion Developments will support MANTLE as well. Just announced at GDC:

http://www.overclockers.com/amd-announces-mantle-api-gaming-evolved-partnerships/
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


No.

Haswell i3 leaps up 5%
A10-7850K leaps up 23%

http://www.extremetech.com/extreme/178725-amds-mantle-brings-23-performance-boost-to-thief-trueaudio-impresses

Of course, MANTLE can bring up to 50% better framerates

 

8350rocks

Distinguished


This comes as no surprise. While we were discussing details for use of CryEngine, one of the questions I asked was specifically about MANTLE and its integration. They were already at work on this shortly after the reveal of MANTLE... so I am a bit surprised the announcement has come this far down the road, but yes, Crytek was on board from day 1, basically. Epic is not going to adopt it, and this, coupled with various other issues, is the reason we did not go with UE4.

EDIT: To further elaborate, ALL EA games based on Frostbite 3.0 moving forward will have a MANTLE back end built into the engine. Also, I know of a few other games based on CryEngine, as yet unannounced, that will be offering a MANTLE render path too. It will see lots of use coming up.
 
Assumptions fail: DX12 supports all existing DX11 hardware:

http://techreport.com/news/26199/directx-12-to-support-existing-hardware-first-games-due-in-late-2015

Which hardware will be DX12-compatible? AMD said all of its Graphics Core Next-based Radeon GPUs (i.e. Radeon HD 7000 series and newer) will work with the new API. Nvidia pledged support for all Fermi, Kepler, and Maxwell (i.e. GeForce GTX 400 series and newer) parts. The keynote included a demo of Forza 5 running in DirectX 12 mode atop an Nvidia GPU. Finally, Intel said the integrated graphics in its existing Haswell processors will also have DX12 support. All in all, Microsoft estimates that 100% of new desktop GPUs, 80% of new gaming PCs, and 50% of PC gamers will be able to take advantage of the new API.

Also, it looks like multithreaded rendering got a bit of an improvement (expected). Would need to investigate how hard it is to implement, though.
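The usual pattern these lower-level APIs push is recording command lists on several threads and submitting from one. Here's a conceptual sketch of that idea (not real D3D12 or Mantle code, just the shape of the approach):

```python
# Conceptual sketch: record command lists in parallel, submit them serially.
from concurrent.futures import ThreadPoolExecutor

def record_command_list(objects):
    """Stand-in for per-thread command recording."""
    return [f"draw({obj})" for obj in objects]

scene = [f"mesh_{i}" for i in range(16)]
chunks = [scene[i::4] for i in range(4)]           # split the scene across 4 workers

with ThreadPoolExecutor(max_workers=4) as pool:
    command_lists = list(pool.map(record_command_list, chunks))

# Submission stays on one thread; it is the cheap part once recording is parallel.
for commands in command_lists:
    for cmd in commands:
        pass  # a real renderer would hand cmd to the GPU queue here
```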
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
^^ From your same link:

In this respect, DirectX 12 will mirror many of the improvements AMD implemented in its own Mantle API. We've suspected this development since the first DX12 pre-announcements were made.

Roughly: DX12 = DX11 + MANTLE
 

etayorius

Honorable
Jan 17, 2013
331
1
10,780
I am glad: thanks to Mantle, even my old GTX 470 will benefit from this, as Nvidia promised to support Fermi, Kepler and Maxwell for DX12. Now hopefully MS won't shit on it and make it exclusive to the next Windows.
 

8350rocks

Distinguished


Erm...this is M$ we are talking about...right? They are going to find a way to hook users into new software some way or another...

I cannot wait to see the push into Linux and OSX as gaming platforms expand.
 


I like how it is on the "Normal" settings preset and not maxed out when running on the higher-end GPUs. That is a pure marketing BS slide. I am running Thief on an HD 7970 GHz maxed out at 1080p (everything maxed) and I average between 100-130 FPS. I don't use the benchmark, as it is an absolute worst-case scenario, but in normal gameplay I have great FPS in that game.

Using a lower graphics setting is cheating the numbers. It is like cherry picking. Crank the settings to max and Mantle does not help an i5-4670K by anywhere near 50%.

Hell, I have an i5-4670K and a 7970. I will test to see if Mantle works in the game, then run the benchmark and see if anything changes; but according to that slide I should see almost 30% better for my GPU.



I love how everyone says it is Microsoft throwing Mantle into DX12. We really have no idea how they are implementing the feature. Likewise, tessellation (a major feature of DX11) was not originally a DX feature. A lot of ideas come from other people and get integrated into DX if they are beneficial.

*Edit*

Here are my Thief benchmark results. My specs:

Core i5 4670K Stock
16GB DDR3 1600MHz
HD7970 Vapor-X OC

With settings maxed out (and I mean everything, even AA and SSAO) in DirectX 11 at 1080p, here are the results:

Min: 38.1
Max: 67.6
Avg: 49.0

Mantle, maxed settings 1080p:

Min: 40.9
Max: 69.3
Avg: 51.4

That could be considered margin of error; I would have to run the benchmark a couple of times to really figure it out.

On Normal settings with everything else checked off:

DirectX 11:

Min: 43.1
Max: 94.5
Avg: 66.9

Mantle:

Min: 57.4
Max: 105.3
Avg: 79.3

I would expect this kind of boost, since the game is more CPU-bound at lower graphics settings. But who in their right mind is going to pay $650 for an R9 290X and play a game on normal settings?
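For reference, the percentage gains from those averages work out like this:

```python
# Percentage improvement of the Mantle averages over the DX11 averages posted above.
def gain(dx11_avg, mantle_avg):
    return (mantle_avg / dx11_avg - 1) * 100

print(f"Maxed out, 1080p: {gain(49.0, 51.4):.1f}%")   # ~4.9%, within run-to-run noise
print(f"Normal preset:    {gain(66.9, 79.3):.1f}%")   # ~18.5%, the CPU-bound case
```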
 


Something worth noting: AMD and NVIDIA are usually VERY well aware of upcoming DX specs, since MSFT is more or less dependent on them putting the necessary HW changes into their cards, which are designed at least 18 months prior to release. So AMD/NVIDIA have likely been VERY well aware of these changes for at least a year now.

Which opens up another possibility here: rather than DX stealing Mantle, isn't the reverse case possible? Isn't it possible AMD knew these changes were coming, and decided to support them early to try and push GCN GPUs?

In either case, the fact that no HW changes are necessary basically makes Mantle DOA once DX12 releases.
 
Well, Intel has made the leap to DDR4, so I can't help but think that AMD will do the same with Excavator-based APUs. All in all, seeing how well Kaveri did in its gaming performance, it is only natural to expect that DDR4 will open up serious bandwidth. So if we assume the standard CPU gains of 5-15%, somewhere in that window, the resources will pool to the iGPU: maybe 20+ ROPs with native 40-50 GB/s of bandwidth, which should legitimately put the APU at entry-level gaming performance on-chip.
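For what it's worth, the 40-50 GB/s figure lines up with plain dual-channel DDR4 arithmetic (the speed grades below are assumptions, not anything AMD has announced):

```python
# Dual-channel bandwidth = transfer rate (MT/s) x 8 bytes per transfer x 2 channels.
def dual_channel_gbs(mt_per_s):
    return mt_per_s * 8 * 2 / 1000.0   # GB/s

print(dual_channel_gbs(2133))  # ~34 GB/s (DDR4-2133)
print(dual_channel_gbs(2400))  # ~38 GB/s (DDR4-2400)
print(dual_channel_gbs(3200))  # ~51 GB/s (DDR4-3200)
```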
 

colinp

Honorable
Jun 27, 2012
217
0
10,680


No, that would be completely ridiculous and doesn't stand up to any level of scrutiny. I can't believe that even you would believe that, and if you don't believe it then you are just trolling.
 