AMD CPU speculation... and expert conjecture

Regarding Mantle: I have been thinking, there are only two GPU companies these days for gaming (I can't believe any gamer really counts Intel). If game publishers wish to expand beyond Windows, they'll have to program for OpenGL, or OpenGL+DirectX. However, everything I hear about OpenGL is how unreliable and bass-ackwards the organization behind it is, which makes me doubt its chances of further adoption. So, with only two gaming GPU setups to program for, why wouldn't companies choose to use Nvidia's API and Mantle rather than DirectX and OpenGL?
 


Excellent remarks. I already explained before that Windows is not the future.

I already mentioned that AMD announced, early this year, that they are abandoning Windows exclusivity and will develop for other OSes.

I already mentioned how Intel, during its annual conference a couple of months ago, praised Google and criticized Windows.

I already gave the link where HP claimed, a week ago, that Windows is not the future and that they are moving away.

We know that Valve and other game developers are abandoning Windows.

I find it amazing that some people still believe that Microsoft and Windows are driving the market or the industry. I expect Windows 7 will remain for users slow to migrate to anything new, just as XP is still used by lots of Windows-oriented people.

The rest of the world will move to Linux. About market share, please don't spread the 1% myth. Share is about 8%–12%, which puts Linux at the same level as Windows 8. This explains why Valve and others are ready to jump on the Linux bandwagon.
 


But the key here is that MANTLE can be considered a _port_ of the XB1 low-level API to the PC. It is not an entirely new API that developers have to learn from scratch. As mentioned in the ExtremeTech article, the differences between the two APIs are minimal. Those differences account for the differences between a Windows PC and the XB1.

Contrary to Gamer/Palladin's idea, MANTLE simplifies porting from console to PC. This is what AMD has said in response to all this confusion:

Mantle is NOT in consoles. What Mantle creates for the PC is a development environment that's *similar* to the consoles, which already offer low-level APIs, close-to-metal programming, easier development and more (vs. the complicated PC environment). By creating a more console-like developer environment, Mantle: improves time to market; reduces development costs; and allows for considerably more efficient rendering, improving performance for gamers. The console connection is made because next-gen uses Radeon, so much of the programming they're doing for the consoles are already well-suited to a modern Radeon architecture on the desktop; that continuum is what allows Mantle to exist. ^RH

It is about reusing code and retaining performance when porting to PC.
 


About 20 games with MANTLE support are being developed and will be announced soon.

Activision has already announced that it will support MANTLE. Crytek is now under the radar. Unreal is Nvidia-oriented, but who knows?
 


No. You are completely wrong once again.

ARM will not be replacing x86 in 2014/2015. In the first place, it will take longer; in the second place, I already explained to you that x86 will remain a niche market for legacy workloads. I gave the analogy with Windows XP as well...

What will be replaced in 2014 are the Jaguar servers, which will give way to the new ARM servers.

Also, nobody told you that x86 will stop advancing. What happens is that ARM is advancing faster than x86. As I already said before (but you don't read):

ARM is the fastest-growing CPU architecture in history

A diagram comparing performance growth for both x86 and ARM was given, but you missed this as well. ARM achieves in one generation roughly what x86 achieves in three or four.

Now please ignore what is being said to you, make up silly things in your mind again, and post another laughable reply. You have been doing this for weeks, so it should be effortless for you.

:kikou:
 


I think you mean "about 6000-7000% off reality", because you are living in your own distorted reality, where Intel chips are 2000% faster than AMD and 35000% more efficient. Intel is also cheaper than AMD in your own reality.
 


No. He says it because he knows you are plain wrong.

You have been corrected on innumerable occasions, but you return with the same nonsense again and again.
 


That's only, what, 18 FPS less in a game getting 45 FPS on average, if the cores are clocked at the same speed?

Not really making the best argument here...
 


Two reasons:

First off: Remember the days before DX9.0a and SM 2.0, back when AMD and NVIDIA had separate pixel models. DX incorporated several models prior to DX9, but for the most part, PM 1.1 = NVIDIA and PM 1.4 = AMD. Depending on which one you programmed with, you would bias game performance one way or the other. When SM 2.0 came out, a lot of the performance bias in benchmarks started to go away [some still exists due to architecture differences, but for the most part, cards line up in order of expected performance].

Using low-level native APIs will bias benchmarks, significantly. And since benchmarks are a large factor when purchasing a card, whichever company pushes its API the best will basically win a near monopoly on the GPU market. And given the difference in cash, NVIDIA can basically do this with ease.

In short: if it becomes a war of low-level APIs, AMD loses, since NVIDIA can just buy the market.

Secondly, on PCs, there isn't a need for Mantle. The worst-case overhead for DX11 is "maybe" a 5% performance loss, which for a game running at 60 FPS equates to ~3 FPS worth of performance loss, which is acceptable. There simply isn't much overhead left in the API.

So is rewriting the software and adding a secondary graphics backend that only benefits the minority player in the GPU market worth maybe a 5 FPS gain for developers? No, it isn't.
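For what it's worth, the arithmetic behind that claim looks roughly like this. A minimal sketch: the percentages are the figures being argued in this thread (the 5% worst case here, the 30-40% claims elsewhere), not measurements, and the model (overhead inflates frame time linearly) is a deliberate simplification:

```cpp
// Back-of-the-envelope FPS cost of CPU-side API overhead.
// The overhead percentages are the thread's claims, not measured data.
#include <cstdio>

int main() {
    const double base_fps = 60.0;
    for (double overhead : {0.05, 0.40}) {
        // Overhead inflates frame time; longer frames mean fewer of them.
        double fps = base_fps / (1.0 + overhead);
        std::printf("%2.0f%% overhead: %.1f FPS (a loss of %.1f)\n",
                    overhead * 100.0, fps, base_fps - fps);
    }
    return 0;
}
```

At 5% overhead this gives roughly the ~3 FPS loss claimed above; at 40% the loss would obviously be far more visible.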
 


That's all well and good, but I have a question here:

So, why is the number I am hearing tossed around inside the industry a 30-40%+ performance gain? I got wind of some stuff last night; one of the guys I work with was talking about MANTLE. As we are investigating it on several different levels, he has heard that initial tests in BF4 were looking at a 30-40%+ performance gain for MANTLE over DX on AMD cards in Windows. He mentioned they were experimenting with other OSes as well; however, he had no information about relative performance for them.

Last point:

NVidia's low-level API is dead in the water because they have no foothold in the consoles. No one will develop on that API, just like no one really uses PhysX much anymore. However, MANTLE is an open standard, which means that NVidia can use it as well. This would likely be the most intelligent thing that NVidia could ever do, as their low-level API uses proprietary code, while MANTLE uses HLSL. It's literally 2x the work to use the NVidia API (and their market share has been shrinking over the last 12 months); with MANTLE, it's a few minor tweaks and a recompile.

You and I both know developers are LAZY! They won't do 2x the work to use NVidia's low-level API regardless of how much money is tossed around. However, if they have to change a few lines of code and recompile (an hour's worth of work, perhaps, plus compile time), then you might actually convert some of them to your cause.

The issue with NVidia is that everything is proprietary... nothing is cross-compatible. *THAT* is why MANTLE will succeed where they failed. NVidia's low-level API will suffer the same fate as GLIDE.
 


This forum really needs a F&%*($& ignore button for garbage like this...

Can we get one of those? Please...?
 
Why all the hate for OpenGL? I admit I am no game developer, but I do write 3D games as a hobby. Like any game developer would, I use a game engine that basically hides all the technical muck of OpenGL from me. I work with a scene graph and write GLSL shaders, and I hear very few complaints about GLSL itself.

The technical details of the 3D API are only of concern to those implementing a game engine.
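For anyone curious what that muck looks like, here is a minimal sketch of compiling one trivial GLSL fragment shader by hand through the raw OpenGL API. It assumes a current GL 3.3+ context already exists (created via GLFW, SDL, or similar) and that the modern entry points have been loaded:

```cpp
// Minimal sketch of the boilerplate a game engine hides: compiling
// one trivial GLSL fragment shader by hand.
#include <GL/gl.h>   // in practice you'd include a loader header (glad/GLEW) instead
#include <cstdio>

static const char* kFragSrc = R"(#version 330 core
out vec4 color;
void main() { color = vec4(1.0, 0.5, 0.2, 1.0); }  // flat orange
)";

GLuint compileFragmentShader() {
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &kFragSrc, nullptr);  // attach the source text
    glCompileShader(shader);

    GLint ok = GL_FALSE;                            // always check: drivers differ
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[512];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        std::fprintf(stderr, "GLSL compile error: %s\n", log);
    }
    return shader;
}
```

Multiply that by programs, uniforms, buffers, and state management, and you can see why most of us stay behind an engine.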

Are all these OpenGL haters just people used to D3D and D3D tools? I think so.
 


Mantle is NOT an open standard; it REQUIRES a GCN-based GPU. Which means Mantle dies when AMD eventually replaces GCN.

Secondly: Ignore the marketers. Remember what AMD claimed about BD performance?

Thirdly: Let the API wars commence: http://www.guru3d.com/news_story/nvidia_does_5_million_deal_with_ubisoft.html

NVIDIA is now throwing money around. Let's see who buys Activision (probably NVIDIA).
 


1.) MANTLE was advertised as an open-standard API in Hawaii; as it currently sits, it is written for GCN. Though AMD would not do NVidia's work for them, now would they (nor could they, for that matter)...

2.) That was word from DICE, not AMD. AMD has been tight-lipped for the most part.

3.) Again, wait and see... NVidia is going to fight, I don't doubt it. Though who has more money: AMD + EA + Activision/Blizzard, or NVidia + Ubisoft?

I would bet EA has more cash than all other parties combined, personally...just saying...

Additionally, I would like to point out that AMD chose a deal with the largest publisher in the world over Ubisoft, and NVidia is picking up the "leftovers" after AMD walked away from the deal.

EDIT: Just FYI, I thought this was interesting...

When I was contacting different companies for quotes on server infrastructure for the upcoming MMO, to gauge cost/performance... I found out from SeaMicro that Activision/Blizzard uses SeaMicro servers with Opterons for all their game servers for D3 and WoW. They mentioned that they had a contract with them for "future projects" as well.
 


You still do not understand what an API is and what an implementation is.

Mantle is an open API. You are free to make your own implementation of it to run on whatever hardware you like. If I wanted to, I could write a Mantle implementation that worked on Adreno GPUs. It doesn't matter.

You seem to think that because the only existing implementation is for GCN, that's the only implementation that will ever exist. AMD could write a compatible Mantle implementation that worked on their old VLIW cards. Nvidia could make one for Kepler or Maxwell. Intel could even make their own.

This is the part that you don't understand.
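As a purely hypothetical illustration of the API-versus-implementation split (this is not real Mantle code; the interface and backend names below are invented), the point is that one published contract can sit in front of many vendor backends:

```cpp
// Hypothetical sketch: the *API* is the abstract contract; each vendor
// ships its own *implementation* behind it. Names are invented.
#include <cstdio>
#include <memory>

struct CommandBufferAPI {                  // the published contract
    virtual void draw(int vertexCount) = 0;
    virtual ~CommandBufferAPI() = default;
};

struct GcnBackend : CommandBufferAPI {     // AMD's implementation
    void draw(int n) override { std::printf("GCN draw: %d verts\n", n); }
};

struct KeplerBackend : CommandBufferAPI {  // a hypothetical NVIDIA one
    void draw(int n) override { std::printf("Kepler draw: %d verts\n", n); }
};

int main() {
    std::unique_ptr<CommandBufferAPI> cb = std::make_unique<GcnBackend>();
    cb->draw(3);  // game code targets the API, never the backend
}
```

Swap in KeplerBackend and the game code above doesn't change; that is what "open API, vendor implementations" means.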

DirectX is closed; it is not open. Look at how long it has taken the WINE project to reverse engineer DX and get a working implementation of it. It has taken a VERY, VERY long time, and they still don't have DX support beyond DX9.

You do not understand much outside of gaming-related projects.

Mantle is going to make it extremely easy to make Linux and OSX ports of your games, as the API is under the control of the hardware manufacturer instead of the OS vendor. That means it is the decision of AMD (and whoever else decides to implement it) which OSes it will work on.

This is the point you are missing. I wish you could crawl out from underneath your "single core is king Wintel" rock and understand what Valve, EA, and tons of indie developers are trying to do, which is to get as far away from Windows dependence as possible.

Understand this: Mantle being an easy port to OSX or Linux while limited to GCN cards is much better than DirectX being a difficult port to OpenGL that anyone can use. Especially regarding Steam Machines that run GCN GPUs.

 
since this post has an on-topic reference (technicality! 😀)... you are gonna miss it because you just skipped over to this reply. wiser people are actually reading the reply after mine... i assume.. :ange:

not an amd user, and your comparison is way off. a10 apus are for entry-level pcs; core i5 is for upper midrange to high or lower high-end pcs.
kaveri isn't out yet, so the comparison is meaningless.

how do you know, if the product is not even out yet? are you yet another addition to the group of people who claim to have inside contacts?

no.
if all of the fx-8320's cores are in use, it'll beat the 4-core i5 as long as the workload uses 6-8 cores. by how much will depend on the type of task. turbo, imc performance, and heat dissipation will factor in as well. your percentage calculation is off. (rough sketch below.)
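a rough sketch of that trade-off using amdahl's law — the 1.4x single-thread advantage and the parallel fractions below are made-up numbers for illustration, not measurements of either chip:

```cpp
// Amdahl's-law toy: fewer fast cores vs. more slow cores.
// All speeds are invented, illustrative units only.
#include <cstdio>

double throughput(double perCore, int cores, double parallelFrac) {
    // Time for one unit of work = serial part on one core
    // plus parallel part spread across all cores.
    double t = (1.0 - parallelFrac) / perCore
             + parallelFrac / (perCore * cores);
    return 1.0 / t;
}

int main() {
    for (double p : {0.50, 0.90, 0.99}) {
        double quad = throughput(1.4, 4, p);  // faster cores, fewer of them
        double octo = throughput(1.0, 8, p);  // slower cores, more of them
        std::printf("parallel %2.0f%%: 4-core %.2f vs 8-core %.2f\n",
                    p * 100.0, quad, octo);
    }
}
```

at 50% parallel the 4 fast cores win; at 99% parallel the 8 slow cores win. that's the whole argument in one loop.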

ivy bridge isn't 2x better than piledriver (context not given, thus a blanket term).
you keep disregarding the cpu prices. that makes your already wrong calculations... er... more wrong...
 


Pull your head out of the blue koolaid.
Watch the i5 4670 @ 4.4 GHz lose to the 8350 @ 4.5 GHz.

http://forums.pureoverclock.com/video-card-overclocking/22523-post-your-cinebench-r15-scores.html

Ivy i5s get smoked by the 8350, and it is even with the Ivy i7s. The only ones faster are the 4770 and the $1000+ Intel CPUs that don't even include a cooling fan.


http://www.overclock.net/t/1431032/top-cinebench-r15-cpu-scores
 


WTF is cbscores? That's not a reputable site that I have ever heard of...

How is it that all the "way off the beaten path" review sites I come across referenced on this forum are in your posts???

How about this:

http://www.hwbot.org/compare/processors#2741,2493,2743,2495,2689-57,94

I trust hwbot.org more than your site...
 



What you don't seem to get is that Intel is only 40-50% faster, not 100 or 200%... I think you are comparing the clock speeds of both companies' CPUs, which is pretty much POINTLESS.

AMD just decided to use a different approach: AMD offers the same multithreaded performance and about 40% slower single-threaded performance. Kaveri will shorten the lead Intel has over AMD in regard to IPC, or single-threaded performance.

It's not 100-200%; all benchmarks and games only show a 30-40% lead for Intel CPUs, so ignore the clock speeds, because at the end of the day they are irrelevant.

I will go as far as to say that I would buy a 6 GHz AMD CPU with better overall performance (single- and multi-threaded) than any Intel CPU, as long as it is priced the same as or less than most Intel CPUs. I couldn't care less if AMD CPUs have a much higher clock than Intel; if you care about that, then you may still be living in the late '90s and early 2000s. You're a decade behind.
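To make "ignore the clock speeds" concrete: throughput is roughly IPC times clock, so a higher-clocked chip can still lose. The IPC values in this sketch are invented for illustration, not measured figures for any real CPU:

```cpp
// Why comparing clocks alone is pointless: throughput = IPC x clock.
// IPC numbers below are illustrative assumptions only.
#include <cstdio>

int main() {
    struct Cpu { const char* name; double ipc; double ghz; };
    Cpu cpus[] = {
        {"chip A (higher IPC)",   2.0, 3.5},
        {"chip B (higher clock)", 1.4, 4.5},
    };
    for (const Cpu& c : cpus)
        std::printf("%s: %.1f GHz x %.1f IPC = %.1f G-instr/s\n",
                    c.name, c.ghz, c.ipc, c.ipc * c.ghz);
    // chip A wins (7.0 vs 6.3) despite running 1 GHz slower.
}
```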
 


I would go as far as to say it's closer to a 15-20% single-threaded deficit on average, and typically they only lose to the i7s in multithreaded performance by a lesser margin than that. In games it's typically a difference of 5-15%, depending upon the game...
 


Maybe you misread my last post...

THE 2 LINKS YOU POSTED HAVE NOTHING TO DO WITH IPC.

THANKS FOR PLAYING.

NOW GO HOME TO YOUR BRIDGE UNTIL YOU UNDERSTAND WHAT YOU ARE TALKING ABOUT. HAVE A NICE DAY.


Is that clearer for you?

IPC is a hard-wired limitation of the architecture itself; it has nothing to do with games.

 