AMD CPU speculation... and expert conjecture


juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I also think that Kaveri must be prepared for MANTLE. It would be illogical to announce the new Radeon cards and MANTLE and then, a month later, release Kaveri without support. Charlie also speculated that Kaveri was being delayed because it used a Hawaii-based iGPU.

Yes, there is still a bandwidth bottleneck, but the improved memory controller and faster DDR modules should alleviate that somewhat.

I don't find it unreasonable for Kaveri to offer 2x the graphics performance of Richland.
 

Faulk55ner

Honorable
Sep 25, 2013
3
0
10,510
I want to make a few things clear though.
[five image attachments]
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
ARM is already in the same performance envelope as x86 for floating-point computations; Seattle is about as fast as an FX-8350. However, the FPU is still the weak point of the ARM architecture. Integer performance is better.

The new A57 core offers at least 4.1 DMIPS/MHz. For the sake of comparison, a Piledriver core offers ~3.78 DMIPS/MHz. This implies:

Phone (ARM A8): ~2500 DMIPS
Phone (ARM A9): ~15000 DMIPS
Desktop FX-8350 (Piledriver): ~120960 DMIPS
Desktop i7-3960X (Sandy-E): ~185200 DMIPS
Server Xeon E5-2687W (Sandy-EP): ~233500 DMIPS
Server Seattle (ARM A57): ~131200 DMIPS

In integer performance, the ARM-based Seattle would slightly outperform the Piledriver FX-8350 while consuming less than 50% of the power.

The above estimate for AMD Seattle takes the lowest claimed performance value for the A57 core and the lowest announced clock speed for Seattle. AMD claims that Seattle could be clocked above 2 GHz, and ARM claims that the A57 core can provide up to 4.76 DMIPS/MHz depending on implementation.

Taking the most favourable case, with Seattle clocked at 3 GHz and the core well implemented by AMD, Seattle could top out at a maximum of ~228480 DMIPS, which is close to the Xeon's performance. That Xeon model is the fastest Xeon, the fastest chip made by Intel, and the fastest x86 chip.
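A quick sanity check of those numbers (a minimal sketch; the 16-core count is an assumption on my part, but it is the only core count that reproduces the Seattle figures above):

```python
# DMIPS = (DMIPS per MHz per core) * (clock in MHz) * (core count)
def dmips(dmips_per_mhz, clock_mhz, cores):
    return dmips_per_mhz * clock_mhz * cores

print(dmips(3.78, 4000, 8))   # FX-8350 (Piledriver, 8 cores @ 4 GHz): ~120960
print(dmips(4.10, 2000, 16))  # Seattle, worst case (16 cores @ 2 GHz): ~131200
print(dmips(4.76, 3000, 16))  # Seattle, best case (16 cores @ 3 GHz): ~228480
```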

Nvidia has promised that its custom ARM core will be faster than an A57. Therefore, I think it is reasonable that Nvidia's future ARM chip for servers/supercomputers could offer about the same performance as the top Xeon chip today. Sure, by the time the Nvidia chip is on the market Intel will have faster Xeons, but by how much? 5%? 20%? It is evident that ARM _can_ compete in high performance.

It is time to abandon the myth that ARM is slow and cannot compete with x86 in high performance.
 
1) I go on the assumption that you speak for "most developers", since you say so in the absolute "no dev will", but I will connect this later.

Time and money. No one is going to spend a lot of time using an API that is only supported by the minority member of the GPU market, that will break if the architecture changes in the future, and that still requires a normal DX/OGL code path anyway.

So I can guarantee that MANTLE will see less use than PhysX. For the same reasons PhysX isn't used, MANTLE won't be used either.

2) So you are saying that it cannot be constantly evolved generation to generation?

Understand how coding to the metal works: you make assumptions about what the hardware is doing. When the underlying hardware changes, those assumptions no longer hold. Low-level APIs like MANTLE are slaved to the hardware they serve.

3) The only analogy that comes to mind is Toyota and Bugatti: Toyota mass-produces and has hundreds of thousands of defective cars sent back because they want numbers over quality; Bugatti builds each car individually, with care and precision, to make the ultimate motor vehicle. This ties into point one: you will not optimize code across a wide range of platforms because it's time consuming, but that doesn't mean there aren't others out there who will. Some people are less concerned about mass than about quality, actually do give a crap about the absolute best they can tailor towards, and if Mantle helps, they will use it.

Except low-level code is good for performance, not code quality.


Look people, we went away from APIs like this for exactly the reasons I've pointed out. No one likes having to code different paths for different architectures; that's the entire point of high-level APIs. You accept the ~10% loss in performance to save months, if not years, of development time, and to gain the ability for one code path to handle every piece of hardware that supports the higher-level API.
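To make the code-path burden concrete, here's a minimal sketch (all names are hypothetical, not any real engine's API) of per-architecture low-level paths next to a single high-level path:

```python
# Hypothetical renderer setup: every low-level backend is tied to one
# specific architecture, so each new GPU generation adds another branch.
def init_low_level(vendor, arch):
    if vendor == "AMD" and arch == "GCN":
        return "mantle_gcn_backend"   # breaks when GCN's successor ships
    raise NotImplementedError(f"no low-level path for {vendor}/{arch}")

# One high-level path: the driver maps the API onto whatever hardware is
# present, so this function never needs to change.
def init_high_level():
    return "d3d11_backend"
```

The first function grows a branch per vendor and per architecture; the second stays a single code path, at the cost of the ~10% the abstraction eats.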

Look, as much as people pile on MSFT, they did a lot of good. Back in the dark days, you had titles which had to ship drivers for every piece of HW under the sun. Just for audio, you had drivers for the following:
Sound Blaster 16
Roland MT-32
Adlib Music Synthesizer
Roland Sound Canvas
AWE32

And guess what? Some games didn't support some hardware. It's easy now, since everything follows the DirectSound, and later XAudio, spec, as well as having Sound Blaster 16 and AWE32 compatibility, but 20 years ago HW support was a major problem. High-level APIs like DirectX and DirectSound ensured that all HW supporting those standards would work, period. It made the developer's life easier, and it ensured competition in the marketplace.

APIs like MANTLE threaten to bring back the days where some titles simply will not support some hardware. Though I'm sure if we get to the point where some studios simply drop NVIDIA or AMD support, none of you will have anything to say about it.
 
Mantle's sole purpose, in the light of things, is to enable porting between platforms without the traditional limitations on hardware and scaling. Mantle may be designed for GCN, but perhaps AMD releases a new API per generational evolution, or maybe, like DX, it can be carried across hardware. I'm not sure exactly; worst case, they just adapt the API per generation while maintaining support for older generations.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


No. AMD is not the minority member here: GCN is in the XB1, the PS4, and Radeon PCs/tablets.

No one is using MANTLE... except BF4 developers who are already using it, together with dozens of other game developers.

In the past, you missed the fact that the eight-core chip used in the PS4 was the game developers' choice (some asked Sony for a 16-core chip!!!), not an imposition by AMD. Now you also miss this fact about MANTLE (from the AT review):

Mantle doesn’t just exist because AMD wants to leverage their console connection, but Mantle exists because developers want to leverage it too.

PhysX was a proprietary solution no one really begged for. However, developers specifically asked AMD to create a lower-level API. If you read the release, MANTLE has been developed in close collaboration with game developers.

Not only will the Frostbite 3 engine default to MANTLE (not DirectX) on AMD GPUs running on Windows, but the engine will also "utilize all 8 CPU cores":

[presentation slide screenshot]


I recall someone here who guaranteed us that no new game was going to use more than two cores...
 
juanrga said: "Not only will the Frostbite 3 engine default to MANTLE (not DirectX) on AMD GPUs running on Windows, but the engine will also 'utilize all 8 CPU cores' ... I recall someone here who guaranteed us that no new game was going to use more than two cores..."

ALL games "utilize" all 8 cores. Heck, all games would "utilize" 100 cores if you gave them that many.

[GPUView screenshot]


WoW, running two heavy threads. Note that all four cores are being used. But everyone knows WoW drives two cores, and only two cores.

Now, "scaling" across 8 cores...that's a totally different thing.

Beware the PR department.
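The utilize-versus-scale distinction is exactly what Amdahl's law captures. A minimal sketch (the parallel fractions are illustrative, not measurements from any real game):

```python
# Amdahl's law: speedup on n cores when a fraction p of the work is
# parallelizable and the remaining (1 - p) is serial.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A game that is only 50% parallel will "utilize" 8 cores yet barely
# scale; a 95%-parallel game actually benefits from them.
for p in (0.50, 0.95):
    print(f"p = {p}: {speedup(p, 8):.2f}x on 8 cores")
# p = 0.5: 1.78x on 8 cores
# p = 0.95: 5.93x on 8 cores
```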
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


I guess you forgot already that Kaveri is a 2H part. It's well into the 2H already and still no Kaveri. Seattle is also a 2H part. It's AMD's first ARM chip; do you really think that will go off without a single hitch? The first batches will likely be reserved solely for SeaMicro dense servers anyway. You probably won't be able to buy one commercially for a desktop Linux box until 2015.

You're living in the land of ideals and paper specs. Nvidia said, after their disappointing Tegra 3 sales, to just wait for Tegra 4; it would take over everything. Tegra 4 came out with hardly any design wins. So few that they had to make their own game pad (Shield, which likely didn't break 50k units) and tablet (Tegra Tab). Now they say just wait for Tegra 5; it will rule the world.

Even the mighty Samsung has issues. Their octa-core big.LITTLE part didn't perform up to snuff, so they still buy Qualcomm parts.

There are only 2 companies doing really well with ARM cores, and those are Apple and Qualcomm. Apple doesn't sell to the public, so that only leaves Qualcomm. Both are heavily focused on ultramobile, as that's where the volume is. Server chips will come, but at a much lower priority, as the volumes are an order of magnitude lower than consumer phones/tablets.
 

8350rocks

Distinguished


I guess you missed the part where MANTLE uses HLSL from DX too?

Essentially, you can design this to run on DX, and enable the MANTLE drivers/API as the default setup when AMD Radeon graphics are detected (HD 7XXX and newer).

Now what was your argument about having to do multiple code paths again?
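A minimal sketch of that default-with-fallback setup (hypothetical names, not AMD's actual interface): detect a GCN Radeon, default to Mantle, otherwise keep the ordinary DX path, with the HLSL shaders shared between both.

```python
# Hypothetical launcher logic: one set of HLSL shaders, two submission
# backends. GCN Radeons (HD 7000 series and newer) default to Mantle.
def pick_backend(vendor, series):
    if vendor == "AMD" and series >= 7000:
        return "mantle"    # low-level path on GCN
    return "directx"       # standard path everywhere else

print(pick_backend("AMD", 7970))     # mantle
print(pick_backend("NVIDIA", 760))   # directx
```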

This thing is being designed by Raja Koduri in collaboration with game developers. I am just telling you now: your concern over how proprietary you think this will be is entirely unfounded. Additionally, they already have EA as a partner. Say what you want about EA's customer service, but they make a *LOT* of games for consoles and PC. If a company that large is a partner on this, it means that any game EA publishes would potentially have access to MANTLE, and the number of games they produce/publish is pretty staggering.

Want to bet all the EA Sports franchises utilize MANTLE? They were the top volume publisher in gaming in 2013...think of all the partners and affiliates, in addition to the in-house stuff. ALL future BioWare titles could easily run MANTLE, plus groups like DICE and EA Sports, and the new exclusive Star Wars title coming soon.

Just a few of the franchises they publish that could run on MANTLE in future installments:

Army of 2
Bard's Tale
Battlefield
Black & White
Brutal Legend
Burnout
Command & Conquer
Crysis
Dead Space
Dragon Age
EA Sports Active
F1
FIFA
Fight Night
Grand Slam Tennis
Harry Potter
Lord of the Rings
Lord of Ultima
Madden NFL
Mass Effect
Medal of Honor
Mirror's Edge
NASCAR
NBA Live
Need for Speed
NHL Hockey
Orange Box (HL2 + Portal)
Rock Band
Rugby
The Secret World
SimCity
Sims
Skate
Spore
SSX
Star Trek
Star Wars
Star Wars: The Old Republic
Syndicate
Team Fortress
Tiger Woods PGA
UEFA
Ultima
Warhammer


Those are the franchises most people on the street would have heard of. They also published many more lower-profile games for partners.

I think you get the picture...
 
And in AMD-related console news:

http://www.neogaf.com/forum/showthread.php?t=687189

Watch Dogs will run at 30FPS on PlayStation 4 and Xbox One, creative director Jonathan Morin has told VideoGamer.com.

"Right now the frame rate we're focusing on [is] a steady 30[FPS]," said Morin speaking to us at Eurogamer Expo this afternoon. "There's always a balance, especially for open world, between the simulation and the rest.

"I think for where we are, the most important thing is the steadiness and [ensuring] that it's always capped the same so when you play it it feels right."

Despite confirming the frame rate, though, Morin could not confirm the game's native resolution on next-gen platforms, claiming that he didn't know whether the game would run at 720p or 1080p.

This mirrors other devs, who are currently using either 720p or 30 FPS. Very few titles seem capable of the 60 FPS at 1080p everyone assumed would be easily doable on the next generation of hardware.

What I suspect is going on is twofold:

1: A higher-level OS is eating enough resources, and increasing latency enough, that a constant 60 FPS @ 1080p simply isn't feasible.

2: The CPU architectures of both the PS4 and XB1 NECESSITATE heavily threaded titles. As I've noted several times, games do not naturally scale. It is quite possible, as I speculated over a year ago, that the coding is at a high enough level that you aren't getting the fine-tuned programs you did for the PS3/360. In short: the programs are less optimized, and you are getting less CPU scaling.

It's becoming readily apparent that, for whatever reasons, devs aren't getting 60 FPS at 1080p out of the new consoles.
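The raw pixel budget shows why the fallback is 720p or 30 FPS rather than both targets at once (pure arithmetic; it ignores that per-pixel cost also rises with better effects):

```python
# Pixels the GPU must shade per second at each target.
def pixel_rate(width, height, fps):
    return width * height * fps

r720p30 = pixel_rate(1280, 720, 30)     #  27,648,000 px/s
r1080p60 = pixel_rate(1920, 1080, 60)   # 124,416,000 px/s
print(r1080p60 / r720p30)               # 4.5x the shading work
```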
 

8350rocks

Distinguished


I honestly expected 30+ FPS @ 1080p. I figured 60 FPS wouldn't necessarily happen, considering the PS3 and Xbox 360 were running 30 FPS @ 720p.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


To be expected for first-generation titles. Devs are likely trying to use code that will compile for prior-gen consoles as well, to maximize sales.

You're looking at 300 million PS3/XB360 units out there, compared to the 8 million or so in the first 6 months of the PS4/XBOne.
 

jdwii

Splendid


Agreed, just give the devs some time (2 years); I bet we will see a decent number of games running at 1080p with 60 FPS. But open-world games are among the hardest to do that with. 60 FPS at 1080p is more reasonable for first-person shooters, racing games, and even adventure or fighting games.

I even think there are some games running at 1080p with 60 FPS on the Wii U.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860


Kabini is a laptop CPU and can be purchased today. http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&Description=amd+kabini&N=-1&isNodeId=1

but ya ... he knows exactly what the performance of the A57 is going to be a year before its release. No clue how he comes up with these fake figures. Seems now, all of a sudden, the A57 is going to be twice as fast as x86, since a 2 GHz A57 = a 4 GHz x86.

I did the GFLOP computation for 2 GHz.
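For reference, a peak-GFLOPS estimate follows the same pattern as the DMIPS arithmetic. A minimal sketch; the FLOPs-per-cycle values are assumptions for illustration (one 128-bit FMA per cycle, i.e. 8 single-precision FLOPs per core), not vendor-confirmed figures:

```python
# Peak single-precision GFLOPS = cores * GHz * FLOPs per cycle per core.
def peak_gflops(cores, ghz, flops_per_cycle):
    return cores * ghz * flops_per_cycle

print(peak_gflops(8, 2.0, 8))   # 8-core A57 @ 2 GHz (assumed): 128 GFLOPS
print(peak_gflops(8, 4.0, 8))   # FX-8350 @ 4 GHz (assumed):    256 GFLOPS
```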
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Cazalan said: "I guess you forgot already that Kaveri is a 2H part... You're living in the land of ideals and paper specs."

If you pay attention to my derivation, I assumed the poorest possible implementation and the lowest possible clocks. AMD should do better, because implementing an ARM design is easier than an x86 design: ARM is implemented directly in hardware, whereas modern x86 has to be implemented in microcode. Also, the ARMv8 ISA has been designed to simplify implementation on modern processes.

I am not being over-enthusiastic. On the contrary, I believe my estimates are well balanced.

Also, who told you that Seattle could be purchased for desktops?


It is one thing that you don't understand anything; it is another when you post nonsense and attribute it to others.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I know. I have been saying that for months. I posted benchmarks showing how an 8350 beats both the i5 and i7 in well-multithreaded games. I also mentioned the poll where triple-A game developers chose the FX-8350 over the i5-3570K as the best gaming CPU for future games. They did so because the FX-8350 will run better with the next gen of games, thanks to both consoles being 8-core designs.



Therefore you decided to ignore the "perfect parallel rendering" and "avoid bottlenecking the GPU" parts of the slide. Noted.

Here is a CPU profiler for a forthcoming game. Pay attention to how well the game spreads across six cores:

[CPU profiler screenshot]


No problem with scaling well to eight cores, except probably for _you_.



It is another thing that is becoming readily apparent:

http://www.eurogamer.net/articles/2013-06-11-call-of-duty-ghosts-runs-at-1080p-and-60fps-on-xbox-one-and-ps4

http://www.eurogamer.net/articles/2013-09-26-guerrilla-killzone-shadow-fall-multiplayer-runs-at-60-fps-a-lot-of-the-time

And the number of such titles will evidently increase over the years.
 

8350rocks

Distinguished


Really, where is ARM's desktop OS? Ubuntu? They're not exactly a flagship OS in terms of volume, you know. What else? Android? That's not a desktop OS, nor will it be anytime soon.



Ok, let's examine the *not* David vs. Goliath real quick, shall we?

AMD - has not *yet* shipped an ARM CPU at all.

NVidia - their highest-selling ARM device is the Shield, with under 100k units sold. Not a major player.

Samsung - they have *some* promise, though they're working out the kinks and relying on Qualcomm for their high-end ARM devices.

Apple - entirely proprietary; combined with their low DT presence, I doubt they're a serious threat at all. Plus, iOS is not viable as a DT OS.

Qualcomm - the one major ARM player that *could* potentially make a move at the desktop, except they're not...why? They know what you fail to recognize: ARM won't make a dent in the installed base of x86 PCs worldwide.

You realize that out of 7,000,000,000 people...there are roughly 2,500,000,000 households. Of those 2,500,000,000 households, 2,250,000,000 have a desktop PC of some sort. Of the households with a PC, roughly 2,000,000,000 run Windows, which only runs on x86 hardware.

Are you getting my point yet? Windows has roughly 90% market share on the desktop. 24% of the world's devices run Windows...and they're 99% desktop computers.

Do you see why it's futile for ARM to even bother? Linux is *FREE* and it still hasn't overcome Windows.

I hate M$ as much as the next guy for their stagnation and the proprietary software that keeps the little guys, and innovative software writers, out of the mainstream. However, they have some clout in desktop PCs.

I am not anti-AMD; on the contrary, I want them to succeed HUGELY. But I can see the forest for the trees, and I know what AMD pushing ARM is up against. It's a fool's errand.



Where are you extrapolating numbers from? Source?
 


I'll give you another reason. Just like in the movie world, "30 FPS, 720p" is "good enough". So you move to 1080p for less heavy titles and optimize for more eye candy at 720p. The number of effects on screen will be higher on all fronts next gen, so it will still be prettier, I guess.

Movies have stayed at 24 FPS even though we've long been able to reproduce them at much higher frame rates, because Hollywood. I wouldn't be surprised if this is something close to that.

Cheers!
 

8350rocks

Distinguished


Hmm...you are clearly uneducated as to how that all works. AMD CPUs allow GPUs to be loaded to 90%+.

There is a guy on overclock.net running quad-CrossFired HD 7970 GPUs, and while his 8350 is typically at 30-50% load, his GPUs are ALL running at ~95-100%.

He has been able to get FPS numbers into the 400s in FRAPS.

Care to elaborate on how his system is the highest-FPS system on their board, out of all the PCs tested running AMD or Intel CPUs?
 

GOM3RPLY3R

Honorable
Mar 16, 2013
658
0
11,010
I feel like Mantle is going to be one of those things like the PhysX and OpenGL controversy. Mantle will only be used on AMD cards, kind of like PhysX with Nvidia. The fun part will be trying to compare the two. I personally like DirectX because, to me, it looks better and runs a bit smoother. OpenGL does get better frames, sometimes insanely better, but when are you going to need a 50-frame advantage in a game that's already running over 100 FPS?

Really, anyone with $250 will be able to run DirectX in almost every game at medium settings with a more than desirable frame rate. A GTX 760 can run Unigine Valley (which takes a lot of power) on Ultra settings at 1920x1080 with 2x AA at ~50-60 frames average, with pretty consistent frame rates. If that's not good enough, then I don't know what is.
 

GOM3RPLY3R

Honorable
Mar 16, 2013
658
0
11,010


I'm just going to say this: even if a game utilizes 8 cores, the Intel advantage is still represented. If you can force hardware acceleration and make your game run at X FPS with your AMD CPU, then my Intel CPU should be able to do the same, if not better.

Just because a game utilizes more cores doesn't mean AMD > Intel. We all know this.

Hardware doesn't change; it's the software that does. I do give AMD props, because now their full potential will be brought out. However, imagine four Titans or GTX 780s in a setup similar to those 7970s. Using all that power, I wouldn't be surprised if it surpassed 500 FPS.

Food for thought by stating facts. ^_^
 

jdwii

Splendid


I'm not completely sure I understand what you mean. You're saying that if an 8-core AMD processor gets 100 FPS (using all 8 cores heavily) with a 7970, then an Intel i5 (4 cores) should get the same if not more? I'm confused as to how this makes any sense at all.
 

griptwister

Distinguished
Oct 7, 2012
1,437
0
19,460
Oh my goodness, guys!!! The R9 290X + Mantle is RIDICULOUS! Looks like I'll have to save up some bills and wait for it to go on sale like the 7970s did. I'm curious to see what Nvidia will do in this situation.

http://www.overclockarena.com/battlefield-4-demo-was-running-on-a-single-r9-290x-at-amds-event-at-5760x1080/
 

Sigmanick

Honorable
Sep 28, 2013
26
0
10,530
MANTLE: It looks to me like AMD finally has what it has needed for a long time - LEVERAGE.

Intel did the absolute minimum they needed to do to the ICC compiler to avoid further fines or litigation. Now AMD has something to bargain with.

Let's assume Mantle IS able to be used by Intel and Nvidia, and it becomes an adopted standard. We could come to a point where AMD checks for "Genuine AMD" and runs crap code for competitors - unless they play ball. I'm talking about fixing the bleeping ICC and making PhysX AMD-friendly.
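For context, the ICC complaint was about dispatch keyed on the CPUID vendor string instead of on feature flags. A toy illustration of the pattern (the vendor strings are the real CPUID values; everything else is hypothetical):

```python
# Vendor-string dispatch: the code path is chosen by brand, not by what
# the CPU can actually do. This is the pattern the ICC dispute was about.
def choose_codepath(vendor, has_avx):
    if vendor == "GenuineIntel" and has_avx:
        return "avx_optimized"
    return "generic_baseline"  # non-Intel CPUs land here even with AVX

print(choose_codepath("GenuineIntel", True))   # avx_optimized
print(choose_codepath("AuthenticAMD", True))   # generic_baseline
```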
 