AMD CPU speculation... and expert conjecture

Oh, don't get me wrong, re-writing OGL from scratch is a good thing. Very good, in fact. But remember, NVIDIA/AMD could be supporting up to five different graphics APIs in their drivers now: DX11, OGL 4.x, DX12, OGL NG, and for AMD, Mantle. Which raises the question: which APIs are going to get shafted due to the difficulty of writing the drivers?

Hence both MSFT and Khronos had better be careful designing their APIs, since the implementation will likely determine their success.

Standards wars are good to have every now and then. They force the entrenched parties to rethink their methods and implementations, and the sudden increase in competition puts pressure on everyone to create the best all-around solution. Remember, the entire reason MS Windows dominates PC gaming is its monopoly on the leading graphics API, DirectX. They haven't had much incentive to innovate or improve, and so stagnation sets in. This upcoming API war will change that; I fully expect a ton of good features and software improvements to result from it.
 

szatkus

Honorable
Jul 9, 2013
382
0
10,780


If it's not a design win, then what is?

http://www.extremetech.com/mobile/187752-new-nexus-9-leak-points-to-64-bit-nvidia-tegra-k1-4gb-of-ram-and-2560x1600-screen
 
AMD to release A68 chipsets in September, sources say
http://www.digitimes.com/news/a20140811PD206.html
AMD A10-7800 APU Processor Review
http://benchmarkreviews.com/18744/amd-a10-7800-apu-review/5/
Sony Partnering Up With Micron to Design Ultra Fast 3D Stacked RAM for PlayStation 5 – Microsoft Might Leave the Gaming Business
http://wccftech.com/sonys-playstation-5-reportedly-feature-blisteringly-fast-3d-stacked-ram/
Microsoft doesn't seem to have any future goals regarding desktops, laptops, and gaming (hardware and software). That's not just based on WCCF's claims; I haven't seen them put any real effort behind DX12, the Xbone, or Windows 8.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


The original project was an x86 CPU, but Intel cancelled the x86 cross-licensing to Nvidia, so Nvidia had to change to the ARM ISA and wait for 64-bit to be ready.

Nvidia could probably also use the POWER ISA if IBM licensed it, and could provide an alternative to AMD Skybridge covering all the licenses Nvidia manages to obtain.
 

8350rocks

Distinguished
@szatkus It may be a design win, but considering the price will be around iPad money, I doubt it moves as many units as other devices. I would rather spend that money on a Galaxy Tab or something else. Google's last few Nexus devices, outside the Asus models, have been plagued with issues, and I think Google takes a more hands-off approach to hardware on those devices. I will not be buying one.
 


The problem with the XBone is MSFT wanted it to be MORE than a gaming console. Problem is, people didn't want it to be anything other than that. Hence the rapid change in direction there.

And the XBox brand isn't going anywhere, as it's profitable thanks to Live.
 

CooLWoLF

Distinguished

Actually, I LOVE the Xbox One's features. Everything is seamless and easy. If you have not spent time using them, find a friend who has one and just play around.

And I also own a PS4, so I'm not some fanboy. The PS4 may be more powerful, but the Xbox One's interface is HANDS DOWN better.
 

jdwii

Splendid
The issue I have owning both consoles is: why bother, when 80% of those games are on the PC anyway? The other 20% are exclusives, and I think Microsoft sucks at those anyway, always focusing more on multiplayer than single player, and we have a LOT of that already on the PC with an actual mouse, since I hate the control stick for aiming. I can live without The Last of Us and some of the other titles Sony has that look great, and Heavy Rain too. The Nintendo thing is kind of killing me; I'll pick one up when it drops to $200, and we all know Nintendo wants this gen to be over.

I read more about the next-gen FX on this site, and it speaks of 10 cores, which isn't so crazy, but it still makes me think they are going from low performance per core like PD to a many-core design once again.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


The one which is easiest to port to other APIs will win. Game budgets are spiralling out of control and developers are looking to cut development costs wherever they can, which is why the Xbone, PS4, and PC are all x86. I can see Mantle staying relevant if AMD can keep it the easiest to develop for while also being the easiest to port to other APIs and platforms. I know AMD itself has said it expects developers to bail on them for DX12, but all it will take is MS going "DX12 is Windows 9 only!" to make them change their tune.

DX seems to be in the most trouble, in my opinion. Microsoft is going to be conflicted between locking people into Windows and Xbox as much as possible and competing with open standards that are aiming for portability. I can easily see MS doing something like offering the best tools to work with and then going "but it only works well with Xbox and Windows!" Mantle's big strength right now is that it's the easiest to port to other platforms and APIs.

If it continues to be so, and MS is stupid/brash enough to do something like make DX12 W9-only, it could happen. Remember, Mantle came from game developers being tired of Microsoft's BS with lock-in and lack of progress. I have no doubt you'll see developers run back to MS with DX12, but I do feel a lot will be apprehensive. Remember, not only did MS ignore pleas for upgrades to DX11, but they also took a huge, steamy dump on all gaming PC users with Windows 8 by forcing touch-screen controls onto them.

If we see SteamOS take off with SteamBoxes, OGL NG has a lot of potential too.

But I do think DX is in a weak state right now. The Xbox also uses DirectX, and it's losing pretty badly right now. Confidence in MS is also eroded.
 

szatkus

Honorable
Jul 9, 2013
382
0
10,780


Intel could easily release a 10-core CPU even at 22nm without a GPU, so it doesn't mean that AMD is going to sacrifice single-thread performance.
 


Mantle is easiest to port... if you happen to limit your platform choices to platforms with an AMD GCN-based GPU and exclude everything else.

Likewise, with the PS4 using mainly libgcm and the XB1 using DX11, you're going to have at least three graphics APIs in use for quite some time, simply because that's what the current-gen consoles were built on.

The way I see it, Mantle is OBE due to being tied to AMD only. DX12 is likely going to hit before OGL NG, so devs are going to build their engines around that. That leaves OGL NG needing to convince devs that it is enough better than DX to be worth adding support for in already existing engines. I figure DX wins by virtue of being the first of the vendor-neutral APIs to get released.
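To put the porting-cost argument in concrete terms, here is a minimal, purely hypothetical sketch (invented names like RenderBackend, create_buffer, and draw; not any real engine or driver API): the engine-facing interface is written once, but every additional graphics API still means another full backend to implement, test, and maintain.

```python
from abc import ABC, abstractmethod

# Hypothetical engine-side abstraction: every graphics API the engine
# supports needs its own backend implementing the same interface.
class RenderBackend(ABC):
    @abstractmethod
    def create_buffer(self, data: bytes) -> int: ...
    @abstractmethod
    def draw(self, buffer_id: int, shader: str) -> None: ...

class DX11Backend(RenderBackend):
    def create_buffer(self, data: bytes) -> int:
        # a real backend would call the D3D11 runtime here
        print(f"DX11: buffer of {len(data)} bytes")
        return 1
    def draw(self, buffer_id: int, shader: str) -> None:
        print(f"DX11: draw buffer {buffer_id} with HLSL shader '{shader}'")

class MantleBackend(RenderBackend):
    def create_buffer(self, data: bytes) -> int:
        # a real backend would build explicit command buffers here
        print(f"Mantle: buffer of {len(data)} bytes")
        return 1
    def draw(self, buffer_id: int, shader: str) -> None:
        print(f"Mantle: draw buffer {buffer_id} with shader '{shader}'")

def render_frame(backend: RenderBackend) -> None:
    # Engine code is written once against the interface...
    buf = backend.create_buffer(b"\x00" * 1024)
    backend.draw(buf, "basic_lit")

# ...but every additional API (DX12, OGL NG, libgcm, ...) still means
# another complete backend class to write, test, and maintain.
for backend in (DX11Backend(), MantleBackend()):
    render_frame(backend)
```

Multiply that maintenance cost across DX11, DX12, OGL 4.x, OGL NG, and Mantle, and the "which API gets shafted" question from earlier in the thread becomes very concrete.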
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
The reason different sites mention different core counts is that AMD still has not decided what counts as a core in the new arch. Thus what one site considers a 20-core design, another considers a 10-core design (e.g., 10 modules with two integer clusters each could be reported either way).

In any case, the info given/leaked suggests that K12 and Zen will be small cores, a kind of Jaguar core on steroids, and very far from the single-thread performance of Broadwell and Skylake.
 

jdwii

Splendid


Not with the 10-core designs; what's more scary is reading about the 16-core designs, since those seem like they're going to have Jaguar-type performance. The 10-core design makes me happier and seems more likely, even more so at 14nm. It's way too early to tell right now anyway; we are 2-3 years before launch.
 

jdwii

Splendid


Juan, why do you think they did this? I thought they learned their lesson with BD.
 

8350rocks

Distinguished


As I said, if OGL NG uses HLSL, then DX dies...
 

8350rocks

Distinguished


Where did you hear the codename Zen?

Also, Zen with something similar to big cat cores on steroids would not be bad at all. Think about it... they are power efficient, with good/better branch prediction, fewer front-end issues with the logic, and IPC that already competes with Intel...

Now, scale it to 8/12 big cores...with some extra ALUs...I would be up for that...
 

8350rocks

Distinguished
http://www.tomsitpro.com/articles/amd-opteron-seattle-arm-soc,1-2105.html

While 4 or 8 low power/lower performance cores at 2GHz may seem counter-intuitive for a server, it does provide a low power overhead solution for applications that need a good amount of memory or disk space but not much compute.

Emphasis mine...

So much for ARM servers ruling the world...

I seem to recall someone calling this out on being ULP and not even remotely close to an x86 competitor in the server market...who was that?

Was there someone who was talking about how these ARM v8 SoCs would blow the doors off x86 parts too? Something about IPC being superior or something...what? Who was that?
 
@palladin
My reason for not sweating over Shield: http://www.youtube.com/watch?v=LhatpAaUBaA

I have it (2nd gen) and it's AMAZING. My S2 became a portable gaming machine.

---

Regarding the nVidia announcement, we'll have to see. They're comparing it to a Haswell Celeron, so there's the most obvious fine print right away. The ARM core won't be an i7 fighter, but I'm guessing they're telling Intel to get ready for a takeover of the low end of the mobile spectrum.

Cheers!
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
Nvidia is comparing to a Haswell Celeron consuming roughly double the power; this important detail cannot be forgotten. Of course, third-party analyses/benchmarks are needed, but the interesting part for me is that their in-order DCO design looks easily scalable. One can dream that their Boulder core for high-performance servers is an ultrawide machine. There is some irony in the fact that Nvidia is matching Haswell using an Itanium-like VLIW architecture.
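For anyone unfamiliar with the VLIW idea being referenced: here is a toy sketch (my own simplification with made-up pseudo-instructions and a greedy scheduler, not Nvidia's actual optimizer) of how a software layer can pack independent instructions into fixed-width bundles so a simple in-order core can issue them in parallel.

```python
# Toy sketch of VLIW-style bundling: a software optimizer groups
# independent instructions into fixed-width issue bundles so a simple
# in-order core can execute them side by side. Purely illustrative.
BUNDLE_WIDTH = 4

# (destination, source operands) pseudo-instructions
program = [
    ("a", []),          # a = load
    ("b", []),          # b = load
    ("c", ["a", "b"]),  # c = a + b
    ("d", ["a"]),       # d = a * 2
    ("e", ["c", "d"]),  # e = c - d
]

def schedule(instrs, width):
    bundles, done, pending = [], set(), list(instrs)
    while pending:
        bundle, leftover = [], []
        for dest, srcs in pending:
            # ready = all sources were produced in an earlier bundle,
            # and there is still an empty slot in this bundle
            if all(s in done for s in srcs) and len(bundle) < width:
                bundle.append(dest)
            else:
                leftover.append((dest, srcs))
        done.update(bundle)
        bundles.append(bundle)
        pending = leftover
    return bundles

for i, b in enumerate(schedule(program, BUNDLE_WIDTH)):
    print(f"cycle {i}: issue {b}")
# prints: cycle 0 issues ['a', 'b'], cycle 1 ['c', 'd'], cycle 2 ['e']
```

As I understand it, Denver does this kind of optimization at runtime in software and caches the results, which is why Juan calls it an in-order DCO design; the Itanium comparison is only loose, since Itanium relied on the compiler doing it statically.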
 


In fairness, I think the claims about IPC and ARMv8 were relating to the planned custom cores (K12); the current solution is more of a stepping stone, IMO, since they need to start somewhere. Also, it *should* offer quite a bit more raw performance than the Jaguar-based Opteron. There is no technical reason, as far as I'm aware, that a K12-based server part *couldn't* be scaled up to big-core levels of performance (the real question is whether it would actually offer any benefit in terms of perf/W at those levels; it will be interesting to find out)...
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


As everyone knows, AMD's Seattle Opteron is aimed at microservers and uses Cortex-A57 cores, which are phone-class cores. The A57 Opteron offers nearly 3x the performance of the Jaguar Opteron at about the same power consumption.

The answer to your question about efficiency is a sound "yes". I already mentioned earlier in this thread that the ARM ISA's efficiency doesn't vanish when scaling up to HPC levels, but now I have some data from server-class custom ARMv8 cores to back up my earlier predictions.

For instance, Applied Micro has announced that its ARMv8 HPC SoC provides 80-90% of the performance of a 100W x86 Xeon but dissipates only 30-40W. Applied Micro is giving a talk at Hot Chips right now.

Therefore ARM small cores can be about 3x more efficient than x86 small cores (Jaguar), whereas ARM big cores can be about 2x more efficient than x86 big cores (Haswell-EP).
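Rough back-of-the-envelope math on the Applied Micro figures quoted above (using midpoints of the claimed ranges; real results will vary by workload):

```python
# Back-of-the-envelope perf/W comparison from the figures quoted above
# (midpoints of the claimed ranges; real results depend on workload).
xeon_perf, xeon_watts = 1.00, 100.0      # baseline: 100W x86 Xeon
arm_perf,  arm_watts  = 0.85, 35.0       # ~80-90% of Xeon perf at 30-40W

xeon_eff = xeon_perf / xeon_watts        # 0.0100 performance per watt
arm_eff  = arm_perf / arm_watts          # ~0.0243 performance per watt

print(f"claimed efficiency advantage: {arm_eff / xeon_eff:.1f}x")  # ~2.4x
```

That lands in the same ballpark as the "about 2x" figure above, slightly higher because the claimed ranges are wide.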

AMD's K12 would offer similar efficiency advantages (recall Keller's talk).
 

8350rocks

Distinguished


You know, uninformed people used to think that exact same thing about both POWER and x86, until those architectures were scaled up to where they are now...

Then they came to the stark realization that the ceiling was just far enough above them that it had not yet come into sight.

The primary difference is that ARM will be trying to climb over their heads in an extremely short time frame, and people who are aware of the uarch's limitations already understand that it scales upward poorly.

The only people in denial of that are the ones who have this contrived notion that ARM will rule the world.
 