AMD's Future Chips & SoCs: News, Info & Rumours.



For now... But it seems ARM is getting stronger all the time. If anyone can license the architecture, then why haven't Intel or AMD done so?
 


Wow! A lot of questions!

Yes, Microsoft switching to ARM is a huge change in Microsoft's agenda. It just doesn't surprise me much, because I have known for a while that they ported Windows Server to ARM and tested it in-house. Recall how many years I have been claiming that servers were going to switch away from x86 to ARMv8.

Yes, it is a threat to both Intel and AMD, and also to other server companies like IBM or Sun/Oracle. Regarding Microsoft, they have confirmed that >50% of their servers will be replaced by ARM servers. That is a huge number of units, and less opportunity for AMD or Intel to sell Naples and Xeons.

Cavium uses 28nm; Qualcomm uses 10nm. Microsoft is porting the OS to those chips and is also purchasing servers for its own datacenters. I don't understand your question about licensing.

Intel and AMD can drop prices, but most costs in a datacenter are operating costs such as the electricity bill, and the ARM chips are more efficient.
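Back-of-the-envelope, the electricity argument is easy to sanity-check. Below is a minimal C++ sketch using my own placeholder numbers (server power draw, PUE, electricity price, CPU price, lifetime), none of which come from this thread; it just shows that a server's lifetime power bill is on the same order as the CPU's purchase price, which is why performance per watt carries so much weight.

```cpp
// Back-of-the-envelope only: every figure here is an assumed placeholder,
// not a number from this thread. It compares a server's lifetime electricity
// bill against the price of the CPU inside it.
#include <cstdio>

int main() {
    const double cpu_price_usd   = 2000.0;  // assumed server CPU list price
    const double server_power_w  = 400.0;   // assumed average whole-server draw
    const double pue             = 1.5;     // assumed power usage effectiveness (cooling etc.)
    const double usd_per_kwh     = 0.10;    // assumed datacenter electricity price
    const double lifetime_years  = 4.0;     // assumed deployment length

    const double hours = lifetime_years * 365.0 * 24.0;
    const double kwh   = server_power_w / 1000.0 * hours * pue;
    const double bill  = kwh * usd_per_kwh;

    std::printf("CPU price            : $%.0f\n", cpu_price_usd);
    std::printf("Electricity (%.0f yrs) : $%.0f\n", lifetime_years, bill);
    return 0;
}
```

With those placeholders, the power bill alone roughly matches the CPU's price, so a chip that does the same work in fewer watts saves real money even if it isn't cheaper to buy.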

Don't sell the bear's skin before you've caught it. Naples may or may not increase AMD's market share in the datacenter. The most optimistic scenarios I saw predicted about 6% market share for Zen, but that was last year, with the Zen hype train at full speed and before all those recent datacenter announcements.

Did you hear the recent rumor about the new X399 platform from AMD? I have a 'theory' about that. Well, it is not really a theory, just a guess I cannot prove: I believe that new platform is in reality a failed SP4 microserver platform rebranded for HEDT.
 


There are ARM cores that rival the best x86 chips on single-thread performance:

http://www.hpcuserforum.com/presentations/santafe2014/Broadcom%20Monday%20night.pdf

And recall Jim Keller said that K12 was faster than Zen.



AMD's Seattle failed because AMD delayed the product until it was DOA: by the time it shipped, better SoCs from both Intel and the ARM camp were already on the market. Also, Seattle used phone cores and targeted microservers, not big servers.
 
@Jauan

I believe Qualcomm and Cavium are licensing the architecture for their ARMv8 chips from ARM Holdings. My question was: what is to stop Intel or AMD doing the same... or would ARM not license it to them?

Also, I notice you said Jim Keller said K12 is faster than Zen. Does that mean AMD already has a viable ARM product? How does it fare in comparison to Qualcomm's and Cavium's products?

Sorry, I haven't heard anything yet regarding X399 details... I will have a look into it though.
 


ARMv8 is an ISA, just as x86 is an ISA. Any company that wants to design its own 64-bit ARM CPU has to license the ISA from ARM Holdings, just as any company that wanted to design its own x86 CPU would have to license the x86 ISA from AMD and Intel, with the difference that neither AMD nor Intel wants to license their ISA because they don't want competition. Remember that for years Nvidia tried to get an x86 license to design its own CPUs.

Jim Keller gave a talk at the AMD Core Innovation Summit in May 2014 where he mentioned the advantages of the ARMv8 ISA over the x86 ISA, and he hinted at how those advantages could be used to make K12 a better core than Zen. But as far as I know, K12 was canceled by AMD and was the reason for Keller's departure. AMD doesn't have any viable ARM product; all the eggs are in the x86 basket with Naples.

It is difficult to predict how Naples will perform compared to the competition, especially when AMD doesn't give us basic details such as clocks. A well-informed source claims a base clock of 1.9GHz and a single-core turbo of 3.0GHz for Naples, but this is still not officially confirmed.

While we wait for details, we can use this trick: check how the Skylake-EP, Centriq, ThunderX2, Vulcan, and X-Gene 3 server designs have already got customers and design wins in the datacenter, whereas Naples hasn't got any yet. That gives us an idea of how competitive it is, doesn't it?

The X399 platform seems confirmed. It is the old SP4 platform for workstations/microservers rebranded for desktop.

The specs for SP4 were: single socket; 150W TDP; dual-die package (MCM2); quad-channel (two channels per die); from 8C/16T up to 16C/32T; 2.4-2.8GHz.

Since each ZP die costs about $500, this 16C CPU would cost about $1000.
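As a quick sanity check, the package-level numbers follow directly from the per-die figures above. A minimal sketch (the 8-cores-per-die and ~$500-per-die values are the rumoured figures quoted in this post, not official AMD numbers):

```cpp
// Back-of-the-envelope totals for the rumoured dual-die (MCM2) SP4/X399 package,
// derived from per-die figures quoted in this thread (rumour, not official specs).
#include <cstdio>

int main() {
    const int    dies_per_package      = 2;      // MCM2
    const int    cores_per_die         = 8;      // full Zeppelin die
    const int    threads_per_core      = 2;      // SMT
    const int    ddr4_channels_per_die = 2;      // "two channels per die"
    const double cost_per_die_usd      = 500.0;  // rumoured ZP die cost

    std::printf("Cores/threads : %dC/%dT\n",
                dies_per_package * cores_per_die,
                dies_per_package * cores_per_die * threads_per_core);
    std::printf("DDR4 channels : %d\n", dies_per_package * ddr4_channels_per_die);
    std::printf("Rough cost    : $%.0f\n", dies_per_package * cost_per_die_usd);
    return 0;
}
```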
 
Leak on the 16C/32T chip... it says the gaming issues have been ironed out?

"The gaming issues that were causing the Ryzen AM4 CPUs to behave erratically to say the least have been ironed out. It's akin to a newer revision on a newer platform."
http://www.overclock.net/t/1625803/anand-forum-amd-launching-x399-hedt-zen-platform-2h-2017

It goes on to say that these chips with the "ironed out" issues are new silicon that will be used on all SKUs, scaling all the way down the product line...

I guess if you already have one you won't be getting this fix... It's good if true, as they are changing it straight away instead of sticking with a flawed design until Zen 2.
 
The source for that has been promising us magic fixes since launch. He first said a BIOS update, then an SMT patch, then a W10 scheduler patch...

That same source is now posting in the AT forums the nonsense that the Ryzen chips on the market are "engineering samples" (sic), that everyone who purchased one is a beta tester, and that final silicon is coming soon.
 
AMD is planning to announce its top-end 16-core Ryzen processor and X399 platform in the third quarter to compete for the gaming market, according to this article:
http://www.digitimes.com/news/a20170419PD207.html

And check this out...
AMD buys wireless VR startup Nitero
https://www.msn.com/en-ie/news/techandscience/amd-buys-wireless-vr-startup-nitero/ar-BBzH0UB
AMD is striving to achieve wireless VR headsets in what seems to be a very smart move. They would be able to provide a full package of CPU, GPU and wireless VR chips designed in parallel to work seamlessly with each other, in the hope of doing away with the big, bulky and troublesome cables on VR headsets.
 
"Inside the next Xbox: Project Scorpio tech revealed"

Sporting an AMD 2.3GHz 8-core Jaguar (4x4) SoC running 31% faster than the Xbox One's... It will also have hardware DX12 logic that sits in front of the GPU and processes everything coming in from the CPU. This is DX12 in hardware... I've never heard of this before; it apparently halves the load on the CPU when it comes to rendering. A DX12 chip?
I presume this is also made by AMD in conjunction with MS (DX12 is supposedly a copy of Vulkan)...
Interesting stuff...
As far as I'm aware, DX12 runs in software on the PC and talks to the GPU from there...
Can anyone shed some light on this? What is going on here with this DX12 chip?
And can it be implemented on the PC?
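For what it's worth, the 31% clock uplift checks out against the Xbox One's publicly reported 1.75GHz Jaguar clock. A quick arithmetic check (the 1.75GHz figure is my assumption from public specs, not from the article):

```cpp
// Quick check of the "31% faster" CPU clock claim. The 1.75GHz Xbox One
// Jaguar clock is an assumption from public specs, not from this post.
#include <cstdio>

int main() {
    const double scorpio_clock_ghz  = 2.3;
    const double xbox_one_clock_ghz = 1.75;  // assumed Xbox One CPU clock
    std::printf("Clock uplift: %.0f%%\n",
                (scorpio_clock_ghz / xbox_one_clock_ghz - 1.0) * 100.0);
    return 0;
}
```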

What this means for PCs... well, it means all games being developed for the Xbox Scorpio will be developed for at least 8 cores and DX12. I can only presume the PS5 will take a similar approach, with more cores as well and maybe Vulkan...
This should get all the devs used to programming for more threads and for DX12 or Vulkan across the board in the not-too-distant future... 😀

Check it out here:
http://www.eurogamer.net/articles/digitalfoundry-2017-project-scorpio-tech-revealed

And here:
https://youtu.be/RE2hNrq1Zxs

Jay
 
A hardware processor built specifically to handle a low-level API? I wonder if they are just adding a co-processor so they don't have to change the underlying architecture. That way the new console performs better, but the exact same code will still execute on the older hardware, just running on the CPU instead.
 
As it turns out, it's a dedicated DirectX 12 "draw call processor".

mitch074 has weighed in on this in the Vega thread with this info...

"What AMD and Microsoft did (AFAIKT) was improve the CPU's instruction decoder so that when a game makes use of DX12, the decoder catches DX12 calls and provides dedicated instructions and/or pipes draw calls directly to the GPU. This, I guess, sidesteps the need to decode the instruction in-CPU then route it to the GPU, instead the instruction decoder sends it to the GPU directly. This can easily save a dozen cycles or more per draw call."
That sounds amazing, excellent stuff... This is a forward-looking move for sure. DX12 is capable of a record number of draw calls compared to any of the previous DirectX APIs, and according to AMD, in 2017 we will see three times as many DX12 games developed and three times as many VR headsets sold.
This certainly sounds like something that would benefit us on our graphics cards in the future.

Edit: More info on this "GPU command processor", which is what MS is calling it:

"What’s more interesting about the Scorpio console is that, according to Microsoft, it’s designed to incorporate basic, oft-used DirectX12 draw calls into the GPU command processor itself, potentially freeing up some processing power for devs."
“It's the first time I'm aware of us ever doing something like this,” Gammill said. “We actually pulled some of the DX12 runtime components directly into the hardware. So basically, these high-frequency DX12 draw calls you'd normally call [to output a frame, for example] which would take up a lot of GPU and CPU cycles, now that that's baked into the system itself, it makes the system significantly more efficient.”
Gammill estimates this can lead to situations where hundreds of specific API calls can be cut down to 10-15, potentially giving developers a bit of extra efficiency to play with.

Significantly more efficient! The full article can be read here:
http://www.gamasutra.com/view/news/295800/Inside_the_next_Xbox_Project_Scorpio_and_its_brandnew_dev_kit.php
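To give a feel for what "hundreds of calls cut down to 10-15" can mean, here is a toy, API-agnostic C++ sketch. It is not the D3D12 API and not a model of Scorpio's command processor; it only illustrates the standard CPU-side trick (batching identical objects into instanced submissions) by which hundreds of per-object calls collapse to about a dozen:

```cpp
// Toy illustration only: hypothetical types, not the D3D12 API and not a model
// of the Scorpio hardware. It shows how grouping identical objects into
// instanced batches collapses hundreds of per-object submissions to a handful.
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

struct Object { int mesh_id; int material_id; };

int main() {
    // 600 objects drawn from 12 unique (mesh, material) combinations.
    std::vector<Object> scene;
    for (int i = 0; i < 600; ++i)
        scene.push_back({i % 4, i % 3});

    // Naive path: one draw submission per object.
    const std::size_t naive_submissions = scene.size();

    // Batched path: one instanced submission per unique (mesh, material) pair.
    std::map<std::pair<int, int>, int> batches;
    for (const Object& o : scene)
        ++batches[{o.mesh_id, o.material_id}];

    std::printf("Per-object submissions : %zu\n", naive_submissions);
    std::printf("Instanced submissions  : %zu\n", batches.size());
    return 0;
}
```

What the article describes is different and lower-level (the DX12 runtime's own overhead moving into the GPU's command processor), but the net effect it claims has the same shape: far fewer expensive round trips per frame.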

I wonder if this is some of AMD's secret sauce... maybe this will be incorporated into the GPU command processor in Vega!

Jay
 
According to this article it's not a separate DX12 chip... Direct3D is incorporated directly into the GPU command processor in AMD's GPU on the Xbox Scorpio...

“We essentially moved Direct3D 12,” says Goossen. “We built that into the command processor of the GPU and what that means is that, for all the high frequency API invocations that the games do, they're all natively implemented in the logic of the command processor – and what this means is that our communication from the game to the GPU is super-efficient.”

If AMD gets to do this with their future GPUs it could give them a serious leg up on Nvidia... And we all know what a "forward-thinking company" AMD is... They may have struck a deal with MS on this; it certainly looks that way from how this article reads...

https://www.extremetech.com/gaming/247266-microsofts-new-project-scorpio-xbox-blow-ps4-water-challenge-high-end-pcs

It's already built into their GPU in the Scorpio; whether they have the sole right to continue doing so remains to be seen... In fact, who's to say this isn't already baked into Vega?

 


I could see this potentially being huge if AMD can incorporate it, and other APIs, into future GPUs. Now that is a move in the right direction!
 
Big time !

Microsoft claims that CPU load is reduced by 50% (it actually doesn't say "up to") and that thousands of instructions are now reduced down to just eleven.

It's already baked into the GPU in the Xbox Scorpio; for all we know it could be baked into Vega. Wishful thinking I know, IMO...

This is huge..
 


This is showing a real incorporation of DirectX 12! The fact that this is happening inside an AMD 480 GPU is also huge. AMD could start doing this with more manufacturers and APIs.
 
AMD talks the future of VR: movement, wireless and better graphics:
http://www.techradar.com/news/amd-talks-the-future-of-vr

This is a great article about an exciting new wireless technology that is sure to bring VR to the next level of immersion.

Roy Taylor agrees. “It’s very exciting, and the wireless technology can be very, very impactful in terms of the future of VR. We put up with wires today because we have to, in order to have the experience. But the wires are an issue in two ways; one is we are always conscious they are there, so some part of our mind is always going ‘don’t trip up,' or ‘don’t pull it out of the wall.' And that is the enemy of presence.”
 
AMD’s Naples Looks to Make a Mark in the Data Center Market:

"Intel’s Skylake-EP Xeon CPU has 32 cores and has countered Naples’s claimed performance advantages. While Skylake has equaled the core count, it lags behind Naples in terms of memory. Skylake has just six DDR4 memory channels per chip compared to Naples’s eight."

http://marketrealist.com/2017/04/amds-naples-looks-to-make-a-mark-in-the-data-center-market/
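The channel-count gap translates directly into peak memory bandwidth. A rough comparison (assuming DDR4-2666 on both platforms; the article only gives channel counts, so the data rate is my assumption):

```cpp
// Rough peak-bandwidth comparison for the channel counts quoted in the article.
// The DDR4-2666 data rate is an assumption; the article only gives channel counts.
#include <cstdio>

int main() {
    const double mt_per_s       = 2666.0;  // assumed DDR4-2666
    const double bytes_per_beat = 8.0;     // 64-bit channel
    const double gb_per_channel = mt_per_s * bytes_per_beat / 1000.0;  // GB/s

    std::printf("Per channel          : %.1f GB/s\n", gb_per_channel);
    std::printf("Naples  (8 channels) : %.0f GB/s per socket\n", 8 * gb_per_channel);
    std::printf("Skylake (6 channels) : %.0f GB/s per socket\n", 6 * gb_per_channel);
    return 0;
}
```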
 
This is a very good video that delves into some of the more "interesting features" in Vega, but it also touches on some features that appear to be designed to work closely with Zen (Ryzen & Naples).

This YouTuber states that where Zen is weak in AVX, Vega is strong... I love that he picked up on this, as it's very important.
And with good reason: you can see the puzzle start to come together here with regard to AMD's server solution, that is.
He goes on to say that this synergy may give AMD the ability to take advantage of a hardware "niche" in HPC...
I also believe that this is the kicker that will hopefully cause us to start hearing of design wins in HPC for Naples & Vega...

Also note at 25:05 how the Infinity Fabric connects the L2 cache on the GPU directly to the CPU and the PCI Express lanes. Interesting...

Vega & Zen together look like quite a team.
https://www.youtube.com/watch?v=m5EFbIhslKU&feature=youtu.be
 


Yes mate, I thought so too... The Infinity Fabric linking Ryzen and the L2 cache on Vega is very interesting for sure. I wonder whether this will be taken advantage of in games, offloading certain workloads to the CPU instead of the GPU. I'm sure it will be taken advantage of in the datacenter...

Also, the FP16 performance of 25 teraflops... that is just insane!
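That figure lines up with the usual shader math if Vega has the rumoured 4096 stream processors, double-rate packed FP16, and a boost clock around 1.5GHz. A quick sketch (shader count and clock are assumptions based on the rumours, not confirmed specs):

```cpp
// Sanity check of the ~25 TFLOPS FP16 figure. Shader count and clock are
// assumptions based on the Vega rumours, not confirmed specifications.
#include <cstdio>

int main() {
    const double stream_processors = 4096.0;  // assumed shader count
    const double clock_ghz         = 1.5;     // assumed boost clock
    const double flops_per_clock   = 2.0;     // fused multiply-add = 2 ops
    const double fp16_packing      = 2.0;     // two FP16 ops per 32-bit lane ("packed math")

    const double tflops_fp32 = stream_processors * clock_ghz * flops_per_clock / 1000.0;
    const double tflops_fp16 = tflops_fp32 * fp16_packing;

    std::printf("FP32: %.1f TFLOPS\n", tflops_fp32);
    std::printf("FP16: %.1f TFLOPS (packed)\n", tflops_fp16);
    return 0;
}
```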
 


I think it would be the opposite: send work to the GPU.
 
Why not just use PCI Express...? It's hard to believe it's faster, as it uses PCI Express lanes anyway... There must be some reason for this. Someone said it's similar to the HSA design in an APU...
 