AMD's Mysterious Fenghuang SoC Spotted in Chinese Gaming Console

The Fenghuang SoC and Kaby-G are both early evolutions of the Exascale Heterogeneous Processor (EHP) described within AMD's Exascale Node Architecture (ENA).

AMD goes into great detail about the Exascale APU in a white paper titled "Design and Analysis of an APU for Exascale Computing".

It is a PDF available here:
http://www.computermachines.org/joe/publications/pdfs/hpca2017_exascale_apu.pdf

Of course, in the paper AMD is talking about 32-core CPUs paired with many GPU dies and stacked HBM memory.

Interesting times are coming.
 
AMD, you traitors. You gave us the AM4 socket, which does not support this technology, and now you hand that technology to China? Why not make the whole AM4 platform accept such an APU?
 


You are not getting it. This is a PC and it runs Windows. AMD already built the same technology into the Xbox One X and PS4 Pro.


I was waiting and waiting and waiting for them to release the same hardware for a Windows PC, and it never happened, and now they just hand it to China. This is TREASON.

I demand one, and if they release one with 16GB of GDDR5, all the better, because 8GB for a PC is not enough today. That is the system memory as well, not only the GPU memory.

 



DUDE!

Fenghuang is an SoC!! OKAY? GET IT? It is board-mounted, not socketed. It is also semi-custom, just like the 8-core Jaguar SoC was semi-custom for Microsoft and Sony.

AMD still makes AM4 APUs. However, you have to be realistic here: the GPU and memory bandwidth needed likely exceeds what the AM4 pin-out can deliver now.

When AMD gets around to releasing HPC APUs, it may be on AM4. But I seriously doubt it.



 


I would suggest that you pay AMD $250 million or so and they can fabricate a semi-custom SoC just for you.

Of course, you might also have to buy another 50 million units or so.

 



AMD is already taping out the latest SoC for the Sony and Microsoft consoles.

If you are a student of history, here is a link to a Forbes piece that outlines just WHY AMD won both the Microsoft and Sony consoles. AMD will likely continue to provide Radeon silicon going forward.
 


They can make a socket that takes an SoC as well. Not a big deal.
 

Whether or not it's socketed doesn't necessarily have anything to do with it being an SoC. I mean, even desktop Ryzen chips are sometimes referred to as SoCs because they have stuff like SATA and USB controllers included on the chip.

Also, if we're talking about integrated graphics with VRAM included in the package, what does the external bandwidth (i.e. through the pins to the socket) matter?
 


True enough, although if they were going to do something like this, I would think they might put together a package with an ITX-sized board, or one of the STX-sized boards, for a NUC-like SFF computer. They already make their own reference-design graphics cards; an ITX board with a soldered-on SoC would be easy enough.

 

This is not comparable to anything on AM4. It probably has little or no external PCIe connectivity and maybe only a couple of SATA ports.

Most importantly, it will require quad-channel soldered GDDR5. This wouldn't work as a socketed processor.

The most interesting scenario might be to scale down the clocks and put it in a laptop. It remains to be seen how much GDDR5 will impact CPU performance, assuming it's shared.

P.S. Did you have the same reaction to the various iterations of the PS4 and Xbox One? Because those are the same sort of thing.
 

No... can't be that much. If we're talking up-front costs, I think you're off by an order of magnitude.
 

But it's actually not, so that's the key deal-breaker.

I'm waiting for an APU that uses HBM2, but this isn't it (and I don't consider Kaby-G a proper APU... more of a Frankenstein).
 

Well, since this seems to be a custom SoC, they probably worked with the customer to design the board. Reference boards would only be designed for parts sold on the open market.


In fact, they already sell embedded SoCs and have reference designs for those.
 


Actually, it can work. They are AMD, not a small company; they CAN introduce quad-channel GDDR5 DIMMs and sell them at 4/8GB per stick. AMD has their own memory brand, if you remember.

They can surprise us and make it happen, but they never did.

Actually, I think we should move from DDR to GDDR for the whole system, not only the GPU. This would make onboard graphics a lot faster, and maybe they could introduce a new GPU slot with enough connections to share the onboard GDDR without needing to put GDDR5 on the card itself.

And lastly, they could make the CPU socket pins compatible with both CPUs and APUs (GDDR5 APUs), then provide 4 DIMM slots for GDDR5 and 2/4 DIMM slots for DDR4... and you choose what you want.

 
It isn't a matter of working. It's a matter of making sense, both fiscally and operationally.

Nobody is going to make a motherboard as expensive as you describe just so end users still end up with an experience that is worse than what they would get from a CPU and a dedicated GPU.

GDDR memory and DDR memory have both been around for quite some time. There is a reason they are still applied in different areas. DDR memory has better latency characteristics which CPUs need and GDDR has better throughput characteristics which massively parallel GPU work needs. A CPU would not benefit from the increased throughput of GDDR memory but it would certainly suffer from the increased latency.

There is a limit to the amount of graphics horsepower you're going to see integrated with a CPU, whether it's on die, package, interposer, or EMIB. The extra expense of the motherboard is likely to price the solution out of its target market. The most likely solution, as has already been suggested, would be some sort of HBM used only for the graphics, but currently the cost of an HBM solution is prohibitive. Pairing an expensive, high-end memory solution with a low-end or even mid-range graphics product makes no sense. Nobody is going to pay that much for that little performance.

China, on the other hand, may have enough of a user base that doesn't care if their CPU performance is lackluster, because frankly, that's a whole lot better of a solution than nothing at all.
 

Doesn't matter whether the article is right. I've been saying that APU VRAM ending up on-package is the logical conclusion of CPU-GPU integration. It is only a matter of time. 2GB of HBM2 + Vega 24 + a hex-core Zen 2 would be a pretty neat mid-range combo for 2019.
 

Quad-channel is a deal-breaker for AM4. They only support quad-channel on Threadripper, for a good reason: cost. Quad-channel motherboards require more traces and larger sockets, making them more expensive.

Secondly, you have no idea whether it's physically possible to put GDDR5 on a DIMM and still satisfy the electrical requirements of the standard. I suspect not, or it would probably be out there.


If this were possible and really such a good idea, then the industry would've gone from DDR3 to GDDR5, instead of creating DDR4 (or DDR4 would at least perform much closer to GDDR5).

Aside from the likely electrical issues of putting GDDR5 on DIMMs, some things I've read suggest it would have a negative impact on CPU performance, probably due to latency. I haven't found a good source on this, so consider it an open question.


There's no free lunch. You're talking about a much bigger socket, which costs money. They have that in TR4, though it's still DDR4.

Look, if you want a PC that's based on this thing, the Chinese company behind it is supposedly building one. There should be benchmarks of it, after launch, so you can actually see whether it's as good as you think it'd be. I'd follow sites that more closely track PC news from China (like videocardz, I think). After that, if you still want one, you can probably find grey-market imports, if you're resourceful.

A note of caution: AMD has had experience with big APUs and GDDR5 since the original PS4, and now the Xbox One X. If it were really such a good idea, I'd expect them to have built a laptop CPU like this already. The fact that they haven't might tell you something.
 

That could work, if it then adds DDR4 for non-graphics use. Yeah, sounds like it wouldn't be cheap.

I don't see the benefit of 2 GB HBM2 over 8 GB of GDDR5. HBM2 speeds tend to scale with capacity. At 2 GB, you probably get a measly 120 GB/sec, whereas 256-bit GDDR5 is good for more than double that.
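Rough back-of-the-envelope numbers (my assumptions, not from the article): a slow, low-height HBM2 stack running around 1 Gbps per pin gives roughly 1024 bits × 1 Gbps ÷ 8 ≈ 128 GB/s, while 256-bit GDDR5 at 8 Gbps works out to 256 bits × 8 Gbps ÷ 8 = 256 GB/s.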

I was hoping MS or Sony would be first with a HBM2-enabled APU.
 

...except you said it was a step closer, when (at least the GDDR5 version mentioned) it is no closer than the PS4 or Xbox One X.
 
My suspicion, ever since AMD started referring to the HBM on their Vega GPUs as a High Bandwidth Cache, has been that there is a plan to use main system memory to supplement smaller amounts of HBM. No need to replace DDR4 as system memory.

Yes, the throughput scales with capacity, but a single stack of HBM2 should be able to manage 256 GB/s bandwidth, more than enough to be competitive with GDDR5.
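For what it's worth, that figure is just the full-speed stack spec (my arithmetic, assuming a top-bin stack): 1024 bits × 2 Gbps per pin ÷ 8 bits per byte = 256 GB/s.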

I wouldn't mind if AMD dropped such a thing on AM4 first.

 