AMD New Horizon Event - Breaking News



From TSMC's 16nm. Not the current "12nm" that AMD uses, which is an improved 14nm from GlobalFoundries, who got it from Samsung.

It will be interesting to see, but I doubt TSMC's numbers translate directly: going from Samsung/GF 14nm to TSMC 7nm isn't the same as going from TSMC 16nm to TSMC 7nm.
 

GeoffCoope

Honorable
Sep 16, 2013


"The GPU is based on the advanced Vega architecture and is the first PCIe 4.0 GPU on the market. It also is the first to use the Infinity Fabric over the external PCIe bus and the first GPU to have 1TB/s of memory bandwidth. The MI60 offers up to 7.4 TFLOPS of FP64 and 14.7 TFLOPS of FP32."

As they are using Infinity Fabric on the GPU, does that mean we will get multi-GPU setups on a single card (or is that just the external bridge)? Also, those stats are higher than a 2080 Ti's, so it's looking interesting. Has AMD gone for Intel and Nvidia in one almighty punch?
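For what it's worth, those headline figures line up with simple peak-throughput arithmetic. Here's a minimal sanity check, assuming the MI60's published configuration of 4096 stream processors, a roughly 1.8 GHz peak clock, and a 4096-bit HBM2 interface (numbers not taken from this thread):

```python
# Back-of-envelope check of the MI60's headline peak figures.
# Assumed (not from the post): 4096 stream processors, ~1.8 GHz peak
# clock, 4096-bit HBM2 interface at 2 Gbps per pin.
stream_processors = 4096
peak_clock_ghz = 1.8

fp32_tflops = stream_processors * 2 * peak_clock_ghz / 1000  # 2 FLOPs per FMA
fp64_tflops = fp32_tflops / 2                                # Vega 20 runs FP64 at half rate

bus_width_bits = 4096
data_rate_gbps = 2.0                                         # effective, per pin
bandwidth_gb_per_s = bus_width_bits * data_rate_gbps / 8

print(f"FP32: ~{fp32_tflops:.1f} TFLOPS")            # ~14.7
print(f"FP64: ~{fp64_tflops:.1f} TFLOPS")            # ~7.4
print(f"Bandwidth: ~{bandwidth_gb_per_s:.0f} GB/s")  # ~1024 GB/s, i.e. ~1 TB/s
```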
 


This does seem pretty awesome, but I feel as if the perfect dies with all 8 cores per CCX enabled will be used in higher-end products such as Threadripper and Epyc, while mainstream Ryzen (<=8 cores) will continue to use Infinity Fabric to connect imperfect dies for different core configurations.

If I'm wrong, though, it would be great, since IF latency could be eliminated from single-CCX chips.
 

none12345

Distinguished
Apr 27, 2013
Now the real question: is the next desktop chip doing an I/O + compute chiplet design? You could use the same compute chiplet, a smaller I/O chiplet, and an optional GPU chiplet for the APUs.

Three chiplets would cover everything from desktop to mobile to HEDT to server. It also gives them 16 cores on the mainstream desktop if they want.



I'm also wondering what the geometry of the 8-core chiplet is... did they recycle the 2x4 CCX config, or is it something new?
 

InvalidError

Titan
Moderator

One of the main reasons CCXes only had four cores was the distance between cores and caches increasing latency. By making the cores and caches physically less than half as large, AMD should be able to cram twice as many cores around a single shared cache without increasing latency by much, so I'd expect chiplets to have eight cores sharing a single L3 cache.

Also, you probably don't want to layer chiplet fabric on top of chipset fabric on top of motherboard fabric, so I suspect chiplets may be using a lower latency bare-metal interface to let the chipset take care of application-specific fabric magic or possibly bypass it altogether for single-socket CPUs.
 
Nov 6, 2018
Odd that a major tech company representative isn't more careful with their fingerprints. I can see some of Lisa's ridges in that shot. The original might have more resolution. She also has a very faint papercut on her ring finger. You'd think they'd put a little liquid band-aid or makeup on their fingers before holding something up on stage.
 
The chiplet design is interesting, and I suspect it could greatly increase yields by using lots of tiny chips like that. There should be a greater percentage of "perfect" chips per wafer, and if there is a defective part of a chip, it's more likely to be located within an individual core that can be disabled for a "cut-down" processor, rather than in a part of the chip that is shared between cores and needs to function. That could be good for a new process that might initially be prone to more defects per wafer.
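As a rough illustration of that yield argument, here's a minimal sketch using the simple Poisson yield model Y = exp(-A * D0); the die areas and defect density below are hypothetical placeholders, not anything AMD has published:

```python
import math

# Sketch of the yield argument with the simple Poisson model Y = exp(-A * D0).
# Die areas and defect density are hypothetical, not published figures.
DEFECT_DENSITY  = 0.5       # defects per cm^2 (assumed for a young process)
CHIPLET_AREA    = 0.75      # cm^2, i.e. ~75 mm^2 (assumed 8-core chiplet)
MONOLITHIC_AREA = 2.0       # cm^2, i.e. ~200 mm^2 (assumed monolithic die)

def poisson_yield(area_cm2: float, d0: float) -> float:
    """Fraction of dies expected to come out with zero defects."""
    return math.exp(-area_cm2 * d0)

print(f"Small chiplet:  {poisson_yield(CHIPLET_AREA, DEFECT_DENSITY):.0%} defect-free")
print(f"Monolithic die: {poisson_yield(MONOLITHIC_AREA, DEFECT_DENSITY):.0%} defect-free")
```

Even with these made-up numbers, the small die comes out fully intact far more often (roughly 69% vs. 37% here), which is the core of the chiplet yield story.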

Meanwhile, the central IO interconnect is built on 14nm, allowing them to better utilize limited 7nm production resources for the cores themselves, where it likely makes more of a difference.


If I had to guess, the desktop chips would probably be a single piece of silicon. A multi-chip design would likely be more expensive to manufacture, and that could be more of an issue for lower-cost processors. And they're obviously not going to need a large interconnect chip like that, as there should be significantly less IO, as well as less area dedicated to connecting cores to one another.

In the consumer desktop space, having even 8 cores with 16 threads is currently kind of overkill for common applications, and they already have the Threadripper platform to provide greater core counts for those who need them. Maybe they could offer more cores to stand out from Intel, but past a certain point, more cores become of questionable value for most of those who will be using the chips. There's also the likelihood that separating the IO like that will impact performance to some degree, and for the less heavily-threaded applications that consumers are likely to run, that can matter more than core counts most won't even care about. If they can offer similar or better performance in most applications compared to Intel's leading chips at better prices, that could be enough to stand out.
 

InvalidError

Titan
Moderator

While an MCM adds some assembly cost, it may not be that significant compared to the savings from manufacturing the high-performance chiplets on the premium 7nm process and the MCM chipset on much cheaper processes. As such, MCM even at the low end may still make more sense for now.
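To make that cost argument concrete, here's a rough sketch of cost per good die under the same simple Poisson yield assumption; every figure here (wafer prices, die sizes, defect densities, dies per wafer) is a made-up placeholder, not real foundry pricing:

```python
import math

# Rough cost-per-good-die comparison for the MCM-vs-monolithic argument.
# Every number below is a made-up placeholder, not real foundry pricing.
def cost_per_good_die(wafer_cost, die_area_cm2, defect_density, dies_per_wafer):
    yield_frac = math.exp(-die_area_cm2 * defect_density)  # Poisson yield
    return wafer_cost / (dies_per_wafer * yield_frac)

chiplet_7nm = cost_per_good_die(wafer_cost=12000, die_area_cm2=0.75,
                                defect_density=0.5, dies_per_wafer=700)
io_die_14nm = cost_per_good_die(wafer_cost=4000, die_area_cm2=1.25,
                                defect_density=0.2, dies_per_wafer=400)
mono_7nm = cost_per_good_die(wafer_cost=12000, die_area_cm2=2.0,
                             defect_density=0.5, dies_per_wafer=250)

print(f"7nm chiplet + 14nm I/O die: ~${chiplet_7nm + io_die_14nm:.0f} + packaging")
print(f"Monolithic 7nm die:         ~${mono_7nm:.0f}")
```

With these placeholder inputs the chiplet-plus-cheap-I/O-die combination comes out well ahead of the big monolithic 7nm die, even before adding packaging costs back in.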

As someone else pointed out earlier in the thread, there is the possibility of integrating the IGP into the MCM chipset. Possibly even better would be a mainstream MCM chipset with two chiplet ports: one used by a CPU chiplet, the other either left unused for 4-8 core mainstream CPUs, fitted with a second CPU chiplet for 10-16 core high-end mainstream parts, or fitted with a GPU chiplet for APUs. It opens up a broad spectrum of mix-and-match possibilities.

With 7nm being so new and expensive, I can imagine Zen 2 going MCM across the board until costs go down and yields go up, before committing to monolithic implementations for mainstream Zen 2+ or Zen 3 parts. Zen 2 and its chiplets were developed for servers first, so it would make sense for AMD to be looking for opportunities to use working chiplets that didn't make the EPYC or ThreadRipper grade somewhere, much like what it did with Zeppelin dies.

We'll see in a few more months. If it really does turn out to be MCMs for everyone, we can only hope it won't hurt the wallet and performance much.
 


One question I have is about heat and dissipation. We all know that a smaller process can lower power and heat, but a smaller die also presents a smaller surface area to dissipate that heat. With 7nm, and with some parts moving off the die, do you think heat will become more of an issue or stay the same?

I would assume it might run warmer than the previous gen, but then again, without some of the I/O on the CPU, which typically uses more power and creates more heat, it might even out.
 

InvalidError

Titan
Moderator

If you look at CPUs from the Willamette P4 to the present, power density for mainstream desktop chips has remained fairly stable, in the neighborhood of 50 W per 100 mm² of die area. I'd expect 7nm to continue in the same ballpark and not pose any more or less of a cooling challenge than previous chips.
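As a quick illustration of that ballpark, using approximate public TDP and die-area figures for AMD's roughly 213 mm² Zeppelin die:

```python
# Quick check of the ~50 W per 100 mm^2 ballpark using approximate public
# TDP and die-area figures for AMD's ~213 mm^2 Zeppelin die.
parts = {
    "Ryzen 7 1800X": (95, 213),    # (TDP in W, die area in mm^2)
    "Ryzen 7 2700X": (105, 213),
}

for name, (tdp_w, area_mm2) in parts.items():
    print(f"{name}: ~{tdp_w / area_mm2 * 100:.0f} W per 100 mm^2")
```

Both land in the mid-to-high 40s of watts per 100 mm², right in that neighborhood.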

In contemporary ASIC design, power is part of the design process, not something that is determined after-the-fact.
 


So you think power density would stay the same even with all the different factors in play? I guess we will see when it comes out.
 

paul prochnow

Distinguished
Jun 4, 2015
Where is the full chart of AM4 CPU release dates? Now I think that was a short-termer... a dead-end setup.
Two years seems really short.

I bought AM4 assuming a regular lifespan for that Ryzen.
Back to the future for a gaming chip... aka Intel.
 

hannibal

Distinguished


Well, the horizon is always far away ;)
Good to hear more good news in any case. Tighter competition can only be a good thing for customers!

 

hannibal

Distinguished


AMD will use AM4 until at least 2020... so it has a lifespan of 4-5 years. Not bad at all!
After that, hopefully a new platform for DDR5...
 


The news reporting was on point.
However, you'd think that a large publication like Tom's would get event names right. Thankfully they fixed it in the article, although the forum post still calls it New Horizon.
 

InvalidError

Titan
Moderator

Between PCIe 4.0/5.0, USB 3.2, DDR5, Intel making Thunderbolt royalty-free, etc., all arriving since AM4 launched, I'd be surprised if we don't get AM5 in 2H2020 for early adopters, with the last generation of new AM4 products earlier in the year.
 


I did not know they did that. That's fantastic and should make it easier to adopt. The TB standard is better than USB speed-wise, even 3.2 it seems.
 

InvalidError

Titan
Moderator

Thunderbolt isn't only about speed; its other major benefit is that, since it is fundamentally external PCIe with optional DisplayPort stream multiplexing, it eliminates the whole USB protocol stack. This means lower CPU utilization, lower overheads, direct DMA transfers between devices instead of strictly host-target with USB, etc.

For the regular end-user, it doesn't make much of a difference. Now that a large chunk of the cost overhead is gone, though, it is that much closer to being viable in the mainstream. With both Thunderbolt and USB 3(.2) using Type-C connectors, things could get confusing and possibly frustrating.