AMD CPU speculation... and expert conjecture

Status
Not open for further replies.

vmN

Honorable
Oct 27, 2013
1,666
0
12,160

You must have misunderstood me. I'm not saying the product itself was bad, just that they assigned the entire 9xx0 series to those parts.
They should have been the 8370 and 8390, because that is what you do with higher-binned products.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810



Judging by AMD's roadmaps and Intel's announcements, HEDT is really not an option for them anymore. They've made a commitment to mainstream gaming, and HEDT would be a distraction from that. They've let go of too many employees to fight that battle.

AMD is kind of stuck. They have some great ideas but lack the budget to really make them fly. They need to be bought out or adopt quicker turnaround strategies. The APU, for all its merits, has slowed them down on the execution side.

 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Where did you see numbers for Devil's Canyon? The 100MHz clock bump is for Haswell refresh parts, not Devil's Canyon.

"re-engineering refers to improved thermal interface and CPU packaging materials that are expected to enable significant enhancements to performance and overclocking capabilities"

I don't think anyone would consider 100MHz "significant", but we'll see.
 


And if they had been able to shove out tons of APUs now, or had found and fixed BD's flaws before release, how much more market share and profit could they have had?
 

vmN

Honorable
Oct 27, 2013
1,666
0
12,160
Do you expect a realistic number? Nobody can say what would have happened, as obviously other things would have changed too.

I doubt they can fix their cluster cores without basically starting all over.

If they could improve their APUs' CPU performance, they might actually be competitive in the laptop market.

 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810



They had about 3x as much debt as they have now. They would have gone bankrupt if the fabs hadn't been sold off.

The fault was agreeing to those insane wafer contracts.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780
I don't know what you folks are on about with AMD abandoning HEDT.

Their HEDT parts have always come from a different market segment, retrofitted to work as HEDT. With FX it was server chips converted into HEDT chips; the Stars cores were server chips first as well.

However, everyone is looking at AMD's server market share and how the Bulldozer family has performed there, and concluding that the only logical move is for AMD to give up on it.

But the thing is, people seem to be assuming that AMD no longer retrofitting server chips into HEDT chips means no more HEDT chips at all.

The next big thing, as I have been saying in this thread forever, is a fabric that can get dGPUs and dCPUs working together heterogeneously.

No one has really believed me at all, but guess what? Nvidia has already done it with NVLink! They're doing the exact same thing I predicted would happen.

Yet some of you insisted that APUs were the only solution forever, while I claimed the APU was a stopgap for something bigger, because there currently isn't a good interconnect between dGPU and dCPU.

They are going to come. AMD is going to come up with a really good workstation/professional and HPC solution, and it's going to get retrofitted into HEDT.

AMD is far from done. L3 cache takes up around half of the Vishera die, and it serves very little purpose since it's barely faster than system memory (although the latency is better). The L3$ could easily be replaced with more modules on Excavator or something, while still staying on 32nm, and AMD would have a roughly 300mm^2 die with 8m/16c.

And if AMD threw HDL into the mix it might actually end up smaller.
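That die-area claim can be sanity-checked with a quick sketch. All numbers below are the poster's figures or rough assumptions (the die size is approximate, and the non-L3 half also contains uncore, which this ignores), not measured die data:

```python
# Back-of-envelope check of the "8m/16c in ~300mm^2 without L3" claim.
# The die size and the "L3 is ~half the die" fraction are assumptions.
vishera_die_mm2 = 315.0          # rough Vishera (FX-83xx) die size at 32nm
l3_fraction = 0.5                # claim: L3 occupies about half the die
core_area_mm2 = vishera_die_mm2 * (1 - l3_fraction)  # the non-L3 half

# Treat the non-L3 half as four 2-core modules, then double the count.
module_mm2 = core_area_mm2 / 4
new_die_mm2 = 8 * module_mm2     # 8 modules / 16 cores, no L3
print(f"~{new_die_mm2:.0f} mm^2 for an 8m/16c die without L3")
```

Under these assumptions the arithmetic lands in the same ballpark as the quoted "roughly 300mm^2", before any HDL shrink.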
 

colinp

Honorable
Jun 27, 2012
217
0
10,680


And if my grandmother had wheels, she'd be a wagon.

I'll believe AMD has plans for HEDT when I see it, and I bet the phrase never gets brought up in the boardroom.

Every piece of evidence there is points to AMD pursuing a different market segment now for CPUs/APUs: lower power, small form factor, etc. Rack 'em up and sell 'em cheap is where the money is, not some niche market taking on the 250kg gorilla.

If somehow one of their future designs happens to scale up to high frequencies and core counts, then you may see the FX line being revived as an afterthought to that, not as an afterthought to the server line.
 


You get out of debt by growth, not by cuts. That's why AMD is now in the position of having a desirable product, but not being able to produce enough to make money off it.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


The real problem is not mobo makers. The real problem is CPU makers. Neither Intel nor AMD will support this, which is why Nvidia will continue supporting PCIe to connect to their processors. We will see NVLink-enabled CPUs from IBM and from Nvidia itself (Boulder?).

And about the "up to"... The new interconnect will still be faster than future PCIe 4.0.



But who told you that AMD is abandoning HEDT? If you check the roadmaps, AMD continues selling the 8000/9000 series FX processors and the 12/16-core Warsaw.

What AMD is doing is migrating from CPUs to APUs. High-performance APUs with 8 cores are being designed; they are just not ready yet to replace the aforementioned CPUs.



But this would leave open the door to speculation that an FX 8500 series (FX with Steamroller) and an FX 8700 series (with Excavator) are coming, when they are not.

With the 9000 series labeling, AMD clearly emphasized the end of the traditional FX series: "9000" breaks the naming key of the entire family.

There is speculation that the FX brand will be reused for APU-derived CPUs such as the new FX-670K.



The cross-license with Intel clearly states that AMD loses the x86 license if it is bought by anyone.
 

vmN

Honorable
Oct 27, 2013
1,666
0
12,160


You seem to not understand how markets work. For them to grow, they needed huge investments. Where would those come from? Who wants to invest in a company that, year after year, sinks deeper and deeper into debt? Nobody.

They need to get their finances stable first.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


As was shown in this thread, AMD, Intel, and Nvidia are all migrating away from dCPU + dGPU designs toward ultra-high-performance APUs. Let me recall that the ultra-high-performance APU that Nvidia is designing targets ~40 TFLOPS. The new Titan Black GPU only has ~5 TFLOPS ;-)

Each company (Intel, Nvidia, and AMD) is designing an ultra-high-performance APU that will replace its corresponding dGPU (or Phi, in Intel's case). The reason for abandoning discrete cards was mentioned as well: the ultra-high-performance APU will provide about 10x more performance than the fastest possible dCPU+dGPU design.

Intel is migrating directly from discrete card to APU

[Image: Xeon-Phi-Knights-Landing-GPU-CPU-Form-Factor-635x358.png]


AMD has been planning the same for years

[Image: evolving2.jpg]


Nvidia cannot do it yet, because their own high-performance CPU (the Boulder cores?) is not ready at this time. They only have the Denver cores ready, for Tegra APUs, and those are mobile/tablet APUs.

So Nvidia has to rely on another CPU maker for now. Evidently neither Intel nor AMD will collaborate with Nvidia. However, IBM will, because IBM joined Nvidia to compete against Intel in the HPC market. Integrating a CPU and a GPU on one die is very difficult; just look how many years it took AMD to integrate graphics with CPUs (check the image above), and the integration is still not complete (luckily the Carrizo APU will finally finish it). Moreover, integrating the CPU and GPU on the same die would limit the flexibility of IBM's and Nvidia's partnerships with other parties.

The solution? An intermediate step between the traditional dCPU+dGPU design and the future ultra-high-performance APU that Nvidia is designing. This is what Nvidia presented today: an interconnect that basically means they are moving to a socketed GPU:

GPU on PCIe card (traditional) --> GPU on socket --> GPU on die (ultra-high-performance APU).

In fact, NVLink has been designed under Nvidia's Dally, the same scientist who is leading the Nvidia team designing their ultra-high-performance APU. That APU is so fast that it doesn't use a traditional bus to connect the CPU and GPU cores, but a special network-on-chip that builds a heterogeneous fabric.

The new NVLink shares parts of that ultra-high-performance APU design and, unsurprisingly, allows for building CPU-CPU and CPU-GPU networks:

The links can be ganged together to form a single GPU↔CPU connection or used individually to create a network of GPU↔CPU and GPU↔GPU connections allowing for fast, efficient data sharing between the compute elements.

http://devblogs.nvidia.com/parallelforall/nvlink-pascal-stacked-memory-feeding-appetite-big-data/

Thus Nvidia is migrating from the traditional, slow and power-hungry, CPU-plus-GPU-cards architecture to a network made of CPUs and GPUs, and finally will migrate to a network of APUs.

Everything is going just as planned...
 


No, you need to make money. The idea that you must undergo massive cuts and sacrifice future growth because of short-term financial distress is wrong. AMD's debt was still small in comparison to the total worth of the company, and was thus sustainable for a few years. In some ways, AMD's financial situation is WORSE now, since its debt is a higher percentage of the company's total net worth. Any slip-up now could easily bankrupt the company. The fact that the dollar amount of debt is lower is no benefit if the company's worth also shrinks.

And now we've moved into a debate on Supply-Side versus Keynesian economics.
 

vmN

Honorable
Oct 27, 2013
1,666
0
12,160
Need to make money? How? Make it out of thin air?
To make money you need to spend money.
AMD didn't have much money to spend, so the logical thing was to reduce expenses until they found some investors.

A company needs to undergo several changes when it is deep in debt, and the first is to reduce expenses.

When a company's finances are bad, it needs to cut costs; it cannot just spend even more money while heading downhill.

AMD had a better chance of recovering by going the road they did.

The odds of surviving such debt without reducing expenses are minimal compared to doing so.
 


You make money by being able to sell your products, something that, lacking a decent fab, they are currently unable to do. At the VERY least, they should have maintained majority control of GF.

And yes, if spending more money, even if it puts you in deeper short-term debt, is expected to make you money over the long haul, do it. Increasing short-term debt in exchange for long-term growth is an acceptable economic strategy.
 

vmN

Honorable
Oct 27, 2013
1,666
0
12,160
Are you implying they didn't sell their products before?
By GF I assume you are referring to GloFo.


Spending more money is less secure, MEANING a higher risk of failure. How exactly do you expect them to improve their product in the short term on a VERY limited budget?

It was a no-go; they made the right decision, and hopefully they will grow back into being able to sustain their own fab.

 

colinp

Honorable
Jun 27, 2012
217
0
10,680
AMD were losing $3bn per year in the two years leading up to spinning off GF. Funnily enough, investors don't like that.

If AMD were able to simply sell more CPUs, don't you think they would have tried? Or do you think they were just soft-pedalling because they couldn't be bothered?

When a business is haemorrhaging money, investors expect to see cost cutting on a grand scale. Stabilise, consolidate, play to your strengths, and then look to regrow.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


If I recall, they were about 6 billion in debt AND they still needed to sink another 6 billion or so into updating the fabs. No bank would have funded 12B of debt for a company the size of AMD. The banks may have been "too big to fail", but the government wouldn't bail out AMD like that. Remember, this was 2008, with the crash. Lots of companies simply sank.

 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Considering Nvidia's GPU and mobile roadmaps just completely changed overnight, I wouldn't call that "just as planned".

And that special high-speed link isn't that different from PCIe 4.0. It's 20GT/s instead of 16GT/s.

"The end result is a bus that looks a whole heck of a lot like PCIe, and is even programmed like PCIe, but operates with tighter requirements and a true point-to-point design. NVLink uses differential signaling (like PCIe), with the smallest unit of connectivity being a block. A block contains 8 lanes, each rated for 20Gbps, for a combined bandwidth of 20GB/sec. In terms of transfers per second this puts NVLink at roughly 20 gigatransfers/second, as compared to an already staggering 8GT/sec for PCIe 3.0, indicating at just how high a frequency this bus is planned to run at."
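The arithmetic behind the quoted figures is easy to check: raw link bandwidth is lanes times the per-lane rate, divided by 8 bits per byte. A small sketch (encoding overhead such as PCIe 3.0's 128b/130b is deliberately ignored, so these are raw numbers):

```python
# Raw (pre-encoding) link bandwidth in GB/s: lanes * Gb/s per lane / 8 bits.
def raw_bandwidth_GBps(lanes, gbps_per_lane):
    return lanes * gbps_per_lane / 8.0

nvlink_block = raw_bandwidth_GBps(8, 20)   # one NVLink block: 8 lanes @ 20 Gb/s
pcie3_x16 = raw_bandwidth_GBps(16, 8)      # PCIe 3.0 x16: 8 GT/s per lane
print(nvlink_block, pcie3_x16)             # 20.0 vs 16.0 GB/s raw
```

So a single 8-lane NVLink block roughly matches a full PCIe 3.0 x16 slot, and ganging several blocks is what produces the larger headline numbers.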


You're looking at a fairly high-density mezzanine connector to hook these things together, with 32 differential channels plus all the extra grounds for signal integrity. That's roughly double the connections of a PCIe x16 slot, or ~160 pins. Probably even more, because they'll need extra power. PCIe gets away with fewer pins because of the auxiliary power connectors; those fat 6- and 8-pin Molex connectors can carry more juice.
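The "~160 pins" estimate can be sketched in the same back-of-envelope style. The grounds-per-pair ratio here is an illustrative assumption, not connector-spec data:

```python
# Rough pin-count sketch for a 32-channel differential mezzanine.
# The 2-grounds-per-pair ratio is an assumption for illustration only.
diff_channels = 32
signal_pins = diff_channels * 2    # P and N pins per differential pair
ground_pins = diff_channels * 2    # ~2 grounds per pair for signal integrity
print(signal_pins + ground_pins)   # pins before power/control are added
```

That already gives 128 pins before any power or sideband signals, which is how the estimate climbs toward 160 and beyond.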

[Image: NeoScale_575px.jpg]


You say it's getting away from discrete GPU cards plugged into PCIe slots, when it's more like GPU modules plugged onto even wider/fatter mezzanines.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


The funny part is not that AMD's leadership did the correct thing and that AMD is now alive, out of the red, and with good financial credibility, but that it is Intel which is having serious problems with its foundry business and is anxiously looking for new markets to minimize the huge losses per month. Their need for money is so big that they are opening their foundries to competitors such as ARM.
 


Fine, but then ask this: who else has the capacity to make that many CPUs for Intel at an affordable price? Everyone knew sub-14nm was going to be a problem, so is it shocking that initial yields are very low? Intel will get it sorted out, then turn a profit on the product they ultimately release. Thus, the cost of running their foundry is less than what they make in profits off it.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


April Fools isn't for 5 more days. Surely you can't be serious. Intel is not losing money. They are still making money hand over fist and paying record dividends ($4.5B in 2013). Their margins went down by something like 2.3%.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780

Yeah, the thing is, I'm making the claim that the APU is significantly faster because it doesn't share the big bottleneck a dCPU and dGPU have of going over a very slow bus.

An APU might be much faster because it doesn't have to go over the horrendously slow PCIe 3.0 bus. But the point I'm making is that if PCIe were replaced with something that alleviated that bottleneck, shifting it back to something like system memory, the dGPU and dCPU would be the vastly superior option.

But we need a fabric like that regardless. A single APU might beat a dGPU and dCPU over PCIe, but if you want to use more than one APU, you are still screwed and need that faster link.

I have heard from a random IBM employee that AMD is also working with IBM to make some sort of really fast bus to solve this problem.

But just leaving things at one APU transferring over ancient HT 3.0 or PCIe is not going to work.

Even if AMD ends up going multiple APUs, they will want to make sure that all the APUs can speak to each other with as little bottleneck as possible.

Unless you think we're going to see HPC evolve into one giant APU, AMD and Intel are going to have to come up with something like NVLink.

It might not end up as a dGPU working with a dCPU, but at the very least we will get APU working with APU, along the lines of "I have a system with four 4m/8c Excavator APUs, each with 1024 GCN cores, giving me a 16m/32c, 4096-GCN-core system", where they all work together as one heterogeneous system.

But my point is that if you can get rid of the bottleneck between chips, there's no reason why you can't have a big dCPU with a big dGPU, or any combination of them. In fact, we're sort of seeing it with what Intel is doing as well.

But I think you are misunderstanding me a little, Juan. I don't think this is going to happen right away; I think we will get a generation or two of APU-only setups and then migrate to what I'm talking about. That is why we have no new HEDT platform from AMD right now. They don't have anything special they can do other than MOAR COARS and MOAR JIGGAHURTZ, so there's no point in doing anything other than waiting until they get InfiniBand-over-HyperTransport up and running or something.
 
An APU might be much faster because it doesn't have to go over the horrendously slow PCIe 3.0 bus. But the point I'm making is that if PCIe were replaced with something that alleviated that bottleneck, shifting it back to something like system memory, the dGPU and dCPU would be the vastly superior option.

The PCIe 2.0 bus isn't even being saturated, much less PCIe 3.0. Bandwidth isn't an issue at all right now, at least for consumer computing. The real issue is latency when you're trying to do HSA-style computing on dGPUs, because you can stall your code stream out. A dGPU is an off-board high-powered processor: lots of bandwidth and capability, but the turnaround time is much larger than if the processor were on the same die. On the flip side, it's physically impossible to get a dGPU's level of power on the same die as the CPU. So the dGPU will never become obsolete; it may become less common or a specialized solution, but it will never die off.
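The latency-versus-bandwidth point can be made concrete with a toy cost model: total offload turnaround is a fixed round-trip latency plus payload size divided by bandwidth. All numbers below are illustrative assumptions, not measurements of any real system:

```python
# Toy model of dGPU offload cost: fixed round-trip latency + transfer time.
# Latencies and bandwidths are illustrative assumptions only.
def offload_us(payload_bytes, latency_us, bandwidth_GBps):
    """Microseconds to ship a payload across the link and get control back."""
    return latency_us + payload_bytes / (bandwidth_GBps * 1e3)

payload = 4096  # a small 4 KB argument buffer, typical of fine-grained dispatch
pcie = offload_us(payload, latency_us=10.0, bandwidth_GBps=16.0)  # dGPU over the bus
apu = offload_us(payload, latency_us=0.5, bandwidth_GBps=25.0)    # on-die path
print(pcie, apu)  # the fixed latency dominates; bus bandwidth barely matters
```

For small dispatches the transfer term is a fraction of a microsecond either way; it's the round-trip latency that separates the two cases, which is exactly why more bus bandwidth alone doesn't fix HSA-style workloads.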
 