Does AMD have any future?

Your question was REVENUE, I answered in REVENUE. WTF! :pt1cable:

Console sales do not peak in the first or even second year. They ramp up, level off, then taper down.

[Image: total_tv_sales.png]


 


ARM sells licenses, not chips. AMD sells chips, not licenses. $300M can be good for one company and bad for another. AMD is closing offices, firing engineers, and canceling products.

Add to that the fact that AMD has three big problems: billions in debt, the WSA, and lack of a market.

Yes, if the debt didn't exist, if the WSA didn't exist, if AMD had lots of competitive chips, if Intel, Nvidia, Qualcomm, Broadcom, Cavium,... didn't exist, and if the people working at AMD worked for free, then AMD would have a bright future. But reality is otherwise.
 


Except it is more like something that consumes X and performs as Y while costing Z, instead of something that consumes 0.8X and performs as 1.2Y but costs 1.6Z.
 


I don't know, Juan... I'm sure ARM has to develop at least prototypes to show, so they might not be taping out actual SoCs, but they do get their hands dirty with manufacturing, just not for volume. Now, my post was not aimed at painting an "if pigs fly" picture, but a more realistic one. AMD is going to get rid of their debt at some point, and when they do, the profits will show up, I'm sure (green numbers). Firing people is a bad sign, I won't deny that, but it's far from calling a company "doomed".

In regards to the R&D budgets, you need to think about the products and markets they're aiming at and whether the budget makes sense for those. Like I said, Intel and nVidia are chasing MORE markets than AMD is, and their R&D budgets fully reflect that. So that makes your "AMD is doomed because it spends less on R&D for what it does!" wrong; the simple reason being, Intel has fabs and a *very* uphill battle against ARM (at least for now) in mobile, and nVidia started developing SoCs from scratch and is trying to get into mobile, from phones to tablets and cars, just like Intel. Hell, how could you justify the business case for "Shield"?

Anyway, I'd love to see more greens assigned to R&D on AMD's part, but my ARM comparison was basically stating: if ARM can make do with a tiny R&D budget and dominate the low power market, why can't AMD? I mean, designing a CPU does not take USD$1B, right?

In regards to consoles, you're wrong that they peak in the first year. Consoles move according to the games they get, and the best titles tend to come out between the second and fifth year.

Cheers!
 


AMD's vice president claims otherwise about future sales of the PS4/Xbone.

Your graph above is a bit misleading. E.g., the PS3 was released on 11 November 2006 in Japan, and later in the rest of the world, yet the graph starts in 2005. However, it is true that PS3 sales did peak later than the second year.

A similar criticism applies to the Wii data. The console was released on 19 November 2006, but your graph of sales starts at 2005. The correct graph, showing the early peak in sales and the subsequent decline, is below:
[Image: wii_sales.png]

Your graph also lacks other consoles. E.g., the GameCube was released worldwide in 2002 and also shows the early peak in sales and the subsequent decline.
[Image: gamecube_sales.png]

I agree with AMD's Kumar that sales of the PS4/Xbone will peak this year and decline in the following years.

EDIT: I was just curious about how they are selling:
[Image: Screenshot-2014-04-18-05.55.09.png]

If the data is accurate, it looks as if the Xbox One has already peaked and started to decline. Let us wait for more data.
 


ARM tapes out samples to check that everything is OK and the chip works as planned. Besides that, AMD also needs to fabricate chips in big volumes, package them, and distribute them worldwide. Just check AMD's current inventory/channel problem. That problem cost them about $100M, which is one-third of your illustrative $300M. ARM doesn't have that kind of problem/cost.

Taking AMD's own numbers as realistic, they cannot pay off the debt even if the debt is frozen at the current level and all the expected profit for 2015--2018 is spent on paying it. If you know what the miracle is, please share it with us. And that is ignoring the WSA, which is another big problem on AMD's side.

I already considered the R&D numbers in context. That is why I have said that AMD will no longer be competitive in the PC market, has zero chance in mobile, and will not make the comeback in servers/HPC that some are expecting. AMD could be competitive in some semicustom/embedded markets.

The cost of designing a CPU depends: it even depends on the ISA used. If the data I have is right, Zen/K12/Skybridge is costing about $500M in R&D. That is barely sustainable now, thanks to closing offices, firing people, and canceling projects. Post-Zen products will be more complex and will require more R&D, whereas AMD's revenue will be smaller. The math is easy to do.
 


That's a fairer assessment of AMD, and let me pick out the bolded part:

They've been saying for a while now that they're not pursuing Intel in performance anymore (with all the anger and sorrow that gives us) and they won't dive into mobile / ULP markets, although they wanted to. They just want to streamline their development processes and focus on custom and semicustom markets for the time being, which they have been doing. Plus the GPUs. They're reducing their targets by a lot by not facing Intel directly anymore, and focusing on semicustom will start bringing money in sooner rather than later (I hope).

I'm pretty sure they're very aware of the amount of money they're not getting from weak PC sales (their sales, not the market itself) and piss-poor server market share, but they are at the mercy of GF, TSMC and now Samsung (I think?) for manufacturing. We can go back and bitch all we want about selling their foundries, but what's done is done and they have to make do, adjusting their expectations to what they *can* do: design and build semicustom stuff with their LEGO approach. Plus GPU stuff.

I really wonder if, like some have pointed out, AMD would become irrelevant in the PC business and become another VIA. If the semi custom business brings more greens, they might make a comeback; at least I hope they do, haha.

Cheers!
 
Yuka, this is an old post of mine from the first page of this thread:


 
The difference is I don't see AMD exiting either the server market or the PC business for good. Since they're the *only* company, besides VIA, that can make CPUs that actually compete with Intel at being decent, they're in a unique position to make big profits if they get their act together. Just like Gamerk points out time after time, re-writing software is *very* expensive for companies, so even if ARM gets on par in performance and efficiency with Intel in whatever market, you still *need* to make a business case for general server software and middleware to support ARM. That's no easy feat. Do you think Oracle or IBM will want to do that? Well, food for thought.

Also, you see AMD as abandoning ship, but I see them trying to get their act together, desperately, grabbing hold of the first thing in their grasp to make some cash: semicustom. Think about it from a strategic point of view: what other company, in the entire world, can get an x86 license (and by extension, x86-64) to even *try* to compete against Intel? Just name one...

I think that's what the Chinese company saw in AMD: a big ship with a decent captain again, waiting for fuel to make it go again.

Out of all of the companies out there, I really believe AMD has been in a very good position ever since they bought ATI to differentiate themselves from Intel and any other ARM licensee. The price was dumb, but the idea was good. Execution has been crappy as hell. Time to clean house and start rising to the occasion.

Cheers!
 
All AMD base are belong to Intel? (I had to)

If I were in AMD's position (a disturbing thought), I guess I would focus on driving HSA development forward, ensure that the development tools for it get fleshed out, maybe work with some developers of high-profile software to get them to support it (like with Mantle), redirect focus to GPU R&D (where the competition is closer), keep the focus on APUs and the budget segment, but steadily try to make higher-and-higher-end APUs (if we could get to 45-50FPS 1080p med settings gaming on the latest titles with a $160 APU, it would sell wildly). Oh, also: give up on moar core CPUs. If it can be done in parallel, it can probably be better-done in parallel via HSA on the iGPU, and I have a hard time imagining task-parallel loads that use more than 4 cores. Higher per-core performance is crucial, since not everything can be parallelized (anything involving time, for example, is inherently serial... as potentially mind-blowing as a game with non-serial time could be, I don't see it happening).

I'm sure there are plenty of technical issues with my hypothetical "what I would do" situation (how do we fix the RAM bottleneck the iGPU suffers from? Beats me, I'm the hypothetical head, not the hypothetical engineer), but I am not sufficiently technically intelligent to answer them.

Any guesses as to how they'll try fixing the RAM-bandwidth issue? I've heard something about color compression with Tonga, but I'm not very familiar with exactly what that is and how it works... while DDR4 (afaik) focuses so far on same-performance-lower-power, will we see noticeable performance improvements to APUs as it matures and reaches higher clocks?

About what level of GPU-horsepower does it take (in GCN-architecture) to drive, say, battlefield 4 at 50FPS 1080p (that is, what GPU in the radeon R-series is capable of that (assume un-bottlenecked))? How much smaller would the transistors need to be to fit that many shaders on an APU with 4 cores? What's the schedule like for availability of transistors that small? What's the max read rate of the fastest DDR4 around these days? How does this compare to the bandwidth needed to feed (that is, not-bottleneck) *number of GCN cores needed to reach specified metric*? Out of curiosity, how many FPUs are there in a GCN core, so as to better calculate the theoretical max GFLOPS throughput? Will I ever learn to google?

Questions that need answering! For me, anyway.
 

I agree with what you say, but the actual takeover/merger with ATI was a disaster from what I've read. They also could've bought Nvidia, and at the time AMD and Nvidia had a close relationship, with most AMD systems using Nvidia GPUs/chipsets. The reason they didn't take Nvidia? Jen-Hsun Huang wanted to be CEO of the new company as part of the agreement. There is no facepalm big enough for this mistake, imo.
 
Old article but relevant to the topic and a great read I think:
http://arstechnica.com/business/2013/04/the-rise-and-fall-of-amd-how-an-underdog-stuck-it-to-intel/
 



AMD has made it clear that K12 is aimed at "dense servers". No comment about any other kind of server. No comments about Zen servers. For the technical/economic reasons that I mentioned before, the chances that AMD releases an x86 CPU that competes with Intel in performance and efficiency are zero. Precisely that is the reason why K12 is at the center of AMD's strategy to recover some market share in servers. But even taking the optimistic predictions of the company as gospel, they themselves expect to be a minor player in the field: <25%.

Similar thoughts about PC business.

It is not true that AMD can compete with Intel on CPUs. Statistics and financial data say otherwise. The CPU division has been losing money for years and is not sustainable.

I have given plenty of links showing lots of companies selling ARM hardware. There are lots of companies (dozens) because the demand is high. The software is here. Even Microsoft promised an ARM port of Windows Server.

I think the Chinese company saw in AMD a company in trouble that couldn't say "no", because it needs the money.

I only see good wishes in your posts, but not a single piece of data supporting them. And I see again that you ignore the problems I mentioned in former posts, such as the WSA.

I agree with gamerk that AMD is transforming itself into another VIA.






The problem with HSA is that it tries to solve problems that can be solved more easily using other approaches. In fact it looks as if HSA is reinventing the wheel and making it square. AMD doesn't have the money to incentivize developers to use HSA, and even if it did, developers would stop supporting it when the money is gone. Check Mantle: a good initial start, several titles supporting Mantle, and then, when the money ran out, Mantle remained in a perennial beta stage and we hear nothing more about the new Mantle games that Huddy promised us last year. It looks as if Mantle was suddenly cancelled/frozen.

The dGPU is a dead end with no future. That is precisely why Nvidia is preparing to move away (and why they are so serious about SoCs now). iGPUs have a future, but this requires a good CPU alongside them, as shown by the fact that Intel sells more than AMD. There is no doubt that Intel will catch AMD on graphics, so APUs are another field where AMD cannot compete.

DDR4 doesn't solve the RAM-bandwidth issue of APUs. The RAM-bandwidth issue will be fixed with stacked RAM: AMD will use HBM and Intel will use HMC. This provides plenty of bandwidth, from 500 GB/s to several TB/s.

Color compression just sends part of the data in compressed form, saving space. If 1 KB is compressed to 800 bytes, you can send more information over the same bandwidth, which has the same effect as sending the original uncompressed 1 KB through a channel with 28% more bandwidth.
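To put rough numbers on that, here is a quick back-of-the-envelope sketch; it only restates the illustrative 1 KB -> 800 bytes figures above, not a real Tonga compression ratio:

```python
# Effective-bandwidth gain from framebuffer color compression, using the
# illustrative figures from the post (not real Tonga numbers).
original_bytes = 1024    # the uncompressed 1 KB block
compressed_bytes = 800   # the same block after lossless compression

# Moving 800 bytes instead of 1024 over the same link is equivalent to a link
# that is original/compressed times wider for uncompressed data.
effective_gain = original_bytes / compressed_bytes
print(f"{effective_gain:.2f}x effective bandwidth "
      f"(~{(effective_gain - 1) * 100:.0f}% more)")
# -> 1.28x effective bandwidth (~28% more)
```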

To your last question: each GCN core has one FPU, and that FPU can issue a fused multiply-add each clock, which counts as 2 FLOPs. Thus: GCN core <--> 2 FLOPs per clock (single precision).

Thus a card with 2048 GCN cores @ 1 GHz has a maximum throughput of 4096 GFLOPS.
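As a sanity check of that arithmetic, here is a minimal sketch; the peak_gflops helper is hypothetical and just restates the 2 FLOPs per clock per shader figure above:

```python
def peak_gflops(shaders: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    """Theoretical peak throughput = shaders * FLOPs per clock * clock in GHz."""
    return shaders * flops_per_clock * clock_ghz

# The example from the post: 2048 GCN cores at 1 GHz, 2 FLOPs/clock each.
print(peak_gflops(2048, 1.0))  # -> 4096.0 GFLOPS
```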
 


And the second part

http://arstechnica.com/business/2013/04/amd-on-ropes-from-the-top-of-the-mountain-to-the-deepest-valleys/
 


The phrase "reinventing the wheel" typically implies that it has already been done, yet I don't see any similar efforts that let any processor (CPU, GPU, etc) schedule work for any other processor, share memory, develop software that is ISA-agnostic (read: portable) and can compile from high-level-languages. Is there something I am missing?

The last time I saw you calling it "re-inventing the wheel and doing it square", if I remember correctly, you were talking about how it would be better to use a "single-ISA" approach, which sounds like exactly the opposite of what the HSA Foundation is trying to achieve.

Not only does a single-ISA approach mean that you leave out anyone who makes chips that don't fit that ISA, but it also defeats the purpose of the project as a whole: picking the right tool for the right job. If you require all processors in the system to use the same ISA, you're effectively taking a hammer and a saw and requiring that the hammer be able to finely separate boards and that the saw be able to pound nails into a material. It defeats the purpose of using two separate tools, and makes each one less efficient at what it's intended to be used for.

 
Juan (or anyone else really), just to ask: wouldn't HBM cause heat issues? It seems like the heat rising from the CPU and GPU will affect the already hot HBM memory, or am I mistaken? It would seem like the APU would always be throttling for this or that.
 


Heat is always a concern, but no more so than before. HBM sits on the package, so you don't need as high a voltage (1.2V) to drive stable signals compared to GDDR5 (1.5V). It also uses a wide (1024-bit) but slower (1 Gbps) interface, similar to how GPUs run a lot more "cores" but at lower speeds than a CPU. Combined, they provide 4x the bandwidth of GDDR5 with 40% less power.

[Image: Memory-Bandwidth-640x319.jpg]


http://www.extremetech.com/computing/197720-beyond-ddr4-understand-the-differences-between-wide-io-hbm-and-hybrid-memory-cube/2
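For a rough sense of the "wide but slow" trade-off described above, here is a small sketch comparing one first-generation HBM stack (1024-bit at 1 Gbps, as stated above) with a single GDDR5 chip; the 32-bit, 7 Gbps GDDR5 figures are an assumption on my part, roughly in line with the linked article:

```python
def bandwidth_gb_s(bus_width_bits: int, per_pin_gbps: float) -> float:
    """GB/s = bus width (bits) * per-pin data rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * per_pin_gbps / 8

hbm_stack  = bandwidth_gb_s(1024, 1.0)  # wide and slow: one gen-1 HBM stack
gddr5_chip = bandwidth_gb_s(32, 7.0)    # narrow and fast: one 7 Gbps GDDR5 chip

print(f"HBM stack:  {hbm_stack:.0f} GB/s")   # -> 128 GB/s
print(f"GDDR5 chip: {gddr5_chip:.0f} GB/s")  # -> 28 GB/s
```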

 


I used the term because HSA tries to imitate existing approaches without providing any clear advantage, or at least not one that justifies the effort. For instance, the HSA spec demands that both LCUs and TCUs run a common HSAIL ISA, because it tries to imitate the single-ISA approach where LCUs and TCUs run the same native ISA (aka the "reinvents the wheel" part). The problem is that HSAIL is not native but an intermediate layer between the OS/application and the native ISA of the hardware. This intermediate layer adds complexity and performance overhead (aka the "doing it square" part).

"Schedule work for any other processor" is another example. On single ISA approaches both LCUs and TCUs are first kind processors and each one can query any other. HSA tries to imitate this with H-Query. The difference is that native single-ISA approaches can run any task on any processor, the scheduler can even migrate whole tasks from LCUs to TCUs and viceversa in function of different parameters such as performance and power consumption. Under HSA each component only can run a subset of the task.

Single-ISA approaches not only have shared main memory, but can even access the same LLC. HSA tries to imitate the main-memory part with hUMA, but not the cache part.

HSA software is only partially portable. The parallel part runs on the HSAIL ISA and is portable to any other hardware, but the serial part runs on the native ISA of the CPU. This is why the HSA spec states that an HSA-compatible CPU has to run both HSAIL and the native ISA. An HSA application built for x86 hardware will not work on ARM hardware unless you port the x86 part to ARM. Moreover, single-ISA approaches are compatible with portable software à la Java: instead of executing the software directly on the native ISA, you can compile to some intermediate ISA and then run it on any hardware using an interpreter on the fly, à la Java.
 
Instead of executing the software directly on the native ISA, you can compile to some intermediate ISA and then run it on any hardware using an interpreter on the fly, à la Java.

Which costs a good amount of performance over something lower level.

Again: parallel tasks should go to the parallel processor, serial tasks should go to the serial one. Trying to mix and match is going to leave you with a power-hungry beast that does neither well.
 
I've said this before, but people might have forgotten: we've been running systems with different co-processors for decades now. Ultimately HSA is just standardizing the interface between the different processors in such a way that the single central CPU doesn't need to micromanage every other co-processor. This enables processors to cohabit within the system and play nice inside the central memory. Previously you had a central processor that controlled which memory space the specialized co-processor had access to, along with having to spoon-feed the co-processor its binary instructions instead of them being streamed. With something like HSA you could have two different ISAs' worth of binary coding inside the same code stream, similar to how you can mix SIMD SSE instructions (technically a completely different processor) with x86-64 instructions. People need to remember it's about different kinds of processors complementing each other.

Take x86 and something like ARM / PPC. You could have an Intel Comet (or whatever they want to call their CPU design in five years) and a low-power ARM-64 CPU inside the same system. The core OS UI could run on the ARM CPU while the x86-64 CPU is used to run heavy programs / games / etc. but is otherwise powered off when not needed. These two CPUs could very well be in different sockets under HSA, though that brings issues with memory access; by then, who knows what it'll look like. You can do lots of interesting things if all the hardware is designed around a central open standard.
 


Dissipated heat is a function of consumed power. HBM reduces power consumption compared to GDDR5. Power grows roughly linearly with the width of the interconnect, but roughly cubically with frequency. HBM increases the width of the interconnect and reduces the working frequency; this is how HBM can be faster than GDDR5 while consuming less power.
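A toy model of that scaling argument (assuming dynamic power goes roughly as width x V^2 x f and that the required voltage scales roughly with frequency, so power goes roughly as width x f^3; the numbers are purely illustrative, not real GDDR5/HBM electrical figures):

```python
def relative_power(width: float, freq: float) -> float:
    """Relative dynamic power under the toy assumption P ~ width * f**3."""
    return width * freq ** 3

def relative_bandwidth(width: float, freq: float) -> float:
    """Relative bandwidth ~ width * f."""
    return width * freq

configs = {"narrow/fast": (1.0, 1.0),   # baseline: 1x width, 1x clock
           "wide/slow":   (8.0, 0.25)}  # 8x wider bus at 1/4 the clock

for name, (w, f) in configs.items():
    print(name, "bandwidth:", relative_bandwidth(w, f),
          "power:", relative_power(w, f))
# narrow/fast bandwidth: 1.0 power: 1.0
# wide/slow   bandwidth: 2.0 power: 0.125
```

In this toy model, going wider and slower doubles the bandwidth while cutting power to an eighth; HBM takes the same direction, just less extremely, since voltage does not track frequency that closely in practice.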

Although HBM stacks will dissipate less heat than GDDR5 modules, the HBM stacks will be placed near the APU/GPU. Could this affect the APU's dissipation and force it to throttle? I guess not.
 


Indeed, which was my point, but with single-ISA heterogeneity the use of an intermediate layer is an option for the programmer. You can compile to the native ISA if performance is critical. However, HSA forces you to always compile to the intermediate layer, HSAIL, losing performance.



And that is precisely the reason why single-ISA schedulers run the serial phases of applications on the latency-optimized cores and the parallel phases on the throughput-optimized cores, obtaining the maximum performance for a given power consumption.
 


If we as consumers decide that we want two substantial x86 CPU manufacturers, then we should give AMD the resources to compete more effectively. This would mean, in the current situation, making a small effort to buy AMD-powered devices when that option is available.

The pace of progress has declined since AMD fell behind Intel, and consumers can make it pick up again by enabling competition in the field.

Intel has abused its position in the market. Intel has accepted money from Israel's government and built a fab on ethnically cleansed land in that country. If consumers knew this, many of them would view Intel's products as conflict chips and avoid them.
 