AMD CPU speculation... and expert conjecture


jdwii

Splendid




Oh, ARM is so great, with IPC around an Atom's. Show me a per-core benchmark where an ARM processor even comes close to Piledriver, let alone Haswell.
 

Cazalan

Distinguished


That's been the rumor for a while now, but the A8 is another dual core. Not suitable for desktop replacement.
 
Once I had a dream of using a GT 440/GTX 750 GDDR5 card as a temporary primary graphics card, then later buying a Radeon and keeping the GT 440 for PhysX. Looks like that won't be possible anymore:
http://www.brightsideofnews.com/2014/07/31/nvidia-disables-gpu-physx-2nd-non-nv-gpu-installed/
That's anything but bright....
Oh, and it should affect APUs, laptops with an AMD APU and an Nvidia GPU (however rare), mixed setups, etc.

edit2: I wonder if Nvidia dislikes a win-win situation where both AMD and Nvidia graphics cards/GPUs can be used. Maybe they dislike market share too.
 

szatkus

Honorable


It's not a win-win situation for them.
[nvidia mode]
Why AMD and Nvidia? Our GPUs are superior!!111oneone
[/nvidia mode]
 

The 2nd Nvidia GPU as a PhysX accelerator? I don't think so. But running two display drivers in parallel may be disabled in the OS.

I was trying to get more out of my cards (if I'd bought them for that purpose) without replacing one with the other. That's why Nvidia's recent announcement kinda pissed me off. :)

I'd like AMD, Intel and Nvidia GPUs to run in tandem in my PC: connected standby, low-res video playback and Quick Sync transcoding on the Intel iGPU; primary gaming and display(s) on the Radeon; PhysX acceleration, low-power high-res (FHD, 4K) video playback and Nvidia-biased gaming on the Nvidia GPU. No replacing. *dreams*
 


I really think you're wrong here. If it succeeds it may be slow going, but ARM has been around for a LONG time (their lineage goes back nearly as far as Intel's, albeit under a different name), and I think you overestimate Intel's hold on servers.

The server ecosystem is very different from the consumer one: there are still quite a few architectures co-existing, and companies are more open to custom solutions than in any other arena. AMD are also being quite shrewd, in my opinion, by offering a common socket (as it should allow mix-and-match solutions). The thing is, whenever you look at technology trends over history, the winning business strategies (at least in terms of market share / adoption) are always the solutions that include other companies. Whether the 'winning' product is technologically the best solution has little to do with it.

A couple of cases: Microsoft Windows PCs (available to all OEMs) vs Apple (closed system). Guess what? Microsoft dominated the market due to the number of vendors and the price. Google Android (available to all OEMs) vs Apple (closed system), and look what's happened there, first in phones and now in tablets. Next battle: Intel (a single company) vs ARM (licensing designs to everyone). Intel's attempts at getting into phones have fallen flat, and there are quite a few companies going into ARM servers with more to follow.

This really has little to do with who has the better product. The issue is that with lots of companies all pushing these servers, a price war will ensue. Intel's server parts will become a premium niche product, and cheap (and efficient) ARM-based solutions will take the dominant market share. This will take quite a long time to happen, but the precedent is there.

The other thing to consider: for this to be a 'success' for AMD has little to do with overall market share. If AMD can get a couple of percent of the market with their ARM server chips, that is a BIG win for them. Final thing on the 'bubble bursting': tablets and smartphones aren't going anywhere, and more and more devices are becoming 'smart' (TVs, fridges that monitor their contents and can re-order supplies for you, connected thermostats, etc.), so there are plenty of places for ARM chips to go. The biggest threat to ARM would be another company doing what they do and gaining significant traction.
 


No, he said multiple display drivers aren't possible in the current Windows NT environment. This comes down to how MS treats displays differently from other kinds of hardware, which itself goes back to the days of AGP devices. You can actually have multiple PCI display devices and they'll work perfectly fine.

Basically, AGP devices were allowed direct access to memory without first having to get permission from the CPU. This enabled an AGP video device to use system memory to store textures and other data; it was slow, but it allowed a working set larger than what the AGP video device had in local memory. This capability was carried forward with PCIe but didn't exist in PCI devices.

Some of you may remember this as the GART (Graphics Address Remapping Table).

Because only one AGP bus existed, there could never be two independent GART regions, and thus MS only supports one video device having a GART. Now, you can trick the system into supporting multiple GART-enabled video devices by having a single driver handle all the video requests, which is what we do now. Two video drivers would require the OS kernel to support two or more independent GARTs.

That's the technical reason anyhow.
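If anyone wants to poke at what Windows actually exposes from user mode, here's a minimal sketch using the Win32 EnumDisplayDevices call. It only lists the adapters the OS enumerates and which one is primary; it doesn't show GART internals, just the visible end of that single-display-driver model, and nothing in it is AMD/Nvidia specific:
[code]
#include <windows.h>
#include <stdio.h>

/* List every display adapter Windows enumerates and flag the attached/primary ones. */
int main(void)
{
    DISPLAY_DEVICE dd;
    for (DWORD i = 0; ; ++i) {
        ZeroMemory(&dd, sizeof(dd));
        dd.cb = sizeof(dd);
        if (!EnumDisplayDevices(NULL, i, &dd, 0))
            break;  /* no more adapters */
        printf("%s: %s%s%s\n",
               dd.DeviceName,
               dd.DeviceString,
               (dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP) ? " [attached]" : "",
               (dd.StateFlags & DISPLAY_DEVICE_PRIMARY_DEVICE) ? " [primary]" : "");
    }
    return 0;
}
[/code]
(Build against user32.lib. On a mixed AMD/Nvidia box you'll see every adapter listed, but that says nothing about whether both vendors' display drivers can actually be loaded at once, which is the point being argued above.)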
 

The vast majority of the server market is commodity servers. Those are cheap x86 servers that either run Windows NT or run a hypervisor that must be capable of running Windows NT inside it. Most of the Linux stuff is net appliances, usually based on a cheap x86 architecture. The reason for all this is cross-platform support: everything is now virtual in the data center, which in turn requires everything to operate on the same binary machine code.

So while an ARM-based Linux net appliance could easily exist, it won't be part of a virtual datacenter, which precludes it from the lion's share of the market. x86 micro-servers are actually what people want: small modular units with extremely high density. You load each of them with ESXi and create a large cluster that allows for seamless migration of virtual servers. You can take a VM from one microserver and move it to another microserver without shutting it down; all that happens is a pause while the memory contents and machine state are snapshotted, sent to the new host, and resumed. This feature is used extensively in high-availability environments where the system must have 99.999% uptime.
 


Ah OK. I still don't think this precludes a small niche for AMD's ARM-based servers; it doesn't have to be massive to be a financial success for them (which is what matters). I think the ARM play at the moment is targeted at very large firms (e.g. Google) who will be happy to run a fully custom system they maintain themselves. The point is, though, that once a large enough ecosystem is established for ARM servers, and the prices are cheap enough, more companies will start taking notice. This isn't going to happen quickly, but I wouldn't write off ARM's entry into the market just yet.
 


Actually, you could do multiple display drivers back in the XP days, just like you can still do multiple audio devices. When MSFT did Vista, they decided to finally crack down on GPU driver quality (AMD/NVIDIA accounted for something like 70% of all BSODs in XP), and designed WDDM to allow only one display driver to be installed at a time so they could simplify things on their end.

That's one reason why we'll probably eventually see GPUs ship with two driver suites: one for acting as the primary display renderer, and one for acting as a GPGPU device (thus allowing it to be used without needing to install a display driver).
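The compute side already works a bit like that today: OpenCL goes through an ICD loader and enumerates whatever GPU runtimes are registered, regardless of which card is driving the desktop. A rough sketch of that discovery path (it assumes the vendor OpenCL runtimes are installed, so it doesn't get around the driver-packaging issue, it just shows the separate enumeration route):
[code]
#include <stdio.h>
#include <CL/cl.h>

/* Enumerate every OpenCL platform/GPU pair the ICD loader can find. */
int main(void)
{
    cl_platform_id platforms[16];
    cl_uint nplat = 0;
    if (clGetPlatformIDs(16, platforms, &nplat) != CL_SUCCESS || nplat == 0) {
        fprintf(stderr, "no OpenCL platforms registered\n");
        return 1;
    }
    for (cl_uint p = 0; p < nplat; ++p) {
        char pname[256] = {0};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);

        cl_device_id devs[16];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 16, devs, &ndev) != CL_SUCCESS)
            continue;  /* this platform exposes no GPU devices */
        for (cl_uint d = 0; d < ndev; ++d) {
            char dname[256] = {0};
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            printf("%s -> %s\n", pname, dname);
        }
    }
    return 0;
}
[/code]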
 

blackkstar

Honorable


Most of the BSODs in XP were due to Nvidia, not ATI. ATI had "SMARTGART" and it would handle driver crashes gracefully. In fact, it worked just like display manager in Vista and later did. Black screen, drivers recover, go on your way.

I remember this very, very clearly because I went from an X800 Pro to an 8800 GTS 320MB after listening to Nvidia fanboys rave about the drivers, and I went from drivers that crashed and flickered the screen on and off for 30 seconds to drivers that crashed and BSODed.

We've already reached peak x86. Performance is good enough and Intel can't push single-core performance much further, so Intel decided to focus on mobile instead.

We're about to reach peak ARM. Performance in traditional ARM devices is good enough, so now we're going to see ARM try and push into a new market.

AMD's biggest semi-custom customer is AMD. They are building a set of tools that lets them quickly adapt to changes in the market and do well in new markets. They probably won't be as profitable as if they were more traditional and had a good product, but semi-custom is a lot lower risk for AMD, and they need something like that to recover.

Look at Bulldozer's original design goals and then look at what happened in reality. AMD wanted a chip that was good for servers and good on desktop. The market shifted toward mobile, and Bulldozer wasn't efficient enough to compete with Intel. So now, ever since the original Bulldozer launch, AMD has been stuck with a single core design that they can't adapt to the current market very well. They're getting there with Steamroller, but it's still not exactly what the market wants: more single-threaded performance (relative to Intel) with longer battery life. They were able to do something with the small cat cores, but even then it's not getting many design wins.

AMD going semi-custom means they can adapt their products to the needs of customers who need unique parts. It also means AMD can adapt their own products to fit the needs of the market much better, and that they can spread their IP, and indirectly their R&D, around much more effectively. AMD's work on GCN can now filter everywhere from gaming to the professional market to an ARM + GCN market with HSA. The custom ARM core will more than likely filter over from servers to other devices, and I definitely expect some semi-custom design wins for it.

AMD semi-custom is about reducing R&D cost by re-using R&D across multiple products while trying to be agile enough to adapt to changes in market demand. It's an extremely good idea if you ask me. Imagine if Intel followed the same method and had a toolkit that let them build big cores + Knights Landing cores + Iris Pro or a regular GPU, or little cores + KL + Iris Pro + HD graphics, etc., and they were selling semi-custom solutions. You'd go from a product stack of "you can buy this or that and use them together" to "oh, you want a little Atom core with a ton of KL cores for compute and you don't need a big CPU to go with it? Here you go!"

 
blackkstar said: "Most of the BSODs in XP were due to Nvidia, not ATI. ATI had 'SMARTGART' and it would handle driver crashes gracefully. [...] I went from drivers that crashed and flickered the screen on and off for 30 seconds to drivers that crashed and BSODed."

While the driver restart that's mandated in WDDM is nice, it assumes there isn't a physical HW problem that prevents the driver from restarting; that's exactly how the 0x116 and 0x117 BSODs get generated in Vista and later. In any case, the driver shouldn't be crashing in the first place, and the fact that ATI needed a program to handle the occurrence kinda says something by itself. GPU drivers on the whole were not stable in XP, and I noted they didn't seem terribly stable in Vista's early lifecycle either. I haven't had a problem with official driver drops in years though, so the situation is a LOT better than it used to be.
 

Cazalan

Distinguished


Semi-custom CAN be good if the design wins outpace the market share decline in their traditional markets, but they're not currently. By my estimates AMD is averaging 23 million a month from PS4/XB1, or about 70 million a quarter in profit, but their operating expenses are over 400 million a quarter. To break even last quarter, Sony/Microsoft would have had to double their sales, or AMD would need to find more big fish like Sony to sell to. These are not small volumes (for AMD); even if Facebook replaced every single one of their servers, that wouldn't be much more than a quarter of a million units.
 

Cazalan

Distinguished


There's more than one way to skin a cat. I was estimating down to the per-unit profit, assuming a $100 cost and a 15% margin, or roughly $15/unit gross profit. AMD doesn't break it down any further since GPUs are lumped in with the consoles, so you have to work backwards from the 14 million total unit sales over 9 months.
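For anyone following the math, here is that back-of-envelope chain spelled out. Every number in it is one of the thread's own estimates (the ~$100 cost, the ~15% margin, the 14M units over 9 months, the >$400M quarterly opex), not anything AMD reports at this granularity:
[code]
#include <stdio.h>

/* Back-of-envelope console APU gross profit, using the thread's estimates. */
int main(void)
{
    double units_9_months   = 14e6;   /* ~14M PS4/XB1 units over 9 months */
    double cost_per_unit    = 100.0;  /* assumed ~$100 cost per SoC       */
    double margin           = 0.15;   /* mid-teens margin                 */
    double opex_per_quarter = 400e6;  /* AMD opex, >$400M a quarter       */

    double profit_per_unit    = cost_per_unit * margin;                 /* ~$15/unit */
    double profit_per_month   = (units_9_months / 9.0) * profit_per_unit;
    double profit_per_quarter = profit_per_month * 3.0;

    printf("~$%.0fM/month, ~$%.0fM/quarter gross, against ~$%.0fM/quarter opex\n",
           profit_per_month / 1e6, profit_per_quarter / 1e6, opex_per_quarter / 1e6);
    return 0;
}
[/code]
That works out to roughly $23M a month and $70M a quarter, i.e. the same figures quoted a few posts back, and well short of the quarterly operating expenses.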
 
Well, AMD says that >20% of their $1.4B revenue is driven by semi-custom SoCs. That's >$300M revenue per quarter, which means they sell at least 3 million chips a quarter. I would expect the margin to be much better than 15% for a SoC like this; I wouldn't be surprised if they get >50% margins. It all depends on the yields they can get now. 28nm should be cheap by now, the console dies are all relatively small, and they've been ramping production for more than a year, so yields should be good.
 

Cazalan

Distinguished


There were multiple confirmations in the earnings calls stating a mid-teens margin, nowhere near 50%. 13/14/15/16/17, take your pick. $300M at 16% is only $48M, which would be lower than what I averaged it to.

AMD had been earning 50-55% margins on their core markets, so they have to sell a lot more semi-custom to make up for that. The plan can work but they need to be out there pushing this stuff like crack dealers.
 

blackkstar

Honorable


Perhaps I did not explain myself properly when I said AMD's biggest semi-custom customer is AMD. I mean that AMD uses the same tools they offer to semi-custom customers to create products they want to sell themselves. Yes, the PS4 and Xbone aren't paying for the development of Jaguar and GCN, but they were never supposed to. It's about spreading the risk of products across multiple markets so that if one tanks, you don't end up with a portfolio of bad products for 5 years.

Because of semi-custom, AMD was able to take a lot of off-the-shelf parts, glue them together the way a customer wanted, and then sell it. Compare that to how semi-custom chips used to work in consoles: Nintendo, Sony, Microsoft, etc. would approach AMD/ATI or Nvidia and have them modify existing architectures into something unique. The PS4 and Xbone are just off-the-shelf x86 Jaguar cores with GCN cores and a bit of fancy glue.

The original Xbox is a good example of the old semi-custom model. The Nvidia GPU in the original Xbox is some sort of chimera of two different Nvidia chips; there's no documentation on it anywhere, and people can't reverse engineer it to make an emulator. That's how the old semi-custom model worked: chip designers would create special designs for a client, let them use the design, and then never touch that IP again.

AMD semi-custom is about additional profit from existing IP. It's a way to skate around the issue of OEMs not using their products by offering a service, to those who can afford it, to create a product that fits their needs.

But look at AMD Jaguar core. Take a look at all their OEM wins. It's horrific and nearly comically bad. Now, thanks to semi-custom, AMD was able to land two massive wins for Jaguar cores. Xbone and PS4 are design wins for AMD that would have never existed for Jaguar if it weren't for the semi-custom approach they're taking. Every PS4 and Xbone sale to Sony or MS is a huge win for Jaguar and it not only helps fund previous R&D for Jaguar and GCN, but it helps bring in more money for further R&D that would not have been there in the first place.
 


OK, something you've got to realize is that all FX chips are the exact same die, and the same goes for all the APUs. A 7850K costs exactly the same to make as an 860K or an A6-7400K. AMD then sorts the chips by quality (binning) and sells them accordingly. Because of this method, the margin on one chip (FX-6300) will be different from another (FX-8350) even though they cost exactly the same to make. This is also why they went with the modular approach: it dramatically cut production costs.
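As a toy illustration of the binning idea: the thresholds and SKU mapping below are completely made up for the example (the real bin criteria are AMD internal), it just shows how one die can leave the fab as several different products:
[code]
#include <stdio.h>

/* Toy binning: the same 4-module die becomes a different SKU depending on
   how many modules pass and what clock it validates at. Thresholds invented. */
static const char *bin_die(int working_modules, int stable_mhz)
{
    if (working_modules >= 4 && stable_mhz >= 4000) return "FX-8350";
    if (working_modules >= 4)                       return "FX-8320";
    if (working_modules >= 3)                       return "FX-6300";
    return "FX-4300";
}

int main(void)
{
    printf("%s\n", bin_die(4, 4200)); /* fully working, fast die */
    printf("%s\n", bin_die(3, 3800)); /* one module fused off    */
    return 0;
}
[/code]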
 


I was under the impression that the second driver couldn't do any form of 3D acceleration (second monitor plugged into the second graphics card). I remember trying that once on XP and I couldn't get any acceleration on the second card; it just acted like a glorified frame buffer. It could have been a problem with DX, but I noticed there was no additional virtual address space mapped to the second card. Maybe I'll try that experiment again at some point.
 