IBM's Power9 CPU Could Be Game Changer In Servers And Supercomputers With Help From Google, Nvidia

Interesting read, but it made me question something about Nvidia's strategy. Assuming they are selling large quantities of P100s for these supercomputers and large servers, and these P100s are not cheap at all, how large a segment is this for the company?

Originally I thought that Nvidia and AMD made most of their money on consumer and workstation GPUs. But I'm thinking one of these P100s would probably net them more money than a ton of consumer cards, and I also figure that the big companies buy these high-powered cards in bulk.
So do Nvidia and AMD make consumer/workstation cards almost as a side business, or is that market a lot bigger than I'm assuming?
 

Those ultra-high-end chips may net AMD and Nvidia a $1,000+ margin per sale, but they sell only a few thousand of them per month. Mainstream GPUs, on the other hand, may only net them $10-50 in gross profit per sale, but they sell millions of those a month.

Mainstream sales may not be as appealing individually but they still account for a bigger chunk of revenue and gross profit. If they want to maximize their revenue from all the R&D they invested into architecture and die optimization with their fab partners, they need to produce chips for every slice of the market where it should be reasonably profitable to do so.
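To put rough numbers on that trade-off (the figures below are purely illustrative, picked from the ranges above, not actual Nvidia/AMD data):

```python
# Back-of-the-envelope gross-profit comparison. All figures are illustrative,
# taken from the rough estimates above -- not actual Nvidia/AMD numbers.
halo_margin = 1_000           # $ gross profit per ultra-high-end card
halo_units = 5_000            # units/month ("a few thousand")
mainstream_margin = 30        # $ gross profit per mainstream card ($10-50 range)
mainstream_units = 2_000_000  # units/month ("millions")

halo_profit = halo_margin * halo_units
mainstream_profit = mainstream_margin * mainstream_units
print(f"halo: ${halo_profit:,}/mo vs mainstream: ${mainstream_profit:,}/mo")
```

Even with a generous halo margin, the mainstream side comes out an order of magnitude ahead on total gross profit in this sketch.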
 
The DGX-1, with eight of these P100s, sells for $129,000 per unit, so Nvidia is probably making a good deal more than $1K per chip there. However, mainstream and lower-cost sales more than make up for that in volume; it's not even remotely close.
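A quick sanity check on the per-GPU number implied by that system price (keeping in mind the $129,000 also covers CPUs, RAM, storage, and software, so this is only an upper bound per GPU slot):

```python
# Implied price per GPU slot in a DGX-1. Illustrative only: the $129,000
# system price also covers CPUs, RAM, storage, chassis, and software.
dgx1_price = 129_000
gpus_per_system = 8
price_per_gpu_slot = dgx1_price / gpus_per_system
print(f"${price_per_gpu_slot:,.0f} per GPU slot")  # $16,125
```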
 
Software is not quite optimized for the OpenPOWER architecture yet (I am talking about Linux), and from what I've seen of the market offerings so far, it's not exactly much cheaper than building a custom Xeon server.
Even so, the hardware has potential, and I am certain that with in-house-built accessories, and at large scale, it will cost significantly less; the software is simply not properly optimized yet.
Source:
http://www.phoronix.com/scan.php?page=article&item=talos-workstation&num=1
Still, the Power architecture is simply not more power efficient than current x86 offerings, and that should be a prime concern for the future: big servers are becoming some of the biggest power consumers in the world and significant contributors to climate change, largely thanks to their need for expensive, power-inefficient additional cooling. It's a grim picture, actually.
On the other hand, Nvidia, or GPUs for that matter, certainly aren't the future of supercomputing, no matter how much money Nvidia invests in marketing them with all the dumb claims at science conventions. For starters, GPUs are only good for single, massively parallel tasks, and most server tasks, and a good part of supercomputer tasks, are not like that; even then, they are at best 3x less power efficient than currently offered FPGA solutions, alone or combined with DSPs.
The truth is Nvidia makes good money on those compute cards: they cost almost the same to produce as their commercial "gaming" counterparts, yet they sell for 10x+ the price. Then again, FPGAs and DSPs are much more reprogrammable, and FPGAs are suitable for a moderate number of simultaneous parallel tasks, which also makes them much more suitable for most server and supercomputer workloads, not to mention the lower cost of software adaptation (using new or optimized algorithms), lower maintenance and power costs, and much better adoption of open-source standards.
It's simply not the way to go, and whoever tries to convince you otherwise, I hope he at least got paid well by Nvidia for it.
Finally, just a note on how RISC-V is not close to rivaling ARM in the traditional mobile space (it lacks software support and optimizations, and, for now, more powerful cores [longer pipelines and out-of-order execution]), but it certainly represents a big win for the emerging IoT market, and it could retake a larger part of the embedded market over time. We will see about that...
 
Windows hasn't supported the Power architecture since Windows NT 4.0. It sure would be nice to see them start that up again.

While tantalizing, the main reason Windows for ARM flopped was that it couldn't run x86 software, so I suspect a Power-architecture build would fare similarly in most markets. There's far too much legacy software for most businesses to make the switch, and the consumer market seems oddly resistant to change, despite generally not needing decade-old software.
 

While the average consumer may not require decade-old software, most PC users still have a software library that goes back a few years and expect to be able to reuse it on their new PC, which means there won't be an overnight transition there either, no matter how hard the industry might push for it.

Just before AMD dropped the AMD64 bomb on Intel, it seemed like Intel was on the verge of successfully pushing Itanium into the mainstream. I doubt PCs will ever come that close to getting a new mainstream architecture ever again unless you count ARM's on-going march on mobile platforms as a PC replacement.
 
Software availability will be a big challenge, no matter how good the new processor is. Unless IBM can build an x86 emulator into the Power9, it is going to face difficulties. I'm not sure what Google expects to get out of this x86-incompatible architecture (or ARM-incompatible, for that matter). Nvidia partnering with IBM on Power9 is obviously the politically right move for them, given what Intel has done to them.
 

Why would Google care about incompatible hardware? Google runs mostly on proprietary in-house software and has plenty of smart people who can fork that software and rebuild it for whatever new hardware Google chooses to use in the future. Also, IBM's Power9 should scale to performance and bandwidth levels, for the datacenter, supercomputer, and high-reliability markets, that ARM is not going to scratch anywhere in the foreseeable future.
 
This is for mainframe use, not consumer. System z has never been x86 compatible, but it's not like anyone runs Microsoft Office on it.

As for the consumer vs. pro question, it's probably a mixture of both. You get the volume from the consumer side, so economies of scale, and then the high margins on the pro segment. It's a natural extension. I doubt either Nvidia or AMD could survive without covering both segments exactly the way they do.

Most pro cards are just consumer cards that set aside about 1/8th of their RAM for ECC (on the latest pro cards you can choose, on the fly, between something like 7 GB with ECC or 8 GB without), and then they charge five times as much for the driver, for which you get support if it doesn't work.
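The ECC trade-off works out like this (assuming, as the 8 GB vs. 7 GB example above suggests, that the check bits are carved out of the card's own memory rather than stored on extra chips):

```python
# Soft ECC with no dedicated parity chips: the check bits come out of the
# card's own memory, costing roughly 1/8 of capacity (assumption based on
# the 8 GB -> ~7 GB example above).
total_gb = 8
usable_with_ecc = total_gb * (1 - 1/8)
print(f"{usable_with_ecc:.0f} GB usable with ECC enabled")  # 7 GB
```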

Paying a few thousand dollars for a card, or $150K for a server, isn't a big deal for most corporations. The price of software licenses and support is usually far higher than the hardware.
 
You have got it right. These systems are developed for very specific tasks, and the software and systems are developed for that specific hardware. I have done quite a bit of specialty modeling work, and the old multi-million-dollar Xeon systems we used to use have been replaced by a few boxes filled with Nvidia graphics cards. They still have the huge SAN for the data, but the cost is something like 15-20% of what it was. All the software is custom.

Most of the companies using IBM chips have all their software already running on them (I do wonder whether, after IBM bought SPSS, they got it running on their own systems). I have never heard of large modeling and analysis systems running Windows; it is needlessly expensive and way too unstable. Unix is common, and increasingly Linux or some kind of BSD.


 