AMD Reports $590 Million Loss Due to Breakup with GF

[citation][nom]reaper123[/nom]Although I love ATI/AMD, I don't consider myself a fanboy of either company. Just wanted to point that out first. The thing is, if you compare the 7970 with the GTX 680 at the same clock rates, they perform pretty much the same in gaming. However, there are a few things to consider: the 7970 is a computation beast and is better for 3D design/3D applications than the GTX 680. However, the so-called GTX 680 was initially the GTX 660 Ti/670, and the correct equivalent of the 7970 is supposed to be the Big Kepler GTX 685. I'm pretty sure that Nvidia's people are rubbing their hands over the stunt they pulled, tricking people into paying double the money for that card. Seeing that their midrange Kepler card was able to outperform a stock 7970, they changed the label to 680 so they could rip us off and hold back the big Kepler card for the unlikely event that AMD produces something to rival Nvidia.[/citation]

Nvidia used GK104 to compete with AMD in the high end instead of GK100 because AMD switched from gaming-oriented chips to compute-oriented chips while Nvidia switched from compute-oriented chips to gaming-oriented chips. Did anyone mock Nvidia for needing massive dies (greater than 500mm2; the GF100/GF110 from the GTX 580, 570, and 560 Ti 448-core was 530mm2 if I remember correctly) just to keep up with or barely surpass AMD's much smaller dies (Cayman was something like 375mm2 or 374mm2 and fought with the 530mm2 GF110)?

The reason for Nvidia winning now is obvious: AMD doesn't want to use massive dies like Nvidia has in the past. Achieving high compute performance requires extra die space, and that extra die space also requires extra power. This is because compute-oriented chips often have more compute-oriented hardware, such as more complex scheduling hardware, than a gaming-oriented chip. For example, Kepler's schedulers are a lot simpler than GCN's and Fermi's.

Pitcairn is far less compute-oriented than Tahiti and has extremely similar gaming performance per mm2 of die space and per watt to Kepler. If AMD made a Pitcairn-style chip about the size of GK104 (just short of 300mm2), then it would perform very similarly to GK104.
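
To put rough numbers on that, here's a quick back-of-the-envelope sketch (the die sizes are the commonly reported figures; the linear scaling and the baseline performance index are my own assumptions, and real scaling is worse than linear, so treat it as an upper bound):

[code]
# Hypothetical scaling exercise: what if a Pitcairn-style chip were grown
# to GK104's size? Die areas are the commonly reported figures; the
# baseline performance index is an assumption for illustration only.
pitcairn_area_mm2 = 212   # HD 7870's Pitcairn die
gk104_area_mm2 = 294      # GTX 680's GK104 die
pitcairn_rel_perf = 1.00  # assumed gaming performance index for a stock HD 7870

# Naive assumption: gaming performance scales linearly with die area.
# Real chips scale worse than linearly, so this is an upper bound.
scaled_perf = pitcairn_rel_perf * (gk104_area_mm2 / pitcairn_area_mm2)
print(f"Scaled Pitcairn-style chip: ~{scaled_perf:.2f}x a stock HD 7870")  # ~1.39x
[/code]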

The fact that the 7970 is so close to the 680 despite having a much more compute-oriented GPU tells us that AMD managed to merge gaming and compute performance far better than Nvidia did with Fermi, especially with the 7970 being over three times faster than the GTX 580 in double-precision compute despite being only one process node ahead of it. That the 7970 keeps the same performance gap over Nvidia's last-generation top single-GPU card that we saw at the last process node jump, on top of its huge compute performance jump, is nothing short of spectacular for AMD.
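
The "over three times faster" part checks out on paper. A quick sketch using the reference shader counts and clocks, and the commonly cited DP-to-SP ratios for these consumer cards:

[code]
# Theoretical peak throughput from reference specs.
# SP peak = shaders * clock * 2 (one fused multiply-add = 2 FLOPs per cycle).
def sp_gflops(shaders, clock_mhz):
    return shaders * clock_mhz * 2 / 1000.0

# GTX 580: GeForce Fermi runs DP at 1/8 of the SP rate (artificially capped).
gtx580_dp = sp_gflops(512, 1544) / 8   # ~198 GFLOPS
# HD 7970: Tahiti runs DP at 1/4 of the SP rate.
hd7970_dp = sp_gflops(2048, 925) / 4   # ~947 GFLOPS

print(f"GTX 580 DP: ~{gtx580_dp:.0f} GFLOPS")
print(f"HD 7970 DP: ~{hd7970_dp:.0f} GFLOPS")
print(f"Ratio: ~{hd7970_dp / gtx580_dp:.1f}x")  # ~4.8x, so 'over three times' holds
[/code]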

That Nvidia decided to take the easy road with a gaming-oriented GPU after making a living off being compute-heavy is exactly the opposite. However, for gaming performance, Nvidia still made the proper jump when compared to the last process jump.

Here are the examples:

Radeon 4000 and GTX 200:
Radeon 4870 is right behind the GTX 285.
Radeon 4870X2 and GTX 295 are roughly equal.
Next process:
GTX 480 roughly equals the 295 in gaming performance. The 5870 is right behind the 4870X2, 295, and 480.
Same process, next generation:
Radeon 6970 finally roughly equals the 4870X2, 295, and 480. GTX 580 surpasses them all by about one tier's worth of gaming performance.
Next process:
Radeon 7970 is right behind the 6990 and 590. GTX 680 is in line with the 6990 and 590.

So, despite some odd differences, gaming performance is still on the same trend as it was before. The differences come down to how compute-oriented each card is and, as a result of that, the power efficiency of each card. Also consider that despite the 7970's much higher TDP, even Guru3D, a site known for being a little Nvidia-biased, showed the 7970 hardly using much more power than the 680 during gameplay! So AMD managed a better unity of gaming and computational performance than Nvidia did, and it managed this with a better ratio of power efficiency between the compute-heavy cards and the gaming-heavy cards of the same generation; even a site called out for being pro-Nvidia shows this in the charts from its review of the 680.

Considering how games are becoming more reliant on compute performance, AMD might turn out to be the winner of this after all. Which company wins will probably depend on just how long it takes the most compute-reliant games to come out. If they are out around the time of the Radeon 8000's arrival, then AMD will do FAR better. If they aren't out until the Radeon 9000s or later (it's possible), then Nvidia may win, at least for a while.

Of course, this is only going by the variables that have already been presented and assumes that neither manufacturer will screw something up or do something else that is exemplary. Anything could happen between now and then.

With the 680 only having half of the GTX 580's compute performance and also having a huge memory bandwidth bottleneck, this would go very poorly for Nvidia buyers. Of course, Nvidia would either respond by making another compute-oriented architecture or by getting game developers to lean away from compute, so it's hard to say exactly how this will go down. As of right now, things look good for AMD no matter what happens. AMD had a great head start on Nvidia and is still the only one of the two to have its next-generation cards in consistently good supply right now.
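
On the bandwidth point, the arithmetic from the reference bus widths and effective memory clocks shows why: the 680 moves no more data per second than the 580 did, even though its shader throughput went way up. A quick sketch:

[code]
# Peak memory bandwidth = bus width (bits) / 8 * effective data rate (MT/s).
def bandwidth_gb_s(bus_bits, effective_mt_s):
    return bus_bits / 8 * effective_mt_s / 1000.0

cards = {
    "GTX 580": (384, 4008),  # ~192 GB/s
    "GTX 680": (256, 6008),  # ~192 GB/s -- no gain over the 580
    "HD 7970": (384, 5500),  # ~264 GB/s
}
for name, (bus, rate) in cards.items():
    print(f"{name}: ~{bandwidth_gb_s(bus, rate):.0f} GB/s")
[/code]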

With Nvidia's Kepler cards almost completely MIA and AMD's price cuts, the Radeon 7000 series looks like an excellent buy right now. At least for a while, this does not bode well for Nvidia if they can't get more cards out and in good supply, regardless of how compute-heavy games are.

As of right now, AMD is the only option for next-generation cards, and AMD also now has decent prices on them. If Nvidia doesn't get back up soon, then many of the people who want a next-generation card will have already gone with AMD. This could leave Nvidia relying on late adopters because the early adopters had no choice but to either wait or buy AMD now. With AMD's compelling performance, prices, and power usage compared to the current cards, who could blame them for not wanting to wait even longer?
 
The GF charge itself was actually $703 million!

AMD's CEO is a piece of crap. I thought AMD did the whole GF thing to come out ahead, not fall behind. So what was the point of creating GF in the first place?
 
[citation][nom]frozonic[/nom]I see AMD is having a bad time. They suck at making decent CPUs, and their GPUs will get outperformed by Nvidia's GTX 6xx series... they either go loud or go home.[/citation]

AMD isn't going under because of one generation of GPUs that didn't win everyone over. I think they will make it, though as a smaller company than they are today.
 
[citation][nom]supall[/nom]I don't know what you've been reading, but AMD's 7970s go clock for clock with Nvidia's 680s and it's pretty comparable, especially once you OC the two. And AMD is the better choice for compute performance. So where has Nvidia gained such a huge advantage (10-15%) as to consider it a runaway? Like the 8000 series cards, the GK110 isn't arriving anytime soon. Truth be told, you can go with either AMD or Nvidia right now and still win based on what you want to have. Let's quit making it seem like Nvidia has a hugely superior product, like Intel over AMD.[/citation]

You are totally CORRECT, considering I just paid $449.99 for another 7970 while the GTX 680 isn't in stock and is selling for $530ish. Tell me who's better...
 
[citation][nom]zloginet[/nom]You are totally CORRECT, considering I just paid $449.99 for another 7970 while the GTX 680 isn't in stock and is selling for $530ish. Tell me who's better...[/citation]
The CF'd 7970s I'm using don't seem to have trouble running anything, and the image quality is very, very good.
 
[citation][nom]blazorthon[/nom]Did anyone mock Nvidia for needing massive dies (greater than 500mm2; the GF100/GF110 from the GTX 580, 570, and 560 Ti 448-core was 530mm2 if I remember correctly) just to keep up with or barely surpass AMD's much smaller dies (Cayman was something like 375mm2 or 374mm2 and fought with the 530mm2 GF110)?[/citation]
You're kidding me, right?
[citation][nom]blazorthon[/nom]The fact that the 7970 is so close to the 680 despite having a much more compute-oriented GPU tells us that AMD managed to merge gaming and compute performance far better than Nvidia did with Fermi, especially with the 7970 being over three times faster than the GTX 580 in double-precision compute despite being only one process node ahead of it. That the 7970 keeps the same performance gap over Nvidia's last-generation top single-GPU card that we saw at the last process node jump, on top of its huge compute performance jump, is nothing short of spectacular for AMD.[/citation]
So when Nvidia's compute-oriented GPU consumes more power but also outperforms the competition at gaming, it's somehow a less elegant solution than AMD's compute-oriented GPU that consumes more power and underperforms the competition at gaming? How does that work?

And the GTX 580 had massively crippled DP performance in comparison to its Fermi counterparts in the Quadro and Tesla cards. The Fermi architecture is capable of DP at 1/2 the SP rate; however, the GTX 580 is limited to 1/8. Kepler has dedicated DP hardware capable of 1:1 SP performance, the first of its kind; the configuration in GK104 simply has very few of these units. Like the GTX 580, the GTX 680 does not come even remotely close to reflecting the compute potential of its underlying architecture.
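
To make the 1/2 versus 1/8 difference concrete, here's a rough calculation at GTX 580 clocks (a sketch, not vendor numbers; real Tesla parts run lower clocks, so the uncapped figure is illustrative):

[code]
# What the 1/8 DP cap costs a GF110 at GTX 580 reference clocks.
gf110_sp = 512 * 1544 * 2 / 1000.0   # ~1581 GFLOPS single precision

tesla_rate   = gf110_sp / 2   # Fermi architecture: DP at up to 1/2 SP (Tesla/Quadro)
geforce_rate = gf110_sp / 8   # GTX 580: DP capped at 1/8 SP

print(f"Uncapped (1/2 SP): ~{tesla_rate:.0f} GFLOPS DP")    # ~791
print(f"GTX 580  (1/8 SP): ~{geforce_rate:.0f} GFLOPS DP")  # ~198
[/code]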
[citation][nom]blazorthon[/nom]Radeon 4870 is right behind the GTX 285.[/citation]
There's a significant gap between the two in recent benchmarks, to the point where I would not consider the HD4870 "right behind" the performance of the GTX285. Check again.
[citation][nom]blazorthon[/nom]Radeon 4870 and GTX 295 are roughly equal.[/citation]
Assuming that's a typo...
[citation][nom]blazorthon[/nom]With the 680 only having half of the GTX 580's compute performance and also having a huge memory bandwidth bottleneck, this would go very poorly for Nvidia buyers[/citation]
I think it's an oversimplification, and actually quite misleading, to simply state that the GTX 680 achieves just half the compute performance of the GTX 580. That's probably the worst-case scenario. There are also certain compute tasks where the GTX 680 outperforms the GTX 580 by a sizable margin, and the average probably falls somewhere in between.
[citation][nom]blazorthon[/nom]As of right now, things look good for AMD no matter what happens. AMD had a great head start on Nvidia and is still the only one of the two to have its next-generation cards in consistently good supply right now.[/citation]
You constantly switch between short- and long-term advantages, but skew the picture by only mentioning the ones that favor AMD. Nvidia has had the overwhelming compute advantage for years now, really since its start with the G80, in terms of both architecture and market share. If anything, even now with the three-month launch advantage of the HD 7970, AMD is the one playing catch-up in this area, not Nvidia. There's much more to GPU computing than just designing a compute-oriented GPU architecture. It's about having the infrastructure in place to effectively utilize that architecture, and that's an area where Nvidia has had a half-decade head start.
 
It's just a contract break. What makes me sad is all the fanboyism going on in this thread. It's sad that we only have two real competing companies in this world for high-end GPUs and CPUs.

What's going to happen once these two companies join forces?

It's coming. One day we won't have a choice.
 
[citation][nom]nOv1c3[/nom]It's just a contract break. What makes me sad is all the fanboyism going on in this thread. It's sad that we only have two real competing companies in this world for high-end GPUs and CPUs. What's going to happen once these two companies join forces? It's coming. One day we won't have a choice.[/citation]
What, pray tell, makes you so certain these two separate companies will join together as one, prophet?
 
AMD, when they cheat, turn to drink! ... And get a new one. Then send the smex pics to the old one.
 
In other news, Intel has gotten a big new client for its fab production capabilities.
From an anonymous client: "We have always respected Intel's speed in adopting new chip production. I think this will be a great partnership to assure the production of new powerful chips."
 
[citation][nom]huamei998[/nom]That's... Pretty sad that the contest is only open to US citizens.[/citation]
Yeah...that's really not fair...too bad your post isn't relevant to this topic.
 
[citation][nom]santiagoanders[/nom]AMD breakup with GF? I see what you did there. It's like AMD has a girl friend.[/citation]

They actually divorced. You don't lose any money when you break up with your girlfriend; in fact, you actually gain money.
 
[citation][nom]blazorthon[/nom]Did anyone mock Nvidia for needing massive dies (greater than 500mm2; the GF100/GF110 from the GTX 580, 570, and 560 Ti 448-core was 530mm2 if I remember correctly) just to keep up with or barely surpass AMD's much smaller dies (Cayman was something like 375mm2 or 374mm2 and fought with the 530mm2 GF110)? The reason for Nvidia winning now is obvious: AMD doesn't want to use massive dies like Nvidia has in the past.[/citation]
I have to mention my +1000000000000000000000000000
 
A loss is a loss. They can blame this on the breakup with GF, but the fact remains that AMD made bad business decisions and is now losing a lot of money. It should also be pointed out that AMD has a lot less money to lose than, say, Intel or even Nvidia. While a bad product cycle or two would not seriously hurt Intel or Nvidia, this is coming at a very bad time for AMD, which is struggling to deal with other bad business decisions.

At the end of the day, every dime wasted on bad business decisions is a dime that cannot be spent on R&D. AMD has already reduced the size of its workforce. No doubt some of the reasons behind that were its astronomical losses.

Will AMD be able to hire and retain the talent that it needs to compete in a highly competitive industry?

That is the question.
 
While you girls were having your p'ing contests, no one felt the need to comment on:

[citation]For the second quarter, AMD expects revenue to increase about 3 percent sequentially as Trinity and Brazos 2.0 will launch "later this quarter".[/citation]

Ultra Deathmatch
June, 2013

I predict there will be twisted panties.


 
All of you yapping about AMD this and AMD that: if it wasn't for AMD, Nvidia and Intel would be pricing the crap out of all of us with their CPUs and graphics cards. So hands up for AMD; you know what you are doing.
 
While discussing with blazorthon whether or not Nvidia should make their own fab plant, I decided to look up the histories of ATi and Nvidia... ATi was a semiconductor manufacturer founded in the mid '80s and released their first video card in 1986. Nvidia was founded in 1993 and are chip designers, not manufacturers... AMD bought ATi in 2006 for about $5.5 billion. Currently, Nvidia is worth just under $4 billion. Now, for a company that's essentially the interior designer of graphics cards, they've made a hell of a living out of drawing GPUs on paper. At the end of 2010, AMD was worth just a hair under $5 billion...

Regardless of who you like as a CPU or GPU manufacturer, it's the sales that end up making the difference. If AMD can diversify its market share enough to be in the mobile, desktop, server, and discrete markets and still meet consumer demand without GF, then in the end this will help them. But if they can't meet demand for their products, they'll end up regretting this decision. It could also illustrate the company's loss of market share in the desktop/server market, since wasn't it GF that made their CPUs?
 