AMD Reports $590 Million Loss Due to Breakup with GF


siasrees

Honorable
Apr 22, 2012
Thanks for sharing your experience, Tarnovsky.
 

_zxzxzx_

Honorable
Mar 6, 2012
[citation][nom]halcyon[/nom]The CF'd 7970s I'm using don't seem to be having trouble running anything and the image quality is very very good.[/citation]

The CF'd RV770s I'm using don't seem to be having trouble running anything and the image quality is great!
 

marthisdil

Distinguished
Sep 21, 2010
[citation][nom]tmk221[/nom]I don't agree with the GPU part. Come on, nVidia released their new GPU 3 months later than AMD, so I guess it's nothing special that the 680 is 10-15% faster than the 7970. AMD is almost done with the 7xxx series while nVidia is at the very beginning of introducing the 6xx series. I think nothing has changed in the AMD-nVidia race. It's constantly AMD taking over the performance crown, then nVidia follows up 2-3 months later and takes it back. And the cycle repeats. That's my opinion. Maybe something will change if nVidia has GK110 ready to go, but that is only a rumor so far. And even if it's true, AMD will be ready with the 8xxx series by the end of the year.[/citation]

And? You could say the same about AMD releasing their latest, what, 9 months after the last round of nVidia GPUs?
 

bigbaconeater

Honorable
Feb 29, 2012
[citation][nom]killerclick[/nom]$10M because of the breakup with GF, and $580M because their CPUs suck.[/citation]
It's funny how human nature works. CPUs have more power than could even be dreamed of 15 years ago. Intel has faster, more efficient processors than AMD, but AMD has really good processors that are maybe 10-25% slower than Intel's offerings and still VERY capable.

Yet that's enough to say "they suck."

It's just funny.
 
[citation][nom]dragonsqrrl[/nom]You're kidding me, right? So when Nvidia's compute-oriented GPU consumes more power but also outperforms the competition at gaming, it's somehow a less elegant solution than AMD's compute-oriented GPU that consumes more power and underperforms the competition at gaming? How does that work? And the GTX580 had massively crippled DP performance in comparison to its Fermi counterparts in Quadro and Tesla cards. The Fermi architecture is capable of 1/2 SP, however the GTX580 is limited to 1/8. Kepler has dedicated DP hardware, capable of 1-to-1 SP performance, the first of its kind. The configuration in gk104 simply has very few of these units. Like the GTX580, the GTX680 does not come even remotely close to the compute potential of its underlying architecture. There's a significant gap between the two in recent benchmarks, to the point where I would not consider the HD4870 "right behind" the performance of the GTX285. Check again. Assuming that's a typo... I think it's an oversimplification, and actually quite misleading, to simply state that the GTX680 achieves just half the compute performance of the GTX580. That's probably the worst-case scenario. There are also certain compute tasks where the GTX680 outperforms the GTX580 by a sizable margin, but the average probably falls somewhere in between. You constantly switch between short- and long-term advantages, but skew the picture by only mentioning the ones that favor AMD. Nvidia has had the overwhelming compute advantage for years now, really since its start with the g80, in terms of both architecture and market share. If anything, even now with the 3-month launch advantage of the HD7970, AMD is the one playing catchup in this area, not Nvidia. There's much more to GPU computing than just designing a compute-oriented GPU architecture. It's about having the infrastructure in place to effectively utilize that architecture, and that's an area where Nvidia has had a half-decade head start.[/citation]

Kidding you? What's to kid about?

Yes, the 4870 = 295 was a typo; it should have said that the 4870X2 is similar to the GTX 295.

Kepler's cores aren't capable of 1-to-1 SP-to-DP compute. There are two types of Kepler cores: one type can ONLY do SP, and the other can only do DP. Consumer cards such as the 680 have very few of the DP-only cores, and that is why the 680 has poor compute performance. Sure, Nvidia could have made all of their cores DP cores, but those are more power-hungry than the SP cores and don't do SP math, so they would have been fairly useless for gaming at this time. AMD did not make such a compromise with their GCN cores.

Furthermore, nothing can do 64-bit math (DP) at the same speed as 32-bit (SP) unless its SP math is limited. Fermi's 1-to-2 ratio is the best possible unless the 64-bit math is specifically optimized instead of the 32-bit, meaning that the 32-bit is not being used in an optimal manner.

GCN has great gaming performance and compute performance. Whether or not Fermi was capable of four times greater DP performance than Nvidia allowed in these cards doesn't matter, because Nvidia didn't allow it. Even if Fermi was capable of more performance than it delivered, the cards don't have that performance, so AMD gets to win by default: AMD sold cards that have the performance that the Fermi (and especially Kepler) cards lack, regardless of why they lack it.

The 680 has HALF of the 580's DP compute. The ways in which it beats the 580 are purely or mostly SP. Half of the 580 is not some worst-case scenario; it has half of the 580's DP compute at best. The 680 has DP compute about as good as the 560 or 560 Ti.
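For anyone who wants numbers instead of adjectives, here's a rough back-of-the-envelope sketch of the theoretical peaks. It assumes the published reference clocks and core counts, FMA counted as 2 FLOPs per core per clock, and the commonly cited DP:SP rates (1/8 for the 580, 1/24 for the 680, 1/4 for the 7970):

[code]
# Theoretical peak FLOPS from published reference specs (a sketch; clocks
# and DP:SP rates are the commonly cited figures, not measured numbers).
cards = {
    # name:     (CUDA cores / stream processors, clock in GHz, DP fraction)
    "GTX 580": (512,  1.544, 1 / 8),
    "GTX 680": (1536, 1.006, 1 / 24),
    "HD 7970": (2048, 0.925, 1 / 4),
}

for name, (cores, ghz, dp_rate) in cards.items():
    sp = cores * 2 * ghz   # peak single precision, GFLOPS (FMA = 2 FLOPs)
    dp = sp * dp_rate      # peak double precision, GFLOPS
    print(f"{name}: {sp:6.0f} GFLOPS SP, {dp:4.0f} GFLOPS DP")
[/code]

On paper that works out to roughly 198 GFLOPS DP for the 580 against roughly 129 GFLOPS for the 680, so closer to two thirds than half, though measured compute benchmarks often show a bigger gap; the 7970's roughly 947 GFLOPS DP is in another league either way.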

AMD's 7970 keeps in line with where its gaming performance should be relative to previous generations. For example, the 5870 was right behind the 4870X2, and now the 7970 is right behind the 6990. AMD did this without increasing their die size over the previous generation; in fact, Tahiti is smaller than Cayman. Nvidia, on the other hand, has always had huge dies just to keep up with and slightly beat AMD's cards.

Pointing that out is not stating only the good for AMD; that is all there was to it. Nvidia's GPUs were always huge compared to AMD's because of their compute-oriented nature. However, AMD managed to make a compute-oriented architecture without needing the huge dies; Nvidia, thus far, has failed to do that. THAT is why AMD managed to combine the best of both compute and gaming performance without sacrificing efficiency like Nvidia has. You are ignoring what AMD has done that Nvidia hasn't.

Moving on, Nvidia can have as great an infrastructure as they want, but they have cut down on performance. An infrastructure is nothing without the performance to back it up, just as the performance is next to useless without the infrastructure. AMD has a decent infrastructure in OpenCL, and AMD has the performance. Nvidia has CUDA and OpenCL, but has abandoned the performance. AMD isn't playing catch-up because they have already caught up. Now Nvidia has fallen behind, and it is all their fault! Nvidia is forcing all of their customers who require great DP compute either to go AMD or to pay the exorbitant prices for the Tesla and Quadro cards.

With games moving toward heavier DP compute, this can hurt the longevity of the Kepler cards: unless Nvidia stalls the arrival of compute-heavy games, Kepler cards will be almost useless for such games, with the Fermi cards greatly outperforming the Kepler cards and the AMD cards far ahead of even the Fermi cards. AMD provided a balance of the best DP performance in the consumer market with excellent SP performance that rivals Kepler.

You're the one skewing the picture here. Nvidia having had the compute advantage up until now doesn't matter right now, because they no longer have it; mentioning it changes nothing. AMD is winning there now, and that's not changing unless Nvidia actually decides to release GK100 in a consumer card (not looking too likely right now).

Also,

Considering how games are becoming more compute performance reliant, AMD might turn out to be the winner of this after all. Which company wins will probably depend on just how long it takes the most compute reliant games to come out. If they are out around the time of the Radeon 8000's arrival, then AMD will do FAR better. If they aren't out until the Radeon 9000s or later (it's possible), then Nvidia may win, at least for a while.

That clearly shows where I said that Nvidia might win out for a while, so I obviously was not only saying the good for AMD. Come again when you have something valid to say.
 

dragonsqrrl

Distinguished
Nov 19, 2009
[citation][nom]blazorthon[/nom]Kidding you? What's to kid about? Also,Clearly shows where I said that Nvidia might win out for a while, so I obviously was not only saying the good for AMD. Come again when you have a valid thing to say.[/citation]
lol... you mad kid?
 

dragonsqrrl

Distinguished
Nov 19, 2009
[citation][nom]blazorthon[/nom]There are two types of Kepler cores... One type can ONLY do SP, the other can only do DP. Consumer cards such as the 680 have very few of the DP only cores and that is why it has poor compute. [/citation]
Wow, good job repeating what I said verbatim. Do you ever argue anything of substance in your perpetually self-serving essays? Or are you just trolling? I can never tell.
 

dragonsqrrl

Distinguished
Nov 19, 2009
[citation][nom]blazorthon[/nom]Kepler's cores aren't capable of 1-to-1 SP-to-DP compute. Furthermore, nothing can do 64-bit math (DP) at the same speed as 32-bit (SP) unless its SP math is limited.[/citation]
Limiting SP math? What are you even talking about? Umm, sorry, but it is possible to achieve a 1/1 SP execution rate...

"The CUDA FP64 block contains 8 special CUDA cores that are not part of the general CUDA core count and are not in any of NVIDIA’s diagrams. These CUDA cores can only do and are only used for FP64 math. What's more, the CUDA FP64 block has a very special execution rate: 1/1 FP32. With only 8 CUDA cores in this block it takes NVIDIA 4 cycles to execute a whole warp, but each quarter of the warp is done at full speed as opposed to ½, ¼, or any other fractional speed that previous architectures have operated at. Altogether GK104’s FP64 performance is very low at only 1/24 FP32 (1/6 * ¼), but the mere existence of the CUDA FP64 block is quite interesting because it’s the very first time we’ve seen 1/1 FP32 execution speed. Big Kepler may not end up resembling GK104, but if it does then it may be an extremely potent FP64 processor if it’s built out of CUDA FP64 blocks."

http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/2
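To unpack the 1/24 figure in that quote, here's a quick sanity check (a sketch; it assumes the published GK104 SMX layout of 192 FP32 CUDA cores plus 8 dedicated FP64 cores):

[code]
# Where GK104's 1/24 FP64 rate comes from, per the AnandTech quote above.
fp32_per_smx = 192   # general-purpose FP32 CUDA cores per SMX
fp64_per_smx = 8     # dedicated FP64 CUDA cores per SMX

# Each FP64 core retires one FP64 op per clock (the "1/1 FP32" rate),
# so the throughput ratio is just FP64 lanes over FP32 lanes:
print(fp64_per_smx / fp32_per_smx)   # 0.041666... = 1/24

# AnandTech's decomposition says the same thing: one FP64 block per
# 192/32 = 6 warp-wide groups of FP32 cores (1/6), and the 8-wide block
# needs 4 cycles to finish a 32-thread warp (1/4):
print((1 / 6) * (1 / 4))             # also 1/24
[/code]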
 
[citation][nom]dragonsqrrl[/nom]Limiting SP math? What are you even talking about? Umm, sorry, but it is possible to achieve a 1/1 SP execution rate... http://www.anandtech.com/show/5699 [...] 0-review/2[/citation]

Doing 64-bit math as fast as 32-bit math means that the processing core must have increased optimizations for 64-bit math and not be optimized for 32-bit math, hence the 32-bit math is crippled in comparison. If both are equally optimized, then 32-bit math is always twice as fast. The only way to get 1-to-1 is to have 32-bit math crippled in comparison to 64-bit math; 1-to-2 is the maximum when both are optimized for. A gaming card has less use for something like this, so unless gaming math becomes 64-bit heavy (this is supposed to happen sometime in the future, but who knows when), no gaming GPU would have the crippled 32-bit math needed to get the 1-to-1 32-bit-to-64-bit ratio.

You don't understand how chip design works if you think otherwise. It can't be done with current technology unless 32-bit math is crippled. Maybe something such as quantum computing could get around such a limitation, but not current types of technology.
 
[citation][nom]dragonsqrrl[/nom]lol... you mad kid?[/citation]

I'm annoyed when I'm accused of being a fanboy when I am not a fanboy. I have a decent background in chip design (although it is not professional, so I don't know too much more than most of the basics and some advanced topics). Calling someone a kid online is just asking for trouble, regardless of whether or not the person you are insulting is the sort to give trouble. For all you know, I could have some hacking knowledge and screw you over.

Sure, I honestly don't really know much about hacking, unlike my knowledge of computer chips and hardware, but you don't know that. I explained some problems that I see for AMD and Nvidia that could cause either of them to get a serious advantage over the other, and if you don't like that, then you can do your own research and try to prove me wrong instead of pulling text from a page about a subject that you don't understand even as well as I do, let alone on a professional level.
 

dragonsqrrl

Distinguished
Nov 19, 2009
[citation][nom]blazorthon[/nom]GCN has great gaming performance and compute performance. Whether or not Fermi was capable of four times greater DP performance than Nvidia allowed in these cards doesn't matter because Nvidia didn't allow it. Even if Fermi was capable of more performance than it had, the cards don't have the better performance, so AMD gets to win by default because AMD sold cards that had the performance that the Fermi cards (and especially Kepler) cards lack, regardless of why they lack it.[/citation]
It's funny that you're putting so much emphasis on DP performance, and of all things in gaming, when this has traditionally been a strong point for Nvidia. Correct me if I'm wrong, but I'm not aware of a single existing or upcoming game that employs FP64 instructions. And when did this even become an issue for AMD fanboys? When the HD7970 launched? ...lol.

Professionals who really rely on GPGPU computation use Fermi-based Quadro or Tesla cards simply because they offer the best DP performance while delivering stability, rock-solid support, and an established platform for GPU computation, which, believe it or not, are more important in this industry than absolute performance. When you have all of this in addition to the better-performing products, there is no competition. When the professional version of the HD7970 launches it will undoubtedly claim the performance crown, but contrary to how you seem to think this industry works (trust me, it's VERY different from gaming), the platform and infrastructure are paramount.

People who are serious about GPGPU computing don't buy GeForce or Radeon cards. You're arguing the compute potential of GPU architectures and of gaming cards as though they're the same thing. Where DP performance really matters, the HD7970 is irrelevant.
 

dragonsqrrl

Distinguished
Nov 19, 2009
[citation][nom]blazorthon[/nom]I'm annoyed when I'm accused of being a fanboy when I am not a fanboy. Calling someone a kid online is just asking for trouble, regardless of whether or not the person who you are insulting is the sort to give trouble. For all you know, I could have some hacking knowledge and screw you over.[/citation]
Well... you certainly have free time on your hands; I can only surmise as to why that might be. That response came with amazing haste, I'm impressed.
 
On to the 1-to-1 DP-to-SP ratio of these special Kepler cores: why aren't they used for all of a consumer GPU's cores, instead of a mix of 32-bit and 64-bit cores? The 64-bit cores undoubtedly take up more die space because of their 64-bit optimization. They probably could run 32-bit math as quickly as the 32-bit cores if made properly, but they would be wasted on that, because the regular 32-bit cores can do the same thing with less die space, meaning less cost and higher power efficiency for 32-bit math. Now, had these cores been optimized for both 64-bit and 32-bit (i.e., able to do two 32-bit operations at once on the 64-bit hardware, or a single 64-bit operation), then they would probably have been even larger, but they would be optimized for both and the 32-bit performance would no longer be crippled. However, they would then need to be comparable to two 32-bit cores' worth of die space, or else they would still be inferior, albeit not as much as the purely 64-bit-optimized cores. So, they could provide an in-between.

However, they would then be crippled on the 64-bit side: they can still only do one full-speed 64-bit operation, yet would be larger than a single 64-bit core, because a 64-bit core would be smaller than two 32-bit cores. So, it would just be a compromise between 64-bit and 32-bit math. This gives us three options: one optimized for 32-bit math, another optimized for 64-bit math, and the last a compromise between the two. The direction that games take (purely 64-bit, or a mix of 64-bit and 32-bit) will decide what future consumer cards use. Professional compute cards are generally more 64-bit heavy, but not all forms of compute are 64-bit, so some workloads could prefer the compromise cores.

The point is, 1-to-1 64-bit-to-32-bit is not all it may be cut out to be, except for purely (or primarily) 64-bit work. All of these designs are compromises for different workloads.
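To make that tradeoff concrete, here's a toy model of the three options; the area and issue-rate numbers are purely hypothetical placeholders, chosen only so the relative ordering matches the argument above:

[code]
# Toy model of the three core options (all numbers are HYPOTHETICAL,
# for illustration only; only the relative ordering is meant to matter).
options = {
    # name:            (relative area, SP ops/clock, DP ops/clock)
    "32-bit-only core": (1.0, 1, 0),
    "64-bit-only core": (1.6, 0, 1),  # assumed ~1.6x an SP core's area
    "compromise core":  (2.2, 2, 1),  # assumed ~2 SP cores' worth of area
}

for name, (area, sp, dp) in options.items():
    # Throughput per unit of die area, the quantity the post argues about.
    print(f"{name}: {sp / area:.2f} SP/area, {dp / area:.2f} DP/area")
[/code]

Each option wins a different per-area metric, which is the point: none of the three is strictly best, and the right choice depends on the expected SP/DP mix.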
 
[citation][nom]dragonsqrrl[/nom]It's funny that your putting so much emphasis on DP performance, and of all things in gaming, when this has traditionally been an strong point for Nvidia. Correct me if I'm wrong, but I'm not aware of a single existing or upcoming game that employs FP64 instructions. And when did this even become an issue for AMD fanboys? When the HD7970 launched? ...lol.Professionals who really rely on GPGPU computation, use Fermi based Quadro or Tesla cards simply because they offer the best DP performance while delivering on stability, rock solid support, and an established platform for GPU computation, which believe it or not, are more important in this industry than absolute performance. When you have all of this, in addition to the better performing products, there is no competition. When the professional version of the HD7970 launches it will undoubtedly claim the performance crown, but contrary to how you seem to think this industry works (trust me, it's VERY different from gaming), the platform and infrastructure are paramount. People who are serious about GPGPU computing don't buy Geforce or Radeon cards. You're arguing the compute potential of GPU architectures, and gaming cards as though they're the same thing. Where DP performance really matters, the HD7970 is irrelevant.[/citation]

You fail to realize that, for the money, cards like the 7970 can provide more 64-bit performance than professional cards, because the professional cards are so expensive. The pro cards are also mostly outdated. You are being overbearing in your assumptions. Many people use the consumer cards for some workloads because of both of these reasons, especially if they don't need the features of the professional cards (regardless, the 7970 is probably close to some of them and may even beat most of the professional cards).

AMD has an infrastructure. OpenCL might be inferior to CUDA, but it is more universally supported by hardware and is probably cheaper to use. Also, if all that Nvidia had was the infrastructure and AMD were several times faster, that would be a problem for Nvidia; many people would go for AMD, because an inferior infrastructure can be tolerated if performance is really needed.

I'm not trying to argue that Nvidia is better or that AMD is better. My point was that as games become more compute-oriented (Civilization 5 has a little compute, though not a whole lot; still, it is a start, and games are supposed to take this trend to a greater extreme in the next few years), the market may change. Nvidia has decided to go against this trend with the GTX 600 Kepler cards, despite being a big supporter of it. Perhaps they know something, such as that the shift will take at least until the GTX 700 or 800 cards arrive, and by then they might have something like the 32/64 compromise-optimized cores in the works. If that is how Nvidia is working this through, then making Kepler cards that are poor at 64-bit compute but excellent for most current games is a highly strategic move: Nvidia buyers might need more upgrades, and thus spend more money on Nvidia cards.

Regardless of why, AMD and Nvidia have switched places as the compute king for this generation of graphics cards relative to their previous stances, and I brought up how that could be problematic for Nvidia buyers as compute becomes more important in games. Flame away if you want, but I'm not the company directors or whoever up in Nvidia who chose to do this. They certainly chose an excellent time to do it, what may be their last chance to make such a change.
 
Also, compute was always something that I considered, just not in a gaming context (until I read the Tom's article about it a month or so ago). That's why I have several GeForce cards instead of just Radeon cards right now. For gaming, AMD was the obvious king at pretty much every price point with the Radeon 6000 cards, up until the Nvidia price cuts right before the 680's launch (although the 560 Ti competed very well with the 6950, the other GeForce cards did not compete nearly as well with their competitors). Tom's often acknowledged this in the best-graphics-cards-for-the-money articles.
 
Guest
Basically, the news is saying that GF has lost the exclusive right to produce 28nm APUs, but it didn't state whether or not AMD is switching the Kabini/Temash APUs to other foundries (TSMC, most likely). So my question is: where will AMD manufacture their 28nm Kabini/Temash APUs? Kaveri is confirmed at GF.
 
TSMC simply provides exceptional leverage to AMD in negotiating with OEMs. Issues of supply in the retail channel, too, are just plain eliminated.

Intel, essentially, loses that past leverage they've held over AMD. Trinity/Brazos is the first real example of AMD being able to say to OEMs and retailers, "We can do that" to nearly any order.

Kaveri will be an even better example, as will the 28nm successor to the Ontario line.

This also allows AMD to 'bin' the high-flying super-efficient chips and market them at the top end.

Trinity-based Opteron APUs, anyone?
