AMD HD 8800 Series SKUs Surface in Leaked Document

[citation][nom]gramps[/nom]IF this is accurate, it would be impressive to say the least. Personally I'm a little skeptical (but hopeful): that much of an increase in die size and such a substantial performance boost with a slightly lower TDP, all on the same 28nm process? AND that price! Too good to be true? I hope not!![/citation]


What's so unusual about that? Last year they introduced a new architecture on a new manufacturing process. They've had a year to fine-tune the architecture and the production, so anything less than that would have been a disappointment and sloppy work. I think they've done it just fine. It's just another iteration, a second generation of GCN.
 
I do not think so; all they really need to do is work on the compute side, including OpenCL,
something the GTX 600 series (Kepler) falls short on...

GTX 880 = FTW. LoL
 

It would FLY on all maximal maximums (I mean, the most demanding UE4-based games..."Unreal III" anyone? BRING IT ON!!!).



By that time "HD 9999" (M-M-M-MAXIMUM DAMAGE!) will be out. %))
 
I think it would be a dumb business decision to sell the card at those prices. I like what AMD has rolled out in recent times, competing with Nvidia, but you need to remain profitable and strong. We can enjoy powerful cards at low prices today in exchange for what? Nvidia dominance tomorrow? Great.
 
[citation][nom]jb1007[/nom]I think it would be a dumb business decision to sell the card at those prices. I like what AMD has rolled out in recent times, competing with Nvidia, but you need to remain profitable and strong. We can enjoy powerful cards at low prices today in exchange for what? Nvidia dominance tomorrow? Great.[/citation]

That doesn't make much sense at all. If AMD had high prices, then Nvidia would undercut them and AMD would have almost no profits from their graphics division. Besides, price/performance should constantly be dropping. That's part of how the industry works. If it didn't drop and cards instead just got more expensive every generation, few people would buy them and the computer gaming market would die in a few years.
 
I haven't upgraded my graphics card in some time and have been looking recently. I am thinking waiting another few months is going to be worth it if these rumors are accurate.
 
The price is the most unlikely thing in this "leak". This card could be as fast as the Nvidia 680/670 (or even faster) and the price would be near $200... Not likely, so most probably this is fake or it contains some disinformation.
The specs themselves would be very believable. This seems to be very much like the Nvidia 680 (aka 104) is. (striker type)
It will be fast in games, but not so much in computing (relative to the 7970 and 7950). Actually it would make sense to release the 8870 as the top gaming GPU and the 8970 as the top compute GPU, much like Nvidia has the 104 and 110 series.
Those who need a fast gaming GPU would buy 88xx series cards and those who need a lot of computational power would buy 89xx series cards. It would make sense for gamers, for those who need a lot of computational power, and for the GPU maker. (Why pay for something that you don't need?) But as I said, this is far too cheap for its specs... so not likely to appear at that price. In recent times the price has tracked speed relative to competing products. Why would they change the system now? It would be very nice for us, but I am skeptical at this moment. Free lunches don't come around that often...
Quite often the second time a production node is used, the chips have been larger: the manufacturing technology has matured, so it is easier to make bigger chips that don't eat too much power, and you also get better yields. So it is quite possible that we really get bigger GPUs this time.
 
Looks like "wait for me".
Well, too bad for AMD, I was about to buy a 7870 :).

[citation][nom]A Bad Day[/nom]Going Nvidia's style of throwing in more transistors while keeping TDP and possibly price under control. I like that.[/citation]
Since when is that "Nvidia style", lol? It's not like the Fermi series was AMD's chip.
 
[citation][nom]kartu[/nom]Looks like "wait for me". Well, too bad for AMD, I was about to buy a 7870. Since when is that "Nvidia style", lol? It's not like the Fermi series was AMD's chip.[/citation]
Yeah, but that was the style with Kepler, at least...
 
As much as I would like this leak to be true, it simply isn't.

Unless there was some huge flaw in their 7xxx series designs, it just doesn't make engineering sense for transistor count to go up, frequency to go up, die area (and thus capacitance) to go up, and yet power to go down. Dynamic CMOS power scales as P ∝ a·f·C·V² (see the CMOS article on Wikipedia), so increasing f and C increases P proportionally. From the 8870 specs above, 1.27 × 1.05 ≈ 1.33, or a predicted 33% increase in power consumption based on die size (a rough relative approximation for C) and frequency, all other things being equal. So some combination of a (alpha, the fraction of clocks on which each transistor switches state, on average), V, and C has to go down enough to counteract this effect. The stated performance gains contradict a lower a, and the stated die size does not support a much lower C. So unless voltage decreases by about 18%, the stated comparison is nonsensical.
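
To put numbers on that, here's a quick back-of-the-envelope check in Python. The 1.27 and 1.05 ratios are the ones above; the ~10% TDP drop is my own assumption standing in for the leak's "slightly lower" figure:

```python
# Sanity check of the leaked 8870 numbers against P ~ a*f*C*V^2.
die_area_ratio = 1.27  # ~27% larger die, rough stand-in for C
clock_ratio = 1.05     # ~5% higher core clock (f)
tdp_ratio = 0.90       # assumed: the leak's "slightly lower" TDP

# Power predicted from C and f alone, all else being equal:
predicted_power = die_area_ratio * clock_ratio
print(f"predicted power ratio: {predicted_power:.2f}x")  # ~1.33x

# Since P scales with V^2, the voltage drop needed to hit the claimed TDP:
voltage_ratio = (tdp_ratio / predicted_power) ** 0.5
print(f"required voltage ratio: {voltage_ratio:.2f}x "
      f"({(1 - voltage_ratio):.0%} drop)")  # ~18% drop
```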

The performance increase is also unrealistic. 1.05 × 1.21 ≈ 1.27, or a 27% increase in performance assuming equivalent performance per transistor. So either they did a heck of a lot "more with less", or the stated 60% - 75% performance increases are bogus. And for pixel rate, since it's based on ROPs, which are typically some multiple of 8 (16, 24, 32, 40, 48, whatever), I'm curious how they got a 10% increase from just a 5% clock-rate increase.
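
The same kind of sanity check on the performance claims shows how much extra per-transistor efficiency the leaked 60% - 75% figures would require:

```python
# Expected performance from transistor count and clock alone:
transistor_ratio = 1.21
clock_ratio = 1.05
expected_perf = transistor_ratio * clock_ratio
print(f"expected perf ratio: {expected_perf:.2f}x")  # ~1.27x

# Per-transistor efficiency gain implied by the leaked claims:
for claimed in (1.60, 1.75):
    print(f"claimed {claimed:.2f}x implies "
          f"{claimed / expected_perf:.2f}x efficiency per transistor-clock")
```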

The price drop could be realistic, because the fab could have worked out the kinks in the 28nm process, improving yield substantially. Manufactured cost is roughly proportional to die size when the process is static, so that would also mean opportunities to drop prices on existing products by substantial amounts. I'd love to see that happen too. But I'm very skeptical of this chart.
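
To get a feel for how much process maturity alone can move cost, here's a rough dies-per-wafer sketch with a simple Poisson yield model. All the numbers in it (300mm wafer, ~270mm² die, the defect densities) are illustrative assumptions on my part, not figures from the leak:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic gross-dies estimate: wafer area over die area, minus edge loss."""
    r = wafer_diameter_mm / 2
    return (math.pi * r ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Poisson yield model: exp(-defect_density * die_area)."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

die_area = 270  # mm^2, illustrative
for defects in (0.5, 0.25):  # early vs. matured process, assumed
    good = dies_per_wafer(die_area) * poisson_yield(die_area, defects)
    print(f"D0={defects}/cm^2 -> ~{good:.0f} good dies per wafer")
```

With those assumed numbers, halving the defect density roughly doubles the good dies per wafer, which is the kind of effect that could support lower prices on a static process.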
 
[citation][nom]hannibal[/nom]The price is the most unlikely thing in this "leak". This card could be as fast as the Nvidia 680/670 (or even faster) and the price would be near $200... Not likely, so most probably this is fake or it contains some disinformation. The specs themselves would be very believable. This seems to be very much like the Nvidia 680 (aka 106) is. (striker type) It will be fast in games, but not so much in computing (relative to the 6970 and 6950). Actually it would make sense to release the 8870 as the top gaming GPU and the 8970 as the top compute GPU, much like Nvidia has the 106 and 110 series. Those who need a fast gaming GPU would buy 88xx series cards and those who need a lot of computational power would buy 89xx series cards. It would make sense for gamers, for those who need a lot of computational power, and for the GPU maker. (Why pay for something that you don't need?) But as I said, this is far too cheap for its specs... so not likely to appear at that price. In recent times the price has tracked speed relative to competing products. Why would they change the system now? It would be very nice for us, but I am skeptical at this moment. Free lunches don't come around that often... Quite often the second time a production node is used, the chips have been larger: the manufacturing technology has matured, so it is easier to make bigger chips that don't eat too much power, and you also get better yields. So it is quite possible that we really get bigger GPUs this time.[/citation]

The leak is probably incorrect, but that's not necessarily why. The 8800 series (according to this *leak*) shows (greatly) improved compute performance over the 78xx series, so I don't know what you're going on about there. Furthermore, the prices could be accurate. This isn't something to expect until maybe the first or second quarter of 2013, so pricing could change a lot by then. Prices could easily adjust enough by then for cards such as these to be comparable to those listed in this spec sheet.
 
[citation][nom]TeraMedia[/nom]As much as I would like this leak to be true, it simply isn't. Unless there was some huge flaw in their 7xxx series designs, it just doesn't make engineering sense for transistor count to go up, frequency to go up, die area (and thus capacitance) to go up, and yet power to go down. Dynamic CMOS power scales as P ∝ a·f·C·V² (see the CMOS article on Wikipedia), so increasing f and C increases P proportionally. From the 8870 specs above, 1.27 × 1.05 ≈ 1.33, or a predicted 33% increase in power consumption based on die size (a rough relative approximation for C) and frequency, all other things being equal. So some combination of a (alpha, the fraction of clocks on which each transistor switches state, on average), V, and C has to go down enough to counteract this effect. The stated performance gains contradict a lower a, and the stated die size does not support a much lower C. So unless voltage decreases by about 18%, the stated comparison is nonsensical. The performance increase is also unrealistic. 1.05 × 1.21 ≈ 1.27, or a 27% increase in performance assuming equivalent performance per transistor. So either they did a heck of a lot "more with less", or the stated 60% - 75% performance increases are bogus. And for pixel rate, since it's based on ROPs, which are typically some multiple of 8 (16, 24, 32, 40, 48, whatever), I'm curious how they got a 10% increase from just a 5% clock-rate increase. The price drop could be realistic, because the fab could have worked out the kinks in the 28nm process, improving yield substantially. Manufactured cost is roughly proportional to die size when the process is static, so that would also mean opportunities to drop prices on existing products by substantial amounts. I'd love to see that happen too. But I'm very skeptical of this chart.[/citation]

We've heard of things such as AMD's high-density library that they plan to use in their Excavator CPUs, so maybe AMD implemented something like that along with other architectural improvements. Voltage settings on the GCN cards are far higher than they usually need to be, so with process/yield improvements and more reasonable voltage settings, the power figures stated in this *leak*, again, could be reasonable.

As for the ROPs, AMD could have made some improvements in their designs rather than increasing ROP count.

I'm skeptical too, but these points still stand.
 


Considering what AMD did with the Radeon 6800 series compared to the Radeon 5800 series, and similar situations, I'd definitely call it AMD's style.



Nvidia cut a lot of stuff out of Kepler. They didn't do more with less, they did less with more, but what they still did, they did better. For example, AA efficiency, tessellation efficiency, double-precision compute performance, and more were greatly sacrificed in Kepler to optimize for raw FP32 pixel crunching. That sacrifice becomes apparent when you throw in heavy AA, tessellation, or double-precision compute workloads and compare the efficiency. Overclocking is also a good way to see the memory bandwidth limitation that is closely tied to the loss in AA efficiency.
 
[citation][nom]alextheblue[/nom]Considering it's a highly scalable engine and not a game, I'm sure that UE4 (in some form) will run on a $50 GPU from two years ago. Now if you come up with a specific game using UE4 that is brutal, then you might have something. But not all UE4-based games will be taxing - again, scalability.[/citation]

Let's put it this way:

the Samaritan demo was a ~3 TFLOPS demo. If properly coded, any game should look at least that good on either of these cards.

It all comes down to how well the game is coded now.
If only PhysX would do us a favor and die, we would have an even graphical playing field.

[citation][nom]Regor245[/nom]You don't need to upgrade if games run fine on your current card, and you don't need to max out settings just to play games. If the 8770 doesn't need a 6-pin... cool[/citation]

How often are games optimized for anything but the highest settings? Most of the rest comes from the engine scaling things back until it can run on a lower tier, and it almost always looks horrific.
It's a main reason I do everything I can to max out texture size no matter the game; muddy textures stick out like a sore thumb to me.

 


The compute power increases, yes, but not to the level of the older 79xx series. This GPU seems to be a faster gaming card than the 79xx series, but not as powerful as a compute GPU. So it's quite the ultimate gaming GPU for those who need a good gaming card. Very much what everyone has hoped for. Well, maybe not those who run SETI programs and the like, but it's easy to see that the 89xx series could give a quite huge improvement in computational tasks compared to the 79xx series if AMD is really increasing their die size this much!

 


I get that, but you previously said that it'd be weak in compute compared to the Radeon 69xx and that wasn't true. Also, you said that the GTX 680 uses GK106, which is incorrect; the GTX 680 uses GK104.
 


So true... It's too late to be writing posts at this moment, long past midnight... Thanks for the clarification!
 