AMD Pitcairn With 768 Shaders: What is This Mystery Chip?

I think the 7830 name makes a lot of sense; the 5830 was a castrated 5850, after all. This one in a single-slot card might be a lot nicer than the 5830 was, since the cut-down sounds like it'll do well on thermals, so it would actually have a purpose.

I hope AMD reads these comments and realizes that they actually should launch this card.
 
Maybe it will be Tom's fault that the crowd demands AMD produce this thing! And then maybe AMD can make enough money to at least improve in the CPU market and become competition for Intel again.
 
I agree with semisonic.
The form factor is just for ease of setup in a dev box.



 
[citation][nom]eddieroolz[/nom]An incomplete board is not always the best thing to be using for validation, since it might behave differently from the complete board. In either case though, I think we just ruined AMD's 7830 launch party.[/citation]

As long as the I/O is good, it can be useful for debug (think memory testing). If the front end or back end is intact, it's even more useful. If some shaders are all that's missing, then you can still validate your PCB; what you're missing at that point is maximum power consumption, temperatures, and performance. You'd need the full part to validate your cooling solution, but functionally you have everything you need.

That said, the price gap has me wondering about the 7830.
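
On the memory-testing point, here's a rough sketch of the kind of pattern write/read-back check a partially-enabled chip could still run. This is purely a hypothetical illustration (written in CUDA for brevity; on an AMD bring-up board you'd do the same thing through OpenCL or the vendor's own tools), not anyone's actual validation suite:

[code]
// Hypothetical sketch only: walk a few patterns across the frame buffer and
// read them back, counting mismatches. Sizes and names here are made up.
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

__global__ void fill(uint32_t *buf, size_t n, uint32_t pattern) {
    // Grid-stride loop so any launch size covers the whole buffer.
    for (size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
         i < n; i += (size_t)gridDim.x * blockDim.x)
        buf[i] = pattern ^ (uint32_t)i;   // address-dependent value catches aliasing
}

__global__ void verify(const uint32_t *buf, size_t n, uint32_t pattern,
                       unsigned long long *errors) {
    for (size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
         i < n; i += (size_t)gridDim.x * blockDim.x)
        if (buf[i] != (pattern ^ (uint32_t)i))
            atomicAdd(errors, 1ULL);      // count every mismatched word
}

int main() {
    const size_t n = 64u << 20;           // 64M words = 256 MB under test
    uint32_t *buf;
    unsigned long long *errs, hostErrs = 0;
    cudaMalloc(&buf, n * sizeof(uint32_t));
    cudaMalloc(&errs, sizeof(*errs));

    const uint32_t patterns[] = { 0x00000000u, 0xFFFFFFFFu,
                                  0xAAAAAAAAu, 0x55555555u };
    for (uint32_t p : patterns) {
        cudaMemset(errs, 0, sizeof(*errs));
        fill<<<1024, 256>>>(buf, n, p);
        verify<<<1024, 256>>>(buf, n, p, errs);
        cudaMemcpy(&hostErrs, errs, sizeof(hostErrs), cudaMemcpyDeviceToHost);
        printf("pattern 0x%08X: %llu mismatches\n", p, hostErrs);
    }
    cudaFree(buf);
    cudaFree(errs);
    return 0;
}
[/code]

None of that touches the shader count, which is the point: you can hammer the memory controllers and the board routing without needing the full die enabled.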
 
[citation][nom]dreadlokz[/nom]probably to compete with the GK110![/citation]

Why do people say GK110? If there EVER is a big Kepler for the GTX 600 cards, it will be GK100. GK110 would be a second-generation Kepler chip, presumably for the GTX 700 cards at that point. I fail to understand how this is a difficult concept: the GF10x GPUs were in the first-generation Fermi cards (GTX 400 series) and the GF11x GPUs were in the second-generation Fermi cards (GTX 500 series). GK104 is what's in the GTX 680, so why are people so hooked on GK110 when we don't even know whether it will ever be made? (Nvidia might just make a new architecture instead of second-generation Kepler GPUs.)
 
[citation][nom]captaincharisma[/nom]AMD's next failiure[/citation]

OK, maybe you'd call AMD's GPUs slower than the latest Nvidia has to offer, and maybe you'd criticize them for their lack of PhysX support. But then you have to admit they offered technologies that Nvidia only matched with the 600 series, like Eyefinity on a SINGLE board. I'm sitting here staring at my 23" 1080p display imagining the possible screen real estate if I got three of them running in Eyefinity mode.

AMD's price/performance couldn't be matched by Nvidia until the recent introduction of the GTX 680, and to get there Nvidia had to pull the plug on any compute performance improvements. AMD's tessellation is also improving significantly with every generation while Nvidia is simply sitting there. Another thing is Nvidia's accelerated video encoding, which is significantly worse than what AMD has to offer despite Nvidia's CUDA being SEVERAL years older than AMD's Stream/APP. Nvidia also locks out PhysX if it detects another vendor's GPU in the system. I can't understand this move, since allowing it would boost sales of cheaper GPUs, a segment dominated by AMD, and most graphics card profits are at the lower end of their portfolios.

Last but not least, AMD never released a driver that fried GPUs. So, I think my money is way safer with AMD than it'll ever be with Nvidia.

Please, my kind sir, look at the Best Graphics Cards for the Money column before saying AMD's GPUs are a failure (that's also the correct spelling of "failure," not what you wrote).
 
[citation][nom]youssef 2010[/nom]OK, maybe you'd call AMD's GPUs slower than the latest Nvidia has to offer... look at the Best Graphics Cards for the Money column before saying AMD's GPUs are a failure.[/citation]

Nvidia is only faster because they were willing to sacrifice compute performance in an attempt to get us to turn to Quadro and Tesla. Furthermore, it's not even a big difference; there aren't many games where a difference between the 7970 and the 680 can be seen (at least in FPS). Granted, the 680 wins more often than the 7970 does, but the 7970 is worth every penny, just as the 680 is. The difference is that the 7970 has the memory capacity to last more than a year or two before AA/AF needs to be lowered to keep the VRAM from overflowing, whereas the 680 already shows problems caused by its low VRAM capacity relative to its performance in some games at certain resolutions, settings, and AA/AF levels. AMD also has cards that can drive six monitors in Eyefinity instead of just three, and has had that for years. The list goes on, but I'll stop here before looking like an AMD fanboy.

Let's see what Nvidia did... They most certainly have the most energy-efficient and, for the most part, the fastest single-GPU and dual-GPU cards in the gaming world right now. However, if games become more compute-focused like so many say they will (including Tom's), how long will that last? Well, that depends on whether such games arrive before Nvidia releases another compute-focused architecture on its GeForce cards. I hope Nvidia gets there in time, because if not, it would be a one-sided competition until it did. Along those lines, I hope the next generation of Nvidia cards has more compute performance, just in case we get those compute-heavier games soon.

Nvidia does have one advantage: with Kepler, the double-precision performance comes from cores that are separate from the 32-bit gaming cores, so it should be able to keep its regular performance unchanged when it adds its bit of compute performance, whereas AMD has to allocate some cores to 32-bit math and some to 64-bit math. If Nvidia simply adds more of the 64-bit cores, it could have a winner in the next generation. Maybe AMD will take a similar approach next generation too, separating the 32-bit and 64-bit math into different cores. It's definitely an interesting concept, although I'm more partial to keeping everything in one type of core, so that when one kind of performance is improved, it all improves at the same time.

It would be interesting if we could change the clock frequency of the 64-bit cores relative to the 32-bit cores. That way, we could overclock whatever really needs it rather than what is already fast enough.
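
For what it's worth, here's what that 32-bit versus 64-bit split looks like from software. This is a hypothetical microbenchmark sketch (CUDA, with made-up iteration counts and launch sizes, since the Kepler point is Nvidia-specific) that times the same dependent FMA chain in single and double precision, which is roughly how you'd check whether the dedicated 64-bit units are the bottleneck:

[code]
// Hypothetical sketch: time a dependent FMA chain in FP32 and FP64 to compare
// throughput. Iteration counts and launch sizes are illustrative only.
#include <cstdio>
#include <cuda_runtime.h>

template <typename T>
__global__ void fma_loop(T *out, int iters) {
    T a = (T)threadIdx.x * (T)0.001;
    T b = (T)1.0000001;
    T c = (T)0.0000001;
    for (int i = 0; i < iters; ++i)
        a = a * b + c;                                // dependent chain keeps the ALUs busy
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;   // keep the result live
}

template <typename T>
static float time_kernel(const char *label, int iters) {
    const int blocks = 256, threads = 256;
    T *out;
    cudaMalloc(&out, blocks * threads * sizeof(T));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    fma_loop<T><<<blocks, threads>>>(out, iters);     // warm-up launch
    cudaEventRecord(start);
    fma_loop<T><<<blocks, threads>>>(out, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("%s: %.2f ms\n", label, ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(out);
    return ms;
}

int main() {
    const int iters = 1 << 18;
    float f32 = time_kernel<float>("FP32 FMA chain", iters);
    float f64 = time_kernel<double>("FP64 FMA chain", iters);
    printf("FP64/FP32 time ratio: %.1fx\n", f64 / f32);
    return 0;
}
[/code]

If the 64-bit units really ran on their own clock domain, as imagined above, you'd expect only the second number to move when you changed that clock.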
 