Report: HD 6970 Release to be Delayed

  • Thread starter: Guest
Status
Not open for further replies.
How can you even call this news? From Taiwanese sources? This is a reproduced article based on another site that posted it earlier, and that original piece is claimed to be bogus. Pure speculation.
 
[citation][nom]kresso[/nom]Sounds like you're selling a murder mystery.[/citation]
Does the rumor indicate a problem with the Cayman architecture? Or are TSMC's fabrication capabilities lacking? Most importantly, will AMD ship on time?
Either way, I swear I've already read this story like a year ago...
 
Yields in the single digits?! That is terrible. Is that accurate? For the prototype attempts? Am I wrong to criticize a yield that poor? What are normal yields for a first production run of a consumer graphics card? In my industry, yields that low mean something is terribly wrong, and the design should never have progressed to production if such errors were even a possibility, let alone a reality.
 
"Both high-end video cards will be overkill for any single-display system."

That's a pretty audacious statement. They're both just die shrinks. Nothing revolutionary, just evolutionary.

I can pretty much guarantee that, even so, neither of them will be able to run Crys... er... the unmentionable game at the highest settings at 2560x1600 with a minimum of 30 fps.

Unless you mean both cards together in some future actually working version of Lucid Hydra. That would be a mean machine.
 
Right...
Both Nvidia and AMD have been using this process for a year now... and only now is AMD having a yield problem...

More like Nvidia-friendly parties are trying to move as much of their GTX 465-and-up inventory as they can before the 69xx and 580 force a fire sale of the entire upper bracket of cards.
 
I like the "future actually working version of Lucid Hydra" line. I'm going to try to get a Lucid Hydra board for my next build anyway, just to support the tech. I know it isn't as good as CrossFire and SLI, but I want it to expand.

As for it being overkill... yeah, I'd say a huge resolution like the 2560x1600 you mentioned would tax either card at playable framerates. But for the mainstream, and even mid-to-high-end enthusiasts, we're still stuck in the age of 1080p, where monitor companies seem content to keep the mainstream. Then again, I shouldn't complain; my 8500 GT can't even do 1080p >_<
 
If this is accurate, which it probably isn't, then it sounds like the tables have turned in Nvidia's favor and AMD is having the same problem Nvidia had when trying to launch Fermi the first time around. Isn't it ironic, don't ya think? I wonder what the fanboys will holler this time. I'm sure every argument they had last year against Nvidia will be flipped in AMD's favor. Oh well, I look forward to seeing how both flagship products perform. Competition is always a good thing for consumers; it saves us money. Woot!
 
[citation][nom]LORD_ORION[/nom]Right... Both Nvidia and AMD have been using this process for a year now... and only now is AMD having a yield problem... More like Nvidia-friendly parties are trying to move as much of their GTX 465-and-up inventory as they can before the 69xx and 580 force a fire sale of the entire upper bracket of cards.[/citation]

Actually, Nvidia had terrible yields on Fermi initially. I think I read sub-5% at one point; in fact, it might have been literally just 5 GPUs from one test run (based on GPU serial numbers). They fixed their problems eventually, and probably learned a thing or two for their next-gen fabrication.

Why doesn't AMD use GlobalFoundries for Radeon production? Does GlobalFoundries not have the necessary tech, or are they busy with other business? It seems silly to own part of GlobalFoundries and yet pay someone else to make your first batch, especially if that someone then has problems doing so.
 
What exactly do you mean by "overkill"? Because it has been a while since I saw the nominal 100 fps I used to get in HL and CS. Is there a video card in existence that will play Crysis, GTA4, SupCom, or even StarCraft 2 at 100 fps at the "standard" 1080p resolution with max settings? No, there isn't. We aren't even starting to approach a single-monitor "overkill" video card.
 
[citation][nom]elysium2k7[/nom]If this is accurate, which it probably isn't, then it sounds like the tables have turned in Nvidia's favor and AMD is having the same problem Nvidia had when trying to launch Fermi the first time around. Isn't it ironic, don't ya think? I wonder what the fanboys will holler this time.[/citation]

AMD rolled out their mid-range parts already, and those are amazing. They are not in the same boat as Fermi.
 
These cards are not overkill for single displays. Until there is a single high-end card that can run Crysis at 1920x1200, 8x AA, 16x AF, at maximum settings (customized with user-made mods that improve quality) at a minimum of 60 fps, we still have much ground to cover.
 
[citation][nom]rmse17[/nom]What exactly do you mean by "overkill"? Because it has been a while since I saw the nominal 100 fps I used to get in HL and CS. Is there a video card in existence that will play Crysis, GTA4, SupCom, or even StarCraft 2 at 100 fps at the "standard" 1080p resolution with max settings? No, there isn't. We aren't even starting to approach a single-monitor "overkill" video card.[/citation]

Anything more than 60 frames per second is overkill, because 60 Hz is the refresh-rate limit of 95% of the 1080p LCD monitors out there.
 
Hmmm, yes, Fudzilla is also saying that, but I also read somewhere that they're on track and only pretending not to be, so they can see what the GTX 580 is capable of before shipping the final BIOS with the final clocks.

BUT, Fudzilla also says that Cayman will have 1536 cores. Now, if the 1536-core number is correct, there is no plausible explanation for a manufacturing problem, given that the 5870 has 1600 cores and probably a bigger die.


I've done a little speculative math based on that number. Here it is, for your consideration:

If we take into account the optimizations that Barts brought, where a 960-core GPU is equivalent to something like a 1340-core Cypress (a little slower than the 1440-core HD 5850), and a 1120-core GPU is equivalent to a 1500-core Cypress (slightly slower than the 1600-core HD 5870), then I could say that, on average, AMD has "gained" around 380 cores with its optimizations.


In that case, a 1536-core Cayman GPU would have more or less the performance equivalent of a hypothetical last-generation 1980-core Cypress. And this makes sense, since that number, give or take, was circulating around the net for a while. People just didn't take into account the performance optimizations through which AMD has managed to do more with less.
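For what it's worth, here is that arithmetic written out as a quick Python sketch. All of the core counts and the additive-gain assumption are just the rumored/estimated figures from above, nothing official:

```python
# Speculative back-of-the-envelope math; every core count here is a rumor/estimate.

# Barts parts and the Cypress core count each one roughly performs like
barts_to_cypress_equiv = {
    960: 1340,   # ~a 1340-core Cypress (a little slower than the 1440-core HD 5850)
    1120: 1500,  # ~a 1500-core Cypress (slightly slower than the 1600-core HD 5870)
}

# Average cores "gained" through the Barts optimizations
gains = [cypress - barts for barts, cypress in barts_to_cypress_equiv.items()]
avg_gain = sum(gains) / len(gains)
print(f"Average optimization gain: ~{avg_gain:.0f} cores")  # ~380

# Apply the same additive gain to a rumored 1536-core Cayman
cayman_cores = 1536
cayman_equiv = cayman_cores + avg_gain
print(f"1536-core Cayman ~ a {cayman_equiv:.0f}-core Cypress")  # ~1916, same ballpark as the ~1980 above
```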

Taking this into account, let's make a small (yet again speculative) comparison:

The 5850 has a 160-core difference from the 5870 (1440 -> 1600).

The GTX 470 has a 32-core difference from the GTX 480 (448 -> 480), and there is another 32-core difference up to a full 512-core Fermi.

Now, the GTX 480 is faster than the 5870.

And the GTX 470 is faster than the 5850, but slower than the 5870.

So, given the core-count differences between these parts, one could say that:

448 Nvidia cores are faster than 1440 AMD cores, but an increase of 160 cores makes the HD 5870 faster.

If you were to give the 5870 another 160 cores, you would probably get GTX 480 performance or slightly better.

So, say 1600 + 160 = 1760 cores.

But if Nvidia releases a GTX 580 with the full 512 cores, the distance would remain the same. Now, if you add yet another 160 cores to the AMD part, 1600 + 160 + 160 = 1920, which would put them both competing at the same level.

But 160 + 160 = 320, and given that AMD has gained around 380 cores through its optimizations, Cayman could end up slightly better than Nvidia's part.
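Putting the same comparison into a small sketch (again, purely speculative; the 380-core figure is just the estimate from earlier in this post):

```python
# Speculative core-count comparison; nothing here is an official spec.

hd5850, hd5870 = 1440, 1600
gtx470, gtx480, full_fermi = 448, 480, 512

amd_step = hd5870 - hd5850             # 160 cores between the 5850 and the 5870
nv_step = gtx480 - gtx470              # 32 cores between the 470 and the 480
assert full_fermi - gtx480 == nv_step  # and the same 32-core step up to a full Fermi

# If the GTX 480 is roughly a 5870 plus one AMD step...
match_gtx480 = hd5870 + amd_step       # 1760
print(f"5870 + one step = {match_gtx480} cores ~ GTX 480 level")

# ...and a full 512-core GTX 580 is one more Nvidia step up, AMD would need
# roughly one more AMD step to stay level:
match_gtx580 = hd5870 + 2 * amd_step   # 1920

cores_needed = match_gtx580 - hd5870   # 320
estimated_gain = 380                   # optimization gain estimated above

print(f"Extra cores needed to match a full 512-core part: ~{cores_needed}")
print(f"Estimated gain from Cayman's optimizations: ~{estimated_gain}")
# 380 > 320, so Cayman could land slightly ahead -- assuming Nvidia's
# architecture is otherwise unchanged (see the next paragraph).
```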

Now you have to factor in the fact that Nvidia may also have made improvements to their own architecture, which could make this more complicated.

But if they haven't, it might all come down to effective GPU and memory clock speeds.

This is why AMD is probably trying to figure out how Nvidia is going to market their cards. If the Amazon pre-order page (since removed) is accurate, then the GTX 580 might just be a full 512-core affair and little else in the way of improvements.

I think this will be closer than we might think.
 