ATI 5870


I'm thinking the G300 may be the last huge monolithic monster made, and duals and such are the way of the future.
It all depends on the tech used, too.
 
I think you also need to think about the way they go about it, and who you include in that list (Larrabee still seems to be headed in a similar direction for Intel, and they may end up being the last to get the hint and produce a few TERRAlithic monsters 😱 ).

I personally see multi-die + single package as still being the best design strategy; the problem, of course, as we've known for a while, is in getting it to work as well as it does for CPUs.
 
nV will always have big developers on their side; "The Way It's Meant To Be Played" program will always provide the best gaming experience for nV cards.
 
Like they did in the FX generation, eh?

There is little to guarantee that nV will have the same amount of influence this time, especially when, like with the FX series, they are second to the table with the spec.

For SM4.0 and SM3.0, nV brought hardware out first, so it somewhat dictated development. With SM2.0, ATi brought the R9700 to market first and it dictated a lot of early DX9 game development, so there's little history to suggest nV will keep their influence when they have no product in the marketplace.

Also, nV's influence is waning as Intel prepares to inject itself into more dev teams as well (either by paying devs or buying them outright, like Project Offset). And although AMD is still the red-headed step-child, even the recent events show they aren't going to hold back like in the past, even if they still might not match the other players in the field.
 

Why should there be a distinction? A GPU isn't a discrete processor like a CPU. It's massively multicored anyway. I think that is where Nvidia's break-dancing president thinks he is going to jump ahead of the rest of the world. Instead of the concept of discrete GPUs, you just have a single logical bank of cores to hypermultiprocess the crap out of any job thrown at them, not just graphics. CUDA it work?
 


Agreed on the first sentence.

Disagree on the 2nd (unless you're talking about inter-chip communications).


As dndhatcher says, it's massive amounts of homogeneous cores. Get the 2/3/4 discrete GPUs (chips) talking properly, and it should be seamless.


When that happens, the large monolithic core is dead. Why make 1 large chip and disable parts of it for the mid-range market when you can make 1 small chip for the low range, add another for mid, and add another for the top?


Far cheaper in R&D and manufacturing. The whole product line can also be brought to market far quicker.
 


Dude, did you miss the last generation of chips?

BIG = BAD

A larger chip means lower raw yield per wafer, a higher failure rate, and higher development costs, which results in a much higher cost per chip to make. It also leaves you with less attractive SKU options, more often having to use chips in low-priced cards rather than doubling them up for a healthy mark-up.
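To put rough numbers on that, here's a back-of-the-envelope sketch in Python (the wafer cost and defect density are invented illustrative figures, not actual TSMC numbers), using the common dies-per-wafer approximation and a simple Poisson yield model:

import math

WAFER_DIAMETER_MM = 300.0   # standard 300mm wafer
WAFER_COST = 5000.0         # assumed wafer cost, illustrative only
DEFECT_DENSITY = 0.002      # assumed defects per mm^2, illustrative only

def dies_per_wafer(die_area_mm2):
    # Wafer area divided by die area, minus an edge-loss term.
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def yield_fraction(die_area_mm2):
    # Poisson yield model: a bigger die is more likely to catch a defect.
    return math.exp(-DEFECT_DENSITY * die_area_mm2)

def cost_per_good_die(die_area_mm2):
    good_dies = dies_per_wafer(die_area_mm2) * yield_fraction(die_area_mm2)
    return WAFER_COST / good_dies

for area in (250, 500):  # roughly an RV770-class die vs a GT200-class die
    print(f"{area} mm^2: {dies_per_wafer(area)} candidates, "
          f"{yield_fraction(area):.0%} yield, "
          f"${cost_per_good_die(area):.0f} per good die")

Doubling the die area roughly halves the candidates per wafer and compounds that with a worse yield, so the cost per good die climbs a lot faster than the area does.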

Also, without an x86 core there's only so much they can do that requires that kind of power, and CUDA isn't enough to make them money in the enterprise and government market.

Smaller dies with better cost structures and more options are a far better solution, as was proven last round, where nV hemorrhaged money on the first gen of G200s while ATi had control over pricing due to their lower costs and greater flexibility.

This may help bring you up to speed:
http://www.anandtech.com/video/showdoc.aspx?i=3469&p=1

That also relies on the idea that there are no major external forces, like a bad DX implementation or a problematic process at TSMC (like 80nm HS and the first gen of 40nm), which can hurt even the best-intended strategies.
 


Actually, I think you're confusing who's saying what. JDJ is for multiple GPUs acting as one but being designed, built and implemented as small individuals, not vice versa. That's how the last generation went, and we've discussed it a few times, the last time before this probably being the release of the Anand article.

 
Does anybody know when we are going to see proper benchmarks and reviews for the 5870?

Is the 5870 going to be released in two versions, 1GB and 2GB (like the 4870 512MB and 4870 1GB)?



 

You are just looking for excuses to argue with me. Amiga500 understood what I meant.

I said it doesn't matter and shouldn't matter. Virtualize it into one logical unit and no one cares how the hardware is manufactured. If you virtualize it correctly, the entire concept of an x2 GPU or even multiple GPU cards is irrelevant. Making one huge virtual screen out of 24 physical monitors, and one large virtual bank of GPU power out of 4 graphics cards (or 4 x2 cards, who cares), frees hardware manufacturers to do whatever they need to without forcing software developers to duplicate their work for multiple hardware configurations.


One of the articles said the NDA is up in about a week.

IIRC a 5870 (1GB), 5870 (2GB) and 5850 are the ones they officially talked about shipping.
 


No, you toss around terms you don't understand in reply to comments you obviously don't understand, and I doubt Amiga500 would agree with you if he really looked at what JDJ said and then looked at your reply again. It's like you're saying it doesn't matter how you manufacture a part: if you use flubber and magic it'll be better, we just need to learn how to get the right balance of flubber and magic. :sarcastic:

JDJ is talking about current manufacturing process benefits and strategic choices, and there it does make a difference, a huge difference. Virtualization is not a replacement, because of hard-set architectural barriers and advantages in chip making: things like I/O interfaces and latencies; memory alone is a major issue. It's fine for supercomputing, whose requirements are not bound by latency, just raw large-scale computing that takes a long time to compute things that may be measured in 1x10^-15 s but doesn't need to finish in a set time or else throw out the result. For GPUs, though, you need speed and efficiency for the power you want, and from a manufacturing perspective you want better economies of scale to maximize your available resources as well as make sure you're not paying much more than the other guy to build your solution.
Virtualization helps when there is no other option, but it's not practical in a discussion of base chips. You can try to virtualize an HD5870 with a bunch of HD5300 chips, but the number of chips required to equal it makes it far less practical. The barriers to inter-VPU communication and even memory communication would make it impractical to implement, especially since there needs to be a single point of communication with the CPU, in which case you end up with another bottleneck; or else you have them all communicate, and then you have oversaturation and duplication.

Also, the software overhead is much larger when the CPU has to manage the virtualization, meaning you need a more powerful CPU to virtualize a more powerful GPU, or you need to build dedicated hardware to manage it, neither of which is attractive if it means adding resources outside the current production line. Just like you can emulate (errr, virtualize) DX11 hardware on a DX10 card, it still won't be as fast as actually having the hardware resources to do it within the chip.
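To make the single-point-of-communication problem concrete, here's a toy sketch in Python (the class names and the whole model are invented for illustration, not any real driver API) of a naive "virtual GPU" built from several chips. Shared data has to be duplicated into every chip's memory, and all work funnels through, and re-synchronizes at, one coordinator:

from dataclasses import dataclass, field

@dataclass
class PhysicalGPU:
    # One discrete chip with its own local memory (hypothetical model).
    name: str
    local_memory: dict = field(default_factory=dict)

    def upload(self, key, data):
        self.local_memory[key] = data          # each chip keeps its own copy

    def render_slice(self, frame, slice_id):
        return f"{self.name}: frame {frame}, slice {slice_id}"

class VirtualGPU:
    # Toy "one logical unit" stitched together from several chips.
    def __init__(self, gpus):
        self.gpus = gpus

    def upload_texture(self, key, data):
        # Shared resources get duplicated into every chip's memory:
        # 4 chips with 1GB each do NOT behave like one chip with 4GB.
        for gpu in self.gpus:
            gpu.upload(key, data)

    def render_frame(self, frame):
        # Work is split up and gathered back through this single point of
        # coordination, which is where the bottleneck and the CPU-side
        # management overhead live.
        return [gpu.render_slice(frame, i) for i, gpu in enumerate(self.gpus)]

pool = VirtualGPU([PhysicalGPU("gpu0"), PhysicalGPU("gpu1")])
pool.upload_texture("rock_diffuse", b"...texture bytes...")
print(pool.render_frame(0))

None of that toy code makes the chips behave like one bigger chip; somebody (the game dev or the driver team) still has to write and tune the splitting logic.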

If you virtualize it correctly, the entire concept of an x2 GPU or even multiple GPU cards is irrelevant. Making one huge virtual screen out of 24 physical monitors and one large virtual bank of GPU power out of 4 graphics cards (or 4x2 cards, who cares) frees hardware manufacturers to do whatever they need to without forcing software developers to duplicate their work for multiple hardware configurations.

They don't need to work on multiple hardware configurations; that's why we have DX and OGL, and then you tweak after the fact. And whether you virtualize it or not, you still have to change how the application, in conjunction with the drivers, handles the workload. Some software developer has to work on your virtualized model, either the game dev or else the IHV's driver team.
And your example of a virtual screen made out of 24 physical monitors is, once again, a perfect example of how it's nowhere near as good as a single monitor with the same resolution. You also have to ask whether that trade-off of 24 monitors is worse than a single screen at 1/6 of the resolution driven by an improved DLP projector. If you've ever watched a movie, I think you'd find a single 50ft screen showing a soft 12MP (4K) image (or even a 4MP [2K] image) far more attractive than a 20ft wall of 24 monitors showing a total of 55MP with distinct seams between them.

The goal is to remove the seams, and that's the flubber and magic again.

So in short, your answer is like saying, "does it really matter, we're going to be moving to nano-tubes and optical processors and the whole process will change...." That's all well and good, but totally irrelevant to the near-term context JDJ was talking about.
 


You will care because you pay for it in costs.


A smaller chip is easier to design.


A mid-range composed of 1 or 2 small chips* is cheaper to make than a mid-range composed of one crippled gigantic chip**.


*where you are making full use of the transistors fabricated.

**where you are only using half the transistors fabricated.



Those are two fundamental reasons why small chips are better... assuming the discrete chips appear seamlessly integrated to the software.
 
One interesting development with the HD 5800s is that ATI finally abandoned its sweet-spot strategy, which ran for 2 gens. No more intentionally handicapped (although not necessarily slow) cards to "fit" a certain price segment ($150/$250). This is like the X1950 XTX: pure unadulterated fun. You'd actually believe ATI is more than willing to deliver a killing blow.
 
I agree to a point. Their high end is supposedly much larger comparatively than the last few gens, which will really hurt nVidia here, but I also see this as a more forward-looking design.
The 40nm process was the process from hell, much like the 80nm.
Going forward, the 32 and 28nm processes are expected to be much easier to handle, and a shrink to those nodes could come much quicker this time than the 55 to 40nm transition did. So even at 330mm², at say 32nm, we're right back in the fold again.
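If you assume an idealized optical shrink, where die area scales with the square of the feature size (real shrinks rarely scale that cleanly, since I/O, analog and pad areas don't shrink much), the rough math looks like this:

def scaled_area(area_mm2, old_node_nm, new_node_nm):
    # Idealized assumption: area scales with the square of the feature size.
    return area_mm2 * (new_node_nm / old_node_nm) ** 2

print(scaled_area(330, 40, 32))   # ~211 mm^2
print(scaled_area(330, 40, 28))   # ~162 mm^2

So under that assumption, a 330mm² chip at 40nm drops back to roughly 210mm² at 32nm and around 160mm² at 28nm.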
 

Thank you Amiga. That was sort of my point.

Your post has clarified to me why I am having so much trouble explaining my perspective. Everyone here seems to be thinking at a hardware manufacturing level and I look at it from a software development view. Software is always the slower component of computer technology.

What I mean by "no one cares" is that I can code once and don't have to worry about the underlying hardware.

When I write a database, I don't have to write a bunch of code to look at how many hard drives there are and figure out how to split the data between them; RAID handles that for me. Seeing that happen with video is, to me, far more significant than any single generation's increase in processing power.



Yes, GrapeApe, you are correct. I'm thinking long term, not just at next month's new toy.
 
Long term past Larrabee, and still you have to think about what runs it and how that's made.

Your problem is that you're thinking databases, when you need to think instructions and dependent functions. A GPU is not simply accessing pictures or textures and displaying them; it is building images from thousands of components that are run through thousands of operations, hundreds of which depend on shared inputs and outputs. Access to shared resources like registers and caches is a major component in keeping the chip fast as the workload gets tougher, and splitting those resources decreases the efficiency significantly.
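As a rough illustration of why that matters, here's a toy calculation in Python (the latency numbers are invented for illustration, not measured figures) of a chain of dependent operations whose intermediate results either stay in an on-die cache or have to cross a chip-to-chip link:

# Toy model: each operation needs the previous operation's result before it
# can start, so the hand-off latency is paid on every step of the chain.
ON_DIE_HANDOFF_NS = 2       # assumed: result stays in a shared on-die cache
CROSS_CHIP_HANDOFF_NS = 80  # assumed: result must cross a chip-to-chip link
COMPUTE_NS = 5              # assumed compute time per operation
DEPENDENT_OPS = 1000        # length of the dependent chain

def chain_time_us(handoff_ns):
    return DEPENDENT_OPS * (COMPUTE_NS + handoff_ns) / 1000.0

print("single die :", chain_time_us(ON_DIE_HANDOFF_NS), "us")
print("split dies :", chain_time_us(CROSS_CHIP_HANDOFF_NS), "us")

Independent work can hide that kind of latency by switching to other threads, but a dependent chain can't, which is why splitting shared registers and caches across chips hurts the hard cases most.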
 


And you just explained the power of competition: companies keep having to improve their technology to get an advantage over the other. But what you are saying is complete stupidity. Nvidia will be better than ATI at times, and ATI will be better than Nvidia at times; it's only a matter of time before the other comes out with something better. Stop thinking like a fanboy...
 


I am not thinking as a fan, I am simply stating what has been happening. I have cards from both companies, so how can I be a fan of only one?

The point I was trying to make is that nVidia, because they were ahead of ATi, was simply renaming their old cards and passing them off as new. I refuse to go over all the 8800 --- 9800 --- GTS names and specs, but that is what they did. Did ATi do the same at times? Yes, but they did bring DX10 and 10.1 to their cards. Now they are bringing Eyefinity and a great performance boost over the previous gen. nVidia refused to move to DX10.1...

Now that DX11 is coming out, guess what: ATi is already there waiting, and where is nVidia?

I am NOT saying that ATi is a better company than nVidia; all I am saying is that this time ATi is pushing nVidia hard for the video card crown. Nothing more, nothing less.

Oh, and before you call someone a fan or stupid, make sure you know what card they have, and make sure your IQ is higher than 5. That way you won't come off sounding like a monkey on crack. Even though I compared you to a monkey, I think I insulted the monkey.

Sorry mods, I hate being called stupid.
 
I am enjoying all the verbiage for sure! One thing I know: ATI will be "king" for a time, then it will be nVidia. If it wasn't for this competition, I'm sure improvements would come at a much slower pace. I currently have 2 GTX 295s and am almost 100% sure that I will sell them when the 5870 X2 hits the stands. I believe that at least 20% of an nVidia card's price is for name recognition.