GDDR2, 3, 4, and 5?

landsavage

Distinguished
Jan 6, 2009
What's up with GDDR2, 3, 4, and 5? Is it that big of a deal? If so, should I trade this card in

VisionTek Radeon HD 4650
Brand: VisionTek
Model: 900254
Interface: PCI Express 2.0 x16
Chipset Manufacturer: ATI
GPU: Radeon HD 4650
Stream Processors: 320 Stream Processing Units
Memory Size: 512MB
Memory Interface: 128-bit
Memory Type: GDDR2
Price: $115

For this card....

Brand: SAPPHIRE
Model: 100225L
Interface: PCI Express 2.0 x16
Chipset Manufacturer: ATI
GPU: Radeon HD 3870
Stream Processors: 320 Stream Processing Units
Memory Size: 512MB
Memory Interface: 256-bit
Memory Type: GDDR4
Price: $90

It's shockingly $25 cheaper after rebate. There is a GDDR3 version for the same price before rebate (no rebates on that one, though), so I figured this was a steal.
 
Yes, it's a much better card; you will need a decent 450-500 W power supply with about 28 A to run it, though.
GDDR2 is noticeably slower than GDDR3; the difference between GDDR3 and GDDR4 isn't all that much.
If your board supports CrossFire and it's something you may do at a later date, then get the GDDR3 card. It will be easier to find a matching one later on, as not that many GDDR4 cards seem to have made it to the shelves.

Mactronix
 
The short answer is yes: there is a difference between GDDR2 and GDDR3. Never buy a GDDR2 card; you are not going to get better performance. It only makes the card a few dollars cheaper, and GDDR2 cards are the worst performers compared to GDDR3 and up. In this case the HD 3870 GDDR4 is the better choice, unless you are looking at an HD 4850. I imagine there is not a huge difference between GDDR4 and GDDR5, but GDDR5 is only on the HD 4870.
 
Yes, I see a GDDR5 card for roughly $160, but I think I am going to stick with GDDR3 or GDDR4 for the price. I did stumble onto something, though; if someone could help me understand it, I would be very appreciative.

So I was ready to hit buy, and then I saw this:

SAPPHIRE 100265L Radeon HD 4830 512MB 256-bit GDDR3 PCI Express 2.0 x16 HDCP Ready CrossFire Supported Video Card - Retail
Chipset Manufacturer: ATI
Core clock: 575MHz
Stream Processors: 640 Stream Processing Units
Memory Clock: 1800MHz
DirectX: DirectX 10.1
OpenGL: OpenGL 2.1
HDMI: 1 via Adapter
DVI: 2
Model #: 100265L
Item #: N82E16814102803

Remember this is what I was looking at...

SAPPHIRE 100225L Radeon HD 3870 512MB 256-bit GDDR4 PCI Express 2.0 x16 HDCP Ready CrossFire Supported Video Card - Retail
Chipset Manufacturer: ATI
Stream Processors: 320 Stream Processing Units
DirectX: DirectX 10.1
OpenGL: OpenGL 2.0
HDMI: 1 via Adapter
DVI: 2
TV-Out: HDTV / S-Video Out
RAMDAC: 400 MHz
Model #: 100225L
Item #: N82E16814102719

Notice the new one I listed first actually shows double the stream processing units (I don't know much about them, but I know they're a big deal).

The first one also shows a core clock and a memory clock, where the second doesn't even give a spec for either.

Lastly, the OpenGL version on the first one is 2.1, where the second one I was originally looking at is only rated for 2.0 (not sure how important this is).

So what do you all think is better? The new one is only GDDR3; the original is GDDR4.
 
Basically, what TYPE of memory is used, on its own, matters little. What is important is that it determines what sort of memory speeds the card can reach. GENERALLY, each type yields the following speeds, though there are some exceptions to the rule (see the quick sketch after the list):
DDR - 400-600 MHz
DDR2 - 500-1000 MHz
GDDR3 - 1000-2200 MHz
GDDR4 - 2000-2500 MHz
GDDR5 - 3600+ MHz
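To put rough numbers on that, here's a minimal sketch (Python, purely for illustration) of how those marketed "effective" speeds relate to the real clock. The 2x and 4x multipliers are my assumption, based on the data-rate ratios described further down in this thread.

```python
# Rough sketch: DDR through GDDR4 move two bits per data pin per clock,
# while GDDR5 moves four (the 4:1 ratio mentioned later in the thread).
# The multipliers below are an assumption for illustration.
DATA_RATE_MULTIPLIER = {
    "DDR": 2, "DDR2": 2, "GDDR3": 2, "GDDR4": 2, "GDDR5": 4,
}

def effective_mhz(base_clock_mhz, mem_type):
    """Marketed 'effective' speed = base clock x bits transferred per clock."""
    return base_clock_mhz * DATA_RATE_MULTIPLIER[mem_type]

print(effective_mhz(900, "GDDR5"))   # 3600 MHz effective
print(effective_mhz(1100, "GDDR3"))  # 2200 MHz effective
```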
However, in spite of the slightly lower memory speed of the 4830 (I believe the speed is some 2250MHz or so for the 3870), it is the superior card. That's because, for the most part, a card's performance is based on a combination of factors: its memory bandwidth, its shader/stream processor power, and its texturing power.

In short, the older 3870 does still hold a slight memory bandwidth advantage over the 4830, but the latter has a much bigger lead when it comes to stream/shader power (with double the number of stream processors) and ESPECIALLY texturing power, with 32 TMUs to 16. Historically, the 3870's crippling weak point was its low texturing power, so doubling the TMUs really does give the 4830 the lead in pretty much everything. Also, as an additional note, the 4830 handles AA FAR better; AA normally takes a big bite out of the 3870's framerates.
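For a rough sense of that bandwidth gap, here's a quick back-of-the-envelope calculation (Python, just illustrative) using the figures quoted in this thread: ~2250 MHz effective for the 3870 (my recollection above) and 1800 MHz for the 4830's listing, both on 256-bit buses.

```python
# Back-of-the-envelope memory bandwidth for the two cards discussed above.
def bandwidth_gb_s(effective_mhz, bus_width_bits):
    """Bandwidth = effective transfer rate x bus width (converted to bytes)."""
    return effective_mhz * 1e6 * (bus_width_bits / 8) / 1e9

print(bandwidth_gb_s(2250, 256))  # HD 3870: ~72.0 GB/s
print(bandwidth_gb_s(1800, 256))  # HD 4830: ~57.6 GB/s
```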
 
Awesome, TY! I am buying that one; it's a decent price at $100, though I do have to pay shipping, whereas with the other I didn't. I have been having framerate problems with my HD 4650, so I want something nice.
 
Unfortunately, with graphics cards it's just not as simple as comparing SP counts or clock speeds.
What Nvidia calls an SP and what ATI calls an SP are different things. Nvidia cards will generally have fewer SPs, but that's not to say such a card can't compete with an ATI card. Even the two ATI cards you've just posted have differences in their SPs and in their general architecture (how the card is structured and works) that mean you can't really compare them directly.
However, in this case the 4830 is a lot better than the 3870 and is really a no-brainer. I have seen reviews where they overclocked the 4830 and nearly ended up with 4850 (the next card up the scale) performance.
As far as understanding it all goes... well, there is no set of rules you can apply to tell which is best. You just have to put the time in, read reviews of new and old cards, keep visiting the forums, and keep asking questions. Some people on these forums work with computers day in, day out; some, like myself, are hobbyists who learn new stuff every week.
One thing most of us have in common is that no matter how small the question, or how obvious it is that it must have been asked a thousand times before, we will gladly answer it; we all started somewhere :) We also have what are called "stickies" at the top of each section to answer these types of questions.
Mactronix
 


Yes, if I'm not mistaken, 900MHz GDDR5 is equivalent to 3600MHz GDDR3 (effective). I don't know what the conversion for GDDR4 is, but AFAIK the difference isn't that much.

GDDR3 is significantly better than GDDR2. Never buy a GDDR2 card.
 
@mactronix Well, I thank you for the help. I would say I am a new "hobbyist," but I have always had a passion for games and computers. This is really the first test of my mettle: I want everything to match and to have a complete computer that I can feel good about, as I'm getting much more serious about my computer hobby than in my younger years of pure gaming. I did make the purchase of the HD 4830. Bottom line on this subject: I got an HD 4650 for roughly $127 with tax, and I just purchased the HD 4830 for $108.xx including shipping, so I am making smarter moves, and that's a good sign for a newbie on a tight budget, I believe. Thanks for your help, everyone!
 

Very true, though as I've noticed, one can get a decent, rough estimate of raw performance when comparing cards of similar architecture. I wouldn't quite use that comparison between contemporary ATi and nVidia cards due to, as you noted, the difference in their stream processor architecture (ATi using superscalar SPs, nVidia using vector SPs), but the Radeon HD 3000 and HD 4000 series are similar enough in architecture to merit it, especially since, if memory serves, their stream processors are functionally pretty much identical. The 4000 series' SPs actually have FEWER transistors: AMD didn't add any more logic to make them more powerful or flexible, and instead just trimmed down their size in order to fit more in (i.e., going from 320 in RV670 to 800 in RV770), feeling that would yield a bigger performance increase.



A few comments:
■There is no such thing as GDDR2. Video cards used DDR, then DDR2, and then went straight to GDDR3; no video cards use DDR3. Yes, it's a little confusing, but basically GDDR3, 4, and 5 all evolved from DDR2; you could say that PC main system memory and graphics memory diverged at that point, each following its own evolutionary path.
■When it comes to external signaling, I believe DDR, DDR2, GDDR3, and GDDR4 all use the same ratio of frequencies: the data pins transfer at an effective rate double that of the command pins, i.e., two bits sent on each data pin for every bit sent on the command pins. GDDR5 is the only one to increase this ratio to 4:1.
■When looking at the technological differences between the first four types of DDR/GDDR used in video cards, the differences lie internally. DDR2, as most of you might know, has an internal interface that runs at half the clock speed of the external interface, but at double the bit width. As an example, each DDR2-800 chip on a video card has an external data interface that runs at 400MHz double-pumped, for an effective rate of 800MHz, and is 32 bits wide per chip. Internally, I believe it's 200MHz double-pumped (or 400MHz single-pumped, I don't quite recall), which is made up for by having the internal interface be 64 bits wide.
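If it helps, here's a tiny sketch of that DDR2 internal/external trade-off (Python, purely illustrative; the 200MHz internal figure is the "I believe" guess above, not a datasheet value). The fast, narrow external interface and the slower, wider internal one end up moving the same number of bits per second per chip.

```python
# Sketch of the DDR2 point above: fast and narrow outside, half the clock
# but double the width inside, so per-chip throughput works out the same.
def throughput_gbit_s(clock_mhz, pumps_per_clock, width_bits):
    return clock_mhz * 1e6 * pumps_per_clock * width_bits / 1e9

external = throughput_gbit_s(400, 2, 32)  # 400 MHz double-pumped, 32-bit wide
internal = throughput_gbit_s(200, 2, 64)  # assumed 200 MHz double-pumped, 64-bit wide
print(external, internal)  # 25.6 25.6 (Gbit/s per chip)
```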
 
NTTK, the thing to remember is that the HD 4650 also has the improved TMUs and ROP-assisted AA, so it's not the front-end SPUs that tell the best part of the story, IMO. Even the memory bandwidth limitation comes into play, where, like some other cards, the HD 46xx series finally just falls off at high resolutions with AA.

I love how the performance ping-pongs all over the place in this Xbit review, especially in Crysis, going from a big win to drawing even (with spectacular separation in others):
http://www.xbitlabs.com/articles/video/display/powercolor-pcshd4670-512mb.html
 

Yes, the ROP changes for AA make a huge difference across the entire Radeon HD 4000 line, though I'll admit I wasn't entirely aware of what they did with the TMUs... more or less just that there were a lot more of them than the norm for AMD's high-end cards over the previous three-plus years (X800, X850, X1800, X1900, X1950, HD 2900, and HD 3800 series, all with 16 TMUs). Basically, I felt that even in the face of a growing shader-to-texture load ratio, the merely marginal increase in raw theoretical texturing power (some 55%?) was making it more and more of a performance bottleneck.
 
Most video cards, even in the lower-RAM configurations, have plenty of memory. In fact, in most cases, when you're dealing with mid-range or lower-end cards, the larger memory size is often too much to even be usable in a practical sense.

More or less invariably, though, having faster memory will yield better performance than having twice as much slower memory; more memory does not speed things up. The only question is whether you have ENOUGH memory, and 512MB is enough for anything but the highest resolutions combined with the highest texture settings, AA, and AF... and even then, only a small handful of games could use 1024MB.
 


To add to what NTTK said, it's also about the bit width, because 1 (2) GHz of GDDR4 sounds nice compared to 800 (1600) MHz of GDDR3, but if the bus widths are 128-bit versus 256-bit, then the 800MHz 256-bit memory would have more bandwidth.
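A quick worked example of that point (Python, using the hypothetical figures from the post above, not any specific card): the wider bus wins despite the lower clock.

```python
# Narrower bus vs higher clock: bandwidth = effective rate x bus width in bytes.
def bandwidth_gb_s(effective_mhz, bus_width_bits):
    return effective_mhz * 1e6 * (bus_width_bits / 8) / 1e9

print(bandwidth_gb_s(2000, 128))  # "2 GHz" GDDR4 on a 128-bit bus: ~32.0 GB/s
print(bandwidth_gb_s(1600, 256))  # 1600 MHz GDDR3 on a 256-bit bus: ~51.2 GB/s
```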
 

Yes, there is that too, but I kinda left it out because... well, actually, now that I think about it, I *do* recall some cases where the bit width changes as well (infamous example: the 512MB version of the GeForce 9600GSO, which has a 128-bit interface, compared to a 192-bit interface on the 384MB version of the card). So perhaps I should've mentioned that too, since it's the other factor in memory bandwidth.
 


So if I use AA and AF in all games, the 1GB is useful? If so, in what games?
 
That's a value article, so from it you should be able to read that there is value in the difference for the HD 4850, but not enough for the GF 9800, whose value is based on performance elsewhere, not just the 1GB; the extra memory doesn't add much, since the card doesn't have the core power to use it effectively.

There is a benefit to 1GB over 512MB, but when the core is too weak to exploit the sweet spot where it matters (resolution plus AA above 19x12 with 4xAA), does it really matter if you get 4 fps instead of 2 fps? They're still unplayable, despite being twice as fast.
 
But what if you overclock? Wouldn't that increase the core power enough for it to make a difference? And what if you play older games, or games at lower resolutions but with a lot of AA and AF?
 
The resolution has a huge impact on your memory requirements. Advances in AA and AF efficiency have reduced their impact, but no advance can change the simple fact that if you go from, say, 640x480 to 1280x960, you have four times as many pixels to read from and write to.
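To put that in numbers, here's a tiny sketch (Python, illustrative only; the 4 bytes per pixel for a 32-bit colour buffer is my assumption, and real memory use with Z-buffers, AA samples, and textures is much higher):

```python
# Pixel count scales with both dimensions, so doubling each one quadruples
# the pixels the card has to shade and store every frame.
def pixels(w, h):
    return w * h

print(pixels(1280, 960) / pixels(640, 480))  # 4.0x as many pixels
print(pixels(1280, 960) * 4 / 2**20)         # ~4.7 MB per 32-bit colour buffer (assumed 4 bytes/pixel)
```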