PowerColor Radeon HD 5870 LCS: The GHz Limit, Broken

Status
Not open for further replies.

Pei-chen

Distinguished
Jul 3, 2007
1,282
6
19,285
[citation][nom]Cleeve[/nom]The title refers to the GHz limit imposed by the BIOS on this specific card Pei-chen, not all 5870s or graphics cards in general.Nor do I ever mention a GHz war, nor do I suggest that one AMD MHz = one Nvidia MHz...It's pretty clear you didn't read it at all. Might I suggest that you actually read the article before knee-jerking a sensationalist complaint?It might make more sense to you...[/citation]
Do you mind explaining the giant picture on the 1st page that reads "Powercolor Radeon HD 5870 LCS Passing the GHz limit" and “It is only natural that we look to PowerColor's new Radeon HD 5870 LCS as the weapon of choice in our charge to slay the megahertz (or could it be gigahertz?) dragon” in the 4th paragraph?

Don't tell me that Don finally learned how to overclock a graphics card and was so excited that he had to write an article about it. This article smells the same as the old P4-overclocked-to-4GHz article from years back, except Sapphire actually sold a 4890 @ 1 GHz months ago. (Remember how many people used to point out that the Athlon 64, while slower in GHz, was actually faster in games?)

BTW, don't pretend that you read the article and I didn't. I actually read all the articles/news I comment on. It is common knowledge that review teams usually don't read articles published on Tom's. Plenty of people have raised questions about why articles draw different conclusions and the data don't match.

In the Conclusion, Don asked why the 5870 isn't as fast as two 4890s, and he took AMD's marketing response at face value. Chris already asked the same question in the 5870's launch review and believed the problem to be the 5870's limited memory bandwidth. (Twice the core, but not twice the memory bandwidth.)
 

cangelini

Contributing Editor
Editor
Jul 4, 2008
1,878
9
19,795
[citation][nom]hixbot[/nom]This makes absolutely no sense to me, DX11 adds to DX 10 and 9, how can something be optimised for 11, and not 10 and 9?![/citation]

You have a general doubling of on-die resources and ASIC complexity. However, some of those transistors are dedicated to enabling DirectX 11. Assuming this was at the cost of elements that would have improved performance relative to the previous generation, ATI's comment makes sense.
 

cangelini

Contributing Editor
Editor
Jul 4, 2008
1,878
9
19,795
[citation][nom]overshocked[/nom]No liquid nitro? Disappointed.[/citation]

Bear in mind that you don't run LN2 through a water-cooling circuit :)
 

avatar_raq

Distinguished
Dec 8, 2008
532
0
19,010
[citation][nom]Cleeve[/nom] AMD's answer was that the 5870 is optimized for Dx11 and we'll see the advantages in future titles.[/citation]

I can't believe that. Dx11 titles are expected to be so demanding; for example, the Dirt 2 demo showed a 30% performance hit between Dx9 and Dx11 (without much of a visual improvement). After all, ATI can claim whatever they like, since there's no way we are going to be able to test the 4800s in Dx11.
 

Adroid

Distinguished
[citation][nom]annihilator-x-[/nom]I have convinced myself watercooling is a must for GPUs.I am running a watercooling setup on my CPU alone at the moment. Despite majority of heat gone from internal case, my HD4850 card would barely escape overheat in room temperature of 23deg C in UK winter. I once tried moving my soundcard closer leaving 2 inch clearance from HD4850 and it crashes every single time when I run Mass Effect. Now it crashes once in a while and I am 100% sure it's due to overheating of the GPU. Load temperature reaches 106 deg C. The card crashes at 110 deg C.[/citation]

That is too stinking hot my friend! Sounds like your case has poor airflow...
 

Steve911924

Distinguished
Jul 12, 2006
43
0
18,530
I think it's fitting that AMD is first to break the 1 GHz GPU barrier. After all, it was the AMD Athlon which was the first CPU to reach 1 GHz.

Good read

 

brockh

Distinguished
Oct 5, 2007
513
0
19,010
What is with people being surprised that the 5870 didn't easily beat the HD 4890s? I was surprised it was basically offering the same performance; that's impressive. It's a single card offering the previous-gen flagship card's CrossFire performance -- what are you complaining about?
 

snorojr

Distinguished
Oct 14, 2009
77
0
18,630
Or you say sod off to the warranty, flash the BIOS to the Asus one with unlocked voltage, go crazy, and achieve an even better GPU core clock speed than that. Also, even with just a good GPU heatsink, the 5XXX series runs really cool, even at load. You don't need crazy liquid cooling unless you want 30C temps at load or want to go up to 1200MHz core speed. Seriously, even my XFX HD5770 was able to run at 960MHz with the stock XFX BIOS and Afterburner, and I saw a lot of people running a 1GHz core speed with the 5850 and the 5770; it's really not that hard.
 
[citation][nom]avatar_raq[/nom]I can't believe that. Dx11 titles are expected to be so demanding, for example the Dirt 2 demo showed a 30% performance hit between Dx9 and Dx11 (without much of a visual improvement). After all, ATI can claim whatever they like since there's no way we are going to be able to test the 4800s in Dx11.[/citation]

Ever since 3D cards started, we've had small and large improvements in visuals at the cost of framerates. Developers create games to run at 30-60 FPS at optimal settings. If the game is running over that, they add more visuals, even if the gains aren't huge.

This is because most people do not run a benchmarking tool when playing, and either can't see, or barely notice, a drop in FPS beyond that point.

While games like Dirt 2 don't show a large visual improvement with DirectX 11 features, you can barely tell you lost that 30% of your FPS either (I couldn't tell a difference with the demo). Other games might show more of an improvement.

I found that the new lighting effects added to games a few years back barely improved visuals as well. But once I got used to those new shadowing/lighting features, I couldn't live without them.
 
[citation][nom]avatar_raq[/nom]I can't believe that. Dx11 titles are expected to be so demanding, for example the Dirt 2 demo showed a 30% performance hit between Dx9 and Dx11 (without much of a visual improvement). After all, ATI can claim whatever they like since there's no way we are going to be able to test the 4800s in Dx11.[/citation]

I also do not believe they meant DirectX 11 will allow you to reach the same framerates as DirectX 9 once it's enabled. At least not in its most literal interpretation.

I believe what was meant is that while using DirectX 11 features, like tessellation, the card would outperform DirectX 9 or 10 software trying to do the same quality of tessellation, which currently is either not done at all or done to a lesser quality.
 

Studly007

Distinguished
Nov 1, 2008
30
0
18,530
Hmm - makes me wonder how well my 2 x Asus 5870s will do (properly tweaked), since I have them as part of a water-cooled setup with a 4 x 120mm (480) Feser radiator fed by an MCP655 pump, alongside an i7 940 on a Rampage II Extreme mobo?

 

cleeve

Illustrious
[citation][nom]Pei-chen[/nom]Do you mind explaining the giant picture on the 1st page that reads "Powercolor Radeon HD 5870 LCS Passing the GHz limit" and “It is only natural that we look to PowerColor's new Radeon HD 5870 LCS as the weapon of choice in our charge to slay the megahertz (or could it be gigahertz?) dragon” in the 4th paragraph?[/citation]

Like I said, I was passing the GHz limit in the card's BIOS. I certainly didn't say anywhere in the article that I was the first person ever to pass a GHz on a graphics card. If I were suggesting that, I'd probably have made a big deal out of it, dont'cha think? Maybe by saying something like "I'm the first person to pass a GHz on a graphics card!" or something like that.

Instead, I believe I said "We want to know just how far a reasonable human being can take the Radeon HD 5870."

Does 'reasonable' equate to 'super-mega ultra extreme maximum possible overclocking' in your vernacular?


[citation][nom]Pei-chen[/nom]In the Conclusion, Don asked why the 5870 isn't as fast as two 4890s, and he took AMD's marketing response at face value. Chris already asked the same question in the 5870's launch review and believed the problem to be the 5870's limited memory bandwidth. (Twice the core, but not twice the memory bandwidth.)[/citation]

I shouldn't have to remind you that this is a PowerColor 5870 LCS overclocking article, not an in-depth 5870 architecture comparison. It's not an appropriate place for that kind of analysis; it's a tangent worth mentioning, but not the focus, and you know it.

BTW, the 5870 still has more effective memory bandwidth than a pair of 4890s in CrossFire, because the memory isn't shared; it's dedicated to each card. If bandwidth were the only concern, the 5870 should still be getting faster frame rates than CrossFired 4890s. The 5870 doesn't double the memory bandwidth, but it does increase it. Slightly, but nevertheless, faster.

It was appropriate of me to repeat AMD's response for the readers because it makes sense and because this particular article doesn't suit deeper investigation - not to say I won't be doing that in the future. Regardless, the issue has to be architectural in nature, as it can't be blamed on bandwidth. (I *am* Don, BTW.)

Your complaint regarding this is therefore neither well thought out nor appropriate. Frankly, it seems a lot more like sour grapes than a valid concern.
 

nurgletheunclean

Distinguished
Jan 18, 2007
150
0
18,690
[citation][nom]Cleeve[/nom]BTW, the 5870 still has more effective memory bandwidth than a pair of 4890s in crossfire because the memory isn't shared, it's dedicated to each card.[/citation]
I think you are confusing people. A pair of 4890s has more memory bandwidth; each 4890 has nearly the same memory bandwidth as a single 5870. The only difference between the 5870 and the 4890 is a 225MHz increase in memory frequency. Total bandwidth for a 5870 is 156GB/s; a single 4890 is ~125GB/s. But having 2 GPUs interfacing with the memory (in the case of dual 4890s) effectively doubles the bandwidth. That memory access doesn't travel across the CrossFire interface.

With the 4800s, memory bandwidth was more than enough to supply the GPUs with data (that's why overclocking memory had little impact on frame rates). But the 5870 is basically two 4890s, yet it's saddled with the same 256-bit interface. Now memory is becoming a limiting factor. The 5870 would need a 512-bit memory interface to truly beat 2 x 4890s.
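As a sanity check on those figures: GDDR5 moves four bits per pin per clock, so bandwidth is bus width times memory clock times four. A quick sketch, where the 900/975/1200 MHz values are the cards' commonly quoted stock reference clocks (an assumption, not numbers from this thread):

```python
# GDDR5 effective bandwidth: (bus width / 8) bytes * memory clock * 4 transfers/clock.
# Memory clocks below are the stock reference specs for each card (assumed).
def gddr5_bandwidth_gbs(bus_width_bits, mem_clock_mhz):
    return bus_width_bits / 8 * mem_clock_mhz * 4 / 1000  # GB/s

print(gddr5_bandwidth_gbs(256, 900))   # HD 4870 -> 115.2
print(gddr5_bandwidth_gbs(256, 975))   # HD 4890 -> 124.8
print(gddr5_bandwidth_gbs(256, 1200))  # HD 5870 -> 153.6
```

That matches the 115.2 GB/s AMD quotes for the 4870 and the ~125 GB/s figure for the 4890; the 5870's 153.6 GB/s is presumably what the post rounds to 156.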
 

cleeve

Illustrious
[citation][nom]nurgletheunclean[/nom]But you have 2 GPUs interfacing the memory (in the case of dual 4890s) that effectively doubles the bandwidth.[/citation]

I believe you are mistaken. Each card is dedicated to its own memory. I have heard people suggest that a CrossFire setup or an X2 card has a 512-bit memory bus, but this is misleading. In CrossFire, it can be referred to as 2 x 256-bit, but the speed is equivalent to a single 256-bit interface, because the textures are mirrored on each card, not shared through CrossFire.

To clarify, when you have two 1GB cards in CrossFire, you don't have 2 GB of memory to play with. Each card has to mirror the same textures, so the pair acts as a faster card with 1 GB of memory. Even a 2 GB 4870 X2 card (or the new 5970) can only use the equivalent of 1 GB of memory and a 256-bit bus, for all intents and purposes.

I admit a small amount of memory might be freed up compared to a single card, as each card might only need half the frame buffer (since each card is responsible for half of the screen), but as far as doubling the bandwidth goes, that is a myth.
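Cleeve's mirroring point can be put in toy-model form - a hypothetical sketch, not anyone's actual spec sheet: summing capacity and bus width across cards gives the marketing numbers, while mirroring means the usable figures stay those of a single card.

```python
# Toy model of CrossFire memory, per the mirroring argument above: textures are
# duplicated on every card, so neither usable VRAM nor bus width actually sums.
def crossfire_specs(cards, vram_gb, bus_bits):
    marketing = {"vram_gb": cards * vram_gb, "bus_bits": cards * bus_bits}
    usable = {"vram_gb": vram_gb, "bus_bits": bus_bits}  # mirrored, not pooled
    return marketing, usable

marketing, usable = crossfire_specs(2, 1, 256)
print(marketing)  # {'vram_gb': 2, 'bus_bits': 512}
print(usable)     # {'vram_gb': 1, 'bus_bits': 256}
```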
 

nurgletheunclean

Distinguished
Jan 18, 2007
150
0
18,690
[citation][nom]Cleeve[/nom]I believe you are mistaken. Each card is dedicated to its own memory. I have heard of people calling this a 512-bit memory bus, but this is misleading. In CrossFire, it can be referred to as 2 x 256-bit, but the speed is equivalent to a single 256-bit interface because the textures are mirrored for each card, not shared through CrossFire. To clarify, when you have two 1GB cards in CrossFire you don't have 2 GB of memory to play with. Each card has to mirror the same textures. It acts as a faster card with 1 GB of memory. This is even the case with the X2 cards.[/citation]

True, you have a copy of the same data on each card, effectively halving the AMOUNT of memory. But in CrossFire, each card works on half of the frame. Here's a simplified scenario: let's pretend the frame takes 250MB of memory. The 5870 has to access all 250MB, but 4890 #1 only has to work with 125MB of it; sure, it's got all the same info in memory that the 5870 does, but the GPU is only working with half of it, as 4890 #2 is working with the other half.
 

cleeve

Illustrious


Think about it: are you suggesting that rendering 800x600 increases bandwidth four times compared to 1600x1200?

That doesn't make sense. Bandwidth is limited by memory speed and bus width, not the resolution. Working on half of the frame does not double the bandwidth.
 

nurgletheunclean

Distinguished
Jan 18, 2007
150
0
18,690
[citation][nom]Cleeve[/nom]Working on half of the frame does not double the bandwidth.[/citation]
5870: 250MB / 156GB/s = 0.0016 seconds
4890: 125MB / 125GB/s = 0.0010 seconds
Which one finishes first? Again, we are talking about effective bandwidth. If you want to talk about absolute bandwidth, it's black and white: 250GB/s for 2 x 4890s.
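Those two timings are just data touched divided by bandwidth; a minimal sketch of that back-of-envelope math (the 250 MB working set is the post's own hypothetical, not a measured figure):

```python
# Back-of-envelope: time = data touched (MB) / bandwidth (GB/s converted to MB/s).
# The 250 MB per-frame working set is the post's hypothetical example.
frame_mb = 250.0

t_5870 = frame_mb / (156 * 1000)        # one GPU touches the whole frame
t_4890 = (frame_mb / 2) / (125 * 1000)  # each CrossFire GPU touches half the frame

print(round(t_5870, 4), round(t_4890, 4))  # 0.0016 0.001
```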

 

cleeve

Illustrious


And if I put ten 5870s in a cardboard box, that box has a 2560-bit memory interface.

You can bandy about meaningless numbers, but you're straying from the point. If it's for the sake of argument, call it what you want. But the real-world impact is not increased usable bandwidth or performance.

AMD's official specs say the 4870 has 115.2 GB/s of bandwidth. Do you know how much the official spec of the 4870 X2 is? The exact same 115.2 GB/s. Neither CrossFire nor an X2 card adds a bandwidth advantage. If there was, I'm pretty sure AMD would be all over that.

This isn't new information. Where did you hear otherwise?

A 5870 has slightly more real-world usable bandwidth to work with than a single 4890 or a pair of 4890s. You therefore cannot point your finger at memory bandwidth when looking for the reason for a performance disparity between the two.


 

nurgletheunclean

Distinguished
Jan 18, 2007
150
0
18,690
[citation][nom]Cleeve[/nom]Bandwidth is limited by memory speed and bus width, not the resolution. Working on half of the frame does not double the bandwidth.[/citation]
No, it doesn't, but having a 2nd GPU and memory subsystem does, which is the original point. Go back to being bewildered by how 2 x 4890s with their "lower" memory bandwidth are outperforming the 5870, even with all their CrossFire overhead.

 

cleeve

Illustrious


I submit that the only person displaying bewilderment - demonstrated by inventing a mystical method of doubling bandwidth via lowering resolution - is you, sir. :na:
 

nurgletheunclean

Distinguished
Jan 18, 2007
150
0
18,690
[citation][nom]Cleeve[/nom]AMD's official specs say the 4870 has 115.2 GB/s of bandwidth. Do you know how much the official spec of the 4870 X2 is? The exact same 115.2 GB/s. Neither CrossFire nor an X2 card adds a bandwidth advantage. If there was, I'm pretty sure AMD would be all over that.This isn't new information. Where did you hear otherwise?[/citation]
http://us.msi.com/index.php?func=proddesc&maincat_no=130&prod_no=1558 230GB/s specified.
http://www1.sapphiretech.com/us/products/products_overview.php?gpid=251 again

and if you actually want to use AMD's specifications.
http://www.amd.com/us/products/desk...ages/ati-radeon-hd-4870X2-specifications.aspx
They are flatly calling it a 512-bit interface.

Some call it 512-bit, some call it 256x2; nobody's calling it 256x1.
 

nurgletheunclean

Distinguished
Jan 18, 2007
150
0
18,690
[citation][nom]nurgletheunclean[/nom]230GB/s specified. again. and if you actually want to use AMD's specifications. They are flatly calling it a 512bit interface. Some call it 512bit some call it 256x2, nobody's calling it 256x1 Bit.[/citation]

Sorry the links didn't show up
Sapphire's 230GB/s Link
http://www1.sapphiretech.com/us/products/products_overview.php?gpid=251

MSI's 230GB/s Link
http://us.msi.com/index.php?func=proddesc&maincat_no=130&prod_no=1558

AMD's link to being 512bit
http://www.amd.com/us/products/desktop/graphics/ati-radeon-hd-4000/hd-4870X2/Pages/ati-radeon-hd-4870X2-specifications.aspx
 

cleeve

Illustrious
*sigh*

It's marketing-speak, nurgle. You're being duped by numbers they use to sell more cards; there's no relation to real-world performance. You can call it 256-bit or 512-bit, but the end result is the same bandwidth. Like I said, put ten 256-bit cards in a box and you can technically say the box has a 2560-bit interface, but that doesn't mean squat.

I've shown a couple of colleagues your rants and, frankly, you're making a bit of a fool of yourself.

I don't know what more I can tell you. Believe what you want, but you're misleading people.
 