RV870 is 1.6 times faster than RV770

I agree with that too. A lot depends on how much you can afford to spend on tech, tbh. I have two PCs that are good, high-quality gaming machines and I like to keep them near the top, but I can't really afford to have both at the very top constantly.

There is nothing worse than having one really good PC at home and one really bad one at my friend's house (which is what I used to have). I'm spending a good 2 to 4 hours at her place most days. If the new 5-series cards are too expensive to buy for two computers, I will be passing them up until they are affordable. I'm happy with my 4870 and my 4770s in CrossFire... it wouldn't take a whole lot for me to upgrade both, so long as there is a reason to do so. The reason has to be tangible though.
 
The 4870 X2 is priced a bit too high; it is still $370 when you can get two 4870 1GB cards for $290...

An $80 premium just for taking up one slot instead of two... kind of expensive...
 


Very true, but when comparing single cards the 4870 X2 will be a hard act to follow.
 
I think this is great news for anyone that is building a new system or has been planning a GPU upgrade for a while. But if you already have a 4800 series, it's probably not the huge leap in performance you're looking for. I myself am holding off building my new system till I see if there's going to be a 5870 Toxic or not. Also, the fact that it's going to be DX11 / Shader Model 5.0 makes me want to wait for it even more. I would rather have newer tech so that it might hold me over longer till the next upgrade.
 
OK, a rehash.
The 4770 has 640 shaders; the 5870 should have at least 1280.
The 4770 has a core clock of 750MHz, and we can all guess these clocks will be higher.
The 4770 has a memory clock of 800MHz; the 5870, most likely 1.2GHz.

Just taking two 4770s in CF, which typically loses performance, at least 20% on average, it's comparable to a 285.

Add that 20% back (no CF overhead), add 15% core clock speed, add possibly 33% more shaders than two 4770s together, add 50% memory clock speed and a 100% wider bus, and people think it'll barely clear a 285's performance?
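Stacking those multipliers gives a rough back-of-the-envelope ceiling. A minimal Python sketch, using the post's own guesses (the 20% CF loss recovered, +15% core clock, +33% more shaders, +50% memory clock on a 2x-wider bus) and naively treating the gains as multiplicative, which real GPU scaling never quite is:

```python
# Naive stacked-multiplier sketch of the argument above.
# Baseline: two HD 4770s in CrossFire are roughly GTX 285 class (normalized to 1.0).
# Every factor below is a guess from the post, not a measured number.
baseline       = 1.0
no_cf_overhead = 1.20   # recovering the typical ~20% CrossFire loss
core_clock     = 1.15   # assumed ~15% higher core clock
extra_shaders  = 1.33   # assumed ~33% more shaders than 2x 4770

compute_estimate = baseline * no_cf_overhead * core_clock * extra_shaders
mem_bandwidth    = 1.50 * 2.0  # +50% memory clock on a 2x wider bus => ~3x bandwidth

print(f"Naive compute estimate vs GTX 285 class: {compute_estimate:.2f}x")
print(f"Memory bandwidth vs a single 4770: {mem_bandwidth:.1f}x")
```

Even this crude model lands well clear of a GTX 285, which is the poster's point; the memory bandwidth figure is kept separate because bandwidth and compute do not simply multiply.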

I'm thinking the 285 will be a $200 card, if it's lucky; most likely $175.


@ L1qu1d, that was for hardly ever posting heheh, I was j/k
 
Hey, not my fault lol. Work consumes my life, and if it's not work then it's the gf, and if it's not the gf.... Battlefield 1943...

SHHH don't even say it jaydee LOL!
 


Welcome to the world of graphics chips. :sarcastic:

This is typical for new cards, especially when compared in old games.

Guess what, the GTX 280 was quite often slower than the GX2 or even 2 GF8800GT in SLi; and same with the G80 vs 7900GX2 or 2 X1900.

I've never seen a doubling of performance except in areas where they were fixing something that was previously a noticeable issue, like the FX series or the HD2K/3K's AA performance.

1.6X is healthy, and it is the 'best case scenario', but what that means compared to the competition, and even compared to the current HD series cards, also depends on what that scenario actually is.

The HD4890 is about 10% better than the HD4870 in most situations, so getting a 40+% boost on top of that, without the potential drawbacks of CF in some situations, is not bad, and also quite comparable to the 4870X2 depending on the situation;
http://www.xbitlabs.com/articles/video/display/mainstream-roundup-2_15.html#sect2
http://www.xbitlabs.com/articles/video/display/catalyst-9-8_5.html
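The relative-performance claim above can be sanity-checked with trivial arithmetic; a quick illustrative sketch, normalizing the HD4870 to 1.0 and taking the ~10% HD4890 lead and the rumored 1.6x figure at face value:

```python
# Illustrative check: how does a rumored 1.6x-over-HD4870 part
# compare against the HD4890, given the HD4890 is ~10% over the HD4870?
hd4870  = 1.00
hd4890  = hd4870 * 1.10   # ~10% faster than HD4870 in most situations
rumored = hd4870 * 1.60   # the rumored best-case 1.6x figure

print(f"Rumored part vs HD4890: {rumored / hd4890:.2f}x")
```

That works out to roughly a 45% lead over the HD4890, consistent with the "40+%" framing above.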

The main thing will be: can it beat the GTX 285 by enough to avoid bad press of "everything new is old again, ATi still in second place with new card...."? That's the only way this is a problem. It has to be about 20% better than the GTX 285 in the major titles at launch, otherwise it'll be a disappointment to most in the numbers sense, but still interesting in the feature sense.

Anywhooo, as I already said..... We'll see. :sol:

 


So basically its just work and battlefield 1943...
 
I believe the real gains are to be had in games that use DX11. Remember, this is what Evergreen has been designed and built for, so the true potential of the card won't be known until reviewers can benchmark the same game running under DX10 as opposed to DX11.

BTW TheGreatGrapeApe - I seem to remember the Radeon 9800XT PE was about 90-100% faster than previous cards; they were so popular you couldn't buy one for love nor money. I waited ages to get hold of one (nearly 3 months) and in the end settled on a GeForce 6800GT as I couldn't wait any longer.
 


Well, the 4xxx series was a big update over the 3xxx series. It was a big surprise, maybe even to ATI. That is why I consider the "pessimistic" option this time. But if jaydeejohn is right and there are also other upgrades... it may be even better.
Anyway, it's a big enough update for me, and I am quite optimistic considering both companies support DX11. When was the last time that Nvidia and ATI had the same DX model in use?
 
I am sitting here at 1am, with a lot of work ahead, but I'm wondering about stuff and ideas pop into my head. Here is the latest one 😀 :

I think ATI is preparing a hell of a trap for nVidia. They are grabbing market share now with their smaller, better design, from which they actually make money, while nVidia fights for the only chance they have, making a BIG chip, and still loses money. That fact is very disturbing for nVidia.

If I were nVidia, I would assume that with the move to 40nm ATI will try to at least greatly upgrade their compute power and make their chips a lot faster (as there is a lot of headroom for them). However, if I were ATI I wouldn't do that!!! Instead I would do a minor upgrade, say 50% faster than the previous generation, but with DX11; a strong card that can bring even more benefits. What would that approach give me? Well, for instance, CHEAPER cards. Cheaper to manufacture, as the chip will be an even smaller die size, and still faster, and still consuming the same or lower power.

Now pay attention. Currently there is no game you can't max out with current-generation cards (and have headroom left over). Maybe Crysis is the exception, but that's only at high resolutions with AA and Very High details (0.0001% of the market cares about that). Looking ahead at 2010, I haven't heard of one big title which will have monster hardware requirements like Crysis did. Hell, if you have a 4870 1GB, I bet you will be OK till the end of 2010. So the hardware is ahead of the software.

Now, if you produce tiny chips which are powerful enough and can place them in the mid/low-end market, then you are going to WIN big time. Millions and millions of users get just what's good enough for the least money; only 0.1% of the market cares about the VERY high end. And who wants to buy a monster video card that dumps 300W of heat and gives them 300fps in every game? Some will say, well, I will be able to play games even after 2-3 years then. Well, who cares; in 2-3 years you will buy another card or generation which will still be better, and for less money.

The period at the moment is such that a small, compact chip will (in my opinion only) rule the market. I bet they can make the die 1.3x the size and still include 1200 shaders, 1.5-2x the texture units, 1.5-2x the ROPs, and DX11. When you move to 40nm you get about a 2x area reduction, so 256 x 1.3 / 2 ≈ 166mm² die size. BRRRRRR

You could even make it dual-core to put some faster cards on the market and still be reasonably priced (like $300-$350). Hell, you could go dual-chip/dual-core (4 cores, 2 chips), get 4x, and still have a competitor for the GT300, with MUCH better yields and lower manufacturing cost. And you will RULE 90% of the market.

The trap is set. Let's see what happens. What do you guys think, am I delusional or do I need to get back to my work :)
 
^ True,
but ATI is going to aim for similar performance (-10% at most) against the nVidia equivalent with a cheaper price tag (10-15% cheaper). Much like this generation.
 
Even if ATI goes all out with performance, it is almost impossible that their GPUs won't be far cheaper to manufacture than anything nVidia can dream up. DX11 cards likely won't sell just for DX11; if I were ATI, I would be trying to destroy everything nVidia has in the DX9/10 department until nVidia can answer in turn, which will likely be next year. As ATI, my aim would be to put a 5850/5870 in every gaming rig possible, because the production cost is already pretty damn good for them.
 
It is very, very unlikely Modern Warfare 2 will have any DX11 features. In fact, there is pretty much no question, given the game is coming out in like two weeks, so it's already done and being printed and shipped.

And only high-end games in 2010 will have DX11 features. Mass Effect 2 might have it, Crysis 2 of course, BioShock 2, etc., as well as a few random games. But at first you will mainly see it in games whose DX11-equipped audience demands it, or in very graphically intensive games where many people play for the looks, like BioShock and Crysis.
 
Yeah, hopefully we will have some real DX11 titles fairly soon (less than a year, maybe?) rather than the seemingly endless number of hacked in DX10 titles we had that ran horribly. It seems hard to do worse than the DX10 launch, which might be good news for DX11 (as in, we'll be like, wow, this is great!).
 


Huh? :heink:

The 'previous card' to the R9800XT (not PE, which was an OEM add-on, like for HIS) was the R9800Pro 256 DDR-II, and before that the R9800Pro, and before that the R9700Pro. The R9700Pro was generally faster, with the very rare 100% boost (where the previous R8500 became hampered by its 64-bit-vs-128-bit memory), but overall the performance difference was much smaller at launch;
http://www.firingsquad.com/hardware/radeon9700/default.asp

Perhaps you could argue that the GF6800 was such a boost in PS2.0/SM2.0 games versus the busted FX series, but that again is one of those 'fixing a problem' situations I was speaking of. And speaking of the GF6800.....

I waited ages to get a hold of one (nearly 3 months) and in the end settled on a Geforce 6800GT as I couldn't wait any longer.

The GF6800 was a next-generation card; its counterpart was the X800XT/XT-PE, and if you were thinking of that card, it wasn't +90% either. It too only stretched out that far on the rare occasions where the previous cards simply had no juice left, which has been the case with all generations and may be the same this time around as well. But no one reports the one situation where the +100% appears, since it is the true anomaly; like when you play GTA4 @ 100% view distance or 2560x1600 with 8xAA on a 512MB HD4870 versus a 2GB HD58xx, it's essentially 'as X approaches infinity' better, since the game simply can't run with the lower memory amount.

Really those are not indicative of overall boost or even optimistic boost, they are aberrations that get dropped even by PR pushers.

I would suspect the +60% number is an average of best-case scenarios, not the absolute best-case scenario; those can see such gains from driver updates alone on things that are borked.
 
The X800 XT PE, that's the one I meant; stupid mistake to make, but it has been a while since I last had to look at cards that old.

As for performance gains, I can only go by what I see; according to this review the X800 XT PE was consistently 100% faster at high resolutions (with quality settings enabled) in X2, UT2004, and Call of Duty 4, and 33% quicker in the worst-case scenarios.
 
I would be disappointed by the performance if that's true. I'm going to go with 2000 shaders, down from 2200, because of TSMC's bad process and the performance of the 4870 X2.
 


Don't keep reading these rumour threads; there are a lot of sites on the Internet that claim they have internal sources leaking information to them, such as Theo Valich from BrightSideOfNews. As far as I know he makes stuff up, posts it on the internet, and suddenly we get a thread about it.


Just wait until Tom's, AnandTech, HardOCP and other reputable sites review the product; then you will be able to see without any shadow of a doubt how well ATI's products perform. Stay away from the bullshitters and NDA breakers (like Theo Valich, the Inquirer and Fudzilla) and shut them out of your mind.