4870x2 HardOCP Preview - taking the fluff out of reviewing

Right, I get that they don't scale, which is really too bad; if they did we would see some crazy numbers. It would just be nice to see best available vs. best available, not that I don't want to see comparably priced items compared (assuming they are?). That is probably more valuable to the average person, since you typically buy based on price. But also keep in mind that lots of people considering buying 2 x GTX 280s or 2 x 4870 X2s could probably afford to throw another 280 into the mix if it was worth it, so a comparison would be nice.
 
Considering the 4870 X2 uses about the same amount of power as one GTX 280, you're going to have to take a lot more things into consideration when putting two 4870 X2s against three GTX 280s, like more than likely needing a new power supply, among other things.

At that point you are at an estimated 500W for the ATI setup and 750W for the Nvidia setup, and an estimated $1000 for the ATI setup versus $1500 for the Nvidia setup, plus another $300 for a 1200W power supply, while the ATI setup will only need an 800W or so, which would be around $100 cheaper. So...

ATI = ~ $1200
Nvidia = ~ $1800

...with probably a 5% performance difference separating them, if that. And if you want to REALLY nitpick, you're gonna need a full tower case to house three dual-slot cards, so add another $250 for a CM Stacker, oh, and another $300 for a 790i Ultra board, which you'll need for full-speed Tri-SLI. Meanwhile, the two 4870 X2s would fit inside my $80 Raidmax Smilodon mid tower, or any mid tower with no bottom drive cage (or a removable one) for that matter, and only need a $150-$250 AMD 790FX or X38/X48 motherboard.

Total...
ATI = ~ $1400
Nvidia = ~ $2400
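
If it helps, here's a quick sanity check of those totals in Python. Every figure is just my ballpark estimate from above, not a quoted price:

```python
# Rough cost tally using the ballpark figures from this thread;
# every number is an estimate, not a quoted price.
ati = {
    "2x 4870 X2":          1000,
    "~800W PSU":            200,  # ~$100 cheaper than the 1200W unit
    "790FX/X38/X48 board":  200,  # middle of the $150-$250 range
    # no new case needed: the cards fit a mid tower already owned
}
nvidia = {
    "3x GTX 280":          1500,
    "1200W PSU":            300,
    "CM Stacker":           250,
    "790i Ultra board":     300,
}

for name, parts in (("ATI", ati), ("Nvidia", nvidia)):
    print(f"{name}: ${sum(parts.values()):,}")
# ATI: $1,400 vs Nvidia: $2,350 -- in line with the ~$1400 and
# ~$2400 totals above, for maybe a 5% performance gap.
```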

This can go on all day, as you can see. Basically, the idea here is that Tri-SLI is pretty much pointless, pure eye candy, at least until scaling is MUCH, MUCH better. You could really say the same about Quad-CrossFire. There is a reason Tri-SLI and Quad-CrossFire are almost never benchmarked, and it's not financial. The dual-card setups are going to give you something like 90% of the performance a 3/4-way setup would.
 


What? No Warhammer Online: Age of Reckoning?
 


Faulty Xigmatek? Which one do you have? I can most likely help you.
 



I'm not that into MMOs really, but from what I saw (last year) of Warhammer Online, the graphics didn't seem to be that groundbreaking...



In titles with next-gen graphics, the GPU will more than likely become the bottleneck again, as there will be more demand on it. For instance, take, let's say, 3-way SLI GTX 280s or quadfire 4870 X2s and put them at maximum forcible details at 2560x1600 on a 30" display with the highest possible AA; it's reasonable that they could become the bottlenecking factor even in today's titles. In real-world terms, though, even if you pressure the hell out of a GPU or pair of GPUs, you still need a good processor to back them up regardless.
 
Since GPUs (obviously) handle the making of each frame of the game and will soon handle physics (once drivers and developers are up to speed), that pretty much leaves non-GPU calculations, sending the frames off to the monitor, and driver overhead to the CPU. So as time goes on and the GPU gets utilized more, the CPU should be less and less of the bottleneck.

Am I right in thinking that?
 
^^ You are quite right, though the CPU still needs to tell the GPU what to do (while not doing the work itself), which is why the bottleneck will still grow, just at a slower rate, though I can't be sure.
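
In other words, here's a toy model of the idea (all timings are made-up illustrative numbers): a frame isn't done until both the CPU work and the GPU work for it are finished, so the slower side sets the frame rate.

```python
# Toy bottleneck model: a frame isn't done until both the CPU work
# (game logic, draw-call submission) and the GPU work (rendering)
# finish. All timings below are hypothetical, not measurements.

def fps(cpu_ms, gpu_ms):
    """Effective frame rate when the slower component gates each frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# CPU-bound case: adding GPU power changes nothing.
print(fps(cpu_ms=10.0, gpu_ms=6.0))   # 100 fps, CPU is the bottleneck
print(fps(cpu_ms=10.0, gpu_ms=3.0))   # still 100 fps with a 2x faster GPU

# GPU-bound case (high res + max AA): a faster CPU changes nothing.
print(fps(cpu_ms=4.0, gpu_ms=20.0))   # 50 fps, GPU is the bottleneck
```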

BTW, weren't there benchmarks done recently showing that AMD vs. Intel processors didn't make that much of a difference in games (except the hardcore CPU-intensive ones)? I think there was a thread about it like two weeks ago... don't remember.
 
Depends on whether ATI wants to work with Nvidia and PhysX; otherwise there will be games using Havok (CPU-hosted physics) and games using PhysX (GPU physics). If the GPU takes over the physics chore, who's to say FPS won't go down since it has to do more work, and who's to say the lower CPU load will actually increase FPS?

Don't forget, the CPU is responsible for Artificial Intelligence.
 
Well, AMD/ATI has both Havok and PhysX open to them; I wonder if you could have a game using both? 😛

Also, are you sure that Havok and PhysX are stuck as CPU and GPU implementations respectively?
 
Just found this:
http://en.wikipedia.org/wiki/Havok_%28software%29
"The company was developing a specialized kit called Havok FX that made use of the GPUs in ATI and NVIDIA videocards for physics simulations."

(I wasn't looking for it specifically; I was checking whether either is better at one thing than the other and whether they could both be used at the same time.)

*Edit* Wasn't there a post just above mine a second ago?
 



This is somewhat true; it's largely that it doesn't matter what kind of processor you have, as long as your GPUs are the limiting factor.

For instance, Crysis with 16x Quality AA on an Nvidia setup is going to be GPU bottlenecked; likewise, an AMD setup at 24x CFAA will be bottlenecked.

However, that's about where it ends right now for ultra-high-end hardware. More to the point, if you have, let's say, a single-GPU 8800 GT, you might see some small improvements going from an AMD to an Intel processor at the same clock speed, but nothing astronomical.

But once you start talking about dual-GPU setups, that's when the CPU bottleneck becomes more of a concern; typically the GPUs will be able to handle a lot between them, but they will get held back by a slow CPU. This is all app dependent, of course.
 
I think we need to see just how great a bottleneck a CPU can be. Someone should compare results for the following:

low-end vs high-end dual core AMD processor
low-end vs high-end quad core AMD processor
low-end vs high-end dual core Intel processor
low-end vs high-end quad core Intel processor

Give each processor from that list the following setups:

dual 4870x2
GTX 280 Tri-SLI

Then we can see how much dual vs. quad core, AMD vs. Intel, and low-end vs. high-end will really hold back the uber setups.
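
If someone actually wants to script that out, here's a trivial sketch of the full test matrix that comparison implies (the labels are placeholders, not exact SKUs):

```python
# Enumerate the proposed benchmark matrix: every CPU variant paired
# with every GPU setup. All labels are placeholders, not actual SKUs.
from itertools import product

cpus = [f"{vendor} {cores} {tier}"
        for vendor in ("AMD", "Intel")
        for cores in ("dual-core", "quad-core")
        for tier in ("low-end", "high-end")]

gpu_setups = ["dual 4870 X2 (quadfire)", "GTX 280 Tri-SLI"]

for cpu, gpus in product(cpus, gpu_setups):
    print(f"{cpu:28s} + {gpus}")

# 8 CPUs x 2 GPU setups = 16 runs per game/resolution tested.
```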
 
Doesn't DX10.1 reduce the processing the GPU has to do? Anyway, the CPU vs. GPU debate is very program dependent. Reach for the Stars uses very intensive AI that requires a lot of CPU power... its graphics could be handled even by Intel's integrated graphics... And then there is Crysis...
The problem is balancing the whole thing. Many game developers are really annoyed by the gap between a normal home PC and an enthusiast PC. How do you make a game that can run on a single-core Celeron with a "blindingly fast" IGP, and scale it up to a quad core with SLI or CF?

All in all... the bottlenecking situation is very much situation dependent.
 
^^ So pretty much your beef is that we're calling it a bottleneck when you think it should be called a limiting factor? (Your post isn't very clear on that subject.)

Aren't a bottleneck and a limiting factor the same thing? It's the component/program that is the slowest and limits the flow of information through the system (like a bottle's neck).

I'm sorry if I misinterpreted your post, but it just doesn't seem to make sense.

*Edit* " but no one will listen" 😛 lol might want to take a look at your sig then
 
Speaking of the 4870 X2 vs. GTX 280, has anyone seen this article?

http://www.engadget.com/2008/07/14/lucid-logix-hydra-tech-brings-together-any-gpus-for-powerful-mat/

What are the chances this works as advertised? And if it does, and it could really cater to each GPU's strengths, it would be pretty cool to combine different ones.

Does anyone have more scoop on this? Probably just a pipe dream.
 


Hmmm... it's not an easy task, if this is real at all...
What this Hydra is supposed to do is split the rendered area across the different graphics cards and then combine these "fragments" somehow... Not an easy task for similar GPUs, and extremely difficult for different kinds of GPUs...
Everybody knows that Nvidia and ATI render the same picture a little bit differently. So we would end up with a puzzle whose parts don't completely fit each other... maybe with some clever blending...

I am expecting more when some GPU maker ships a solution with a shared frame buffer and identical GPU cores...
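
For what it's worth, here's pure speculation on how a Hydra-like dispatcher might balance the split: divide the frame between two GPUs in proportion to their estimated throughput so both finish at about the same time. Lucid hasn't published its actual logic, and the numbers here are made up:

```python
# Speculative split-frame load balancer: divide scanlines between two
# GPUs in proportion to estimated throughput, so both finish at about
# the same time. Not Lucid's real algorithm -- that's unpublished.

def split_scanlines(height, throughput_a, throughput_b):
    """Return (rows for GPU A, rows for GPU B)."""
    share_a = throughput_a / (throughput_a + throughput_b)
    rows_a = round(height * share_a)
    return rows_a, height - rows_a

# e.g. a GPU roughly 1.5x faster than its partner, 1600-row frame:
fast, slow = split_scanlines(1600, throughput_a=1.5, throughput_b=1.0)
print(fast, slow)  # 960 640 -> the faster card renders 60% of the frame
```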
 
I'll cross my fingers. It sure would be nice to find a way to get better performance when combining, even if it has to be between like GPUs. Right now, what you get from doubling, tripling, etc., as discussed above, is just poor, especially since you pay full price for each additional card. (Somehow I have a feeling we'll never see "prove you have a GTX 280 and the next one will only cost you 15% of retail, because that is all the benefit you will get" ads.) :) :) :)
 


The comparison is between the 4870 X2 and GTX 280 SLI; of course the 280s would win, but consider that you can get four 4870s for the same price as two GTX 280s.

1 GTX 280 = 2 x 4850 (price-wise)
 



Were you trying to make a point? Because I'm not understanding exactly what you were hinting at.
 
Yea rodney, but that's why I don't buy into BS fluff reviews anymore: again, in that TechReport article they aren't pushing either card nearly hard enough.

1: they are using too low-clocked a processor (3.0 GHz) for EITHER GPU array

2: they aren't forcing high AA modes to really show what either configuration can do.

That's why HardOCP's article is superior to any of this trash: they actually push the limits of the hardware in extremely intensive scenarios to show what it is truly capable of.

I'm honest to God really surprised people are still using blasted Half-Life 2 benchmarks... who cares? The game runs on anything you throw at it, and it's very CPU limited in their testing.

I didn't make a "custom timedemo", but on GTX 260 SLI I can run a Lost Coast timedemo and get 156 avg FPS on an E8400 @ 4.05 GHz at 1680x1050, 16x AF, 6x AA, max possible details. Their testing shows GTX 280 SLI getting 118 FPS, which is a total joke and shows that the testing platform was bogus. Furthermore, the 4870 CF should perform a lot higher as well at that resolution and at the higher resolutions.

This is all beside the point anyway. The bottom line is that the 4870 X2 is largely superior to the GTX 280 because it has better anti-aliasing hardware, and while the 4870 doesn't really show this as a solo card or in CrossFire, the 4870 X2 has a more efficient design that lets it really flex its anti-aliasing muscles, especially in quadfire mode. But don't use articles like TechReport's, because they don't show the true story at all, and three-year-old games just don't push these cards enough.
 
HardOCP, superior? Hahaha.

You're kidding, right?

Also, HL2 and Source-based games are very popular, and many people like to know how they perform.

Seriously, I wouldn't trust that site as far as I could throw them.


Fine, but you're just blinding yourself to the issue that all of these previews have, which is making the CPU the limiting factor and not pushing either GPU hard enough. I really don't give a flying crap if people can get 120 FPS at 4x AA, 1920x1200 in Half-Life 2 on these GPUs. Why? Because you could probably get the exact same FPS with 16x Quality AA on an Nvidia array, or likewise the same FPS at 24x CFAA on the AMD array, because both configurations are held back by the CPU in this old game.

I don't really care if you like HardOCP's articles or not, because they have given unbiased and highly positive reviews of the 4850, 4870, and now the 4870 X2. So if you "don't trust them as far as you could throw them" when they have this much data to back up their claims, then you might as well write off Tomsadvertisingguide, because THG has been doing trash reviews and has shown how they sell themselves out (iBuyPower, System Builder Marathon, Tri-SLI vs. Quad-SLI).
 
Does anyone use Skulltrail for benchmarking purposes? I would think that dual Q9775s overclocked to at least 4.0 GHz each should alleviate the whole CPU limitation thing.
 
[H] is OK, but my problem is you have to trust them too much; other than that, they're OK. Looking at TechPowerUp: when the 4xxx series came out, they had several resolutions in the review, BUT at the lowest resolution, 1280x1024, they only used 2x AA, which really put these cards in a bad light and is totally backwards to my thinking, that being: the lower the resolution, the higher the eye candy. @ OTP, remember this: the 3xxx series totally sucked at AA, but the X2 showed it held its own using AA. The 4xxx series shows better AA ability than Nvidia cards in most games, thus the X2 of course will own.
 
AMD/ATI's bet on CFX configs for the high end is doomed now.

Four, yes FOUR, 4870s in 4870 X2 CFX couldn't beat two GTX 280s in SLI!

Another failed design, just like the 38xx.
 
