Let's make it Official: 8800GTX/GTS yay or nay thread.

Primitivus

To answer that briefly, in the simplest way possible: an 8800GTX/GTS on anything short of a Core 2 Extreme QX6700 will make your CPU officially the limiting factor of your gaming rig.

For example, in FEAR at max settings with 4x AA and 8x AF, the 8800GTX had neck-and-neck frame rates at 1600x1200, 2048x1536 and 2560x1600 with Intel as it did with AMD, even though the Intel Core 2 Duo gave the 8800GTX a higher ceiling of headroom at lower resolutions. (Meaning the Intel processor was the factor in the FPS lead over the AMD when combined with the same GPU.) This also means that even a person with an X6800 Extreme processor would sometimes not get better frame rates at really high resolutions when compared to an AMD processor.

This essentially proves what others in the thread have said.

I'm not sure if I understand you correctly, but are you saying that a person who hits a ceiling at high resolutions regardless of processor is CPU bound? Because it's actually the opposite: at high resolutions it doesn't make any difference what CPU you're using, because the GPU is the LIMITING factor. When a reviewer wants to test a particular CPU's abilities in games they focus on lower resolutions, since these can be handled easily by a powerful GPU and therefore the strengths and weaknesses of the CPU in question are revealed. At high resolutions, especially with eye candy on, the GPU starts to run into trouble and the CPU becomes less of a factor.

Let me point you to a FiringSquad article about that very subject: http://www.firingsquad.com/hardware/geforce_8800_gtx_gts_amd_cpu_scaling/ If you read the conclusion you'll see that the GTX is usually CPU bound, the GTS only sometimes, but it depends heavily on the games too. Older titles are obviously no sweat for the GPU, so the CPU is the limiting factor there. But newer, shader-heavy titles scale well with different CPUs because even the mighty 8800 cards start to show their limitations. And it is rather probable that a person who buys an 8800 card today will do so in order to play newer titles as well as upcoming ones like Crysis and UT2007, at high resolutions and with eye candy on. Of course you might still benefit from a C2D X6800, but if you want to play Oblivion with HDR + 4xAA + 16xAF @ 1920x1200 you can do so better with an X2 3800+ and an 8800GTX than with an FX-62 and an X1950XTX ( http://www.firingsquad.com/hardware/geforce_8800_gtx_gts_amd_cpu_scaling/page11.asp )

I'm not actually recommending that anyone in the market for an 8800 card stick with an X2 3800+; I'm just saying that if we're talking about high resolutions then the CPU matters much less than the GPU, and even the X2 3800+ owner who wants to play F.E.A.R., Oblivion, CoD2, Crysis etc. at high res with eye candy will see some benefit with a GTX/GTS over a previous-gen card.
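Here's a really rough way to picture it (Python, with numbers I made up purely for illustration - they're not from any benchmark): per frame, the CPU's work is roughly the same at any resolution, the GPU's work grows with the number of pixels, and whichever side is slower sets your FPS.

```python
# Toy bottleneck model - the numbers are invented for illustration, not measured.
# CPU cost per frame is roughly resolution-independent; GPU cost scales with pixel count;
# the frame rate is set by whichever of the two takes longer.

CPU_MS_PER_FRAME = {"slower CPU": 12.0, "faster CPU": 8.0}  # hypothetical per-frame CPU times
GPU_MS_PER_MPIXEL = 6.0                                      # hypothetical GPU cost per megapixel

def fps(cpu_ms, width, height):
    gpu_ms = GPU_MS_PER_MPIXEL * (width * height) / 1e6
    return 1000.0 / max(cpu_ms, gpu_ms)  # the slower side is the bottleneck

for width, height in [(1024, 768), (1600, 1200), (2560, 1600)]:
    for cpu, cpu_ms in CPU_MS_PER_FRAME.items():
        print(f"{width}x{height} with {cpu}: {fps(cpu_ms, width, height):.1f} FPS")
```

With numbers like these the faster CPU wins clearly at 1024x768 (125 vs 83 FPS, CPU bound), while at 2560x1600 both CPUs land on the same ~41 FPS because the GPU is doing all the limiting - which is exactly the pattern the scaling article is describing.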
 
I popped over to the vr-zone site, and I found ... no real news. Well, OK. I learned that R600 is coming in March - or April. R6XX will have three or four models. It will use DDR2, DDR3, or DDR4 memory. R600 will be fast.

Right now, it is a semi-mythical product. I suspect that ATi was aiming for a Christmas '06 release, ran into technical or performance problems, and decided to go back to the drawing board. And now they are trying to build marketing buzz - the classic IBM FUD factor. The announcements trickling out imply, "The R600 will be faster. It will be cheaper. It will lower your cholesterol, and it will not rust."

I hope so. As with Intel and AMD, competition is good for us.

john
 

Track

That's exactly what he said, and you are very right, Primitivus.

I guess he looked at the results at 2560x1600 and thought "hmm, both CPUs aren't getting above 40 FPS... must need a better one".

Very funny. Perhaps someone should proof-read the "guide".
 

yakyb

I think you would have to be stupid to buy an 8800 now (unless you're badly in need of an upgrade or your current GPU breaks). I would guess that the majority of people reading this who are interested in getting an 8800 currently have a 6600GT or better. With R600 coming out next month, revisions of the 8800 coming up, and no DX10 games available (let alone the driver problems to run them on Vista), and factoring in that most people will be looking to buy a whole new computer to do this - C2D is 8 months old now and AMD is on the horizon with K10, never mind Penryn as well - I would suggest that people wait until the first DX10 game is released and benchmarks are obtained before considering an upgrade.

Normally I would say hell, there are always technologies on the horizon, you may as well buy what you can afford now. But the current software/hardware situation makes me think otherwise, and I'm going to hold off my build until July (Crysis is out at the end of June). This gives me the chance to do some case modding before I get my parts, so that it all runs in the case I want.
 

Track

You are right. If we only wait 5 months we can get a Wolfdale (Penryn is for laptops) and an 8900 GTX. Or we could wait another 7 months and get the 9800 and the 32nm Conroe... it never ends.
 

merc14

Everyone here is looking at today and not tomorrow when they say that the 8800 series is CPU limited. DX-10 is here; it isn't fully supported, but it is here and it is the future. One of the primary changes that DX-10 brings is that it relieves the CPU of many of the graphics-related duties assigned to it under DX-9. If you look at some comparisons between DX-9 and DX-10 you see that the CPU is working much, much less on graphics duties under 10, so how does that relate to the need for a more powerful CPU with a DX-10 card? I do agree that the CPU can limit these cards when running DX-9, since DX-9 depends so heavily on the CPU, but in a few months' time DX-10 will be mainstream and the CPU will no longer be a limiting factor per se.
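To make that concrete, here's a tiny sketch (Python, with invented numbers - this is not actual DX-9 vs DX-10 data) of why cutting the per-draw-call overhead the API burns on the CPU raises the frame-rate ceiling the CPU imposes:

```python
# Illustrative only - invented numbers, not real DX-9/DX-10 measurements.
# CPU time per frame = game logic (AI, physics, etc.) + draw calls * per-call API/driver overhead.

def cpu_frame_ms(game_logic_ms, draw_calls, overhead_us_per_call):
    return game_logic_ms + draw_calls * overhead_us_per_call / 1000.0

DRAW_CALLS = 2000      # hypothetical scene
GAME_LOGIC_MS = 6.0    # unchanged by the graphics API

heavy_api = cpu_frame_ms(GAME_LOGIC_MS, DRAW_CALLS, overhead_us_per_call=5.0)  # 16.0 ms -> ~62 FPS cap
light_api = cpu_frame_ms(GAME_LOGIC_MS, DRAW_CALLS, overhead_us_per_call=1.5)  #  9.0 ms -> ~111 FPS cap

print(f"heavier per-call overhead: {heavy_api:.1f} ms of CPU work per frame")
print(f"lighter per-call overhead: {light_api:.1f} ms of CPU work per frame")
```

If the API spends less CPU time per draw call, the same processor stops being the wall much sooner, which is the point I'm making about DX-10.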

I can't follow the logic as far as it being stupid to buy the first DX-10 card. If you need a card now, and if the price is reasonable and comparable to the 7-series cards, why in the world would you buy a 7-series? Of course, if you wait a few months there will be a new DX-10 card out that will be better, but when is that not the case?

No matter when you buy a new vid card you will find yourself with the last great thing in a matter of months since vid cards are continually evolving. If you can wait a few months then absolutely do it as the 8 series price will always be decreasing and there will surely be a great new card on the market soon, but this will be the case six months from now, one year from now and five years from now. As soon as you plunk down the money you are obsolete.

If you really need a card, and the prices have leveled and are reasonable, then get yourself an 8800-series card. It is an awesome card and this driver issue will be a forgotten episode in 2-3 months. Don't worry so much about the CPU, as DX-10 will take care of that. Don't spend any money on a DX-9-only card; you will kick yourself for that stupid move a few months from now. DX-9 is a technological dead end.
 
I think you would have to be stupid to buy an 8800 now (unless you're badly in need of an upgrade or your current GPU breaks). I would guess that the majority of people reading this who are interested in getting an 8800 currently have a 6600GT or better

That's me. Box #2 below was my current computer. If I had had something newer, I would have waited until my next vacation and brought back newer parts.

There are always better (faster or less expensive) parts on the near horizon. But this WOULD be a good time to wait a few months.
 
Guest
I agree with you, and I said something similar about DX10 and future cards.

I must say that right now buying an 8800 is not the smartest thing, with R600 available in a month or so; it is worth the wait. Two months ago, that wasn't the case!
 

NamelessMC

The consensus is, today it's a mistake because we're literally almost at the shoreline of seeing the R600 launch.

If you want SLI regardless, that's fine, but realize that the R600 launch is significant to you too.

8800 prices come down, and Nvidia announces its next G80 high-end card.

It'll either be an 8900GT or maybe an 8850GT or something along those lines.

Whatever it is, it'll be revised and better than the 8800GTX, I can assure you that.
 

merc14

I am sure it is. I bought my 8800GTS used for $360 and it runs like a dream, so I'm not too worried about missing the next best thing, and the R600 sounds superb. That is my personal number for investing in new graphics tech ($350-$370). If you get the right deal at the right time you have some pretty good tech, certainly not the best, and you don't want to slit your throat when your $680 investment becomes obsolete a month down the road.

When the card starts to drag a little I just get another, at a much-discounted price, and run SLI. Using this plan I am generally able to run anything out there at very high settings for around 2.5-3 years, at which time I sell the cards, or move one to my wife's rig and sell the other, and start the process over.

I know I'll never be king of the benchies by spending $360 on a vid card, but it works for me. My 6800GTs are really dragging now after 2.5 years, and moving to this 8800GTS was awesome for the price. In a few months there will be a bunch of GTS cards on the used market for $250, and I'll grab another and sit tight for a couple of years. Total outlay for 2-3 years of high-end graphics is around $600 (XFX has a double lifetime warranty, so they are great with this plan).
 

sojrner

Meh, my thoughts would be that if you have any card from within the last year or so... wait until DX10 actually hits and we can see what has the better DX10 performance. The X1900 and 7900 cards both do well enough in DX9 games to hold you until then. AMD still has yet to pull out their DX10 part (no rush, looking at Vista's current state and the lack of DX10 games) and we have no idea what the cards from either company will do there.

Only caveat: if your card is crusty-old and you have $ to burn, then go ahead. Otherwise, looking at the hierarchy of current GPUs, the ones I just mentioned are very well suited to last a while yet.

JMO of course.
 

bruce555

Looking to the future, I don't know how the R600 will stack up against the G80. I know that ATI has some big advantages over Nvidia, being that this is their second unified design and that they're much further ahead of Nvidia on the driver front.

I know this is all speculation because nothing is released and nothing is official, but at least the leaked specs at about this point in development seem correct. The G80's leaked specs were, so I'm thinking these are somewhat correct, ignoring any clock speeds.

If we talk about each of their unified shader designs, then let's look at ATI's first. Going by the leaked specs, ATI has 64 4-way SIMD units that can perform 128 shading operations per cycle. To me it looks like they have taken the standard approach to shading design, with a straight throughput of information: the unified shaders sit inline with the normal pipeline and the shading operations run at the core speed, which is a drawback compared to Nvidia's design. Nvidia's design has the shaders running completely independent of the initial core, so more work can be done on developing the pixel before sending it to the ROPs. These differences probably account for the high memory bandwidth and core clock speed of the R600. The high core clock speed also makes up for the R600 only having 16 ROPs.

The R600's high core clock speed makes up for Nvidia's independently clocked shaders, and the high bandwidth is probably for storing pixels in main memory, to counter Nvidia's independent texture units, which allow information to be stored on them and then recycled back for more stream processing. Simply put, look at the G80's 128 SPs @ 1350 MHz compared to the R600's 128 shader ops per cycle @ a 700-1000 MHz core.
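To put rough numbers on that last comparison (back-of-the-envelope only, using just the figures above, counting one operation per SP / per shader op per clock, and ignoring co-issue, texturing, ROPs and efficiency entirely):

```python
# Back-of-the-envelope peak shader rates from the figures quoted above.
# Treats each SP / each shader op as one operation per clock; ignores everything else
# about the two architectures, so this is a rough ceiling, not a performance prediction.

g80       = 128 * 1.35e9     # 128 SPs @ 1350 MHz              -> ~173 billion ops/s
r600_low  = 128 * 0.70e9     # 128 shader ops/cycle @ 700 MHz  -> ~90 billion ops/s
r600_high = 128 * 1.00e9     # 128 shader ops/cycle @ 1000 MHz -> ~128 billion ops/s

print(f"G80 : {g80 / 1e9:.0f} Gops/s")
print(f"R600: {r600_low / 1e9:.0f} to {r600_high / 1e9:.0f} Gops/s, depending on final core clock")
```

On paper the R600 would need roughly a 1.35 GHz core to match the G80 op-for-op, which is why I think the high core clock and the big memory bandwidth are there to compensate.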

The reason I think I'm right about ATI using a traditional style of "pipeline" design is that they are done with all of their drivers across the board, even Linux. To me it looks like this (remember the R600 has been in development for some years now): after they found out what the G80 had done with its design, with shaders clocked independently of the core operation, they had to go respin after respin to bring up the clock speeds. The initial core design stayed the same, which let the driver designers start on their drivers long ago, and the only real obstacle in the driver design was writing for the unified shaders and the general reprocessing of the geometry shaders after the vertex calculations are done, making the general design of the driver relatively simple to adapt.

I think the reason Nvidia is taking so long is that they want to make full use of the stream processors' independence, along with heavy use of the texture filtering units, to allow for a much higher throughput of shader operations to the core. Making this happen is a driver-design nightmare because of the completely independent design of everything on the G80 (independent shader clock and independent texture units) and the need for a completely new approach to driver design. Looking at these points, I wonder if Nvidia will ever be able to write a driver that uses the full potential of the G80.

By the way, I'm not an Nvidia fanboy; this is the first Nvidia product I've owned.
 

Eviltwin17

davidrio: I think you should get a Core 2 Duo 6400, get some good RAM, get some other nice things like an X-Fi sound card, overclock anything you can, and then buy the new ATI DX10 card that is coming out really soon. You can't go wrong :)
 

enewmen

The consensus is, today it's a mistake because we're literally almost at the shoreline of seeing the R600 launch.

If you want SLI regardless, that's fine, but realize that the R600 launch is significant to you too.

8800 prices come down, and Nvidia announces its next G80 high-end card.

It'll either be an 8900GT or maybe an 8850GT or something along those lines.

Whatever it is, it'll be revised and better than the 8800GTX, I can assure you that.

YUP.