Discussion Favorite GPU From HD 5000 to HD 7000 Series?

Yes, but spec-wise the 750 Ti should destroy the 295. The 295 in total (both GPUs) has 480 cores compared to 640 on the 750 Ti. The GTX 295 had 1.7GB of GDDR3 compared to 2GB of GDDR5 on the 750 Ti. I don't understand how that makes the GTX 295 faster. Not to mention that in most games you would have to disable one of the 295's GPUs since most games don't play nicely with dual GPU cards.
Funny thing is that even the GTX 560 Ti seems to perform about the same as the GTX 750 Ti.

I think the friend who had the i5-2400 and HD 6850 got a GTX 750 Ti after his 6850 died.

I've heard it doesn't draw much wattage and runs cool; maybe that is why it isn't especially powerful.

Another point is that everything depends on the generation. Back in the 560 Ti days, Nvidia could achieve a lot more with a far lower core count than AMD could.

For example, the 560 Ti has two versions - one with 384 cores and one with 448 cores. Even the version with 384 cores beats the Radeon HD 5870, which has 1600 cores.

So simply put, every generation differs. And it is quite possible the same scenario has played out within Nvidia's own generations.
 
The GTX 750 Ti had 640 Maxwell cores; the GTX 295 had 480 Tesla cores. It doesn't really mean anything to compare the two counts. Specs aren't comparable across different architectures. Performance isn't just about counting cores and clock frequency; that's why we run performance benchmarks.

GTX 580 had 512 Fermi cores at around 800 MHz, while the GTX 660 Ti had 1344 Kepler cores at 900 MHz and got the same performance. You'd think more than double the cores running at a higher speed would destroy it in performance, but it doesn't. Because how many Kepler cores are equivalent to one Fermi core? Apparently... that many. But really it is an improvement, because the 660 Ti has a 294 mm² die while the GTX 580 has a 520 mm² die and consumed probably double the power. So the same performance was achieved with half the silicon (= lower cost) and half the power, which makes it a superior GPU.
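To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The clocks are approximate reference-card figures (Fermi's shaders ran at a doubled "hot clock", roughly 1544 MHz on the 580), so treat the output as ballpark only:

Code:
# Peak FP32 throughput = shader cores x shader clock x 2 FLOPs per cycle (fused multiply-add).
cards = {
    "GTX 580 (Fermi)":     (512, 1.544),    # 512 cores at ~1544 MHz shader ("hot") clock
    "GTX 660 Ti (Kepler)": (1344, 0.915),   # 1344 cores at ~915 MHz
}

for name, (cores, clock_ghz) in cards.items():
    gflops = cores * clock_ghz * 2
    print(f"{name}: ~{gflops:,.0f} GFLOPS peak FP32")

# On paper the Kepler card has roughly 55% more peak throughput, yet the two landed
# at about the same gaming performance -- paper specs don't carry across architectures.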

How it achieves those results, with lots and lots of weaker cores vs a smaller number of larger cores, is an engineering detail that is not relevant at the consumer level. We as enthusiasts may be interested in all the numbers just for the sake of it, but they don't actually matter in the end. What matters is what goes in, which is cost and power, and what comes out, which is performance. How it's achieved inside doesn't matter.

Anyway, the point is these design details can change from generation to generation, so the "number of cores" and the clock frequency are not really "specs" in the sense that they don't tell you anything absolute about GPU performance. The only thing they can tell you is relative performance between cards of the same generation: if two cards are both Kepler and one has twice as many cores, it will probably get around twice the performance, and so forth.
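As a rough illustration of that within-one-generation rule, here is a small sketch (Python). The GTX 650/660 figures are approximate reference specs, and the estimate deliberately ignores memory bandwidth and other limits, so treat it as a ceiling rather than a prediction:

Code:
# Within one architecture (Kepler here), cores x clock gives a usable first-order
# estimate of relative performance. Specs are approximate reference values.
def relative_estimate(cores_a, clock_a_mhz, cores_b, clock_b_mhz):
    return (cores_a * clock_a_mhz) / (cores_b * clock_b_mhz)

# GTX 660 (960 cores at ~980 MHz) vs GTX 650 (384 cores at ~1058 MHz)
print(f"GTX 660 vs GTX 650, estimated: ~{relative_estimate(960, 980, 384, 1058):.1f}x")  # ~2.3x

# Real-world gaps usually land a bit under this, since memory bandwidth and other
# bottlenecks don't scale the same way -- but within a generation it's in the right ballpark.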
Correct, different architectures are impossible to compare apples to apples, but the part that doesn't make sense to me is that the 750 Ti has not only more and faster cores, but those cores are also newer. So it doesn't make any sense for a GPU with slower, older, and fewer cores to outperform one with faster, newer, and more cores.
 
As the example I gave illustrated, newer cores aren't always faster core for core. The 660 Ti Kepler architecture is newer, but 1 Kepler core is weaker than 1 Fermi core. It doesn't matter because an equivalent design will have many more Kepler cores than Fermi cores.
 
Strange that an older architecture would be faster per core than a newer one. I would think the newer architecture would be both faster per core and have more cores.
I was just thinking about the same thing. But apparently it's true.

[Attached screenshot: Screenshot-2023-11-01-202019.png]
 
Transistor count or FLOPS per transistor is about the only moderately useful measurement between large generation gaps. That or a benchmark.

GDDR3 vs GDDR5 isn't all that important. It's the bandwidth that matters.

86.4 GB/s (750 Ti) vs 112 GB/s per GPU (GTX 295)

Same reason we complain about the 40 series GPUs. They reduced the memory channels down the whole line, so while the GDDR5 and cache got faster, they still lost out on bandwidth.
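For reference, here is roughly where those bandwidth numbers come from (a quick Python sketch; the memory clocks are approximate reference values):

Code:
# Memory bandwidth = (bus width in bits / 8) x effective data rate in GT/s.
def bandwidth_gbs(bus_bits, data_rate_gts):
    return bus_bits / 8 * data_rate_gts

print(f"GTX 750 Ti:        {bandwidth_gbs(128, 5.4):.1f} GB/s")    # 128-bit GDDR5 at ~5.4 GT/s
print(f"GTX 295 (per GPU): {bandwidth_gbs(448, 1.998):.1f} GB/s")  # 448-bit GDDR3 at ~2.0 GT/s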
 
Is there a GDDR4? I know there is a DDR4 version of the 1030, but I don't know if GDDR4 is a thing. I think you meant GDDR6 as all the 40 series use GDDR6.
 
From what I've heard, Crysis could allocate a maximum of 2 GB of system RAM.
And cards at the beginning of 2010 were seriously shipping with just 1 GB.

Now, even the RTX 4090 cannot get sky-high frame rates in every game and at every resolution. That is because software is written in a way I will not even dare to get into.

Currently I am just web browsing and am using 15.8 GB RAM (sure, I have many tabs open).

What I am suggesting is that sooner rather than later software will end up in such a state (maybe it already has) that no matter the hardware, programs will not run fast, or well at all. And I think we have begun to spin in a never-ending loop that provides nothing good in terms of stable, smartly written software.

The way the software is written, well, that is 80% of everything; the other 20% should be the hardware... What we have in 2023 is 95% on the tab of powerful hardware and 5% on the tab of 'well-written' software. The former will keep taking the hits for the latter almost indefinitely.
 
The 7970 GHz Edition was a really nice card. I picked up one in 2012 and it basically gave AMD the lead over the just-released GTX 680. The 770 didn't catch it either, so AMD had the best "normal" consumer GPU (the GTX 690 was basically two GPUs) for another year, until the GTX 780 and the first Titan came out.

The only problem was that it was an extremely power-hungry GPU, and while that doesn't sound like much compared to the high-end RTX GPUs today with their boost behaviors, you could spike at 300 W with a 7970 GHz Edition, and that was a lot in 2012. Mine basically worked itself to death two weeks before the warranty expired and XFX sent me a 280X to replace it.
 
Is there a GDDR4? I know there is a DDR4 version of the 1030, but I don't know if GDDR4 is a thing. I think you meant GDDR6 as all the 40 series use GDDR6.
https://www.techpowerup.com/gpu-specs/radeon-hd-3870.c203 - The HD 3870 did indeed use GDDR4.

A quote I found on the topic:


'GDDR4 on GPUs was only briefly used in some ATI/AMD cards from the X1000 series to the HD 3000 series. But even then, most offerings stuck to the more conventional gddr3.'
 
How different is GDDR4 compared to DDR4, and how did it compare to GDDR3?

A little thread I'd made with some illuminating information on the topic.

Well, GDDR3 came first (my Mobility 4530 in my HP Pavilion dv6 from 2010 had such memory). Then came GDDR5, which was very fast.

DDR4 is system memory and is different from GDDR4 because they are built for slightly different purposes. It's the same way CPU clocks go up to 5 or 6 GHz while even the RTX 4090 doesn't go past about 2.52 GHz.

GDDR3, GDDR4 and GDDR5 are pretty outdated currently.

From what I've come to note from the thread I've linked above, GDDR will always be better than DDR for a video card. If your video card has to use system RAM, everything becomes a much slower and, in the end, less worthwhile process than taking a swift bite from GDDR memory, which is created specifically for fast graphics adapters and the computations within.
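The DDR4 vs GDDR5 versions of the GT 1030 mentioned earlier are a neat concrete case of that difference. A quick sketch (Python; figures are roughly the published specs of the two variants):

Code:
# Same GPU, same 64-bit bus, different memory type (roughly the published specs).
def bandwidth_gbs(bus_bits, data_rate_gts):
    return bus_bits / 8 * data_rate_gts

print(f"GT 1030 DDR4:  {bandwidth_gbs(64, 2.1):.1f} GB/s")   # DDR4-2100    -> ~16.8 GB/s
print(f"GT 1030 GDDR5: {bandwidth_gbs(64, 6.0):.1f} GB/s")   # 6 GT/s GDDR5 -> ~48.0 GB/s

# Nearly three times the bandwidth on the GDDR5 card, which is a big part of why the
# DDR4 version benchmarks so much worse despite being "the same" chip.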
 
Fermi 400 (the 480 mostly) > 6000 series >>>>> 5000 series (weak-ass tessellators and lack of VRAM).
7000 series > 700 series (770 and below) >>> 600 series.

5000 series: sip power, overclock OK but don't gain much from an OC.
GTX 400: conservatively clocked, power-hungry monsters; easily match and beat the stock Fermi 500 series with a good OC.
6000 series: a repeat of the 5000 series with the same VLIW arch, weak OC performance, weaker tessellators.

HD 7K: a much better forward-looking arch vs Kepler, which was a 180-degree turn just to say "we consume less power now" after Fermi.

Fave tinkering GPU, GTX 480 by a long shot.
Living with a GPU? 7970, well the 280X.
 
I've had a Sapphire Radeon HD 6770, two Sapphire Radeon HD 7870 GHz cards in CF, and a Sapphire Radeon HD 7790... (they were all staunch and wonderful).
Those golden, immemorial, beauty-filled times, currently vanished and faded away...

What was your favorite GPU in those times? I remember the Radeon HD 5850/5870 being the first class of cards able to deal a terrific blow to Crysis' consummate system requirements.

Man, were those days a wonder! Do write up. I would be glad to know about your preferences, mauve-tinged in those glorious Radeon-fuelled hours.

Thank you! :)
Hrmm, I'd say it's between the 6870 and the 7870 LE for me. The 6870 was an awesome budget card; it was priced well and offered very decent mid-range performance for its day. The 7870 LE because it was essentially a cut-down 7950: it performed better than a 7870, and with a bit of an OC (you could hit 1200 MHz, up from 975, at stock voltage on most cards) it could reach around stock 7970 levels of performance. Albeit with a 256-bit memory bus instead of 384-bit, and correspondingly 2GB of VRAM instead of 3. It was a really nice card that not many people knew about.

 
The HD 5450 is the only AMD GPU I own, so it's my favorite.

It's rather weak, but still better than the integrated graphics of a Core 2 Duo machine. Good enough to play some old Xbox games. This little old GPU brought me plenty of entertainment during the corona lockdown. Later I upgraded to a GTX 460, and later to a GT 1030 GDDR5. Got a new PC with an RTX 3050 when the lockdown was over.

Still keep the HD 5450 safe inside a box as a good-luck charm.
 
Originally I ran the 6770 on a Celeron G530 (2.4 GHz / 2 cores / Sandy Bridge). Thereafter I got an i7-3770K and the difference in stability was extreme. Hosting a Crysis server with more than 5-6 people on the G530 resulted in extreme performance repercussions, whilst the 3770K could go up to 32 people without so much as a shudder in the framerate.

Then I got a single 7870 GHz and afterward another 7870 GHz and used them in CrossFire. Crysis was not one of the games which handled CrossFire very well.

The 7790 was on another machine and it ran very, very well for a long time.

I had a friend with an i5-2400 and an HD 6850, a friend with an AMD 1090T (or 1100T) and an HD 5850, and a friend with an Ivy Bridge i5 (perhaps the 3570K) and an HD 7770. Those were the days...

How long did you use the 2500K and how was the 6970 Xfire? I remember the Radeon HD 6990 was a pinnacle in and of itself back then.
I used the 2500k for years for gaming. Long enough for my overclock to degrade from 5.2 to 5.0, then 4.8.

The Xfire gave a 30 to 50% boost in most games that supported it.
 
I remember that one of the best games for CrossFire was Battlefield 4. Maybe Battlefield 3 as well, as far as memory serves me. I got about 100% (2x) more FPS in those games running two 7870 GHz cards in CrossFire than I did on a single 7870.

I was able to hit 5 GHz (stable) on my i7-3770K. At that speed Cinebench scored a little higher than a 3930K at stock speeds would.

I wonder what would happen if we paired a 3770K with an RTX 4090. It would probably not be very pretty. Like pairing a Pentium 4 with a Radeon HD 6770 (which I actually did do). I had even made a video of it. I wonder how Crysis would have run... Too bad I did not think of testing it out at the time, for reasons I cannot explain.

https://www.youtube.com/watch?v=-PbLRWC9pPA
 
I'm curious to see those 4K results. I remember that with a 3090 Ti you could just about get away with using a 3770K at 4K; I'd be curious to see what it would look like now with a 4090.
 
It still ought to be better than using an i7-2600K or 2700K, because those only supported PCIe 2.0, while Ivy Bridge made the jump to the 3.0 standard, which brings roughly double the bandwidth.

I wonder, though, what the results would be like if we took an i7-990X (which was the be-all and end-all back in the day; the apotheosis of power and supremacy) and paired it with a 4090. It is still technically a 6-core / 12-thread CPU and, perhaps, with an overclock it would muster a little decency even at 4K. An intriguing concept. The CPU used to be at the very edge - the sole winner in a breathless game, a beauty to see lingering on the top charts for years after its release date.
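For reference on the PCIe 2.0 vs 3.0 point above, here is a quick sketch of the per-direction x16 bandwidth (Python; standard link rates and encoding overheads):

Code:
# Per-direction x16 bandwidth after encoding overhead.
def x16_gbs(transfer_rate_gts, payload_bits, encoded_bits):
    per_lane_gbs = transfer_rate_gts * payload_bits / encoded_bits / 8  # GB/s per lane
    return per_lane_gbs * 16

print(f"PCIe 2.0 x16: ~{x16_gbs(5.0, 8, 10):.1f} GB/s")     # 5 GT/s, 8b/10b encoding   -> ~8 GB/s
print(f"PCIe 3.0 x16: ~{x16_gbs(8.0, 128, 130):.2f} GB/s")  # 8 GT/s, 128b/130b encoding -> ~15.75 GB/s

# Ivy Bridge's PCIe 3.0 x16 slot offers roughly double the bandwidth of Sandy Bridge's
# PCIe 2.0 slot, although games rarely saturate a full x16 link either way.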
 
I did end up finding something:

Core i7-3770K and RTX 4090 ->
https://www.gpucheck.com/gpu/nvidia-geforce-rtx-4090/intel-core-i7-3770k-3-50ghz/


Core i7-990X and RTX 4090 ->
https://www.gpucheck.com/gpu/nvidia-geforce-rtx-4090-max-q/intel-core-i7-990x-3-47ghz/ultra

If this information is true, that means the 3770K pummels the Core i7-990X about 11 years after its own release and about 12 years after the release of the latter. I suppose it's got to do with the PCIe configuration.

But it is still frustrating how the higher core count of the i7-990X came up short at every resolution against the 3770K's 4 cores and 8 threads. Which does raise the notion that the aforementioned links may contain incorrect information. Otherwise it is quite odd, even given that the 3770K is newer technology.
 
While the 990X has a core and thread advantage, the 3770K has a 40 - 50% IPC advantage and a max OC advantage, which for the most part neutralizes the core and thread advantage of the 990X. It also supports slightly newer instruction sets, so some code may take a more efficient path and gain an even bigger performance advantage. That said, if I were running a VM or two, or keeping simultaneous tasks running in the background, I'd probably take the 990X over the 3770K. Ideally I'd have picked up an LGA 2011 board and used that. That would have upped my core count options considerably, and (depending on the CPU) I could still overclock it to make up for some of the performance difference between it and newer processors. I had a Xeon E5 1650 I ran at 4.3 GHz for a month or so; it performed about like a Ryzen 5 2600. It was a much older architecture and less efficient, but it definitely got the job done.
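To put that argument in rough numbers, here is a crude sketch (Python). The IPC factor and the overclocks are ballpark assumptions taken from the post above, not measured values, and the model ignores memory, SMT scaling, and everything else that matters in practice:

Code:
# Crude throughput model: cores x clock x relative IPC.
# IPC factor and overclocks are ballpark assumptions from the post above, not measurements.
cpus = {
    # name: (cores, assumed OC clock in GHz, relative IPC vs Westmere = 1.0)
    "i7-990X (Westmere, OC)":    (6, 4.2, 1.00),
    "i7-3770K (Ivy Bridge, OC)": (4, 4.8, 1.45),
}

for name, (cores, clock_ghz, ipc) in cpus.items():
    mt = cores * clock_ghz * ipc   # crude multi-threaded score
    st = clock_ghz * ipc           # crude single-threaded score
    print(f"{name}: MT ~{mt:.1f}, ST ~{st:.1f}")

# With these inputs the 3770K wins single-threaded by roughly 65% and about ties
# multi-threaded, which matches the "IPC plus clocks neutralize the core advantage" argument.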
 