Discussion Favorite GPU From HD 5000 to HD 7000 Series?

jnjnilson6

Distinguished
I've retained a Sapphire Radeon HD 6770, two Sapphire Radeon HD 7870 GHz Edition cards in CrossFire and a Sapphire Radeon HD 7790... (they were all staunch and wonderful).
Those golden, immemorial, beauty-filled times, now vanished and faded away...

What was your favorite GPU in those times? I remember the Radeon HD 5850/5870 being the first class of cards able to strike a terrific blow at Crysis' consummate system requirements.

Man, were those days a wonder! Do write in. I would be glad to know about your preferences, mauve-tinged in those glorious Radeon-fuelled hours.

Thank you! :)
 
I, unfortunately, can't really answer this as I was too young to remember most of these cards and I never owned one. I am interested to hear other people's experiences though.
 
Not that I am very old myself (half-winking into the darkness), but the gladdest, most enjoyable memories have stemmed from those youthful times. I remember playing Crysis multiplayer in the drooping hours around 2010 on a Mobility Radeon HD 4530 (512 MB), with 4 GB RAM and an AMD Athlon II M300 (2 cores, 2 GHz)... It is quite unfathomable, staring back remorselessly toward those vanished youthful hours and remembering the music that spun on incessantly in the background and the Nanosuits and the lasers and the taunting dilemma of aiming and movement between the clutches of shadowy graphics and sinking sunlight. So long ago...

Too bad the multiplayer for that game was shut down in 2014 with the death of GameSpy.
 
My first PC gaming experience was not the most enjoyable one. I had an old (2017) school laptop with an Intel Celeron N3350, Intel HD 500 graphics, and 4 GB of DDR3 RAM. It could hardly run Roblox at more than 30 fps, and fast-paced Roblox games would drop to 3 fps for entire matches. Minecraft was unplayable without OptiFine. Not to mention I couldn't even install any other games, as the laptop only had 64 GB of eMMC storage.
 
My first gaming experience took place on a machine from about 2001: a Celeron Tualatin (1 core / 1.3 GHz), a GeForce2 MX 400 (with 64 MB of dedicated memory), 256 MB RAM and a 40 GB HDD with Windows 98 SE, Windows 2000 Pro and Red Hat Linux 7.3 installed. There were a lot of DOS games in those days, even though the machine could handle much heavier titles. The RAM came up a little short in the years that followed, though for 2001 it proved to be more than plenty.

I've gone across the whole field - to the starry exuberance of immemorial authors and the hefty Philosophies shining fluorescently through the mind at night. The words of bygone authors expressing dazzling streets and blue-sodden silhouettes in the darkness and the beauty of breathless waltzes and immemorial dustless traces... I've read it and imagined it. But I would not give up those years, the years I was deepest within hardware and gaming, for anything. Play now, when you are very young, and enjoy yourself, because that's the good part of life - the saturated, beautiful part. Later, when you grow up and continue on with whatever your medium should be, you'll always look back to those times and be bound breathlessly within their beauty for as much as an hour on a windy evening or a mellow afternoon.
 
I was on the Dark Side during those heydays! Nvidia GTX 260 Core 216. I loved it, although I felt let down at the time with just 896 MB of VRAM. Still, performance was awesome. Went to a GTX 560 Ti after that, IIRC.
The GTX 560 Ti was Nvidia's signature card at the time. I've known innumerable people who've had it. Performance-wise it sat between the HD 6850 and the 6950. It was indeed an enthusiast-bound component, or at least it played in that sphere from my perspective. How was it for you? Did it run smoothly and play most of the games of the time on High / Very High settings? Did you run it at 1080p or lower?

1080p was the 4K of 2010-2011. We all aspired to it back in those days.
 
I ran CrossFire 6970s with a 2500K
I originally ran the 6770 on a Celeron G530 (2.4 GHz / 2 cores / Sandy Bridge). Thereafter I got an i7-3770K and the difference in stability was extreme. Hosting a Crysis server with more than 5-6 people on the G530 resulted in severe performance repercussions, whilst the 3770K could go up to 32 people without so much as a shudder in the framerate.

Then I got a single 7870 GHz Edition and afterward another, and used them in CrossFire. Crysis was not one of the games which handled CrossFire very well.

The 7790 was in another machine and it ran very, very well for a long time.

I had a friend with an i5-2400 and an HD 6850, another with a Phenom II X6 1090T (or 1100T) and an HD 5850, and another with an Ivy Bridge i5 (perhaps the 3570K) and an HD 7770. Those were the days...

How long did you use the 2500K, and how was the 6970 CrossFire setup? I remember the Radeon HD 6990 being a pinnacle in and of itself back then.
 
Wait a minute, the RX 6400 only beats a 7970 GHz Edition by 12%. How?! The 7970 GHz Edition is an 11-year-old card.
TechPowerUp is a very good website indeed. Though on GPU.userbenchmark.com - https://gpu.userbenchmark.com/Compare/AMD-RX-6400-vs-AMD-HD-7970/m1834749vs2163 - the difference is a little bigger.

Well, back in the day the HD 7000 generation was something of a renaissance in the video card industry. The cards were very fast and more expensive than those of the generations up until that point. The HD 7850 was looked upon as a towering example of performance, overshadowing the 6850 by a wide margin. Nvidia pushed out some very powerful cards back then too.

It is interesting how the RX 580 is roughly equivalent to an R9 390X too.

There was a certain point in time, I believe it may well have been the 7000 generation, after which cards began improving at a steadier pace and miracles in performance were less often seen.

Hell, I've got a gaming laptop from 2022 and its card - the RTX 3050 Ti - would hardly surpass my 7870s in CrossFire from 2012 in games which are specifically built to handle CrossFire well.

A very good question though. And it does indeed tell us a lot about the way the industry has been heading.
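
As an aside on why the two sites disagree: those single "relative performance" percentages are generally just an average - typically a geometric mean - of per-game FPS ratios over whatever titles each site happens to test, so different game lists give different gaps. A minimal sketch with entirely made-up FPS figures, purely to show the idea:

```python
from math import prod

# Invented average-FPS numbers for two cards across a few games, purely
# to illustrate how one aggregate "relative performance" percentage
# can be condensed out of many per-game results.
fps_new_card = {"Game A": 142, "Game B": 97, "Game C": 61}
fps_old_card = {"Game A": 128, "Game B": 88, "Game C": 52}

# Per-game speedups, then their geometric mean (less sensitive to outliers
# than a plain average of raw FPS).
ratios = [fps_new_card[g] / fps_old_card[g] for g in fps_new_card]
geo_mean = prod(ratios) ** (1 / len(ratios))

print(f"New card is ~{(geo_mean - 1) * 100:.0f}% faster on average")  # ~13%
```

Swap in a different set of games or settings and the headline percentage moves, which is roughly what is happening between TechPowerUp and UserBenchmark here.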
 
What was the fastest (non dual GPU) Radeon HD card? Do any of them fully support DX12? (I assume not, but maybe.)
That would be the Radeon HD 7970 GHz Edition. (There was a Radeon HD 8000 series, but only as OEM and mobile parts; I had the HD 8730M in a Dell laptop and it was very nice.) On paper they (the higher-end GPUs from the 7000 series) do support DirectX 12, and so do all AMD GPUs after the higher-end 7000 series. Nvidia, on the other hand, had cards considerably older than AMD's oldest DirectX 12-capable ones which also gained support for the API when it was released.
 
I feel like something like CrossFire and SLI could work nowadays if some form of chip-to-chip interconnect were used. I mean, it's kind of like AMD's whole chiplet concept, except the chiplets themselves would be extremely powerful. What do you think? I, for one, would love to run 2 GPUs in my system, but only if it was implemented well, without any of the major downsides of SLI or CrossFire.
 
I believe that the Nvidia GTX 400 series was the oldest series to support DirectX 12.

'If you are on an older card from the Fermi generation (400 and 500 series), then the new Nvidia driver might be interesting for you. Fermi is the codename for a GPU microarchitecture developed by Nvidia, first released to retail in April 2010, as the successor to the Tesla micro-architecture. It was the primary micro-architecture used in the GeForce 400 series and GeForce 500 series.

The new GeForce 384 series graphics drivers added DirectX 12 API support for such GPUs. Support stretches to Direct3D feature-level 12_0 games or applications, and completes WDDM 2.2 compliance for GeForce "Fermi" graphics cards on Windows 10 Creators Update (version 1703).'

The 400 series came roughly 2 years before the higher-end 7000 series. (From what I read back in the day, the middle and low-end 7000 series cards do not support the API.)
 
I am so confused about TechPowerUp's relative performance chart. Like, I know that it is not 100% accurate, but I just don't understand how a GTX 295 outperforms a 750 Ti. Someone, please explain this (unless I am missing something).
 
Well, everybody'd get confused, keeping in mind those chronicled numbers are mind-bogglingly irrelevant to anything in touch with current reality.

It is the same way a Radeon HD 7990 from 2013 (which consists of two HD 7970s) would beat my RTX 3050 Ti (Laptop) released in 2021.

It is just that higher-end components oftentimes beat low-end and mid-range ones even a long time into the future.

For example, an i7-990X from 2011 would beat an i3-10100 from 2020 due to its high core count and its markedly enthusiast stance within the hardware sphere at launch.

So we may see very new components surpassed by very old ones, simply because some of them were high-end or very high-end parts and others lower-end.
 
I tried to run on the integrated HD 6310 in an E-350, but it just wasn't quite enough for 1080p playback. So I dropped a 6670 into my HTPC for a while. It was the fastest GPU available that didn't require a 6-pin power connector. I only retired it when the latest drivers and BIOS just wouldn't let it scale a 4K TV properly, so I switched to a 750 Ti (which was Maxwell released before the 900 series launch, so it had more up-to-date connectors).

Yeah, the 750 Ti was the second-slowest Maxwell with only 1,870 million transistors, while the GTX 295 was a massive dual-GPU card with 2,800 million combined transistors.

The 4090 today has roughly 16,000 CUDA cores. It is going to be a long while before the 50-class cards reach that level (about 2,560 now).
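
To put that "long while" into rough numbers, here is a back-of-the-envelope sketch; the core counts are the approximate ones above, and the per-generation growth rate is purely an assumed figure, not a measured one:

```python
import math

# Approximate CUDA core counts quoted above.
cores_50_class_today = 2_560    # e.g. an RTX 3050-class card
cores_4090 = 16_384             # RTX 4090

# How many doublings would the 50-class core count need?
doublings = math.log2(cores_4090 / cores_50_class_today)
print(f"~{doublings:.1f} doublings needed")          # ~2.7

# If the 50-class core count grew ~1.3x per generation (assumed, not a fact),
# that works out to roughly:
growth_per_gen = 1.3
generations = math.log(cores_4090 / cores_50_class_today, growth_per_gen)
print(f"~{generations:.0f} generations at {growth_per_gen}x per gen")  # ~7
```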
 
Wow, I just realized that my RX 6800 outperforms this 2010 iGPU by 9790%.
 
The GTX 295 was a top-end card of its day, and the GTX 750 Ti was a low-end card of its generation.
Yes, but spec-wise the 750 Ti should destroy the 295. The 295 in total (both GPUs) has 480 cores compared to 640 on the 750 Ti. The GTX 295 had 1.7 GB of GDDR3 compared to 2 GB of GDDR5 on the 750 Ti. I don't understand how that makes the GTX 295 faster. Not to mention that in most games you would have to disable one of the 295's GPUs, since most games don't play nicely with dual-GPU cards.
 
The GTX 750 Ti had 640 Maxwell cores; the GTX 295 had 480 Tesla cores. It doesn't really mean anything to compare the two counts. Specs aren't comparable across different architectures. Performance isn't just about counting the number of cores and their clock frequency; that's why we do performance benchmarks.

The GTX 580 had 512 Fermi cores at around 800 MHz, while the GTX 660 Ti had 1344 Kepler cores at 900 MHz and got about the same performance. You'd think more than double the cores running at a higher speed would destroy it in performance, but it doesn't. Because how many Kepler cores are equivalent to a Fermi core? Apparently... that many. But really it is an improvement, because the 660 Ti has a 294 mm² die size while the GTX 580 has a die size of 520 mm² and consumed probably double the power. So the same performance was achieved while using half the silicon (= lower cost) and half the power. So it's a superior GPU.

How it achieves those results - lots and lots of weaker cores versus a smaller number of larger cores - is an engineering detail that is not relevant at the consumer level. We as enthusiasts may be interested to know all the numbers just for the sake of it, but they don't actually matter in the end. What matters is what goes in, which is cost and power, and what comes out, which is performance. The design of how it's achieved inside doesn't matter.

Anyway, the point is that these design details can change from generation to generation, so the "number of cores" and the clock frequency are not actually real "specs" in the sense that they don't tell you anything absolute about GPU performance. The only thing they can do is tell you relative performance between cards of the same generation. If two cards are both Kepler architecture, then you can see that one has twice as many cores and will probably get around twice as much performance, and so forth.
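
To put a couple of numbers on that, here is a small sketch using the approximate figures quoted above; it simply treats the two cards' gaming performance as equal, which is an illustration rather than a measurement:

```python
# Rough illustration of why raw core counts don't compare across architectures.
# Die sizes, core counts and clocks are the approximate figures quoted above;
# "perf" is just set to 1.0 for both cards since the post treats them as equal.
cards = {
    "GTX 580 (Fermi)":     {"cores": 512,  "clock_mhz": 800, "die_mm2": 520, "perf": 1.0},
    "GTX 660 Ti (Kepler)": {"cores": 1344, "clock_mhz": 900, "die_mm2": 294, "perf": 1.0},
}

for name, c in cards.items():
    print(f"{name}: perf/core = {c['perf'] / c['cores']:.5f}, "
          f"perf/mm^2 = {c['perf'] / c['die_mm2']:.5f}")

# One Fermi core ends up "worth" roughly 1344 / 512 ≈ 2.6 Kepler cores here,
# while the Kepler chip delivers the same performance from ~43% less silicon.
```

Counting cores only starts to carry meaning once you stay inside one architecture, which is exactly the point above.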