Best Graphics Cards For The Money: October 2014 (Archive)


Thanks for the response.
I read TH daily and I was surprised too to see the GTX 760 fall behind, or, to be more accurate, to see the R9 270X deliver really good results.
But my question was about the confusing results between the hierarchy and page 8, which you just answered :).

Thank you also for the update you guys will make. I think a lot of people (myself included) who read and use Tom's Hardware as one of the most respected hardware knowledge sites are waiting for this.
 
So... looking ahead a bit... should a GTX 980 paired with a solid i5 CPU at 1080p be able to max out almost everything going into holiday 2015?
 


I think we can be almost certain that it will, unless games suddenly need more than four cores to run, which I don't see happening for a while.
 


[animated GIF]

 


That gif, I nearly lost my s*!t over it
 
Yep... looking back a year ago, most of us were ticked off because of the bitcoin craze and the jerks who bought up all the R9s to fuel their habit...

Where are they now?

I'm so close to splashing for a couple of 970s...but something keeps telling me to wait just a bit longer.
 

There will be a step increase due to 14-20 nm manufacturing, but after that it will be another long lull.

One of the big challenges with 20nm and beyond will be reducing dependence on high-bandwidth memory interfaces: you get little to no performance increase from shrinking the die, adding execution resources and bumping clocks when everything is predominantly bottlenecked by GPU memory bandwidth.
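
To put rough numbers on that: peak memory bandwidth is just bus width times effective data rate. A quick back-of-the-envelope sketch (the clocks here are illustrative, not any particular card's spec sheet):

```python
# Peak memory bandwidth (GB/s) = bus_width_bits / 8 * data_rate_GT/s.
# Illustrative numbers, not a specific card's spec sheet.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gtps

print(peak_bandwidth_gbs(256, 7.0))  # 224.0 GB/s for a 256-bit bus at 7 GT/s

# Doubling shader count leaves this ceiling untouched, so bandwidth-bound
# workloads gain little from the extra compute alone.
```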
 


In the 17 years I've been buying PC components and dealing with rebates, I've only had one fail to go through out of dozens, and it was my fault for missing an item required in the instructions. My most recent one was a $25 rebate card (AmEx), which I put towards another tech item. People who don't even bother with rebates, even if it's only $5, are the perfect consumers for companies offering rebates: they hope the customer doesn't send it in. If you regularly purchase items that have rebate offers, it adds up significantly over time.

 

If getting your $5 claim requires mailing in proof of purchase, sending it by registered mail (so the company cannot claim it never arrived) costs more than the MIR is worth, so it makes little sense to bother in the first place.

Much of the time, MIRs are precursors to price drops, new product introductions, or other vendors' discounts, so I usually end up waiting for the MIR discounts to translate into lower retail prices instead of bothering with MIRs.
 

Both AMD and Nvidia have recently introduced compression techniques that make this much less of an issue*. The GTX 980 is on a mere 256-bit bus; they could easily move up to a 384-bit bus, and 512-bit is certainly not unheard of either. On top of this comes stacked DRAM.

The compression thing has more to do with cutting costs and power consumption, the latter of which is not as important for desktops as it is for laptops. They certainly have room to grow on the desktop, whereas laptops are limited by battery life and cooling concerns.

*See in particular the R9 285, which performs slightly better than the R9 280 despite having much lower memory bandwidth.
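
For intuition, here's a toy sketch of the delta-compression idea (purely illustrative; neither AMD's nor Nvidia's actual scheme): store a tile's first pixel plus per-pixel differences, and if the differences are small they fit in far fewer bits than the raw values.

```python
# Toy delta color compression (illustrative only, not the vendors' scheme):
# keep the first value of a tile plus per-pixel deltas; smooth gradients
# produce tiny deltas that fit in a few bits, cutting bus traffic.

def delta_compress(pixels):
    """Return (base, deltas) for a flat list of channel values."""
    base = pixels[0]
    deltas = [b - a for a, b in zip(pixels, pixels[1:])]
    return base, deltas

def fits_in_bits(deltas, bits=4):
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return all(lo <= d <= hi for d in deltas)

row = [120, 121, 121, 123, 122, 124, 125, 125]   # a smooth gradient
base, deltas = delta_compress(row)
print(base, deltas)          # 120 [1, 0, 2, -1, 2, 1, 0]
print(fits_in_bits(deltas))  # True -> this tile could be stored compressed
```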
 

Yes, I know they improved things a fair bit in their newest iteration, but increasing the memory interface size is easier said than done: the solder balls under the GPU can only be made so small, and when the GPU core shrinks much faster than the die attachment area requirement does, cost scaling goes down the drain. That's a fair chunk of the reason they made efforts to reduce interface size in the first place.

Stacking memory with an ASIC works fine for 2-3 W SoCs, but it will require a lot more effort to work out on a ~200 W GPU: if you put the RAM on top, it considerably increases the GPU die's thermal resistance, and if you put it on the bottom, the RAM ends up operating at considerably higher temperatures, which brings its own set of troubles. You also end up with proprietary memory packages that only work with one specific GPU, which might not be too good for costs.
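
The thermal-resistance point is easy to see with a simplified series model (all the resistances and powers below are made-up illustrative values):

```python
# Simplified series thermal-resistance model (made-up illustrative numbers).
# Junction temperature = ambient + power * sum of resistances in the path;
# a DRAM layer between die and heatsink adds one more resistance in series.

def junction_temp_c(ambient_c, power_w, resistances_c_per_w):
    return ambient_c + power_w * sum(resistances_c_per_w)

# GPU die -> thermal interface -> heatsink, no DRAM layer:
print(junction_temp_c(30, 200, [0.05, 0.15]))        # 70.0 C

# Same 200 W stack with a hypothetical stacked-DRAM layer (0.10 C/W):
print(junction_temp_c(30, 200, [0.05, 0.10, 0.15]))  # 90.0 C
```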

Multi-chip modules would be more manageable, but that does not reduce the number of balls that need to go under the GPU die, and you still lose part of the card manufacturers' ability to distinguish their cards from all the others.
 
GPUs are not shrinking. The transistors are shrinking, at least when they actually get their die shrinks done on schedule, but the GPUs just get more transistors so the size doesn't really shrink over time.
 
Is it really that difficult to figure out where Intel's HD 4600/4400 stand in the hierarchy chart? It's been a year now since they launched. And Iris/Iris Pro?
 
I'd like to thank the PC game industry for making the GTX 980 a luxury rather than a necessity. Games that would put all that silicon to good use are a year (or two) away at best. I'd also like to thank the display industry for making 4K monitors either too expensive or too cheap to be considered a good use of my upgrade dollars. If Nvidia follows their game plan, the GM204 will be on the market until 2016, giving me ample time to upgrade.
 

They would be shrinking if IO pin count and density did not force them to keep or even increase the size, and then find ways to make meaningful use of that forced extra space. GPUs are already starved for memory bandwidth much of the time; simply doubling the amount of raw compute on the die to fill that space would go nowhere.

CPUs are much the same way: the CPU cores alone would be too small to put all the necessary balls under them to connect all the external stuff they need so the die size gets padded with tons of cache, IGP, chipset functions, etc.
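
A rough sketch of that pad-limit argument (the bump count and pitch below are assumed, illustrative values): a flip-chip die can't be smaller than the area its solder bumps physically require, no matter how far the logic shrinks.

```python
# Pad-limited die-size sketch (assumed, illustrative bump count and pitch).
# Minimum die area is roughly bump_count * pitch^2 for a full-area bump grid.

def min_pad_limited_area_mm2(bump_count, pitch_um=180.0):
    pitch_mm = pitch_um / 1000.0
    return bump_count * pitch_mm ** 2

# A hypothetical GPU needing ~3000 bumps for power plus a wide memory bus:
print(min_pad_limited_area_mm2(3000))  # ~97 mm^2 floor, regardless of node
```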
 

GPUs are usually not memory bandwidth starved, and what they use all that die space for actually adds performance. Take the R9 285, which performs better than the R9 280 despite a much lower memory bandwidth.

Or look at the GTX 980: it has a little old 256-bit memory interface despite a fairly sizable 398 mm² GPU. They could easily have ramped up the memory interface, but it simply wasn't necessary because 256-bit is plenty. For reference, Tahiti has a 384-bit memory interface on a 352 mm² GPU.
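
For a rough sense of the numbers (using the commonly published effective data rates for these cards):

```python
# Peak bandwidth = bus width / 8 * effective data rate, using the
# commonly published figures for these cards.

def gbs(bus_bits, gtps):
    return bus_bits / 8 * gtps

print(gbs(256, 7.0))  # GTX 980: 224.0 GB/s on a 256-bit bus
print(gbs(384, 5.0))  # R9 280:  240.0 GB/s on a 384-bit bus
print(gbs(256, 5.5))  # R9 285:  176.0 GB/s, yet it keeps pace with the 280
```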

And cache on CPU cores is pretty much a necessity; cutting it away would wreck performance regardless of the memory interface. Packing more stuff in there has to do with lowering costs and increasing efficiency, something that Intel in particular has really been pushing for the last few years.
 
I think there is nothing wrong with the tiers. If you pay some attention, you will see that the cards in each tier are in performance order from left to right. For example, the 970 is between the 780 and 780 Ti, which is correct.
 

They usually are.

The only reason the R9 285 manages to catch up with the R9 280 is that AMD invested more processing resources in texture compression and other bandwidth optimization tricks; the same goes for Maxwell. But those tricks can only go so far.
 


This is exactly what I'll be spending the next 2 months trying to judge and ultimately decide whether I buy a 980 now, or wait for the next big thing.

I really hope the next cards to come out are on a holiday-profit release timeline, and make my decision that much easier.
 
According to Tom's own benches and those at AnandTech, the 780M outperforms the 7970 Mobility by up to 15-20%, and yet they're on the same tier. The 280X Mobility is specced lower than a 780M in pretty much every respect, and yet it is going to be placed higher, since it's equivalent to a 7850.

The 780M outperforms the 660 in benchmarks, and somehow they're on the same tier. The 780M outperforms the 660 Ti and the desktop 7870 (Pitcairn), and yet somehow it is placed below both.

What's going on with the chart? You're ignoring your own benches and those of other sites that very clearly show that your hierarchy is massively compromised. If you need a 780M to test against the 660s, 660 Tis and 760, let me know. We can run a few tests.


I get that in a hierarchy like that one, cards within 10-15% of each other sometimes have to be grouped. But when the 780M outperforms the 660 and is placed at the same tier, outperforms the 660 Ti and is placed below it, is placed below the 280X Mobility despite being faster, is placed at the same tier as the 7970 Mobility despite being faster, and is placed below the desktop 7870 despite being faster, I have to wonder...

What's the point of the hierarchy chart?
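
For what it's worth, grouping within a tolerance band is simple to express. Here's a sketch of how such a chart could be built (the scores are hypothetical, not Tom's actual data):

```python
# Sketch of tolerance-band tiering (hypothetical scores, not Tom's data):
# sort by benchmark score and start a new tier whenever a card falls more
# than ~15% behind the current tier's leader.

TOLERANCE = 0.15

def build_tiers(cards):
    """cards: list of (name, score) pairs -> list of tiers (lists of names)."""
    ranked = sorted(cards, key=lambda c: c[1], reverse=True)
    tiers, leader = [], None
    for name, score in ranked:
        if leader is None or score < leader * (1 - TOLERANCE):
            tiers.append([])   # this card starts a new tier
            leader = score
        tiers[-1].append(name)
    return tiers

cards = [("780M", 100), ("660 Ti", 95), ("7870", 93), ("660", 80), ("7970M", 78)]
for i, tier in enumerate(build_tiers(cards), 1):
    print(f"Tier {i}: {', '.join(tier)}")
# Tier 1: 780M, 660 Ti, 7870
# Tier 2: 660, 7970M
```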
 
The easiest way to fix the chart is to bump the 880M up to the same tier as the 970M, since if you check NotebookCheck or GPUBoss, or run the tests yourselves, you'll see that the results (and the specs) are almost identical. The 780M should move up one slot to the 660 Ti's tier, since it only beats that desktop card by about 5%, or a bit more in memory-intensive apps. That would put it at the same level as the 7870, which it is also slightly faster than, and a tier above the 660/7970 Mobility, which it is *vastly* faster than. The new 280X Mobility is about 10-15% slower than the 780M, but it would also fit in the same tier.

Everything fits.

That would dramatically improve the mobility side of your chart.
 

No, they aren't. If they were, Nvidia would have gained a lot from using a 384-bit memory interface for the GTX 980 and 970. They didn't, because it wasn't worth it.
 