Nvidia GeForce GTX 1050 & 1050 Ti Review

^ AMD broke many promises in the last decade, especially "performance" promises.
They (AMD and NV) do have some control over pricing, but definitely not during a launch when demand is too high.
Eventually prices settle at some reasonable point, since it's a free market. Look at the GTX 1070: we are four months after launch and prices are already below $400 for decent cards. Of course, there is a low point that can't be crossed without losing money. Eventually, those with patience get their hardware at a fair price.
 

InvalidError

Titan
Moderator

They have control over the contract price for the GPU chips, which is the single biggest variable determining how much the minimum cost of an assembled board is going to be. Most of the support components are off-the-shelf generic parts with relatively fixed costs, which is why prices tend to bottom out within a few bucks across vendors.

Until supply catches up with demand, though, top pricing is a free-for-all, with individual vendors and retailers charging whatever the market will bear. This catch-up appears to be taking much longer than expected with the current generation. Are 14/16nm yields that bad, or is demand exceptionally high? I'd guess a little of both.
 

sagematt

Commendable
Mar 29, 2016
8
0
1,510
0
The review shouldn't have focused on a 1050 Ti card without a 6-pin connector. Other reviews have tested 1050 Ti cards with 6-pin connectors, and those achieve decent overclocks. Additionally, cards are out now and there _are_ several models selling at MSRP, which was another thing you were extremely skeptical about.

I hope I see a review from you guys soon for a 1050 Ti with 6-pin connector.
 

RedJaron

Splendid
Moderator
That's because you don't understand what has to go into a product review like this, so I'll explain.

Highly anticipated products, like a new GPU generation launch, usually involve the manufacturer sending products to reviewers in advance of their actual market release. It almost always involves an agreement that the review will be posted on the day the product is released, along with an NDA that the reviewer can't say anything until release day. The one big variable is how far in advance the reviewer gets the product and how long they can keep it. Review units are sometimes limited in quantity, so it's not uncommon for a reviewer to have only a day or two with one before they must send it back, or off to another reviewer.

So in essence, this comes down to time. The reviewer only has so many hours to examine the product, take pictures of it, sometimes disassemble it, and run it through all the benchmarks. A single game benchmark pass can take up to five minutes, and it's fairly common to make multiple passes at the same setting to average out data spikes. So to keep things simple, let's say it takes 10 minutes to do a complete pass for one game at a given resolution and detail setting. That means 80 minutes for a single pass across all eight games used here, and that doesn't count the time it takes to record the data or swap between games.

Adding another resolution or detail setting not only means you have to run the benchmark again, it also means you have to spend time adjusting settings in between runs. You can reduce this somewhat using scripts to fire off the benches quicker, but it still takes about an hour and a half for each full pass. That may not seem like much, but when you're on a tight clock, every minute counts. If you've only got two working days with the card, photos can easily take four hours, power consumption tests can take a few hours, and game benchmarks will be at least three hours. You can see how time can grow short.

And don't forget that every test done on the new card needs to be replicated on every other card it's compared against. As stated here, new drivers have been released and it was decided to use them. That means you can't use the historical data from the other cards; they need to be retested. So each of the other cards will need at least three hours of new tests ( more if you want to check their power consumption again with the new drivers ). Seven other cards are compared here, so that means at least 21 additional hours.

You might be able to get some other work done while those tests are running, and those cards wouldn't need to be done during the two days you have the one review sample, but you're still adding days to the total review time. Combined with the time to actually write it up, edit it, crunch the numbers, edit the product photos and comparison graphs, and get it all into the site's content manager, do you have any idea how much lead time you'd need to get everything ready to publish on the NDA expiration date?
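The time budget described above is easy to total up. A minimal sketch using the figures from this post (ten minutes per game pass, eight games, seven comparison cards at roughly three hours each); the function names are mine, not anything from the review:

```python
# Back-of-the-envelope benchmark time budget, using the figures above.
GAMES = 8
MIN_PER_PASS = 10  # one game at one resolution/detail setting, averaged passes included

def single_sweep_minutes(games=GAMES, min_per_pass=MIN_PER_PASS):
    """Minutes for one full pass across all games at one setting."""
    return games * min_per_pass

def retest_hours(other_cards=7, hours_per_card=3):
    """Hours to re-run the comparison cards on the new drivers."""
    return other_cards * hours_per_card

print(single_sweep_minutes())  # 80 minutes, as stated above
print(retest_hours())          # 21 additional hours
```

Every extra resolution or detail setting multiplies the first number again, which is why the retest burden grows so quickly.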

Performance value comparisons generally don't work for products that don't compete with each other. Why would someone looking for GTX 1070-level performance also be considering a 1050 Ti, a card that costs almost a third as much? People tend to shop for computer parts in two different ways: how much performance can I get for a set amount of money, and how much money do I need to spend to get my desired level of performance?

The benchmark detail levels have been selected to show how far the cards can be pushed and still deliver satisfactory gaming performance. This then acts as a common factor among all the cards to allow you to fairly compare them.


1366x768 is probably the most common resolution for laptops from the last five years, particularly the lower-budget models. So how many of those survey users were on laptops vs. desktops? I'm willing to bet the number of desktop gamers still using 1366x768 as their primary gaming resolution is quite low. Some APUs and iGPUs deliver good-enough performance at 720p; all but the lowest-end desktop cards have more power than that.
 

rwinches

Distinguished
Jun 29, 2006
888
0
19,060
30
Guru3D graphs show only average frame rates, which can be misleading.
A lot of results for cards tested earlier are with older drivers, and follow-ups with updated drivers are rare. (Yes, I know a card may get returned, but I'm sure others are always available.) It is also always disappointing when cards like the RX 480 are missing from graphs and you have to go back to its review for comparison, while GTX 1060 cards are included. I noticed a lot of reviews used MSI Gaming X OC cards, too.
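The point about averages hiding stutter is easy to illustrate. A minimal sketch with made-up frame-time traces (not measured data from any card): two runs with identical average fps but very different worst cases.

```python
# Why average fps alone can mislead: two hypothetical frame-time traces
# (in milliseconds) with the same average fps but very different minimums.
def fps_stats(frame_times_ms):
    """Return (average fps, minimum fps) for a list of frame times in ms."""
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    min_fps = 1000 / max(frame_times_ms)  # slowest single frame
    return round(avg_fps, 1), round(min_fps, 1)

steady = [20.0] * 10           # constant 20 ms per frame
spiky = [10.0] * 9 + [110.0]   # mostly fast frames, one big stutter

print(fps_stats(steady))  # (50.0, 50.0)
print(fps_stats(spiky))   # (50.0, 9.1) - same average, visible hitching
```

A graph plotting only the first number of each pair would rank both runs identically, which is exactly the complaint about average-only charts.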

It is encouraging that Tom's is considering having alternate test beds that reflect real world use for testing cards that the majority of users can afford.

Game engine presets and sliders are there for a reason. Engines stay ahead of current HW capabilities as a rule. Too many reviews use words like 'sacrifice' when sliders or feature switches need to be adjusted. Most gamers just want smooth gameplay and have little or no interest in wind blowing through a character's hair.

1050 and 1060 series cards don't support SLI, so there's no future upgrade of slapping in a used/refurb/discounted card.
 

Onus

Titan
Moderator
I was very happy with the mostly-ultra settings I got in GW2 with an HD 7870 (which became the R9 270X). The GTX 1050 Ti appears to blow that card away, so it looks like a good long-term choice for me. I've spent more from time to time by buying too early or getting more than I needed, but once again it appears that a great single-monitor gaming experience should not require a graphics card more expensive than $175-$200, and I would not be suffering by any stretch with a $150 card.
 

InvalidError

Titan
Moderator

People who buy a 1050 aren't very likely to have bothered with a motherboard with two x16/x8 slots, and with the 1060 having 50% more of everything on board, a used 1060 will be a much better upgrade than 1050 SLI even if that were possible.

Also, with DX12, explicit multi-adapter has enabled developers to use whatever GPUs are available, at the expense of having to manage the GPUs themselves. That and diminishing returns on increasing driver development effort are the main reasons why Nvidia scaled back on implicit multi-adapter support. I wouldn't be surprised if Nvidia killed off implicit multi-adapter support a few more generations down the line - SLI/CF already require some degree of explicit support for best results anyway.
 
I've probably built my last SLI rig, after having had one since the Voodoo 2 days: poor scaling, crappy scaling support (or none at all) at a game's release, etc. Like many people, I started with one card and then bought a second one a month or two later as budget allowed.

The real driving force for me even getting SLI was when I upgraded monitor resolutions (1280x1024 -> 1600x1200 -> 1920x1080 -> 2560x1440). The next move would be 4K, but I've held off on that indefinitely until single GPU power can run it at 60+ FPS (specifically keeping minimum FPS at or above 60FPS in games).

Here's to hoping the 1080Ti will be able to do it. Currently, Titan X Pascal can barely average that.
 

techy1966

Reputable
Jul 31, 2015
140
0
4,680
0
"I fundamentally disagree: artificially running GPUs into the ground with unrealistic settings nobody would ever play the games at using the GPUs being compared makes no sense, since it over-emphasizes bottlenecks that wouldn't be there, or would at least be substantially less significant, under a more GPU-appropriate detail level."

It is important to see what a GPU can do when pushed, because most of the reviews around the net put the Nvidia 1050 cards in a higher standing than they deserve. When pushed, as in Guru3D's review, a different picture emerges: those cards have very little left to give the customer in the future. The same review shows the AMD cards hold up a lot better when pushed to the same detail levels, which means they have more memory bandwidth and GPU resources in reserve to handle those higher settings than the Nvidia 1050 series cards.

I fully understand that no one would subject these cards to those settings in real-world gaming and run at 30-40 fps, but it's just as important to know how much performance is left in a card you may be buying, so it will serve you longer into the future. I still say including both sets of benchmark data is important: the torture-test results as well as more sane settings, so you as a buyer know what you are getting and which GPU will serve you best going forward. From what I have seen, the Guru3D tests show the AMD 460 and 470 to be the more future-proof cards. Most other reviews show the 1050 to be faster than a 460 and the 1050 Ti to be slightly slower than a 470 or matching it, while Guru3D's review shows the 1050 and 1050 Ti sitting at the bottom of the stack most of the time with the lowest performance. So try to tell me that, seeing only the other reviews, we as buyers are getting the full picture of what to expect from the 1050 cards when pitted against the same class of AMD cards.

We are not getting the full picture, because at the end of the day Guru3D's review clearly shows the two AMD cards have more fuel left in the tank, which in turn will allow the end user to turn some settings up a bit higher and hold on to the cards a bit longer as new games come out. I am not an AMD or Nvidia fanboy; I have two gaming systems, one with overclocked 390Xs in CrossFire and the other with GTX 980s in SLI. Both are just as much fun, though I do find Nvidia has an edge with drivers and faster game support, but with Crimson that edge is slowly going away.

I think a proper review has to show both sides of the data, all-out raw performance and the relaxed, real-world settings, to paint a clear picture. That way you, as a buyer, get the full story on the GPU you are about to buy and know what to expect from it. By showing only tweaked settings you are not actually reviewing a GPU; you are just letting the end user know how it performs with hand-picked settings, which is wrong. It needs both sets of data to be a proper review, or don't do the review at all. I have repeated this over and over, so maybe it sinks in.. lol
 

Onus

Titan
Moderator
I realize not everyone cares (especially people building new), so I try not to harp on it, but the GTX1050Ti may not even need a PCIe power cable, and the RX 470 needs an 8-pin. Imho that's a huge difference, even if it is not a factor in gaming performance.
 

InvalidError

Titan
Moderator

You already know beyond any reasonable doubt that they have "very little left to give to customers in the future" from the simple fact that the cards got benchmarked at reduced details to produce playable frame rates on the main feature GPUs. How much more "proof" could you possibly need? This alone tells you that if you want to play at higher details or resolutions, then these GPUs which are already near their playable limit are no-go for you and you need to look one or more tiers higher.

Would torture-testing GPUs under unrealistic loads for what they are change that conclusion in any way, shape or form? No. The only people who might benefit from that are higher-end shoppers looking for confirmation bias to ward off buyer's remorse.
 

RedJaron

Splendid
Moderator
While not a bad idea in itself, how this is tested is crucial. Yes, all cards have a drop off point, however not all cards drop off at the same rate. A single data point doesn't reveal anything about that curve. A single data point, especially at the "pushed too hard" phase can be extremely misleading about performance on slightly scaled back settings. A card might be great for high 1080 settings but floored with a jump to 1440. Conversely a card might look just ok at 1440 that actually is mediocre across the board. If you're only showing the ultra high-end, then you'll miss those other more important details.

As I said, while I appreciate benches across multiple settings, I can understand if the reviewer is crunched for time and can't include them. If you can only run a single bench, I think it's much more useful to show how the card performs under regular use, or what settings it can run at acceptable framerates, rather than going for broke and seeing how it does under a workload it wasn't designed for.
 

Onus

Titan
Moderator
I agree, although I think settings higher than "Medium" should be used. I think most people would want to play on at least "High" settings. It seems to me it's the last few tweaks at UltraMaxOhWow! that bring cards down.
 

InvalidError

Titan
Moderator

Would you play a game at "High" settings if that meant a 45fps average and 25fps minimum? Jumpy frame rates make me nauseous within 10-15 minutes. I'd nuke whatever settings I have to in order to maintain a steady 45fps absolute minimum for anything I intend to play for more than a few minutes at a time.
 


And what information do you have that you base that opinion off of? Don't tell me because it favors Nvidia over AMD. As one of the early release "investors" of PCars and having worked with developers on it prior to release, you couldn't be more wrong.

In fact, DiRT Rally is easier on a GPU than PCars. A full field of cars at night in the rain in PCars brings any GPU to its knees with quality settings maxed out at 1440p. It was one of the major reasons I got a second 970 for SLI, since with a single 970 at 1440p, minimum FPS would dip into the 30s.

Finally, PCars is still extremely popular with the millions who own it globally. Never mind that PCars 2 is going to be released next year based on the same core graphics engine with modifications. If there is ANY wasted space in the benchmarking here, it's the six-year-old DX9 game StarCraft II, which can be run successfully on an iGPU.

 

cicchis0

Commendable
Oct 28, 2016
1
0
1,510
0
"Would you play a game at "High" settings if that meant a 45fps average and 25fps minimum? Jumpy frame rates make me nauseous within 10-15 minutes. I'd nuke whatever settings I have to in order to maintain a steady 45fps absolute minimum for anything I intend to play for more than a few minutes at a time."

I sure would. I watch Blu-ray and film at 24 fps too.
 

turkey3_scratch

Polypheme
Herald


Films have motion blur.
 

InvalidError

Titan
Moderator

Movies have a perfectly constant 24/25/30/48/50/60fps, which makes them much easier to watch. Movies also have very deliberate and measured pans and tilts, unlike gaming, which tends to involve broad, spontaneous movements. Also, when watching a movie you aren't exposed to cognitive dissonance (or whatever the mismatch between what you do and your sensory feedback is called, if that's not the correct term), since you play no active role in what you are watching. Not really comparable.
 