AMD Radeon RX 480 8GB Review


Samer1970

Admirable
BANNED


"as usual" , "you fail" , "your word vs mine" , "do you even think before pressing the button" , "you know very little" ...

it is not you vs me , and when you talk with others on forums please avoid such language .

Thank you very much .

I said what I wanted to say without using such language . so please do the same . we dont agree fine , but it stops at that.

I am talking about a section in the market that exists , you insist on making it a big issue out of nothing as if you are winning some kind of a battle here , dont exaggerate things please and choose your words when you talk with others .

As for productivity , it depends on what you are doing , using 3 groups of programs one group on each screen is way better than splitting a single wide screen into 3 parts .

oh and I dont need to show evidence . because I dont need to convince you at all. it is not a big deal , just a section of the market that exists and I know them around me . does not need to reach this okay ?

have a nice day
 

Samer1970

Admirable
BANNED


I agree that the bezel is a problem, but it's not a big deal for some, and it also depends on which game you are playing.

As for desk space, well, yes, they use 20-24 inch monitors for this (cheap, about $100 each).

30 inch screens are good; I always used one in the past (the Dell 1600p one), but they don't span the horizontal view in games, so you just get a bigger screen, not a wider view.

Try flight simulators and racing games on 3 screens; they are the best games for 3-screen setups.


 


^^This. For the first time in four GPU generations of SLI ownership, I am seeing diminished to zero returns in some of the games I play or have played. It is getting very frustrating to have spent $700 on two GPUs and not get anywhere close to $700 worth of performance out of them. By diminished I mean poor scaling, at under a 75% performance improvement, and by zero I mean no official SLI support at all (whether or not SLI support comes later officially with a patch, or some hack file is made by a private party).

So this fact - that multi-GPU support is being pushed back onto the game developers in their core programming, and they are failing at it or not even caring - is a red flag of things to come over the next few years. And that is why my 970s will most likely be the last SLI GPUs I own; instead I will move to a single high-end GPU solution (1080Ti) when upgrading from 2K to 4K next year.

So here's a hypothetical question for Nvidia to ponder: will they make more money on a single 1080Ti purchase, or on two 1070s bought by a customer? One has to wonder what the OEMs think about this as well, and of course the same goes for AMD and its OEMs.
 

InvalidError

Titan
Moderator

The 1070s are either crippled 1080s or 1080 rejects, just as the 1080Ti will be either a crippled Titan Xv2 or a reject. Either way, Nvidia will thank you for taking their lower-quality dies off their hands and contributing to the production volume that makes binning for their highest-end GPUs possible.
 

Samer1970

Admirable
BANNED


Are you sure about that? I mean, this is unlikely...

That would mean they have a huge percentage of rejects, and I doubt that is the case; you can't cover the demand using rejects.

And why would they cripple a healthy GPU? That would mean there is no real 1080, no extra cost, nothing; the 1070 is just a 1080 with disabled parts? I don't get this. Can you explain? Do they make a full 1080 and then cripple it from the BIOS?
 

InvalidError

Titan
Moderator

Look at the 1070 launch review. The 1070 is a 1080 with one of its four GPU clusters disabled.
http://www.tomshardware.com/reviews/nvidia-geforce-gtx-1070-8gb-pascal-performance,4585.html

Some of those 1070s may have a defect that forced Nvidia to disable the cluster to salvage the silicon; others may have been binned as 1070s for failing to meet some of the 1080's binning parameters, with one of the clusters, possibly the worst-performing one, fused off in the process. If demand for the 1080 is too low while 1070 demand is high, some 1080s may get crippled simply to meet 1070 demand.

Nvidia cannot take the chance of people or AIBs discovering a way to re-enable that disabled 1070 cluster, so the disabling mechanism is most likely one-time-programmable bits inside the chip.
 
I always choose my words carefully, thank you. For the last time, please stop with the strawmen. No one is saying the mid-range SLI/CF market doesn't exist. Many of us are saying it's a small sliver of the dGPU market, and one with questionable profitability. You have been going on about how this is a huge problem with the 1060 and how NVidia made a big mistake. If anything, I think it's demonstrable that you are the one making big issues out of nothing and exaggerating things.

If you are not trying to convince or persuade anyone, why do you even care what someone says about this? Why dedicate so much time and energy to posting about it? However, if you're going to participate in any form of debate, I think you'll find it hard to have your ideas taken seriously if you don't bother to back them up.

It's not unlikely at all. Are you familiar with what is often referred to as silicon yield or the idea of binning? For anyone who isn't:

Silicon wafers aren't all perfect. Often there are small imperfections in the crystal lattice. These cause issues with the lithography so that some transistors or pathways don't work right. Those defects can often be fused off so they're never used, allowing the rest of the chip to still function, albeit at lower capacity. The number of usable dies you get out of a wafer is the yield. This is why smaller lithography (14nm vs 22nm vs 32nm, etc.) is helpful: smaller dies mean you can pack more onto a wafer, so a given number of defects ruins a smaller fraction of them, which improves your yield. It costs a lot of money to set up each unique lithography run at a foundry. So to save costs, since defective dies are a given, CPU and GPU makers design various SKUs off the exact same silicon with different configurations of which resources are enabled or disabled. You can look back at numerous CPU and GPU reviews to see this.

After the dies are made, they're tested to make sure they work correctly and also to see how well they perform. Dies that can safely take higher voltage or that are stable at higher clock speeds are tossed in one bin, while lower performing parts go in another. For example, the i7 6700T, 6700, and 6700K are all the exact same chip, made in the exact same process, but only the best performing ones get branded 6700K. A die that performs poorly, but is otherwise defect-free, may be branded as a lesser product, e.g. a GTX 1080 die that for various reasons requires too much voltage to power the entire die at 1080 clock speeds may be marked as a 1070, since using fewer cores lowers heat and total power consumption. So some of the cores / SMX / ALUs may be disabled in some way. It's not uncommon for these artificially disabled chips to be unlocked by end-users to get a "free" upgrade. It was particularly common to unlock a fourth core on some old Phenom CPUs or to unlock a Radeon 6950 (I think?) into a 6970. AMD, Intel, and NVidia have gotten better at permanently disabling dies in the last few years, though.

This whole process of binning and disabling resources allows a manufacturer to lower costs. It's cheaper to make 10,000 dies of two types than it is to make 5000 dies of four types. It also lets the manufacturer sell otherwise defective product for a profit rather than throw it away.
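If it helps to picture the sorting step, here's a tiny sketch of the idea in Python. The defect rates, quality threshold, and SKU names are all made up for illustration; this is just the binning logic in code form, not how NVidia or anyone else actually does it.

```python
import random

# Toy model of binning: every die coming off a wafer has some number of
# defects and a "quality" score (how fast/efficient it is).  Fully working,
# fast dies become the top SKU; working-but-slow dies and dies with a single
# repairable defect get a cluster fused off and become the cut-down SKU;
# anything worse is scrapped.  All numbers here are invented for illustration.

def bin_die(defects: int, quality: float) -> str:
    if defects > 1:
        return "scrap"         # too damaged to salvage
    if defects == 1:
        return "cut-down SKU"  # disable the bad cluster, sell as the lower model
    if quality >= 0.8:
        return "top SKU"       # defect-free and fast: highest bin
    return "cut-down SKU"      # defect-free but slow: still binned lower

random.seed(1)
wafer = [(random.choices([0, 1, 2], weights=[60, 25, 15])[0], random.random())
         for _ in range(200)]  # pretend one wafer yields 200 candidate dies

counts = {}
for defects, quality in wafer:
    sku = bin_die(defects, quality)
    counts[sku] = counts.get(sku, 0) + 1
print(counts)  # rough split of one wafer into top SKU / cut-down SKU / scrap
```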

EDIT: Looks like IE already covered it while I was typing.
 
I don't know many who have done this; I can count them on one hand and still have fingers left over. It's just not normal for most budget gamers at all.
Actually, I've not seen that at all. When they get the holiday money, they usually upgrade the video card to the new flavor or add an SSD. I can't say I've seen demand for someone buying the same VGA card they bought a year before plus two monitors... nope, never seen it.


Well, if it exists, I think it may be limited, very limited, to maybe just you and the group you hang out with.
Most gamers tend to buy in the 23-27 inch range. I was expecting more of the 30-inch-plus curved models to sell, but they are not moving anywhere near as well as the regular-style models with resolutions around 1920x1080. That is what sells most these days, in case anyone is wondering.

True, people use their computers for other things besides gaming, but keep one fact in mind: people don't run out to the computer store and say "hey, give me the hot new VGA card, I've got a spreadsheet to work on or a letter to write." No! They walk in and say "hey, I want to play that new game at max settings; what is the best VGA card that can do that for $250?" Gaming is what is pushing the development of these new cards and technologies. Without it, would multi-GPU rigs even exist? Would 4K monitors be as popular? Would VR development be taking place? Would home PCs still exist? Perhaps, but the specs wouldn't be pushed like they have been, and the kind of graphics we have today wouldn't exist if not for the gaming segment. See how much money a big game title brings in and compare it to a big movie release. I think you'll be surprised.
 

The sort of GPU sorting you mention is common, and it probably started with CPUs, which are handled the same way. Remember when some motherboard manufacturers had the ability to unlock cores on AMD CPUs? The unlocks didn't always work, since some of those cores were turned off because they were unstable or just bad. You state exactly what I have been told about how they do this for GPUs and CPUs, and you explain it well.
+1 IE
 

InvalidError

Titan
Moderator

Even earlier than multi-core CPUs, Intel had Pentiums and Celerons based on the same die, where the Celerons were lower-clocked Pentiums with most of the SRAM cache disabled, either due to defects or performance issues, or simply crippled to meet demand.
 


Yep, I remember the CPUs were (and still are) tested to see what speed they could run at stably and marked accordingly. I remember getting some AMD CPUs that were clocked down to fill a need, so by the lot number we knew we could run them stably at a higher clock; it was great. I bought some for my home PCs and used them for years.
 

Samer1970

Admirable
BANNED


So they could sell us the GTX 1080 at the same price as the GTX 1070 if they chose to... without any real cost on their side... wow!

 

InvalidError

Titan
Moderator

That isn't how it works. If they sold the 1080 at the same price as the 1070, they would have to write off the 1070s, since nobody would buy them, and they would need to recover those lost sales by increasing the 1080's price by an equivalent amount. You'd still end up with a 1080 priced over $500, but now without the 1070 in between.

Greedy AIB partners, distributors, and retailers taking a 20-50% combined markup over MSRP due to limited availability in some markets don't help things either.
 

Samer1970

Admirable
BANNED


What is the point? The cost for them is the same. They ARE making a GTX 1080, THEN crippling it, THEN selling it cheaper as a 1070...

Well, don't cripple it, please... it is the same cost for them... how is this legal anyway?

Sell it for $500 and make another chip for $380... instead of asking $600 for the same CHIP... I would not miss the 1070 if I could buy the 1080 for $500 instead of $600.

($380 + $600) / 2 = $490 (non-Founders)

($450 + $700) / 2 = $575 (Founders)
 
As I already said, it doesn't work that way. Every unique chip made is extremely expensive. You need to factor in the R&D cost of designing and proving each specific architecture and chip layout, the cost of tooling the foundry for each specific lithography pattern, the volume discount lost by not making as many dies of the same kind, and the cost of extra testing and binning paths for each distinct chip. If Intel, AMD, NVidia, and the rest made a unique chip for each SKU, or at least for each product series, all of those costs would increase. The current method allows them to use the exact same chip for multiple products, lowering all the costs listed above.

If NVidia made one specific chip for the 1080 and one for the 1070, then every single defective 1080 die would be wasted. They couldn't do anything with it. Yield percentage drops, thus manufacturing costs go up. The same would happen with every other product line.

Conversely, if they can salvage the defects and make some money off of them by selling them as 1070s, then they're making money on things that would otherwise be a total write-off, meaning they don't need to make as much off the perfect dies in order to still pull a profit, so the perfect dies can also drop in price.

As for intentionally crippling some dies, that gets a little trickier. These are otherwise perfectly functional 1080 chips. But except in the case of overwhelming demand for the lesser model, the crippled dies that become 1070s are usually the dies that got binned lower. So while they might technically cost the same to manufacture, they're not the same grade, thus they're not worth the same.

You also need to keep in mind margin, the number of products sold, and - most importantly - competing products. You're saying that if NVidia cut the 1070 and just offered the 1080 for slightly less money, it would be better for NVidia and for consumers. Probably not. As products become more expensive, the number of them sold tends to drop off almost exponentially. Let's look at market segment sizes. You have a few people in the top-shelf segment who will spend $600 or more on a GPU. You have more people in the $400 - $500 range. You have a lot more people in the $300 - $400 range. You have tons of people in the $200 - $300 range. And you have millions upon millions in the sub-$200 range.

Now, if you cut the X70 cards and sell the X80 for $500, yes, you'd get some new customers from the $400 - $500 range stretching their budgets to get the high-end card. However, you now have a big gap between the X80 at $500 and the X60 at $300 or lower. You might get a few people to stretch past their $400 limit to get the X80, but most are either going to get the X60 or, more likely, go with a competitor's product in that $300 - $400 range that beats the X60. That's a LOT of sales you're missing out on. This is why the 970 was so successful. It offered great performance at a price that was just low enough that a lot of people who normally spent ~$250 on GPUs made that extra jump into the $350 range to get it. The 1070 will do the same thing. It gobbles up sales from all the people who aren't willing to shell out for the highest-end GPUs and also tempts a few from the mid-range to splurge a little more to get a big jump in gaming performance.

NVidia has analyzed all of this, and it knows exactly how many of each GPU at each price point it needs to sell to break even or to make a profit. It knows that the number of extra sales it would get by dropping the 1080 to $500 would not offset the money lost by not even offering the 1070. If the 1070 is in high demand, it makes more sense to sell an intentionally crippled die for less profit than it does to have fully enabled dies sitting on a store shelf. There is nothing illegal or unethical about that.
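To put rough numbers on that argument, here's a back-of-the-envelope sketch. Every unit count and per-unit margin below is invented purely to illustrate the shape of the tradeoff; none of them are real NVidia sales or margin figures.

```python
# Hypothetical comparison of the two line-ups discussed above.  All unit
# counts and per-unit profits are made up to illustrate the argument.

lineup_with_1070 = [
    # (product, hypothetical units sold, hypothetical profit per unit in $)
    ("1080 @ $600", 1_000_000, 250),
    ("1070 @ $380", 3_000_000, 120),
]

lineup_without_1070 = [
    # a cheaper flagship picks up some extra buyers from the $400-$500 crowd...
    ("1080 @ $500", 1_600_000, 150),
    # ...but the big $300-$400 segment buys the X60 or a competitor's card instead.
]

def total_profit(lineup):
    return sum(units * per_unit for _, units, per_unit in lineup)

print(total_profit(lineup_with_1070))     # 610,000,000 with both SKUs
print(total_profit(lineup_without_1070))  # 240,000,000 with only the cheaper 1080
```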
 

InvalidError

Titan
Moderator

It IS NOT the same cost.

Let's say a 300mm wafer can produce 200 complete 1080 dies. Out of those 200 dies, you may have:
- 30 exceptional quality dies that get sold at a premium for top-tier 1080 boards (usually the dies closer to the middle of the wafer)
- 70 fully working dies that meet regular 1080 requirements (dies further out from the middle)
- 40 fully working dies that fall short of the normal grade 1080 performance, power or thermal profile requirements (dies closer to the wafer edge)
- 40 dies with non-fatal defects randomly distributed across the wafer (the affected circuitry can be disabled and the die sold as a lower-end chip)
- 20 dead dies

Without the 1070, Nvidia would have to scrap the sub-par and salvageable defective 1080 dies, which means that the cost of a wafer that produces 180 usable dies (100x 1080 + 80x 1070) would now need to be spread across the 100 sellable 1080s. This would increase the cost per die by 80% and you'd still end up having to pay around $600 for a 1080.
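If you want to check that 80% figure yourself, the arithmetic is just one wafer's cost spread over fewer sellable dies. The wafer cost below is a made-up placeholder; the die counts are the illustrative ones from this example.

```python
# Spread one wafer's cost over the sellable dies, with and without a 1070 bin.
# The wafer cost is an arbitrary placeholder; the die counts come from the
# illustrative breakdown above.

wafer_cost = 6000.0       # hypothetical cost of one processed 300mm wafer, in $
dies_1080  = 30 + 70      # premium-grade + regular fully working 1080-grade dies
dies_1070  = 40 + 40      # sub-par dies + salvageable defective dies
dead_dies  = 20           # scrapped either way

with_1070    = wafer_cost / (dies_1080 + dies_1070)  # cost spread over 180 dies
without_1070 = wafer_cost / dies_1080                # cost spread over only 100 dies

print(with_1070)                      # ~33.33 per die
print(without_1070)                   # 60.00 per die
print(without_1070 / with_1070 - 1)   # ~0.80, i.e. the 80% cost-per-die increase
```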

When overclockers speak of the "silicon lottery", it is exactly this: not all dies are created equal due to manufacturing process variations, die location on the wafer and countless other production parameters.
 

Samer1970

Admirable
BANNED



I understand this, but you said that they use fully healthy 1080 chips, in your example 140 out of the 200 (70%) chips, and cripple them and sell them as 1070s to meet demand...

If they didn't do this, it would reduce the 1080 price by exactly 35% (half of the 70%).

Also, one more thing: it seems the chips closer to the center of the wafer are the better ones, as you said, so why not just make each wafer smaller in diameter? That would surely give a better yield. Can't they just produce more, smaller, more nearly perfect wafers instead of big ones with more defects? It would be faster to print the smaller ones anyway.
 

TJ Hooker

Titan
Ambassador

Because the bigger the wafer, the more chips you get out of it. The cost to make a wafer increases more slowly than its surface area does, so it's economical to produce as large a wafer as is feasible, which is why the trend as technology improves has been toward bigger wafers.
 

Samer1970

Admirable
BANNED


Yes, but at the same time, printing the smaller ones is faster.

And being smaller or larger is a relative thing. I mean, how much larger? And how do you determine that? If "larger" means increasing the wafer size once the technology can make better chips at that size, then I am all for it... increase it at the right time.

Printing an A3 sticker sheet takes double the time of an A4. If the A4 gives better stickers, then print two A4s (just an example).


 

InvalidError

Titan
Moderator

You forgot that 40 of those 140 working 1080 dies fail to meet performance requirements within the intended power and thermal envelope. They may be otherwise fully working 1080s but they would either under-perform or require more power to meet baseline 1080 performance standards.

As for why bigger wafers: on top of what TJ said, there is the matter of geometry. Wafers are round, dies are square. The smaller the wafer, the larger the share of area near the wafer's edge that goes to waste. If a 300mm wafer can hold 200 complete dies, a 150mm wafer, which has 1/4 the area, might only yield 1/5th as many dies, so per unit of wafer area the big wafer gives you about 25% more usable dies, on top of the increased processing overhead of having to handle five times as many wafers to produce the same number of usable chips.
 

Samer1970

Admirable
BANNED


And why not just cut the wafer into small rectangles before printing, at whatever size gives a good yield? The better yield would cover the cost of the unused, cut-away wafer, because you could cut the rectangles down to the size at which you get almost perfect yields. Who cares about some unprinted small cut edges when everything inside will yield perfectly?
 


I have to say, I LOL'ed at your reply.

You have to trust that Intel, TSMC, GloFo, and Samsung really try to extract the most from their wafers, not only for their clients' benefit, but because they all compete, indirectly, through the end products that those wafers put out into the market.

Die harvesting is something they need to do so as not to lose a sale (nVidia, AMD, Intel, and any other chip provider); it puts something on a shelf to make money. The same goes for "artificial" harvesting, which is done to put products out there to sell. What good is it to have perfect dies if no one wants to buy them? You're better off handicapping them than keeping them unsold. Also, keep in mind that OEMs buy in bulk most of the time, so if your yield estimates weren't good, you have to compromise.

There are so many things behind why companies do harvesting that you will end up with more questions than answers :p

Cheers!

EDIT: Typos and improved sentences.
 

TJ Hooker

Titan
Ambassador

That doesn't make any sense. Cutting the wafer into smaller pieces wouldn't increase yields. Any imperfections in the silicon are already there after the wafer has been made, and any imperfections from the lithography process can still happen regardless of the size. The only thing that's changed is that you now have to work with a bunch of little pieces rather than one big one. And of course you'll have the same amount of wasted wafer space as if you'd printed them on a single big, circular wafer.
 

Samer1970

Admirable
BANNED


So it is the wafer itself, and not printing errors from printing away from the center. OK.

I will research this more; I am kind of liking the subject.
 