How Low Can Nvidia Go

rangers

Distinguished
Nov 19, 2007
http://www.theinquirer.net/inquirer/news/052/1050052/nvidia-chips-show-underfill-problems
WHEN WE TOLD YOU about the 'bad bumps' in the Apple Macbook Pro 15-inch models the other day, we expected it to end there.

But as luck would have it, Nvidia pointed us to a much deeper problem that not only affects at least some of the Macbook Pro notebooks, but likely every other chip Nvidia makes with a high glass transition temperature (Tg) underfill.

Technical Background
To understand this article, you really need to understand the underlying problem, so please read the three-part technical series (Part 1, Part 2 and Part 3) explaining what the problem is and where it occurs.

Nvidia's current problem stems from its half-hearted response to the earlier one: it changed only the underfill. Nvidia said that's what it did, both near the end of our initial Macbook article and in a later Cnet article here.

In that, Nvidia's Mike Hara said, "Intel has shipped hundreds of millions of chipsets that use the same material-set combo. We're using virtually the same materials that Intel uses in its chipsets." Note the word 'virtually'. The problem with this statement - other than his analogy being misleading and not addressing Nvidia's chip design problem - is that 'virtually' in this case means Nvidia missed a key coating component in its revised chip engineering design. It is NOT the same material-set technology as Intel, AMD, ATI and everyone else we talked with uses. Unfortunately for Nvidia, the coating material it left out is critical for the life of the chip.

Before we break out the electron microscope again, we feel the need to point out some of the things that Nvidia managed not to talk about in its purported explanation of the fix. It is sad to have to point this out, but underfill does not crack, bumps do. The bumps that cracked did so for a long chain of reasons that are explained in my earlier three-part article linked above.

Nvidia changed one of the steps in the chain, and seemingly none of the others. This might change the frequency of the bumps cracking, for either good or bad, or it might not. It might also introduce a new and much more serious failure mode, and that is what we believe Nvidia did.

Underfill is basically a glue that surrounds the bumps, keeps them from getting contaminated, and keeps them moisture free. It also provides significant mechanical support for the chip that is crucial for enabling it to withstand structural stresses, which are primarily caused by repeated heating and cooling cycles during operation.

There are two key properties of underfill, Tg and stiffness. Tg is the glass transition temperature, the temperature at which the material loses its stiffness. Instead of thinking about it melting, think about it turning to jelly. Stiffness is how hard the material is below that temperature.

One unusual property of underfill is that its Tg is related to its stiffness. If you want it to glassify at a higher temperature, it will be stiffer to start with. Lower Tg, softer initial stiffness. When making a chip, you have to balance between making the underfill so elastic that it effectively does nothing and so hard that it rips the chip apart on first power up. If you do things right, you make it as stiff as you can, but not too stiff. If the underfill is too soft, it won't provide enough structural support to relieve the strain on the bumps; too hard, and it will damage the underside of the chip itself.
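To picture that balancing act concretely, here is a minimal Python sketch of the design-window check described above. Every number in it is hypothetical, chosen purely to illustrate the trade-off, not a measured value for any real underfill or chip.

[code]
# Hypothetical illustration of the underfill stiffness window described
# in the text - none of these numbers are real measured values.

def underfill_verdict(modulus_gpa, min_support_gpa=6.0, max_safe_gpa=12.0):
    """Classify a candidate underfill stiffness (elastic modulus, GPa)."""
    if modulus_gpa < min_support_gpa:
        return "too soft: the bumps carry the strain and eventually crack"
    if modulus_gpa > max_safe_gpa:
        return "too stiff: strain transfers into the die's own layers"
    return "inside the window: supports the bumps without tearing the die"

# Higher Tg means a stiffer starting material, so raising Tg pushes a
# candidate underfill toward (or past) the upper bound of the window.
for modulus in (4.0, 9.0, 14.0):
    print(f"{modulus:>5.1f} GPa -> {underfill_verdict(modulus)}")
[/code]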

Passivation Layers
Let's move back to how a chip is made. You all know about a silicon wafer - it is a 300mm silicon disc that you essentially draw pretty patterns on. Modern chips have transistors etched into the silicon, with multiple layers of metal wiring stacked on top of them and on each other. You can see some of this in the microphotographs below.

Modern chips have multiple metal layers; eight is pretty common for devices like CPUs and GPUs. To prevent the layers from shorting each other out, there is a layer of insulation deposited between them - this is called the passivation layer. The resulting chip is a relatively thick hunk of silicon with a 16-layer or so sandwich on top that goes metal/passivation/metal/passivation and so on. It ends up looking like a Roman aqueduct in cross-sectional view.

An Intel 90nm CPU, sliced [cross-section image]

In a very simplified explanation, the better an insulator the passivation layer is - that is, the lower its dielectric constant (K) - the less the wires couple to each other and the faster the chip can work. This means low-K materials like Black Diamond are really useful, but they are also very fragile. You might have eight of these layers, and they have holes punched through them to allow communication between the layers. The structure isn't all that strong to begin with, and the holes don't help. On top of the sandwich, you have an outer coating, usually Silicon Nitride (SiN), which is basically a hard ceramic shell that protects things.
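For a rough sense of the payoff, wire-to-wire capacitance scales roughly linearly with the dielectric constant of the insulation, and the interconnect's RC delay scales with that capacitance. The quick Python sketch below uses commonly quoted ballpark K values, not process-specific data.

[code]
# Why low-K passivation is worth the fragility: capacitance (and thus the
# interconnect's RC delay contribution) scales roughly linearly with K.
K_SIO2 = 3.9           # conventional silicon dioxide, commonly quoted value
K_BLACK_DIAMOND = 3.0  # carbon-doped oxide (Black Diamond), rough ballpark

reduction = 1.0 - K_BLACK_DIAMOND / K_SIO2
print(f"Interconnect capacitance (and RC delay) cut by ~{reduction:.0%}")
# ~23% - the speed gain that makes the fragile low-K films worth the risk.
[/code]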

Remember, these devices are called flip-chips because when they come out of the fab, they are flipped over, and the bumps go on what was the top. This is then covered with underfill and soldered to the substrate, the green fiberglass thing that most people think of as a 'chip'. The former top during fabrication is then the bottom after packaging, and the underfill touches the substrate and the SiN layer.

Because the SiN layer is pretty stiff, any strain on it will be transferred into the layers of the chip itself fairly directly. If there is too much strain, the layers of the chip peel apart and you have what is called catastrophic inter-layer delamination, and that kills the chip even deader than cracked bumps.

This means you would have to change the passivation material to a stronger substance to take the stress. Unfortunately, the passivation layer isn't just an option you can readily swap out on an already designed chip. Different choices of passivation material have cascading effects on the chip design and manufacturing process, and this is complicated by the fact that there aren't many viable choices to begin with. What you end up with is a limit on the stiffness of the underfill. This is why Nvidia didn't just crank up the underfill Tg a year ago - doing so has very serious consequences, most of them fatal to the device, and there are limited underfill options for a given passivation layer material.

A good analogy is a light bulb and a steel plate - light bulbs are fragile, steel plates are not. If you hit a light bulb with a hammer, you get lots of little pieces, but a steel plate will shrug it off. If you put a steel plate on top of a light bulb, carefully, and hit it with a hammer, you will not damage the plate, but the bulb will shatter just as if you hit it directly. This is very similar to how the strain within a chip assembly gets transferred, and the chip is basically a multi-layer light bulb and steel plate sandwich.

Polyimide Layers
Luckily for chipmakers, there is a third option that allows you to have a fairly stiff underfill and not tear things apart. It is called a polyimide layer (PI), and it is a relatively thick - we are talking µm here - coating that you put on top of the last passivation layer. The PI layer is kind of rubbery. It absorbs some of the strain so the passivation layers don't have to, and it also distributes it over a wider area.

In essence, the PI layer simply protects the chip more. This allows you to use a stiffer underfill and not tear things apart. Notice I said stiffer, not solid steel. If you go too far with a stiffer underfill, you will transfer too much strain, and the chip will still die. The PI layer gives you a bit more leeway, taking more stress off the bumps, but you still have to choose very carefully and test the results to an amazingly high degree.

In the Cnet article, Hara said Nvidia changed the underfill, and we will assume that he meant Nvidia stiffened it, not made it softer. Softening it would only increase the problems they had with bump cracking, and while we may not hold Nvidia engineering in all that high regard, we can't assume they are abjectly stupid. So, Nvidia changed the underfill to a more 'robust' version, and didn't change anything else. We actually believe this story, mainly based on the parts we have dissected.

All is well, right? Ride off to the coffee shop in the sunset with your new Macbook happily working, Nvidia chips not dying in large numbers. However, there is only one tiny problem with that ending.

The Problem In Pictures
Remember when we said that Nvidia engineering wasn't abjectly stupid? Scratch that. Remember when we said we were going to break out the electron microscope? It's time. Remember the part about the PI layer being necessary for stiffer underfills? Guess what?

A test chip with a SiN layer [electron micrograph]

What you are seeing is the top of the bump, where it contacts the chip. The round light grey part on the bottom is the bump, and the darker grey on the top is the silicon itself. The spotty stuff above the top yellow line is the transistor and passivation layer sandwich - the aqueduct - and the dark grey area on the right is the underfill.

This chip, a materials test part, has no PI layer, just a SiN coating. You can see that the SiN coating is not even 2µm thick - it is the dark line that caps the top of the bump and ends at the pad on the chip.

For those of you who have been paying attention, you may notice some clumping in the bump material - it is eutectic, not high lead. This is a thermal test chip, not a production part, used for heat cycle testing, and those repeated heat cycles are what made the bump material clump.
A test chip with a PI layer [electron micrograph]

This next one has the same major components, but you will notice the coating is much thicker - five or more times, almost 10µm. That is because on top of the SiN layer it also has a PI layer to absorb stress. This chip is also a test vehicle, with eutectic bumps and a higher Tg underfill. We can conclude from this that a typical PI layer is 5µm or more thick, while a SiN layer is visibly thinner. Things may change depending on the fab, materials used, and intended use, but the rough thicknesses won't change much.

The bump from a Macbook Pro 15-inch 9600 GPU [electron micrograph]

Last up, we have a close-up of the bump from the Macbook Pro's G96/9600 GPU. It is a high lead bump with, according to Nvidia, a higher Tg underfill. This means the SiN layer should be under 2µm thick. Check, it is. Then the PI layer should be another 5+µm or so. Che.... Hey, wait a minute, there is no PI layer! No, really, it is not there.

Yeah, you are thinking right, Nvidia simply forgot the one critical layer needed to make its much vaunted, and on the surface correct, high Tg underfill work. To that, all we can say is that it does indeed seem so. If anyone has a better explanation - and several packaging engineers I talked with did not - feel free to chime in; my email is at the top of every article.

What this looks like is that Nvidia traded a bump cracking problem for an inter-layer delamination problem. Both lead to a term that semiconductor people call catastrophic failure, something you don't need an engineering degree to understand.

According to multiple packaging people contacted about this story, all of whom want to remain anonymous, this is a much worse problem than bump cracking. Phrases like "abject stupidity" and "how the [fsck] did they miss that" were tossed around, but still, they did.

In these conversations, several scenarios were put forward to explain it. None of them posit that it won't be a problem - they all say that it will - they were simply grasping at straws to explain how Nvidia missed this one.

The first scenario theorizes that Nvidia had a bunch of high lead wafers sitting in inventory. When it first learned about the problem, it stopped bumping the chips because it knew where the problem lay, just not why. When the engineers got the go-ahead to restart the line with high Tg underfill, they had to use up a few months' worth of wafers. Because a PI layer can't be applied after the wafer is fabbed, they were stuck, so they crossed their fingers and hoped someone like me wouldn't notice. I did, and if everything we hear is true, Macbook Pro owners and a lot of others will eventually notice as well.

The next theory is slightly more plausible - that Nvidia didn't have time to properly test. A heat cycle test of packaging material takes about three months to do, and you can't really rush it. If the first new parts started rolling out of the fab on July 1, 2008, the first day of Q3, and it takes about three months to set up and qualify a new fab process, that means the fab had to go into production setup on the first day of Q2.

Subtract out a further three months to thermal stress test the solution and Nvidia had to have started that around the first day of Q1/08, meaning that its engineers would have had to flip the switch on testing with a New Year's hangover. If the bump cracking problem was discovered in the fall of 2007, maybe even late summer, there was only one quarter to figure out what the problem was, research alternatives, and make test structures. There could not have been time for a second round of tests unless Nvidia knew about the problem far in advance of what HP and Dell admitted to.
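To make that back-calculated schedule explicit, here is a small Python sketch. The roughly three-month durations are this article's own estimates; the code just walks the calendar backwards from the assumed July 1, 2008 ship date.

[code]
# Walking the article's timeline backwards from first shipment.
from datetime import date

def months_back(d, months):
    """Step a date back a whole number of months (pinned to the 1st)."""
    total = d.year * 12 + (d.month - 1) - months
    return date(total // 12, total % 12 + 1, 1)

first_parts_ship   = date(2008, 7, 1)                  # first day of Q3/08
fab_setup_start    = months_back(first_parts_ship, 3)  # ~3 months to qualify
thermal_test_start = months_back(fab_setup_start, 3)   # ~3 months of testing

print("Fab setup had to begin:      ", fab_setup_start)     # 2008-04-01, Q2/08
print("Thermal testing had to begin:", thermal_test_start)  # 2008-01-01, Q1/08
[/code]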

The most likely way this would have played out is that Nvidia tested the structures, and none worked out well. Its engineers gritted their teeth and took the most promising option, no PI. The other scenario is that Nvidia didn't figure it out early, and was rushed to come out with a 'fix' because Jen-Hsun had to file an 8-K and let the public know. Not having an answer and a fix in hand would not have been compatible with executive egos, so the engineers came up with an answer, but they couldn't definitively say that it would work.

In either case, the length of testing time required is probably what bit them. It is a long and intricate process to stress test chips like this correctly. Nvidia has shown with the initial bad bumps problem that it botched that across multiple generations, so why should we give them the benefit of the doubt this time? The more interesting question is, when did it know what?

Next up, we have the long shot scenario, that Nvidia packaging engineers, if they actually have them rather than outsourcing everything, simply missed an entire branch of science. They all took a class on semiconductor engineering, but they all slept through that day. And didn't read the book.

One last thing to toss into the mix: cost. The PI layer is expensive; it adds about $50 to the cost of a wafer. Wafers from TSMC on a high end process cost about $3,000 to $5,000 depending on a lot of details. Spread across the couple of hundred usable dies on a wafer, that $50 works out to roughly 25 cents per chip. Adding the PI layer increases the cost of silicon by a noticeable amount, and adds to the defect rate.

For cards that sell to big OEMs for $30 or so, silicon can't be more than a few dollars of the total. Adding 25 cents to the cost of a chip is a big deal; it can mean the difference between profit and loss for the entire run. One engineer suggested that Nvidia might have shot down the PI layer on cost grounds, but we don't buy that. They weren't that desperate, were they?
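For the curious, the arithmetic behind that 25 cents is simple enough to sketch in a few lines of Python. The $50 per wafer, $3,000 to $5,000 wafer cost and $30 OEM card price come from the paragraphs above; the die count is a hypothetical round number for a mid-size GPU on a 300mm wafer.

[code]
# Spreading the PI layer's quoted wafer cost across the dies on it.
pi_cost_per_wafer = 50.0    # added cost of the PI layer (from the article)
wafer_cost        = 4000.0  # midpoint of the quoted $3,000-5,000 range
dies_per_wafer    = 200     # hypothetical usable die count, for illustration
oem_card_price    = 30.0    # low-end card price to big OEMs (from the article)

pi_cost_per_die = pi_cost_per_wafer / dies_per_wafer
print(f"PI layer cost per chip:  ~${pi_cost_per_die:.2f}")              # ~$0.25
print(f"Share of wafer cost:      {pi_cost_per_wafer / wafer_cost:.1%}")
print(f"Share of OEM card price:  {pi_cost_per_die / oem_card_price:.1%}")
[/code]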

Analysis
What does this mean? Unlike what Nvidia has been implying, we have never stated that the 'bad bumps' in the Macbook Pro 15-inch would cause a failure. We simply stated that it is using the same material that caused failures in the older Macbooks, several HP and Dell lines, and likely many more that Nvidia has not admitted to publicly. The consumer has a right to know this about the products they are buying, and Nvidia steadfastly refuses to tell them.

This time, we see a potentially much more serious problem, and no doubt it will be explained away with pseudo-science and sound bites. Tame journalists and bloggers won't bother to question the science, won't understand it, and will take the easy, canned explanation at face value. No problem will ever be admitted to, and the problems that Macbook and other computer owners encounter will be something else, a rare anomaly, a one-off, trust them. Really. Apple did.

Once again, this is not saying that the Macbooks will fail, or that the one you have will fail. We are simply stating that, according to all the packaging experts we talked with, none of them could come up with a scenario where this is not a massive problem. Once again, time will tell.

Rebuttal
In the best of half-hearted PR speak, the Nvidia rebuttal (see Cnet link above) claims my initial investigation of the 'bad bumps' was "already flawed." Nvidia won't say how my analysis was flawed, but it tosses that out in an attempt to tarnish the evidence. It also won't say what parts are affected, so there is no way to tell for sure. If I am so wrong, why cover it up?

As for the claim that all high lead bumps are bad - that is simply not true, and not once did I say it. I stated that given a chain of engineering failures, bad choices, and inadequate testing, these parts are failing. There is a long chain of events that causes the failures. Read the three-part technical explanation linked above for more.

Nvidia is claiming that it changed the underfill material, and had Dawn sprinkle a little green fairy dust on them, and all is better. Every engineer I talked with disagrees. It is clear that they missed a critical step in making these chips, so changing a single step in the chain will very possibly make matters worse.

If you look at what the higher Tg underfill does, it moves strain off the bumps, and puts it on the SiN layer, which transfers it to the fragile passivation layer. Nowhere has Hara said that Nvidia attempted to reduce the strain that causes the failures in the first place, much less accomplished that goal. In fact, he admits the opposite, unless I misinterpret the statement, "What we did was, we just simply went to a more robust underfill." This is a band-aid, applied by a fairy, sprinkled with pixie dust. Sadly, it does not appear to be a thoroughly engineered fix.

Hara said, "The material set (combination of underfill and bump) that is being used is similar to the material set that has been shipped in 100's of millions of chipsets by the world's largest semiconductor company (Intel)." In saying that, Nvidia was right, it is similar. Similar is NOT the same, and the devil truly is in the details. He is right that every semiconductor manufacturer that uses a high Tg underfill uses a similar recipe, but all of them that I talked with, every single one, also uses a PI layer. Period.

The Man Behind The Curtain
Last up, Nvidia is strongly hinting, as in this Gizmodo article, that there are some mysterious, nefarious forces behind my reporting, and that electron microscopes are hard to come by. The implication is that I couldn't pull The Big Picture Book of Science out of a paper bag with a map, flashlight and guide dog.

It may be true that I am not up on the latest techniques at the cutting edge of electron microscopy, but my years of college - going from chemical engineering, to chemistry, to biology, to genetics - weren't a total waste. Reading the output from a spectrograph isn't that tough when you have been holed up in a lab doing similar work with related devices for years.

That brings up the crack about electron microscope scarcity. They really aren't that uncommon, it's just that Hara probably doesn't know where to look for one. I live quite close to the University of Minnesota, and last time I attended courses there many years ago, there were lots of them sitting around, some better than others.

Every major semiconductor design house has at least one electron microscope, likely many. They are indispensable research tools. How many does Nvidia own? I don't have a clue, but stories like this don't seem to imply that they are all that uncommon. In fact, I have seen dozens in tours of companies around the valley. In defense of Mike, he is an investor relations executive, and the SEMs at Nvidia are probably on a floor without executive washrooms.

Hara blames Nvidia's competitors for being behind the story, and that is quite plausible on the surface. Really, Nvidia is cuddly, nice and honest, right? So who wouldn't like them? I mean, Nvidia openly declared war on Intel. It goes out of its way to antagonize AMD, treats the press like dirt, and plays its partners off against each other. A better question would be, at this point, who actually likes Nvidia? If you answer Joel Turnipseed, the guy in Iowa who lost all short term memory in a car accident in 2004, you might have the one.

One other thing that Hara doesn't appear to realize is that there are a few dozen teardown houses within an hour's drive of his office. Companies like Nvidia use them all the time when they want plausible deniability, a 'second opinion', or to dodge some trade secret laws. In fact, most semi companies use them regularly.

Some of them are public, others less so. A quick search for 'chip reverse engineering' should net you a dozen or so in very little time. To quote a friend from a large semi house, "The good ones don't have names."

What they do have, however, is a lot of expensive equipment, like the electron microscopes that are so craftily hidden at Nvidia headquarters. They also know how to use them well. One last thing, their business is quite 'peaky' - when a new chip comes out, they may tear it down, or tear down a few, and make a report. These reports sell for a lot of money, and that tides them over until a new part is released. In between busy times, some of them sit around bored, throwing darts at pictures of their former employers, while some stay busy 24/7. It simply depends.

What it comes down to in the end is that there is simply no shortage of companies, large and small, public and shadowy, that do teardown work. It really isn't all that hard. There is also no shortage of companies that dislike Nvidia - when a company sets out to piss everyone off, it often succeeds. The list of capable organizations with motives is not short, in fact it is very long.

Then again, it was my idea to begin with. When a company responds to an easy direct question with dodgy doublespeak, or answers another seemingly related question instead, alarm bells go off. Having solid information about the chips before you ask the question aids immensely in analyzing the PR/IR output. The bells went off this time, and the digging started. Several 'mad scientists' liked the idea, and agreed to help out as time permitted. It took two months, but the results were worth it.

Conclusion
In the end, what you have once again seems to be a massive engineering failure. This could, but not necessarily will, lead to inter-layer delamination failures. The Macbook Pro 15" GPU undoubtedly has the problem, and it is very likely that every Nvidia chip with high lead bumps and high Tg underfill does as well. We are still analyzing the eutectic bump parts, and will follow up with a report if we discover anything conclusive.

Nvidia is still stonewalling the first problem, and likely won't admit to this one unless they are forced by law to file an 8-K once again. Remember, the last admission was not voluntary. Once again, we will state the obvious: Nvidia has to come clean over this, admit what models are affected by the bump cracking, what computers the chips went into, and what chips are affected by this latest missing layer. Then the customer can decide. µ

Note: Apple was again called twice prior to publication and informed that there is a potential problem. Instead of calling us back to tell us that they knew about the issue, and had dealt with it, or would stand by their customers, Apple simply ignored us once again. Because of this, we award Apple the Steve Jobs Memorial Turtleneck for Pride and Arrogance (SJMTPA) for turning an opportunity to respond positively to this situation into mud. Own goal guys, zero for six!

It keeps on looking bad for Nvidia, and yet some keep on recommending them
 

FHDelux

Distinguished
Jan 25, 2008
His proof that there is a problem is more convincing than anything NVIDIA has released. Not that this is directly related, but from personal experience I have seen 8000 series cards die a lot more often than I saw older 6000 and 7000 series cards die. The higher-ups at NVIDIA might just be squeezing their engineers too hard to ship new product.

I currently use NVIDIA cards exclusively, but with that said, if the Badaboom video encoder app didn't exist I would already have a 4870. It will be interesting to see how this plays out.

Really good read... if you aren't lazy.
 
Good thing I sold my gaming laptop with an 8600GT in it xD
I bought it from Newegg for $900 and after 6 months or so I sold it for $780, not a bad loss... no more feeling of a ticking bomb, wondering when my laptop will suddenly die out :] Seems like Nvidia is digging its own grave right now lol gogo fanboys tell me I'm wrong...
 

sailer

Splendid
I use Nvidia cards in two of my computers, and one has been showing signs of a problem recently, though I'm not yet sure if it's specifically related to all this. Another detailed explanation exists here:

http://forum.***.com/graphics-displays/14347-we-have-answer.html

"Facts and rumors about failing Nvidia chips have been spewing from all sides for months now. What's AMD's take on the issue, and why aren't we seeing similar failures from its products? We recently had a chat with Neil McLellan, AMD's director of packaging and interconnect technologies, who offered his insight and opinions about these matters.

To understand where AMD is coming from, one must go back a few years to the former ATI. Prompted by problems with packaging and interconnect materials in consoles as well as the European Union's Restriction of Hazardous Substances (RoHS) directive, ATI hired McLellan and went about rethinking its chip packaging strategy. In 2005, the RoHS directive required GPU packages to start connecting to their host boards with lead-free solder balls. ATI also took that opportunity to replace the high-lead solder bumps with so-called eutectic bumps. As you'll see in the diagram below, those solder bumps connect the silicon GPU die to the rest of the package:


A diagram of a GPU package. Source: AMD.

Why the change? High-lead bumps use 90% lead and 10% tin, while eutectic bumps switch that ratio to 37% lead and 63% tin. High-lead bumps can handle more current, but AMD thinks they're more prone to fatigue and need "comprehensive reliability engineering to be used successfully." To illustrate the fatigue issue, McLellan evoked a soda can: the tab will probably stay on if you bend it up and down slightly a hundred times, but it'll likely pop off if you bend it all the way two or three times. Similarly, high-lead bumps can fail because of repetitive heating and cooling. That's because the silicon GPU die and package substrate (see the diagram above) have different thermal expansion coefficients—2 parts per million/°C for the silicon and 30 ppm/°C for the substrate, McLellan said—which puts a significant stress on the bumps."

Notice the line on the material used: "High-lead bumps use 90% lead and 10% tin, while eutectic bumps switch that ratio to 37% lead and 63% tin. High-lead bumps can handle more current, but AMD thinks they're more prone to fatigue and need "comprehensive reliability engineering to be used successfully."" So possibly the problem is two-fold: both the bump material that is used, and the engineering needed to make the high lead bump stable. So far, Nvidia doesn't seem to have mastered the engineering, and is offering lame excuses or no answer at all. Nvidia forgets that "no comment is a comment". By saying nothing, Nvidia leaves the speculation over what's wrong in high gear, and gives credence to the idea that it is still producing faulty cards.
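To put those expansion numbers in perspective, here is a back-of-the-envelope Python sketch of how much shear a corner bump has to absorb per heat cycle. The two expansion coefficients come from the quoted article; the die size and temperature swing are hypothetical but plausible round numbers.

[code]
# Thermal mismatch between die and substrate, per the quoted CTE figures.
cte_silicon   = 2e-6    # 1/degC, silicon die (from the article)
cte_substrate = 30e-6   # 1/degC, package substrate (from the article)
delta_t       = 60.0    # degC, hypothetical power-on/off temperature swing
dnp_mm        = 8.0     # hypothetical distance from die center to corner bump

# Each side grows by CTE * dT over that distance; the corner bump must
# absorb the difference in shear on every single heat/cool cycle.
shear_um = (cte_substrate - cte_silicon) * delta_t * dnp_mm * 1000.0
print(f"Corner bump shear per cycle: ~{shear_um:.1f} micrometres")  # ~13.4 µm
[/code]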

Edit: Just noticed the address I posted got blanked out. You can look at X-C-P-U-S.com under forum > video cards for the full article.
 

rangers

Distinguished
Nov 19, 2007
[quotemsg=6668690,4,94591]His proof that there is a problem is more convincing than anything NVIDIA has released. Not that this is directly related, but from personal experience I have seen 8000 series cards die a lot more often than I saw older 6000 and 7000 series cards die. The higher-ups at NVIDIA might just be squeezing their engineers too hard to ship new product.

I currently use NVIDIA cards exclusively, but with that said, if the Badaboom video encoder app didn't exist I would already have a 4870. It will be interesting to see how this plays out.

Really good read... if you aren't lazy.[/quotemsg]


Do you have to pay for that Badaboom app?
 

nocteratus

Distinguished
Mar 6, 2007
That's why I chose EVGA for both my 8800 Nvidia cards - for the lifetime warranty...

So if my 8800GTS 320MB blows... I'll RMA the card and get a new one... and maybe an upgrade...!
 

sailer

Splendid
[quotemsg=6668710,11,122243]That's why I chose EVGA for both my 8800 Nvidia cards - for the lifetime warranty...

So if my 8800GTS 320MB blows... I'll RMA the card and get a new one... and maybe an upgrade...![/quotemsg]


Same reason for getting EVGA cards here. That, and the 90-day step-up policy that they have. I might also add that of the four EVGA cards I have purchased through the years, not one of them has failed.
 

rangers

Distinguished
Nov 19, 2007
The lifetime warranty is great, but you're missing my point: it's not OK for a company (any company) to lie and cheat its customers and then expect said customers to show any brand loyalty when things are going wrong...


Am I thick, am I asking too much....
And can someone please tell me the price of cheese
 

nocteratus

Distinguished
Mar 6, 2007
No, I get your point...

Nvidia cheats its customers. But at the same time, the companies offering Nvidia products offer lifetime warranties.

The card I have works great. And if someday I have problems with it, I'll RMA the card and get a new one...
I chose EVGA before Nvidia... the reasons for EVGA are the warranty, customer service and 90-day step-up program... but they only offer Nvidia-based video cards... if they had ATI, I'd probably have 2 ATI cards right now instead of Nvidia
 

Pharo Surreal

Distinguished
Dec 17, 2008
[quotemsg=6668742,14,122243]No, I get your point...

Nvidia cheats its customers. But at the same time, the companies offering Nvidia products offer lifetime warranties.

The card I have works great. And if someday I have problems with it, I'll RMA the card and get a new one...
I chose EVGA before Nvidia... the reasons for EVGA are the warranty, customer service and 90-day step-up program... but they only offer Nvidia-based video cards... if they had ATI, I'd probably have 2 ATI cards right now instead of Nvidia[/quotemsg]


Ok

Nvidia makes GPUs,

and passes them on to PNY, ASUS, EVGA, MSI, and so on, yada yada yada!!!! - and that's who we buy from.

Where I live, ASUS cards = return/replace with 2-year warranties (I can live with that)
PNY = 1 year (off the top of my head)

No one buys direct from Nvidia as far as I'm aware. So how is Nvidia cheating? I really can't be bothered to read that stuff rangers posted,

Thanks
 

rangers

Distinguished
Nov 19, 2007
[quotemsg=6668749,15,204934]Ok

Nvidia makes GPUs,

and passes them on to PNY, ASUS, EVGA, MSI, and so on, yada yada yada!!!! - and that's who we buy from.

Where I live, ASUS cards = return/replace with 2-year warranties (I can live with that)
PNY = 1 year (off the top of my head)

No one buys direct from Nvidia as far as I'm aware. So how is Nvidia cheating? I really can't be bothered to read that stuff rangers posted,

Thanks[/quotemsg]
Well don't. What are you, massogonistic? I'm more than positive that you can get some help for that sort of compulsive disorder...

I feel all good inside, helping someone such as yourself
 

Pharo Surreal

Distinguished
Dec 17, 2008
[quotemsg=6668754,16,154485]Well don't. What are you, massogonistic? I'm more than positive that you can get some help for that sort of compulsive disorder...

I feel all good inside, helping someone such as yourself[/quotemsg]


Thanks for the help

Im so glad i made you feel all good,

Any time!
 

sailer

Splendid
[quotemsg=6668754,16,154485]Well don't. What are you, massogonistic? I'm more than positive that you can get some help for that sort of compulsive disorder...

I feel all good inside, helping someone such as yourself[/quotemsg]

Well, at least in the eyes of some here, it's you that has the problem. Nvidia may make a faulty product, and I do believe that is the case. However, various companies that sell cards made using Nvidia products warranty them against failure, replacing any card that does fail. So the effect on the customer is functionally the same as if there were no failure in the Nvidia cards. If anyone should be angry, it would be the companies selling the cards, as they bear the financial burden of Nvidia's processors. Beyond that, as many people in this forum replace or update their video cards fairly often, it's very likely that they will never suffer the effects of Nvidia's deception. That is, the card may fail in three or four years of use, but it will already have been discarded before that failure occurs.

As for that word, "massogonistic", I have never heard of it before, nor can I find it in the dictionary. Maybe it's you who needs some help.
 

4745454b

Titan
Moderator
Seeing as you've been warned before for posting the entire article, I'll just say something fast before the thread gets locked. It should be obvious by now to any non-fanboy that Nvidia has some problems. "Going big" while AMD "went small" is just the first problem. Renaming old cards as new ones and using a bad manufacturing process are also problems they have right now.

With the lifetime warranty that some companies offer, however, I'm not sure this is an issue. It also seems to be related to temps, so if you never shut your PC/laptop off, this won't affect you. (I don't have a laptop, and never turn my PC off, or use standby for that matter.) I'm not sure how big of an issue this will be for most. While I would certainly avoid any Nvidia laptop, desktop cards shouldn't be a problem.

With any luck, rangers, you'll just get a temp ban. Perhaps you can get help for your need to push the envelope.
 

Dekasav

Distinguished
Sep 2, 2008
While it's very possible Nvidia cheated its customers, you have to realize Nvidia has a responsibility to its investors, its employees, and its suppliers and partners (EVGA, XFX, etc.). If they hadn't gone for such a quick fix, it could have cost the company significantly more, which in turn hurts investors, employees, and everyone down the road. While I admit that customers are incredibly important, a company has to watch out for everyone, including itself. And if they can take a cheap way out and save themselves huge layoffs, that's better for the consumer in the long run than a dead Nvidia.
 

rangers

Distinguished
Nov 19, 2007
[quotemsg=6668761,18,55586]Well, at least in the eyes of some here, it's you that has the problem. Nvidia may make a faulty product, and I do believe that is the case. However, various companies that sell cards made using Nvidia products warranty them against failure, replacing any card that does fail. So the effect on the customer is functionally the same as if there were no failure in the Nvidia cards. If anyone should be angry, it would be the companies selling the cards, as they bear the financial burden of Nvidia's processors. Beyond that, as many people in this forum replace or update their video cards fairly often, it's very likely that they will never suffer the effects of Nvidia's deception. That is, the card may fail in three or four years of use, but it will already have been discarded before that failure occurs.

As for that word, "massogonistic", I have never heard of it before, nor can I find it in the dictionary. Maybe it's you who needs some help.[/quotemsg]


I don't think you're entitled to come in; the post was not for someone with the IQ of an amoeba.
I'm on fire tonight
 

Pharo Surreal

Distinguished
Dec 17, 2008
[quotemsg=6668761,18,55586]Well, at least in the eyes of some here, it's you that has the problem. Nvidia may make a faulty product, and I do believe that is the case. However, various companies that sell cards made using Nvidia products warranty them against failure, replacing any card that does fail. So the effect on the customer is functionally the same as if there were no failure in the Nvidia cards. If anyone should be angry, it would be the companies selling the cards, as they bear the financial burden of Nvidia's processors.
[/quotemsg]


True, though I think no one loses out:

These companies that buy Nvidia GPUs know what they're getting; they don't worry about RMAs because most of the time they can get the product back and recycle/reuse components to re-sell. (Well, the big ones can easily.)

e.g. Sell 10 million, RMA 50,000, with 10,000 of the RMAs recycled/reusable; at 10-15% profit they don't give 2 f**ks about RMAs.

ASUS in the UK: 2-year warranty; if the product f**ks up within 2 years they send you a refurb (recycled/rebuilt bulls***)

It works: customers are happy, and the company's very happy as it hardly loses out.

Companies that need to send out new units - they're the ones who suffer, but quality comes at a price
 

rangers

Distinguished
Nov 19, 2007
[quotemsg=6668765,20,59138]Seeing as you've been warned before for posting the entire article, I'll just say something fast before the thread gets locked. It should be obvious by now to any non-fanboy that Nvidia has some problems. "Going big" while AMD "went small" is just the first problem. Renaming old cards as new ones and using a bad manufacturing process are also problems they have right now.

With the lifetime warranty that some companies offer, however, I'm not sure this is an issue. It also seems to be related to temps, so if you never shut your PC/laptop off, this won't affect you. (I don't have a laptop, and never turn my PC off, or use standby for that matter.) I'm not sure how big of an issue this will be for most. While I would certainly avoid any Nvidia laptop, desktop cards shouldn't be a problem.

With any luck, rangers, you'll just get a temp ban. Perhaps you can get help for your need to push the envelope.[/quotemsg]


Warned? I've not had a warning. If you have a problem, I suggest you take it up with one of the mods
 

squatchman

Distinguished
Oct 31, 2008
Just in case people can't really bring themselves to read this, I will summarize:


nVidia: <Gives Apple a well-deserved middle finger>
Apple says: But I just want to make bad art and listen to Coldplay!
 
