How a .07-second Power Cut Killed Memory Chips


rbarone69

Distinguished
Aug 16, 2006
241
0
18,690
[citation][nom]MDillenbeck[/nom]@the rest - I'm impressed at how many people responded to the same question at the same time! Guess we posters on Tom's really have no life (unless you're in a Midwest situation like me, procrastinating on digging yourself out after the snowstorm).[/citation]

Or like me... Buffalo, NY, digging out of the last storm... HA! Yet another pointless post about snow.
 

Guest

Guest
Please, as an American citizen I am begging the U.S. Government to NOT SPEND MY TAX DOLLARS on investigating this ridiculous story. It is apparent that Toshiba and the other memory mfgs. are trying to keep memory prices up as long as possible. C'mon people, let's have a protest! Nobody buy any computer hardware that uses any type of memory (NAND, DDRAMx, etc.) until the 4th qtr. of 2011. Maybe the investors in all of the price-fixing scumbag companies will demand that they get really fast UPSes (smile) or some other form of backup. Toshiba was really stupid with this one!
 

davewolfgang

Distinguished
Aug 30, 2010
454
0
18,860


Looking further into the reports of this, I'm starting to put together that this might actually be a "human error" issue, rather than an actual power or UPS issue...

...just saying. ;-)
 

maestintaolius

Distinguished
Jul 16, 2009
719
0
18,980
[citation][nom]maigo[/nom]Looks like SOMEONE forgot to upgrade the battery backup![/citation]
This may have been it; heck, there may have been engineers saying for years, "Look, we need to buy this, or this might happen and cost us a lot of money." I fought for years to get some power conditioning and backup for our labs, and I could never convince the bean counters/execs that the $45,000 investment was worth it until a surge blew up the board on our $150,000 parallel-plate rheometer (about a $60,000 repair).
 

ProDigit10

Distinguished
Nov 19, 2010
585
1
18,980
I can't believe they don't have condensers in their lines to prevent this!
Usually the worst kind of system error is a one-second gap in the electricity supply. One second is long enough to stop the lines, yet too brief for a computer to start emergency power and then stop it again.
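For a rough feel of what a capacitor ("condenser") bank would actually have to cover, here's a back-of-the-envelope hold-up-time sketch; all of the numbers are made up for illustration, not Toshiba's actual figures:

[code]
# Rough hold-up-time estimate for a DC-link capacitor bank (illustrative numbers only).
def holdup_time_s(capacitance_f, v_nominal, v_minimum, load_w):
    """Seconds a capacitor bank can carry a load while its voltage
    droops from v_nominal down to v_minimum (energy balance E = 1/2*C*V^2)."""
    usable_energy_j = 0.5 * capacitance_f * (v_nominal ** 2 - v_minimum ** 2)
    return usable_energy_j / load_w

# Hypothetical figures: a 10 F bank at 400 V feeding a 100 kW tool,
# allowed to sag to 360 V before the tool faults out.
print(f"{holdup_time_s(10.0, 400.0, 360.0, 100_000.0) * 1000:.0f} ms of ride-through")
[/code]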
 

zybch

Distinguished
Mar 17, 2010
481
0
18,790
This is just another bull poop excuse for the industry to ramp up RAM prices, just like they did after the fictitious factory fire 10+ years ago which led to RAM prices being higher gram for gram than gold.
 

yardpup01

Distinguished
Nov 10, 2010
4
0
18,510
0.07 seconds is a long time for a power system. That's 4.2 cycles on a 60 Hz system. As stated before, that's more than enough time for machines to become unsynchronized.
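The cycle count is just the sag duration times the line frequency; a trivial check, assuming 60 Hz as in the line above:

[code]
# Sanity check of the "4.2 cycles" figure: sag duration times line frequency.
sag_duration_s = 0.07
line_frequency_hz = 60  # 60 Hz assumed, per the post above
print(f"{sag_duration_s * line_frequency_hz:.1f} cycles")  # -> 4.2 cycles
[/code]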
 

ch3oh

Distinguished
Dec 14, 2010
2
0
18,510
They used an inferior or poorly designed voltage compensator (from what I can gather, judging by the duration and severity of the voltage sag). The product most likely used is only good for around a 30% three-phase voltage sag or a 50% single-phase voltage sag. This type of system uses a shotgun approach to the problem instead of correcting it at the tool level.
Talk about putting all of your eggs in one basket and then rolling the dice. Big, irresponsible mistake by the design team.
Think about it: if you fix the tools to handle the problem, you can minimize the UPS capacity used and apply other design practices for high-power components within the tool.
This has been my job for at least the last decade; they took the wrong path.
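To make that 30%/50% rating concrete, here's a toy check of whether a given sag falls inside a compensator's correction range; the rating thresholds are the ones quoted above, everything else is hypothetical:

[code]
# Toy check: does a measured sag fall inside a compensator's correction range?
# (Rating thresholds from the post above; the example sag depth is made up.)
def compensator_covers(sag_depth_pct, phases):
    limit_pct = 30 if phases == 3 else 50  # ~30% for three-phase sags, ~50% for single-phase
    return sag_depth_pct <= limit_pct

print(compensator_covers(sag_depth_pct=45, phases=3))  # deeper than the three-phase rating -> False
print(compensator_covers(sag_depth_pct=45, phases=1))  # within the single-phase rating -> True
[/code]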

To all who posted that this is just a ploy to raise prices: you could not be further from the truth. These guys (Toshiba) are in direct competition for global market share. If they wanted to raise prices using dishonest tactics, they would sooner price-fix with other companies. This incident cost them some serious cash.
 

cuba_pete

Distinguished
Mar 12, 2010
34
0
18,530
I would say...as a plant engineer...that their UPS had a glitch. It happens, no matter how many backups to the backups you have...if the final backup hiccups, that's it. We have parallel paths, parallel power supplies, etc. There is always one final UPS. If that UPS dies, the parallel power supply should carry on smartly, but since we have dirty power (ALL power from the power company is dirty to those who rely on perfect power), we have to get the UPS back online in short order to prevent further problems.
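The "one final UPS" point is really a single-point-of-failure argument; here's a tiny availability sketch, with made-up figures, showing how that last stage caps the whole chain:

[code]
# Toy availability model: redundant upstream stages still funnel through one final UPS,
# so that single unit caps the overall figure (all numbers are made up).
def parallel(*availabilities):
    """Availability of redundant paths: the stage fails only if every path fails."""
    unavailability = 1.0
    for a in availabilities:
        unavailability *= (1.0 - a)
    return 1.0 - unavailability

utility_feeds = parallel(0.999, 0.999)   # two independent utility feeds
generators    = parallel(0.995, 0.995)   # two paralleled generators
final_ups     = 0.999                    # the single UPS everything passes through

overall = utility_feeds * generators * final_ups   # series chain
print(f"overall availability ~ {overall:.5f}")     # stuck just below the final UPS's 0.999
[/code]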
 

ch3oh

Distinguished
Dec 14, 2010
2
0
18,510
cuba, being a plant engineer, you would know that in order to supply close to 100 MW of power, you will have multiple UPS units, either medium-voltage units or about 50 low-voltage 2 MVA units. I could see one of those failing, but not all. If you read the article, the voltage sag was greater than what the unit designed for the plant was able to handle. If you look up specs on protection equipment, you will find that there are only a few that match; one would be the ABB AVC.
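The "about 50 units" figure is just the plant load divided by the per-unit rating, treating 2 MVA as roughly 2 MW (i.e. assuming a power factor near 1):

[code]
# Quick check of the "about 50 low-voltage units" figure from the post above.
import math

plant_load_mw = 100      # ~100 MW plant load, as quoted above
unit_rating_mw = 2       # a 2 MVA unit treated as ~2 MW (power factor ~1 assumed)

units_needed = math.ceil(plant_load_mw / unit_rating_mw)
print(units_needed)      # -> 50, before any N+1 redundancy margin
[/code]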
 

cuba_pete

Distinguished
Mar 12, 2010
34
0
18,530
I don't know where in the production run the sag occurred...the article isn't very specific. In my facility, we have a couple of racks of blade servers that draw 9 kW just by themselves. They are on a UPS. If that UPS glitches, that's it...reboot. Our high-voltage power plants are paralleled, but not everything beyond or beside that is. If the UPS for a simple conveyor system glitched, that could be the straw breaking the camel's back. I still cramp a little when the lights flicker. I move smartly to the computer spaces just to double-check the displays for data outages. I am confident in my power system, and I cannot think of a better way to put it, but s#!t happens.
 

bto

Distinguished
Dec 8, 2010
53
0
18,630
[citation][nom]Horhe[/nom]You think that the backup power supply will kick in instantly? 70 milliseconds is too short of a period for their backup power supply to start.[/citation]

Then why have one, if that's all it takes to kill it? If the sag were shorter, let's say 65 ms, the backup still wouldn't have kicked in and probably would have killed everything anyway. If it were longer, say 300 ms, yet the backup took 71 ms to switch over, the line is still dead. You see where I'm going? Plants draw an incredible amount of power; some of the furnaces and etching lasers take tens of thousands of watts to run. The battery bank would nearly need to be the size of the plant to run it for more than a minute.

All that aside, some things, such as wafer fabs, are extremely sensitive to slight voltage differences. I think it's obvious they are looking into ways to improve uptime and power backup. Getting enough capacitors to carry all the live current is the real struggle. There are at least three stages to a backup of this type: capacitor, battery, and generator. 70 ms (at whatever percentage of normal voltage the sag dropped to, and at whatever current they were drawing) was probably more than the plant's capacitor stage could handle. There are many things that can go wrong... Hmm, it wouldn't be the first time there was a demand conspiracy; lots of money to be made there, especially if someone were paid off to do it.
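On the "battery bank the size of the plant" point, here's a crude sizing sketch; the 100 MW load is borrowed from the earlier post in this thread and is only illustrative:

[code]
# Crude battery sizing for one minute of full-plant ride-through (illustrative load figure).
plant_load_w = 100e6          # ~100 MW, borrowed from the earlier post; not an official number
runtime_s = 60                # one minute of ride-through

energy_needed_wh = plant_load_w * runtime_s / 3600
print(f"~{energy_needed_wh / 1000:.0f} kWh")   # ~1,667 kWh, i.e. megawatt-hour-scale storage
# ...just for sixty seconds of carrying the whole plant on batteries.
[/code]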
 

cuba_pete

Distinguished
Mar 12, 2010
34
0
18,530
I didn't really look at it from a conspiracy or market-control perspective...just a technological one. The reason for having this type of system in place is so that this doesn't happen hundreds of times per day. Our power-quality surveys show that kind of thing coming in, but we clean it up before it ever reaches our equipment.
 