Oops! Microsoft Loses All Sidekick Users' Data

Murphy's law: if something can go wrong, it will :)
I wonder who the masterminds behind this architecture are. I mean, a contact record won't be bigger than 1 KB, and most of us don't have 1,000 contacts, so a few megabytes would have been enough to safeguard the most important data like contacts, calendar, etc. Most modern phones have gigabytes of memory. Why not store critical things on the device too? The design team = LOSERS + STUPIDS. Probably together with the upper management that cut costs and discarded a safer but more expensive design.
We will never lose customer data ... yeah, right. YOU JUST DID!
 
Heads will fly!!
For a company nowadays not to back up its data is inexcusable.
Even I have my data backed up in two other places: a secondary HDD and a remote server.
Also, forget about the backup before the update. What happened to last week's backup, or EVEN LAST MONTH'S? None of those were made?

I know backing up TBs of data is expensive. But they are in a worse position now.
 
[citation][nom]cruiseoveride[/nom]Did the hard drive get smashed into smithereens by terrorists?[/citation]
No. Probably they rebuilt the RAID volumes. Tadaaaaa!!
 
[citation][nom]ZeroTech[/nom] What happened to last week's backup, or EVEN LAST MONTH'S? None of those were made? I know backing up TBs of data is expensive. But they are in a worse position now.[/citation]

A 1TB HDD is $100, and it's reusable. I know it's not that simple, but I can't imagine that a solution to back up data at a reasonable level of security and reliability would cost more than, let's say, $1,000/TB/month.

Oh. Another very important thing: CHECK THE BACKUP!!!!
A long, long time ago a customer made daily backups, only to discover that for some reason they were corrupted and unusable. Thank God the week-old full backup was good and we only had to re-type a week of work, because at that time (8 years ago) they still had a lot of paper.
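
For what it's worth, here's a minimal sketch of what "check the backup" can look like in practice, assuming the source and the backup copy are both mounted as plain directories (the paths below are made-up examples, not anything from Danger's setup):

# check_backup.py - compare SHA-256 checksums of a source tree against its backup copy.
# The paths are placeholders for illustration; point them at your own source and backup mounts.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(source: Path, backup: Path) -> list[str]:
    problems = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        rel = src_file.relative_to(source)
        bak_file = backup / rel
        if not bak_file.exists():
            problems.append(f"missing in backup: {rel}")
        elif sha256_of(src_file) != sha256_of(bak_file):
            problems.append(f"checksum mismatch: {rel}")
    return problems

if __name__ == "__main__":
    issues = verify(Path("/data/contacts"), Path("/mnt/backup/contacts"))
    print("backup OK" if not issues else "\n".join(issues))

Running something like that after every backup job would have caught the kind of silent corruption my customer hit.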
 
I never knew the Sidekick was an MS thing. I had one for a year before I upgraded to an MDA; after 3 years of that I downgraded to an L2.
 
They should have used IBM XIV Storage - it's really the best thing going these days. Yeah, yeah, EMC, DS8k, whatever. XIV is clearly the future...
 
I used to work for a large data center and I'm now working for a good-sized company. I am a freak when it comes to data redundancy, in both a personal and a professional capacity. I am responsible for making sure we have redundancy in place for our data in case the primary servers fail.

Disk-to-disk backups are the easiest and fastest thing in the world to do. Just run RoboCopy if you use Windows servers or rsync if you use Linux. There is NO excuse for this fiasco. I never trusted third-party vendors with my data; I run my own backups to tapes and disks.
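
Just to illustrate the idea (a rough sketch, assuming plain directory mirrors; the paths and flags here are made-up examples, adjust to taste):

# mirror_backup.py - run a simple disk-to-disk mirror with rsync (Linux) or robocopy (Windows).
# Source and destination paths are placeholders for illustration only.
import platform
import subprocess

SRC = "/data/production"          # or r"D:\data\production" on Windows
DST = "/mnt/backup/production"    # or r"\\backupserver\production" on Windows

if platform.system() == "Windows":
    # /MIR mirrors the tree; /R and /W limit retries/waits on locked files.
    cmd = ["robocopy", SRC, DST, "/MIR", "/R:2", "/W:5"]
else:
    # -a preserves permissions/timestamps; --delete keeps the mirror exact.
    cmd = ["rsync", "-a", "--delete", SRC + "/", DST + "/"]

print("running:", " ".join(cmd))
subprocess.run(cmd, check=False)  # note: robocopy uses non-zero exit codes even on success

Cron it nightly and you already have more protection than these guys apparently had.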

The same goes whenever there is a major change in the data storage infrastructure, including RAID. It always makes me nervous whenever I do a PERC RAID firmware upgrade on a Dell production server, as it could render the disk array unreadable. It's rare, but it does happen due to a mismatch between the storage subsystems and the RAID controller.

Since the upgrade involved SAN infrastructure, it simply means they screwed up the configuration of the RAID arrays by mismatching the firmware on the enclosures and the controller cards.

Remember, folks: RAID controllers do not like to be messed with, hence the expression, "If it ain't broken, don't screw with it!!"
 
[citation][nom]DangerDeepDoh[/nom]They should have used IBM XIV Storage - it's really to best thing going these days. Yeah, yeah EMC, DS8k, whatever. XIV is clearly the future...[/citation]

What about data corruption?? The only thing that saved us a couple of times was the Windows Volume Shadow Copy Service and good tape backups.
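
For anyone curious, this is roughly how you'd trigger a shadow copy from a script before a backup run. It's only a sketch: "vssadmin create shadow" needs admin rights and, as far as I know, is only exposed on Windows Server editions, and the volume letter is just an example:

# vss_snapshot.py - create a Volume Shadow Copy before backing up open files.
# Requires administrator rights; "vssadmin create shadow" is a Windows Server command.
import subprocess

result = subprocess.run(
    ["vssadmin", "create", "shadow", "/for=C:"],  # C: is just an example volume
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    print("shadow copy failed:", result.stderr)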
 
They probably tried to cut costs by relying on the backup capabilities built into some RAID arrays instead of a completely separate storage solution, possibly off-site.

Either that, or T-Mobile itself got a hosting plan that doesn't include backups in order to cut costs, while ripping customers off at the same time.
 
[citation][nom]excalibur1814[/nom]This is the FIRST article I've read where it states MICROSOFT in the title. The others stayed subjective and put DANGER/MICROSOFT or just Danger. Well done, Tom's. Not.[/citation]
Yeah right. When your dog bites someone it's his fault, not yours.
 
I have a Sidekick 3, and my phone popped up a message that it had detected a new SIM card, so I had to log back in. It rejected all attempts to log in, so I removed the battery to access the SIM card and re-seat it. Still couldn't log in. Later that night at home I found web postings about the outage. I was patient for a week until finally, the next weekend, I called support. They changed my password, I logged in, and all my data was restored. I hadn't added any new information in a while (probably 2 weeks), so I don't know if that's perhaps why I didn't lose anything.

Needless to say, it has me puzzled that they're stating they lost all the data.
 
I'm a little amazed at people who think this problem is because of cloud computing.

This is a problem of backups and upgrades. Yes, this impacted a large number of users because their data was stored there, but it was a problem caused by improper planning and procedures.

I'm wary of putting data in the cloud for a few reasons, but this is simply a mistake by admins at this company. It's completely unacceptable to not have this data backed up, and to not test the backup if it was. It's amazing to me that they had a single point of failure on this as well.

My guess is that this company will not be in business for long. A mistake like this simply cannot be made. Mistakes are OK, but not this kind...
 
My personal take on the incident is:
* Cloud threatens M$'s antiquated business model.
* M$ blows up the cloud to discourage its adoption.
* Someone at Hitachi gets a secret, large bonus for a job well done.
 
Microsoft didn't back up the data? What? They are a developer of products for the PC, and it goes without saying that you back up your data.

I'm aghast here.
 
From what I've gathered, there seems to be a kind of overeagerness to blame Microsoft for this. Danger is a subsidiary, but it is responsible for its own internal workings. Blame Danger, not Microsoft, a company that represents something completely different.
 
[citation][nom]huron[/nom]I'm a little amazed at people who think this problem is because of cloud computing.This is a problem of backups and upgrades. Yes, this impacted a large number of users because their data was stored there, but it was a problem caused by improper planning and procedures. I'm wary of putting data in the cloud for a few reasons, but this is simply a mistake by admins at this company. It's completely unacceptable to not have this data backed up, and to not test the backup if it was. It's amazing to me that they had a single point of failure on this as well. My guess is that this company will not be in business for long. A mistake like this simply cannot be made. Mistakes are OK, but not this kind...[/citation]

That's not why people are hating on it; they hate it because one failure like this is one too many, and there's always room for mistakes.
Especially if a company is greedy, they don't really care about your data. With the cloud, it's just an extra thing that can go wrong, and you lose your data through things like the carelessness of others instead of your own carelessness.

Remember all of those DIY shows: do it yourself, because you will take care of your own stuff better and do a more thorough job than someone hired to do it, because in any case the only one willing to go the extra mile is you.
 
Probably m$ tried to eliminate all non-m$ technology from danger - deutsche telekom/m$ style (their usual transitions to m$ exchange/$erver$ or lu$er data exposure/lo$$ blunders). Working Oracle clusters or Sun servers aren't good enough for billy-boy's cronies...
Brilliantly: m$ technology at work... just another resounding $ucce$$ $tory - and there is some breed of lu$ers which trusts the gamer o$ and any other microcrap product$ and $ervice$ with closed eyes... beware the m$ premium mobile experience.
 
Wow, so many people who have no idea what they're talking about...

When you have backups for LARGE SANs, you usually do a full backup once a week (if you have enough drives/tapes/etc.) and then do incremental backups as you go.

If your SAN is 200 TB... your backup could easily take 48+ hours to complete.

They may well have had a backup, with a week's worth of incremental backups... At 200 TB (not a very large SAN), restoring and verifying that data could EASILY take a couple of days to a week.

If they hadn't done a full backup in longer than, say, 2 weeks... well, add another 4+ hours to the restore for each day's worth of incremental backups you have to go through...
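
To put rough numbers on that (a back-of-envelope sketch; the drive count, throughput, and incremental times below are assumptions for illustration, not figures from this incident):

# restore_estimate.py - back-of-envelope restore-time math for a full backup plus incrementals.
# All numbers are illustrative assumptions, not real figures from Danger's environment.
SAN_TB = 200                      # size of the full backup to restore
DRIVES = 4                        # assumed tape drives streaming in parallel
MBPS_PER_DRIVE = 120              # roughly LTO-4-class native throughput, MB/s
DAYS_OF_INCREMENTALS = 14         # days since the last full backup
HOURS_PER_INCREMENTAL = 4         # assumed time to apply one day's incrementals

full_restore_hours = SAN_TB * 1_000_000 / (DRIVES * MBPS_PER_DRIVE) / 3600
incremental_hours = DAYS_OF_INCREMENTALS * HOURS_PER_INCREMENTAL

print(f"full restore:  ~{full_restore_hours:.0f} hours")
print(f"incrementals:  ~{incremental_hours:.0f} hours")
print(f"total:         ~{(full_restore_hours + incremental_hours) / 24:.1f} days")

With those assumptions you're already looking at roughly a week end to end, which is why the outage dragged on the way it did.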

When a SAN update borks... as a tech, you almost have to go into total disaster-recovery mode. And restoring many TB of data off of tape backups... it takes time.
 