I believe I have covered this ground before, but apparently it needs reiterating: cloud storage without backup is not a viable solution. Replication only works well if you never delete or logically erase data from the primary copy. Once that happens, the data is soon lost in all replica locations as well.
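The point is easy to demonstrate. Here is a minimal Python sketch, using a hypothetical in-memory store (not any real product's API), of why a replica set is not a backup: a delete on the primary propagates to every replica, while a point-in-time copy kept outside the replication chain survives.

```python
# Hypothetical replicated key-value store -- an illustration, not any
# vendor's actual implementation.
import copy

class ReplicatedStore:
    """A primary store that synchronously mirrors writes and deletes
    to its replicas."""
    def __init__(self, replica_count=2):
        self.primary = {}
        self.replicas = [{} for _ in range(replica_count)]

    def put(self, key, value):
        self.primary[key] = value
        for r in self.replicas:      # replication copies the write...
            r[key] = value

    def delete(self, key):
        self.primary.pop(key, None)
        for r in self.replicas:      # ...and just as faithfully copies the delete
            r.pop(key, None)

    def backup(self):
        # An offline backup is a point-in-time copy kept *outside* the
        # replication chain, so later deletes never reach it.
        return copy.deepcopy(self.primary)

store = ReplicatedStore()
store.put("contacts", ["alice", "bob"])
snapshot = store.backup()            # taken before the mistake
store.delete("contacts")             # fat-finger or software fault

# The delete is now everywhere in the replica set...
assert all("contacts" not in r for r in store.replicas)
# ...but the offline copy still has the data and can be restored.
store.put("contacts", snapshot["contacts"])
```

The restore at the end also shows the cost model: you get back whatever was in the last snapshot, and anything written after it is gone, which is exactly the trade discussed below.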
I am not sure what happened with the Sidekick data, whether a fat-finger mistake deleted it or some other problem occurred, but from what I can see looking in from the outside, there were no backups, no offline copies, no fallback copies of the data that weren't part of the central node and its network of replicas. When that's the case, disaster is sure to ensue.
At the moment the blame game is going around to find out who is responsible, and I hear that some of the data may be being restored. But that's not the point. Having no backups outside the original storage infrastructure/environment is the problem. Replicas are never enough; backups have to live elsewhere to count as backups.
If they had had backups, the outage would have lasted only as long as it took to restore the data, and any customer data written since the last backup would have been lost, but that would have been it. It wouldn't be the first time backups had to be used, and it won't be the last. Without backups at all, though, you have a massive customer data loss that cannot be recovered from.
This is unacceptable. It gives IT a bad name, puts a dark cloud over cloud computing and storage, and makes the IT staff of Sidekick/Danger look bad or worse.
All you cloud providers need to take heed. You can do better. Backup software and services can be used to back up this data, and we will all be better served because of it.
Some are saying that the backups just weren't accessible, but until the whole story comes out I will withhold judgment. I will simply be glad if another potential data loss turns out to have been prevented.