Is backup dead?

In a few recent conversations with clients and prospective clients, I’ve been asked whether we really need to worry about backup any more.


At first, I was a bit taken aback by the question, but it soon became clear that there’s a growing perception that, with the increasing adoption of SAN and virtualisation technologies, the tools provided by the vendors of these solutions, such as snapshots, mirrors and clones, provide adequate data protection and replace standard backup.


While there is undoubtedly a role for storage-level data protection and for snapshot capabilities at the virtual machine level with VMware, Hyper-V and the like, I can’t reconcile this with abandoning backup as a critical operation for businesses.


Yes, it’s true that a SAN vendor can provide the capability to snapshot volumes instantaneously and hold multiple checkpoints online, and that you can take images of entire virtual machines at the hypervisor level on the fly. You can also use OS-level features such as Microsoft’s VSS to provide faster recovery.


But where do all these copies reside? How many can you hold? How long can you keep them for? What level of recovery can you achieve?


Snapshots are very convenient for quick recovery of unstructured data, but not quite so simple for application data. They are typically created by the SAN vendor’s proprietary tools and need to reside on the same platform as the source volume, i.e. in the SAN. This is excellent for intra-day recoverability: you can improve the SLA to your users by offering recovery points hours or minutes old rather than last night’s backup. It does nothing, however, for the file that was deleted last week, last month or last year. With the increasing regulation of business and the specific compliance requirements of certain industries, this is an important point.


A failure in the SAN or the site could make data that is protected only via snapshots unavailable for recovery. This usually prompts a conversation about SAN-to-SAN replication for offsite data recovery and availability. Again: proprietary, complex and expensive. A sledgehammer to crack a nut?


And what happens if you decide to change storage vendors when the original equipment goes end of life? Over-reliance on storage-level technologies creates vendor lock-in and drives proprietary storage purchases. Even if it’s all Tier 2 storage, it’s still a significant expense.


Virtual machine snapshots have similar benefits and drawbacks to SAN snapshots. They consume expensive storage and represent a single point in time for the entire machine. While you can, with VMware for example, create file-level images that allow some degree of granular recovery, the restore process involves significant manual intervention and the snapshots are only crash-consistent, which is not brilliant for recovering applications.


In my view, these tools can be gainfully employed as building blocks of an enterprise data protection and continuity strategy, but they don’t in themselves negate the need for traditional backup. There will continue to be a requirement to protect data by maintaining copies on alternative media and in multiple locations.


I very much agree that backup to tape as a primary data protection strategy is now unlikely to be sufficient for most organisations, but the paradigm of traditional backup, using low-cost secondary disk-based storage as the primary target, remains valid. Tape can still be used for secondary backup copies, offsite and in the vault.
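As a minimal sketch of that disk-first pattern (all paths and filenames here are hypothetical, and a plain tar archive stands in for a real backup product), the flow looks something like:

```shell
set -eu

# Hypothetical layout: in production these would be a live data volume,
# a low-cost disk backup target, and a staging area for the tape/offsite copy.
WORK=$(mktemp -d)
SRC="$WORK/data"
PRIMARY="$WORK/backup-disk"
SECONDARY="$WORK/offsite"
mkdir -p "$SRC" "$PRIMARY" "$SECONDARY"
echo "payroll records" > "$SRC/payroll.txt"   # demo data

# Primary backup: archive to low-cost secondary disk, stamped by date.
STAMP=$(date +%Y%m%d)
tar -czf "$PRIMARY/backup-$STAMP.tar.gz" -C "$SRC" .

# Secondary copy for offsite/vault retention (in practice this would be
# written to tape or replicated to another site).
cp "$PRIMARY/backup-$STAMP.tar.gz" "$SECONDARY/"

echo "primary:   $PRIMARY/backup-$STAMP.tar.gz"
echo "secondary: $SECONDARY/backup-$STAMP.tar.gz"
```

The point of the second copy is independence: losing the primary disk target, or the site it sits in, does not take the recovery copy with it.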


Used in conjunction, and integrated, with the proprietary snapshot and imaging capabilities elsewhere in the stack, backup forms the foundation of a comprehensive and efficient data protection strategy.


Reports of the demise of backup are premature. Long live backup!





Stuart Matthews, Web Dev, fatBuzz