Data Protection Myth or Reality: Foolproof Risk Mitigation Strategies

Posted 06/19/2018 by Kalyani Kallakuri

Have you ever thought about trying to implement a foolproof backup and recovery risk mitigation strategy to protect your corporate data and maintain business continuity? Is that even possible? In a nutshell, foolproof means that it’s so easy it’s basically impossible to screw up. With that standard in mind, I’m not sure that “foolproof” is even conceivable, let alone implementable.

FOOLPROOF: "So simple, plain, or reliable as to leave no opportunity for error, misuse, or failure." -- Merriam-Webster

Robust data protection

So foolproof is an extremely high bar, but there are things you can do to get remarkably close. There are a few key considerations to keep in mind when building a robust data protection strategy; the following elements are a great place to start:

  • The first requirement is being able to efficiently back up data within a defined backup window. The initial backup usually takes longer, but subsequent incremental backups should have virtually no performance impact on end users or backup administrators (a minimal sketch of this change-detection idea follows this list).
  • The need for backup speed is a given, but the ability to recover in line with business objectives is just as critical. In today’s threat landscape, you must be able to recover quickly and efficiently after a ransomware attack or other unforeseen event, so that downtime and data loss are kept to a minimum.
  • Reliability is another key consideration, which translates to having a robust, security-focused and scalable data protection architecture.
  • Lastly, total cost of ownership (TCO) of the data protection architecture is important. You need a cost-effective, tiered storage strategy that accounts for performance and the value of the data, placing the most critical information on the highest performing infrastructure.
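To make the incremental-backup point in the first bullet concrete, here is a minimal sketch, in Python, of change detection based on file modification times. The directory paths, the last_backup_timestamp.txt marker file and the function names are hypothetical, and real backup products typically track changes at the block or journal level rather than walking the file system like this.

```python
import shutil
import time
from pathlib import Path

STATE_FILE = Path("last_backup_timestamp.txt")  # hypothetical marker file
SOURCE_DIR = Path("/data/app")                  # hypothetical source directory
BACKUP_DIR = Path("/backups/app")               # hypothetical backup target


def last_backup_time() -> float:
    """Return the epoch time of the previous backup, or 0.0 if none exists."""
    if STATE_FILE.exists():
        return float(STATE_FILE.read_text().strip())
    return 0.0


def incremental_backup() -> int:
    """Copy only files modified since the last backup; return the file count."""
    cutoff = last_backup_time()
    copied = 0
    for src in SOURCE_DIR.rglob("*"):
        if src.is_file() and src.stat().st_mtime > cutoff:
            dest = BACKUP_DIR / src.relative_to(SOURCE_DIR)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)  # copy2 preserves timestamps and metadata
            copied += 1
    STATE_FILE.write_text(str(time.time()))
    return copied


if __name__ == "__main__":
    print(f"Backed up {incremental_backup()} changed file(s)")
```

Because only files changed since the last run are copied, the steady-state backup touches a small fraction of the data, which is why incremental backups have so little impact on the backup window.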

Technical considerations

Data management strategies typically involve balancing the following constraints:

  • Your data management architecture should securely and efficiently back up your infrastructure, servers, endpoint data, and any application data that resides in the cloud.
  • Businesses should establish a Recovery Time Objective (RTO): the maximum duration within which a service, and the accompanying data, must be restored after an outage or disaster. To achieve the defined RTOs, data can be tiered based on its criticality and backed up accordingly. More critical data will be protected by high-performance storage, while less critical data can be stored on less costly infrastructure.
  • A Recovery Point Objective (RPO) should also be set: the maximum targeted period in which data might be lost from an IT service due to a major incident. Meeting the RPO can be achieved by taking snapshots of the data at regular, predefined intervals (see the sketch after this list).
  • Industry best practices recommend maintaining more than one copy of data. To achieve this, you need a copy stored on-premises for fast access and one air-gapped (disconnected or offline) copy. I’ll cover air gap in more depth in a future blog post. For best performance, you can also leverage synchronous data replication, where data is written to two storage arrays concurrently. Synchronous replication can prove costly, but some companies are able to justify the expense. Additionally, many rigorous data management strategies employ both a primary data center and an offsite disaster recovery site.
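As a rough illustration of how RTO and RPO drive these decisions, here is a small sketch, in Python, that derives a snapshot interval from an RPO and picks a storage tier from an RTO. The policy names, thresholds and tier labels are invented for illustration; real tiering choices depend on your infrastructure and the recovery speeds your environment can actually deliver.

```python
from dataclasses import dataclass


@dataclass
class ProtectionPolicy:
    name: str
    rpo_minutes: int  # maximum tolerable data loss, in minutes
    rto_minutes: int  # maximum tolerable restore time, in minutes


def snapshot_interval(policy: ProtectionPolicy) -> int:
    """Worst-case data loss equals the gap between snapshots,
    so the snapshot interval must not exceed the RPO."""
    return policy.rpo_minutes


def storage_tier(policy: ProtectionPolicy) -> str:
    """Pick an illustrative storage tier whose typical restore speed
    can satisfy the RTO; the thresholds here are assumptions."""
    if policy.rto_minutes <= 60:
        return "high-performance on-premises disk"
    if policy.rto_minutes <= 8 * 60:
        return "standard disk or object storage"
    return "tape or cold cloud storage"


policies = [
    ProtectionPolicy("ERP database", rpo_minutes=15, rto_minutes=60),
    ProtectionPolicy("file shares", rpo_minutes=24 * 60, rto_minutes=24 * 60),
]

for p in policies:
    print(f"{p.name}: snapshot every {snapshot_interval(p)} min, "
          f"restore from {storage_tier(p)}")
```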

3-2-1 data backup and recovery plan

A tried-and-true methodology to ensure reliable data backup and recovery is a 3-2-1 data backup plan. This strategy dictates that you must have multiple copies of your data: three copies in total, two stored locally (but on different storage media) and one copy offsite. A simple compliance check for this rule is sketched after the examples below.

  • To illustrate this, consider a backup copy of critical application data: the primary backup copy is stored in the data center for quick recoveries, with a second copy kept on different infrastructure to avoid a single point of failure. Further, on a regular basis you upload your data to offsite cloud storage; this is your third copy. The offsite copy can be disconnected (taken offline) if required, providing the aforementioned air gap.
  • Another age-old technique is to use tape for the secondary copy. Tape is still a cheap and viable option to minimize data loss.
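The 3-2-1 rule itself is easy to express as a sanity check. Below is a minimal sketch, in Python, that verifies a set of backup copies against the rule; the Copy structure and the example copies are hypothetical, and in practice the inventory would come from your backup catalog rather than being typed in by hand.

```python
from dataclasses import dataclass


@dataclass
class Copy:
    location: str  # e.g. "primary data center", "cloud"
    media: str     # e.g. "disk", "tape", "object storage"
    offsite: bool


def meets_3_2_1(copies: list[Copy]) -> bool:
    """Check the 3-2-1 rule: at least 3 copies, on at least 2 distinct
    media types, with at least 1 copy offsite."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )


copies = [
    Copy("primary data center", "disk", offsite=False),
    Copy("secondary array", "tape", offsite=False),
    Copy("cloud bucket", "object storage", offsite=True),
]
print("3-2-1 compliant:", meets_3_2_1(copies))
```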

Myth or reality

Is foolproof risk mitigation that prevents data loss a myth or reality? The bottom line: regardless of which risk mitigation approach you choose, a foolproof data protection strategy is a myth. Instead, the right question to ask is, “How close can we get to a foolproof risk mitigation strategy?” The answer is that you can get very close with a virtually foolproof solution. It starts with a plan for all foreseeable risks to your organization’s data - breaches, ransomware or catastrophic failure - and balances that plan against the architectural considerations of resiliency, redundancy and the cost of protecting the data.

You also need an organizational plan for adopting best practices, running health checks and assessments (data/systems/infrastructure), and holding frequent education and training programs on risk mitigation. All of these are steps in the right direction toward creating an “almost” foolproof risk mitigation plan. Of course, you’ll need to make tradeoffs based on your particular organizational needs; methods such as creating multiple copies achieve infrastructure resiliency and redundancy, but at a cost.

Learn more about how organizations address and solve common data management challenges through Commvault's backup and recovery solutions.

Kalyani Kallakuri is a Senior Product Manager for Commvault.