Getting Serious About Disaster Recovery at Commvault GO

Posted 12 October 2016 2:40 PM by Bobby Mulligan



I had the unique experience of presenting about disaster recovery with a real natural disaster heading our way. At Commvault GO, Commvault’s inaugural customer conference, my session delved into Commvault’s capabilities to give enterprise customers an updated disaster recovery strategy leveraging the public cloud, with a focus on Microsoft Azure and Amazon AWS. During the conference, Hurricane Matthew was forecast to hit Florida, which made the session a bit more intense. It put everyone on edge, including myself.

I read an interesting stat: by the year 2020, 90 percent of DR operations will run in the public cloud, and in the same time frame, 50 percent of applications will run in the public cloud. That’s interesting because many businesses today have an old and neglected disaster recovery plan. I lived that life years ago: an expensive solution with an unacceptable RPO/RTO, as painful for the IT team as it was for the business it supported. It meant no sleep for multiple days, a pile of documentation that quickly became outdated and irrelevant, and a process that effectively provided a ‘check mark’ alongside hopes and promises that it would be better next year. Thankfully, those days are coming to an end, and Commvault is making it happen.

In my session I walked through, and demonstrated, how by leveraging Commvault with the public cloud, enterprise IT (becoming Hybrid IT) can have a true DR solution at a fraction of the cost. At the same time, it will exceed your current security standards, maintaining control regardless of workload location, all while meeting and beating your SLOs/SLAs to the business.

I also talked through the Commvault vision and what has been in our DNA from the beginning: a hardware-agnostic approach that carried into our hypervisor capabilities and continues in our public cloud approach, honoring decisions of the past and enabling decisions in the future, while letting businesses maintain leverage with their vendors. This is a differentiator for enterprise decision makers.

Continuing the discussion, I walked through the many considerations in developing a disaster recovery strategy. One major consideration is analyzing the business needs. That analysis provides insight into application tiers, groupings and dependencies, and establishes a budget. With this information, (Hybrid) IT can create a plan that aligns to RPO/RTO service level objectives and agreements, which in turn ties into defining a supporting architecture and strategy.
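To make the analysis step concrete, here is a minimal sketch of what tiering applications by RPO/RTO and ordering recovery by dependency can look like. The tier names, time targets, application names and dependencies are all illustrative assumptions, not from any real Commvault configuration or API:

```python
# Hypothetical sketch: group applications into DR tiers with RPO/RTO
# targets, then compute a recovery order that respects dependencies.
# All names and numbers below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    rpo_minutes: int   # maximum acceptable data loss
    rto_minutes: int   # maximum acceptable downtime

TIERS = {
    "mission-critical": Tier("mission-critical", rpo_minutes=15, rto_minutes=60),
    "business-critical": Tier("business-critical", rpo_minutes=240, rto_minutes=480),
    "best-effort": Tier("best-effort", rpo_minutes=1440, rto_minutes=2880),
}

# Applications grouped by tier, with dependencies that must recover first.
APPS = [
    {"name": "orders-db", "tier": "mission-critical", "depends_on": []},
    {"name": "web-frontend", "tier": "mission-critical", "depends_on": ["orders-db"]},
    {"name": "reporting", "tier": "best-effort", "depends_on": ["orders-db"]},
]

def recovery_order(apps):
    """Return app names sorted so dependencies recover before dependents,
    with tighter-RTO tiers considered first."""
    done, order = set(), []
    pending = sorted(apps, key=lambda a: TIERS[a["tier"]].rto_minutes)
    while pending:
        for app in pending:
            if all(dep in done for dep in app["depends_on"]):
                order.append(app["name"])
                done.add(app["name"])
                pending.remove(app)
                break
        else:
            raise ValueError("circular dependency between applications")
    return order

print(recovery_order(APPS))  # orders-db first, then web-frontend, then reporting
```

The output of an exercise like this — tiers, targets and an ordered runbook — is exactly the input a supporting architecture and budget discussion needs.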

The good news is that Commvault natively integrates with 40-plus cloud storage providers. That is budget-friendly and preserves freedom of choice, and leveraging the cloud with Commvault extends that value into more use cases. It also extends data centers into any ‘active-active’ or ‘active-passive’ topology.

Inherent platform capabilities like Live Sync, the workflow engine and DASH Copy come together to enable policy-based orchestration and integration, creating ‘to/from/in the cloud’ capabilities. Now add virtual machine transformation, so that virtual machines move between and across hypervisors as well as cloud providers. By the way, this starts to enable many more technology use cases beyond DR. Think of workload portability and migration. Extend that thinking to DevOps and ... I better stop now as my word count is over, but you get the point. :)

After my session, I had insightful discussions with some of the attendees. Many of them were already starting to leverage the Commvault platform capabilities I discussed. That gave them confidence in the strategy they were messaging back to their businesses, and freed them to focus on their own families while preparing for the hurricane.
