
Cost savings drove much of the initial cloud adoption and data center virtualization efforts, and for the majority of businesses they remain a significant driver. But as IT organizations build out private and hybrid clouds, agility has become a primary motivating factor.


The ability to respond to business needs in a flexible, efficient manner is becoming increasingly important. In fact, according to IDG, greater business agility and flexibility is among the top three expected benefits of a private cloud.

This agility, unfortunately, comes at a price. As IT organizations combine clouds – be they private, community or public – the IT environment becomes more complex, and the risk of losing data, whether due to unplanned downtime or a catastrophic event, increases. Egress fees, time to recover and data complexity all make recovery an immense, often unforeseen challenge. IDC estimates that 50% of organizations have inadequate DR plans and might not survive a significant disaster because they are unable to recover their IT systems.

IT organizations require a means to protect their data, regardless of where it’s stored, and maintain the agility benefits they’ve worked hard to achieve. A cloud-based data management solution can deliver that protection while not only preserving the organization’s current agility but also extending it to backup and disaster recovery operations.


Data recovery has never been as challenging or as critical as it is today. IT organizations are tasked with overseeing service-level agreements (SLAs) for a variety of IT services, many of which they do not directly control. Downtime is an all-too-common occurrence and can strike for any number of unforeseen reasons, all of which impact the bottom line. The average cost of downtime, according to IDC, is about $100,000 per hour. Cumulatively, IDC research indicates that most organizations experience between 10 and 20 hours of unplanned downtime per year, even without a disaster.

The traditional approach to disaster recovery involves maintaining a secondary data center with the same systems and applications that run in the primary data center. Prior to the advent of data center virtualization, this was an expensive and complex endeavor that was viable for only the largest enterprises. Today, however, maintaining a secondary data center is nearly impossible given the dynamic nature of the IT environment. Even if it were a reasonable approach, it still would not address the need to recover data in third-party clouds.

According to IDC, more than 80% of organizations expect to use some sort of cloud service by 2018. As confidence in public cloud services grows, more mainstream applications are either moved to the cloud or replaced with Software as a Service (SaaS) solutions. While getting data into the cloud is relatively easy, getting data out is a different story. Enterprises simply can’t afford the time, processing and cost associated with recovering everything from the cloud to on-premises, nor does it make sense to do so as cloud adoption becomes mainstream.