Unboxing The New Commvault

Disclaimer: Paid Content

By Josh Fidel

Sept. 4, 2019: Commvault acquires Hedvig. My first thought: What? Why? Commvault is a data company; it handles backup, disaster recovery and data archiving. Hedvig is software-defined storage. Why would a backup software company purchase a software-defined storage company? Data is data, infrastructure is infrastructure (software-defined or not), and I had trouble discerning what Commvault's strategy was. In a world of traditional IT silos, it didn't make sense, in my opinion.

Oct. 14, 2019: Commvault GO 2019, Denver. I’m at Tech Field Day (@TechFieldDay) as a delegate, and as Don Foster (@DFoster_Jr) and Patrick McGrath present to our panel on the new direction Commvault’s taking, things start to come together for me. Commvault is playing a bit of catch up, but it’s also charting a new course, one designed to move it into the future. My curiosity is piqued.

For the rest of this article, I want to unbox the new Commvault, explore what it’s planning to deliver and give my thoughts on a couple of these new offerings.

First, some background information on two new Commvault products that really intrigue me:

Hedvig: This is a software-defined storage solution. It allows you to use commodity x86 servers containing internal storage devices, abstract those storage resources and turn them into a clustered datastore solution, in either a hyperconverged form (compute and storage resources intertwined) or a hyperscale one (where storage scales separately from compute resources).

Pros: Software-defined, so storage policies relating to availability, redundancy and performance are granular and easily tweaked. It's API-driven, so it can be used in infrastructure-as-code (IaC) and integrated with ITSM and/or CI/CD pipelines; a quick sketch of what that could look like follows the cons below.

Cons: It uses a control VM or container to make internal physical storage available, which introduces slight management complexity, compute overhead and latency. For the management types reading this, you can also check out a Forrester report on the financial benefits of Hedvig as a solution.
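To make the "API-driven, usable in IaC" point concrete, here's a minimal sketch of what provisioning a virtual disk with a specific availability policy could look like from a pipeline step. The endpoint, payload fields and credentials are my own illustrative assumptions, not Hedvig's documented API; treat it as the shape of the workflow rather than the real calls.

    # Illustrative sketch only: the URL and payload fields below are assumptions,
    # not Hedvig's documented API. The point is that storage policy (replication
    # factor, dedup, compression) becomes a versionable artifact a CI/CD pipeline
    # can apply, rather than a ticket to the storage team.
    import requests

    STORAGE_API = "https://storage-cluster.example.com/api/v1"  # hypothetical endpoint
    TOKEN = "REPLACE_WITH_SERVICE_ACCOUNT_TOKEN"

    disk_spec = {
        "name": "app-tier-vdisk-01",
        "sizeGB": 2048,
        "replicationFactor": 3,   # availability policy, tuned per workload
        "deduplication": False,   # favor latency over capacity for a primary tier
        "compression": False,
    }

    resp = requests.post(
        f"{STORAGE_API}/virtualdisks",
        json=disk_spec,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    print("Provisioned:", resp.json())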

Commvault Activate™: Activate is a data policy and governance engine that also adds analytics, workflows and pre-built solution accelerators (meaning they've built some custom app engines inside the product that you might find useful). This is a data management platform, which is quite revolutionary for Commvault (and the market in general). It's a separate product that can stand alone or integrate with Commvault Complete Backup & Recovery.

Why is it a compelling product? It’s a deep and wide answer, but I’ll try to be brief. The primary use case I see: with the rise of General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA) and new data legislation coming around the world, organizations have to know what data they have, where it’s located, who can access it (and be able to audit that access) and what the data itself contains.

Throw in eDiscovery and data risk management, and that's a lot of irons in the fire for a business, requiring a serious review of data archiving policies. Commvault Activate can give an org the necessary insight and provide the policy automation to help corral that data.

The second use case is one I feel has been neglected for too long. All the data that companies used to classify as "simply archival backup" for risk mitigation or regulatory compliance is a huge, untapped resource. Companies don't realize they're sleeping on a giant data repository full of historical information that, if properly utilized, can bring massive benefits to their operations, sales and customers.

My suggestion to organizations is to stop sleeping on your archived data. It's not a data lake, it's a data loch: deep, dark, cold, and holding some incredibly surprising things if you dive down into it.

So why do I find this new Commvault intriguing?

Let me walk you through my thought process: An organization needs a better storage solution, one that's easy to scale as data grows, allows replication of data between on-premises infrastructure, co-lo and cloud, allows for quick and easy tuning of data/VM/application protection and performance, and removes the burden of inserting vintage three-tier storage arrays into the equation.

Imagine Hedvig as your storage layer. Scaling up or out is a breeze; availability, performance and replication are changed with a few clicks or API calls, and it’s easy to manage. Hedvig checks all the right boxes.

Take it one step further. What if you were able to utilize Hedvig as both your primary storage layer AND your backup storage layer? How easy does that infrastructure become to manage? The storage infrastructure stays the same on both sides, and only the policies applied to the objects on that storage change – so simple, I love it.
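Here's a rough sketch of that idea, again with hypothetical policy names rather than an actual Hedvig or Commvault schema: a primary tier and a backup tier living on the same software-defined layer, differing only in the policy applied to them.

    # Hypothetical policy definitions: the attribute names are illustrative, not a
    # real Hedvig/Commvault schema. The takeaway is that only the policy differs
    # between the primary and backup tiers; the hardware, management plane and API
    # stay the same.
    PRIMARY_TIER = {
        "replicationFactor": 3,
        "deduplication": False,      # favor latency for live VMs and apps
        "compression": False,
        "residence": ["on-prem", "colo"],
    }

    BACKUP_TIER = {
        "replicationFactor": 2,
        "deduplication": True,       # favor capacity for long-lived copies
        "compression": True,
        "residence": ["colo", "cloud"],
    }

    def apply_policy(vdisk_name: str, policy: dict) -> None:
        """Stand-in for an API call; in practice this would update the virtual disk."""
        print(f"apply {policy} to {vdisk_name}")

    apply_policy("app-tier-vdisk-01", PRIMARY_TIER)
    apply_policy("backup-vdisk-01", BACKUP_TIER)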

Now, I'd like to see forward movement from Commvault and Hedvig on this. As of today, I'm not positive Hedvig can meet the performance metrics I personally want to see in a primary storage layer, and I'm not a fan of storage controller VMs. But the idea ignites some passion in me as a long-time storage admin.

Let's add Activate into the equation. I utilize Hedvig as my storage layer to move data around, and Activate becomes an eDiscovery tool, a governance engine for access, and a data policy engine to clean my archival data for analytics. Since my data's clean(er) and easy to move, I can take advantage of my cheapest available compute resources (regardless of where they're located) to run analytics against my "archived" data, and suddenly the ROI on Commvault products goes more vertical. I like it. It's a bold vision, and one I look forward to seeing more of from this company.

To learn more, listen to Foster and me discussing Hedvig as a software-defined storage solution in the webinar "Two Sides of the Same Coin: A New Way to Think About Data and Infrastructure."

Cheers.