A Primer On Commvault Distributed Storage Persistent Volumes For Containers – Part I (CSI And Beyond)
By Srividhya Kavaipatti Anantharamakrishnan and Abhijith Shenoy
In a previous blog post, we officially introduced Commvault Distributed Storage’s (formerly known as Hedvig) CSI Driver and made a case for accelerating your journey into the container ecosystem. The CSI Driver enables containerized applications to maintain state by dynamically provisioning and consuming Hedvig virtual disks as persistent volumes. But the job of the CSI Driver doesn’t end there!
In this post, we will focus on the capabilities of the CSI Driver beyond dynamic provisioning. We will showcase how Commvault Distributed Storage simplifies existing stateful container workflows with complete storage lifecycle management while operating within the confines of the container orchestrator of your choice.
Policy-driven data placement
As organizations migrate stateful applications to container ecosystems, it is necessary to understand how to effectively manage data owned by different groups within the organization while adhering to security and compliance policies. Each group might have its preferred choice of container ecosystem as well as a preferred location (either on-premises or in the cloud, or both) for persistent application data.
The self-service, API-driven and programmable infrastructure of Kubernetes (“K8S”) enables application developers to customize their applications through policies. Commvault Distributed Storage, with its invisible multi-cloud infrastructure, allows application developers to declaratively specify where they want their persistent application data to reside. By providing data placement as a policy, different groups within an organization can continue to use their existing workflows, thereby accelerating the onboarding of stateful applications.
Let’s discuss this with an example. The following diagram shows a Commvault Distributed Storage cluster that spans three sites – on-premises, AWS, and GCP. In addition, there are two Kubernetes clusters – a Staging cluster running on-premises and a Production cluster running in GCP.
Both the Staging and Production Kubernetes clusters can use the same Commvault Distributed Storage cluster simultaneously for persistent application data by defining the following storage classes:
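A sketch of what these two storage classes could look like is shown below; the provisioner name and the replication and placement parameter keys are illustrative assumptions, so check the CSI Driver documentation for the exact names:

```yaml
# Illustrative only: the provisioner name and parameter keys are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-on-premises
provisioner: csi.hedvig.example      # hypothetical driver name
parameters:
  replicationFactor: "2"             # two copies
  dataCenters: "on-premises"         # keep both copies on-premises
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-gcp
provisioner: csi.hedvig.example
parameters:
  replicationFactor: "3"             # three copies
  dataCenters: "gcp"                 # keep all copies in GCP
```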
Any persistent volume created using the storage class sc-on-premises in the Staging K8S cluster will have two copies, both residing in the on-premises site and not accessible to the Production K8S cluster. Conversely, any persistent volume created using the storage class sc-gcp in the Production K8S cluster will have three copies, all residing in the GCP site and not accessible to the Staging K8S cluster.
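Consuming such a storage class is standard Kubernetes; for example, an application in the Staging cluster could request on-premises storage with an ordinary claim (the claim name and size here are examples):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data            # example claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: sc-on-premises
  resources:
    requests:
      storage: 100Gi        # example size
```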
If you want the application data to be replicated to all three sites, you can do so by simply creating a persistent volume using the following storage class:
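A sketch of such a storage class, again with a hypothetical provisioner name and placement parameter key:

```yaml
# Illustrative only: the provisioner name and parameter keys are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-all-sites
provisioner: csi.hedvig.example      # hypothetical driver name
parameters:
  replicationFactor: "3"
  dataCenters: "on-premises,aws,gcp" # one copy per site
```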
Multi-tenant storage management
We’ve showcased how a single Commvault Distributed Storage cluster can cater to the demands of multiple groups within an organization, each using its own container ecosystem. But what about the scenario where multiple groups share the same Kubernetes cluster? Is it possible to ensure that each group has dedicated volume space that it can manage independently? It absolutely is!
With a multi-tenant architecture, volumes allocated to different groups are completely segregated and a limit can be set on the aggregate size of volumes created.
Let’s discuss this with another example. The screenshot displays the tenant configuration on the Commvault Distributed Storage cluster.
Tenants Production and Stg have sizes of 10TB and 1TB, respectively. Storage classes for these tenants can be created as follows:
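For illustration (the provisioner name and the tenant parameter key are assumptions, not the driver's documented values), the two tenant-scoped storage classes might look like:

```yaml
# Illustrative only: the provisioner name and parameter keys are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-production
provisioner: csi.hedvig.example   # hypothetical driver name
parameters:
  tenant: "Production"            # volumes count against the 10TB tenant limit
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-stg
provisioner: csi.hedvig.example
parameters:
  tenant: "Stg"                   # volumes count against the 1TB tenant limit
```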
These storage classes ensure that volumes created using sc-production and sc-stg never exceed the size limit set on the corresponding tenant.
Support for CSI sub-features
Any discussion about storage lifecycle management with the CSI Driver isn’t complete without talking about the support for CSI sub-features. With the latest release of the Commvault Distributed Storage CSI Driver, the following features are supported:
- Dynamic Volume Expansion – the Commvault Distributed Storage CSI Driver enables users to expand persistent volumes online, including dynamic expansion of the file system for volumes backed by the Commvault Distributed Storage block back end.
- Raw Block Volumes – the Commvault Distributed Storage CSI Driver enables users to dynamically provision persistent volumes that appear as raw block devices inside containers.
- Snapshots and Clones – the Commvault Distributed Storage CSI Driver enables users to create on-demand as well as scheduled snapshots of persistent volumes, with support for the v1beta1 snapshot APIs.
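As a brief illustration of two of these features, the manifests below use the standard Kubernetes APIs; the storage class, snapshot class, and claim names are hypothetical. A raw block volume is requested simply by setting volumeMode on the claim:

```yaml
# Raw block claim: volumeMode Block exposes the volume as a device inside the pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  storageClassName: sc-on-premises   # hypothetical storage class name
  resources:
    requests:
      storage: 50Gi
---
# On-demand snapshot of an existing claim using the v1beta1 snapshot API.
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: app-data-snapshot
spec:
  volumeSnapshotClassName: hedvig-snapshot-class  # hypothetical class name
  source:
    persistentVolumeClaimName: app-data           # hypothetical claim name
```

A clone can then be provisioned by creating a new persistent volume claim whose dataSource references the snapshot.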
Kubernetes enables users to build cloud-independent applications. To achieve true cloud independence, it is necessary to have a cloud-agnostic storage tier built from the ground up to increase availability not only within a single site but also across different physical locations, including the cloud. Commvault Distributed Storage, with its invisible multi-cloud infrastructure coupled with the capabilities showcased in this blog, caters to a wide variety of enterprise use cases.
Stay tuned for our next blog on an in-depth overview of CSI snapshots and clones.