Cloud native applications and containerization
By Greg Bennett
How enterprises can benefit from containers
Unless you’ve been living under a rock (which may be safe during a global pandemic), you’ve heard a lot about containers in recent months. You may even have a DevOps team in your organization deploying containerized applications. In fact, Gartner predicts that by 2025, 85% of global enterprises will be running containerized applications in production.1 There is definitely a lot of buzz about containers and digital transformation, but what does it all mean and why all the fuss?
In this blog, we’ll explore:
- What are containers
- Benefits of containerization
- What is persistent storage
- The challenges containers pose
- Kubernetes and CSI
- Who cares about containers and why
- What you should consider as you look to implement an effective container strategy to assist in your application modernization efforts
What is a container
Before we delve into the nitty-gritty details of containers, let’s review how applications and infrastructure have evolved over the past decade-plus. Back in the day, you deployed monolithic applications on dedicated servers, each with its own operating system (OS). This was very inefficient from a hardware utilization standpoint and increased maintenance costs.
Enter virtualization, which abstracted the hardware from the OS, allowing you to run many OSes on a single server. This was much more efficient and enabled hardware consolidation, but applications sharing a guest OS could still conflict over dependencies. This brings us to containers.
At the highest level, containers decouple, or abstract, the OS and underlying hardware infrastructure from the application, making applications faster to deploy across environments.
According to the Kubernetes documentation (more on Kubernetes later), “a container image is a ready-to-run software package containing everything needed to run an application: the code and any software dependencies, applications and system libraries, and default values for any essential settings.” In short, containers reduce the application footprint, increase predictability, and simplify application development.

Benefits of containerization
The great philosopher Ferris Bueller once said “Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it.” While you’re not likely to miss the transition to containers, it’s happening much faster than the transition to VMs because containers offer several advantages over traditional VMs:
- Bundled dependencies. With containers, all the dependencies for an application are bundled together. For example, if you need a specific version of Windows or Linux or Java, those unique dependencies are packaged in the container, making it more predictable. This means a developer’s app will work in production like it did on the laptop on which it was created, without having to test all the different scenarios across the environment.
- Stateless. Containers are designed to perform a specific task and then go away (i.e., they do not persist after executing their task). This makes them very efficient for developers to deploy. A side benefit is that they can be immutable, meaning the container image can’t be changed once it’s built.
- Isolation. Each application component lives within its own container. For example, you could have a database application running in Container A and a web application running in Container B on the same server, but neither has visibility into the other nor shares any dependencies. This minimizes the amount of software running in any given container, reducing the attack surface for hackers.
- Patching. Containers simplify patching because there isn’t any. Rather than try to update the files and folders inside a container to support a new version of an application, you simply delete the old container and create a new one with an updated image.
- Portability. Unlike with VMs, you can move containers to different cloud services (e.g., from Amazon EKS to Azure AKS or Google GKE) easily and have them spun up almost instantly since all the dependencies are included. VMs, on the other hand, contain an operating system, application software, and your data, making them harder and slower to move due to size. Additionally, on-premises hypervisors and the cloud don’t use a common specification, requiring VMs to be transformed to make them portable. With containers, the format is universal, making them easy to move.
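To make the bundled-dependencies point concrete, here is a minimal, hypothetical Dockerfile. Everything in it — the app name, base image, and versions — is illustrative; the point is that the runtime and libraries travel inside the image rather than being installed on each host:

```dockerfile
# Pin the exact runtime version the app was tested against
FROM python:3.11-slim

WORKDIR /app

# Dependencies are installed into the image, not onto the host
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The application code ships inside the same image
COPY app.py .

CMD ["python", "app.py"]
```

Because the resulting image contains the code and all of its dependencies, it runs the same way on a laptop, an on-premises cluster, or any cloud container service.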

VMs vs. Containers
| VMs | Containers |
|---|---|
| Abstracts HW from the OS | Abstracts the OS from the app |
| Must be patched and secured; includes full OS and associated hardware and software licensing | Can spin up/configure containers in seconds; typically fully patched on startup |
| Slow to provision/start; dependency conflicts | Predictable (includes dependencies); saves time |
Components of a container
Containers include several components.
- Container Image: all the files and binaries that applications need to run, including the dependencies. The image is stored in a public image registry (e.g., hub.docker.com) or internally on a private image registry.
- Persistent storage: storage volumes attached to a container that remain after the container goes away (see more info below).
- Configuration: includes manifest information on how to schedule a container or containers on a Kubernetes cluster. For example, you can tell Kubernetes to maintain three running copies of your application for availability and performance, and Kubernetes will keep that count running.
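The "three running copies" example above can be sketched as a Kubernetes Deployment manifest; the application name and image below are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                # hypothetical application name
spec:
  replicas: 3                  # Kubernetes keeps three copies running at all times
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: registry.example.com/web-app:1.0   # illustrative image reference
```

If a pod crashes or a node fails, Kubernetes notices the replica count has dropped below three and schedules a replacement automatically.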
What is persistent storage

Containers were designed to be ephemeral, or stateless, meaning the data in a container is not stored after the container is shut down, deleted, or stops working. Stateless containers allow applications to be quickly scaled for a specific task, which enabled DevOps teams to build web-scale applications that could adapt at the speed of cloud growth (e.g., Pokemon GO). Once DevOps engineers had the ability to create their own containers, the migration of stateful applications into containers began.
For stateful applications, you need persistent storage. For example, if a container is running MySQL, you don’t want to store your database in the running container, as it will be deleted when the container stops. You want to connect separate storage that lives (i.e., persists) after the container goes away. This is what makes containers so expendable (i.e., you can replace the image and re-attach the storage volume without losing your data).
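In Kubernetes terms, the MySQL example above looks roughly like this: a PersistentVolumeClaim requests storage that outlives the container, and the pod mounts it at MySQL's data directory. Names, sizes, and the secret reference are all illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi            # the volume persists even if the pod is deleted
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:8.0
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysql-secret   # hypothetical Secret holding the root password
          key: password
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql   # MySQL's data directory lives on the claim
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mysql-data
```

Delete and recreate the pod (or swap in a newer MySQL image) and the database files survive, because they live on the claim rather than in the container's writable layer.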
Container challenges
Despite their advantages of being lightweight, faster, and more programmable than VMs, data and infrastructure challenges remain around:
- Provisioning storage for containers
- Protecting containerized workloads
- Migrating containers with their persistent data and configs across hybrid multi-cloud locations
Stateful applications require proper storage and data management throughout the entire application lifecycle. Organizations need a way to easily migrate, replicate, and protect data in their container ecosystem so they can recover container-based apps across their environment. And while containers are multi-cloud by nature, data is not, making it challenging for IT and DevOps teams to manage the infrastructure and data together in a cohesive manner.
The question is, how do you enable simple self-service storage provisioning of persistent storage for your DevOps teams while still protecting all data types (containers and non-containers) as you migrate to a container environment?
Additionally, while creating and launching an individual container is easy, managing containers can become challenging at scale. Many organizations have hundreds, if not thousands of containers in their environment, so how do you manage containers at scale?
Kubernetes and CSI

That’s where Kubernetes (K8s) comes in. It provides the orchestration layer that controls the entire container infrastructure – deployment, monitoring, storage management, logging, patching, rollbacks, load balancing, etc. K8s is the de facto standard for container orchestration, and a CSI driver gives it persistent storage capabilities for containerized workloads across hybrid multi-cloud environments.
The Container Storage Interface (CSI) standardizes persistent volume workflows across different K8s platforms and storage technologies. A CSI driver is a plugin for K8s that allows storage arrays to be consumed by containerized applications as persistent storage. Using CSI, K8s developers can dynamically provision storage, expand capacity, schedule snapshots, and recover persistent volumes using array-specific capabilities.
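Dynamic provisioning through CSI is typically expressed as a StorageClass. The provisioner string and parameters below are placeholders for whatever CSI driver your array vendor ships:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block
provisioner: csi.vendor.example.com   # placeholder: the array vendor's CSI driver
parameters:
  fsType: ext4                        # driver-specific; parameters vary by vendor
reclaimPolicy: Delete
allowVolumeExpansion: true            # enables the capacity expansion noted above
```

Any PersistentVolumeClaim that names this StorageClass gets a volume carved out of the backing array on demand — no ticket to the storage team required.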
K8s supports a microservices architecture where apps that perform specific tasks can be quickly and easily scaled up or down as needed. The orchestration is handled by K8s based on user-defined workflows. Modern cloud-native applications are ideal for K8s and while some legacy apps could be converted to containers, it may not be worth it for others. Bottom line, when considering containers, look for cloud-native solutions that integrate natively with K8s via CSI.
Who cares about containers and why
There are two key personas that care about containerization for different reasons. The first audience is the DevOps organization, for whom containers are old hat. The second audience is traditional IT, for whom containers are likely new.
DevOps

If you are a DevOps engineer, you are no doubt intimately familiar with containers and have been using them for a while to build containerized apps with stateful and stateless data. You want to dynamically provision persistent storage with self-service access using your existing K8s workflows. While this sounds simple, there are challenges:
- Storage requirements differ by application. Traditional storage arrays apply storage services (e.g., deduplication, encryption) across the entire array instead of at the application level, while many cloud-native storage solutions are limited to container-only environments. Since most orgs have traditional and cloud-native solutions, this increases complexity and limits flexibility.
- You want a programmable infrastructure that integrates with your existing tools (K8s, Git) to optimize productivity.
- You need the flexibility to seamlessly deploy apps across a hybrid multi-cloud environment. It doesn’t matter where the storage is located, but you’re not willing to compromise on recovery or migration performance.
You want a storage solution that supports the following:

- Programmable infrastructure. Look for solutions that integrate with your current workflows, CI/CD systems, and GitOps practices via APIs that allow you to use kubectl and YAML manifests to define Storage Classes, provision storage, schedule snapshots, and/or migrate apps.
- Application-centric storage. Look for a solution that allows you to turn on and off features at an application level vs. a storage array level. For example, you may choose to enable deduplication for one app (e.g., files) but disable it for another (e.g., medical images).
- Flexible scalability. Cloud-native applications will continue to accelerate in adoption. Look for next generation storage building blocks that support modern distributed applications regardless of where the apps live to improve portability and performance.
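The kubectl-and-YAML workflow described above — storage operations defined declaratively and checked into Git — might look like this for an on-demand snapshot. This sketch assumes the cluster's CSI driver supports snapshots and that a VolumeSnapshotClass named `csi-snapclass` exists; the PVC name is illustrative:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-data-snap           # hypothetical snapshot name
spec:
  volumeSnapshotClassName: csi-snapclass   # assumed VolumeSnapshotClass
  source:
    persistentVolumeClaimName: mysql-data  # the claim to snapshot
```

Applying this manifest with `kubectl apply -f snapshot.yaml` asks the CSI driver to take an array-level snapshot, so protection becomes just another resource in the GitOps pipeline.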
Traditional IT
Container adoption continues to accelerate. If you’re part of Traditional IT, you’re acutely aware of this trend and realize you need to protect containers and the supporting infrastructure, but you may not be very familiar with containers. This leads to several challenges:
- As stateful applications on VMs, bare-metal servers, and in the cloud migrate to containers, you must protect this stateful data to allow applications to recover to production in the event of an unplanned outage (i.e., failure/restart).
- Given the complexity of containers, you may not have the appropriate container protection and management expertise on staff.
- Ransomware attacks specifically targeting containers and Kubernetes are emerging, creating a new attack vector to be concerned about.
- Kubernetes is the de facto standard, so you must have a protection solution that integrates natively with K8s via CSI.
- Data silos across your organization limit visibility into your data and inhibit your ability to identify new business opportunities.
When assessing container protection solutions, look for a solution that supports the following:

- Comprehensive protection. It’s important to protect, migrate, and recover stateful K8s and non-K8s applications across hybrid multi-cloud environments, including protecting source code, CI/CD systems, and image registry data. Look for solutions that protect CNCF-certified K8s distributions, K8s applications with or without persistent data, and data inside and outside your K8s cluster. Container-only protection solutions just add complexity.
- Native integration with K8s via CSI for snapshot-based protection of stateful applications.
- Built-in ransomware protection with the ability to monitor data, identify anomalies, and notify users as appropriate.
- Simplified operations. Ideally, you want a single solution for your container and non-containerized data that lets you leverage existing DevOps-defined processes to identify apps that need protection. This reduces the burden on your existing IT team.
- Storage consolidation. By consolidating storage technologies (block, file, object) and eliminating data silos, you improve visibility into your data, enabling greater insights and actions.

Conclusion
Containers promise to accelerate your application modernization efforts, but they come with their own set of challenges. When considering a container strategy, look for a comprehensive solution that makes it easy to manage all your applications (K8s and non-K8s) while providing flexibility for your container workloads.
Learn more:
- Accelerate your containers journey with Commvault Solution Brief
- Containers
- DevOps
- Software defined storage
References
1 Gartner, Best Practices for Running Containers and Kubernetes in Production, 4 August 2020, ID G00730344