The twin pillars of application transformation

By Drew Conry-Murray

I recently participated in a webinar with Commvault’s Don Foster on building modern, microservices-based applications with Kubernetes. The event covered a variety of topics, including the rise of containers and Kubernetes as the development platforms of choice, the challenges these platforms present to enterprises, and how companies can take advantage of automation and orchestration to build applications that meet the twin business requirements of scale and velocity.

While there’s a great deal of discussion about containers, microservices and Kubernetes, actual enterprise adoption is relatively low. A 2020 Red Hat survey revealed that 62% of companies run 10% or fewer of their workloads in containers.

However, that doesn’t mean microservices are a fad. The same survey notes that the number of companies with 50% or more of their workloads running in containers is expected to nearly triple in the next 24 months.

The takeaway is that the business value of new application architectures is clear: companies can build new applications quickly, add or change features based on business demands and deliver on the imperative of digital transformation. 

It’s also clear that it’s not too late for companies to begin their own transformation. For companies that are ready to embrace microservices, the two pillars of this strategy are programmable infrastructure and investment in IT staff.

The programmability pillar

Programmable infrastructure undergirds digital transformation, particularly when it comes to microservices-based applications. What does “programmable infrastructure” mean? It means having hooks into your IT stack – storage, compute, networking and so on – so that developers can, via automation, request the resources they need to support an application.

A programmable infrastructure also ensures that those resources:

  • will be correctly provisioned to meet the application’s capacity and performance requirements
  • will come with appropriate security, compliance and regulatory controls

The aforementioned hooks are typically APIs, which enable systems to make and respond to requests for services and functions. APIs are key to programmability, and most modern infrastructure supports APIs.
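To make this concrete, here is a minimal sketch of what an API-driven provisioning request might look like from the developer’s side. The endpoint shape, field names and labels are all hypothetical, not any specific vendor’s API; the point is that a request for infrastructure becomes structured data a system can act on, rather than a ticket a person has to read.

```python
import json

# Hypothetical payload a developer's pipeline might POST to an
# infrastructure API to request a storage volume. Every field name
# here is illustrative, not drawn from a real product's API.
provision_request = {
    "resource": "block-storage",
    "size_gb": 500,
    "performance_tier": "standard",
    "labels": {"app": "orders-service", "env": "staging"},
}

# Serialize the request the way an HTTP client would before sending it.
body = json.dumps(provision_request)
print(body)
```

Because the request is machine-readable, the same payload can be generated by a CI/CD pipeline, validated against policy, and logged for compliance — none of which is practical with a ticket queue.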

To be effective, programmability has to touch all of the layers of the IT stack. It makes no sense to invest in containers, microservices and Kubernetes on the developer side if those developers still have to wait hours or days for someone to manually provision a storage volume or use a CLI to connect an application to the appropriate network segment.

In other words, one key tenet of programmable infrastructure is to create an abstraction layer across the IT stack, so that each layer can interact with the other layers as necessary.

A second tenet is that behind that abstraction layer reside the domain experts – storage pros, network pros, security pros – who handle the gnarly details that a developer shouldn’t have to deal with.

For instance, with programmable infrastructure, a developer can request 500GB of storage, and the storage array will provision it. Behind that simple abstraction is a set of policies and configurations put in place by storage administrators to ensure that the array provisions the correct storage medium and that the appropriate data protection and backup policies are in place.
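The division of labor described above can be sketched in a few lines. In this illustration, the policy values and the `provision_volume` function are invented for the example: the developer supplies only a capacity, while rules set by storage administrators (tiering thresholds, backup schedules, replication counts) fill in everything else.

```python
# Policies a storage administrator might define once, behind the
# abstraction layer. All names and values here are hypothetical.
ADMIN_POLICY = {
    "ssd_threshold_gb": 1000,   # requests at or below this land on SSD
    "backup_schedule": "daily",
    "replication": 2,
}

def provision_volume(size_gb: int) -> dict:
    """Turn a bare capacity request into a fully specified volume.

    The caller (the developer) states only how much storage is needed;
    medium, backup and replication are decided by admin policy.
    """
    medium = "ssd" if size_gb <= ADMIN_POLICY["ssd_threshold_gb"] else "hdd"
    return {
        "size_gb": size_gb,
        "medium": medium,
        "backup": ADMIN_POLICY["backup_schedule"],
        "replicas": ADMIN_POLICY["replication"],
    }

# The developer's view stays simple: one request, one parameter.
volume = provision_volume(500)
print(volume)
```

The design choice this models is the one the article argues for: expertise is encoded once, as policy, rather than re-applied by hand for every request.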

Programmable infrastructure enables the integration and orchestration of all of the elements that make up an application, while also leveraging, behind the scenes, the domain expertise for the proper functioning of each element in the IT stack.

Invest in people

The second pillar of transformation is your IT staff. It’s imperative to invest in training so that IT pros can take full advantage of programmable infrastructure. It’s also essential that IT and business leaders drive the process changes required to support new application architectures.

Many organizations mistakenly tackle transformation by starting with products and tools. IT shops can get bogged down debating the pros and cons of Chef versus Puppet, Ansible versus Terraform, and so on.

Instead, organizations need to start with their employees. In many companies, automation training is ad hoc; engineers invest their own time and money in picking up skills or tools that catch their interest.

While this self-directed learning is admirable, it’s not strategic. For one, if no one else on the engineer’s team is on board, those efforts are unlikely to have any significant impact.

For another, ad-hoc instruction may not align with the organization’s priorities. For example, a network engineer learning to write Python scripts to automate tasks on the campus network is developing a useful skill. But if the organization’s strategic priority is a hybrid cloud, that engineer’s efforts could be better directed toward mastering networking features in AWS or GCP.

Companies can encourage and direct employee training. The simplest way is to pay for specific training courses or certifications. Another option is to incent employees to develop skills through raises, bonuses, increased responsibilities, advancement opportunities and promotions.

Companies can nudge employees by hosting lunch-and-learns, sponsoring meetups and publicly acknowledging and rewarding IT staff who skill up on tools and processes that align with the company’s strategic objectives.

In addition to technical training, companies are also going to have to explicitly change workflows and processes. IT teams that have spent years supporting monolithic applications are accustomed to rigid change control processes and long timelines on application updates and changes. Microservices applications iterate at a much faster pace and require new workflows, tools and monitoring.

Process change in an IT organization is much harder than skills development, in part because IT performance has historically been measured on uptime. Change can threaten uptime, so IT became risk-averse. Traditional IT shops treat changes with the same deliberate, procedural and bureaucratic processes you might find at a nuclear reactor. That’s because mistakes or misconfigurations could lead to performance slow-downs or outages, which in turn could lead to rebuke, recrimination, or job loss.

Business leaders must provide training and support for new processes and communicate that the risks associated with digital transformation are understood and accepted by the business. In other words, business leaders need to provide the cover for IT teams to learn, experiment, make mistakes, learn from those mistakes and move forward.

Digital transformation is possible, but companies must build it on the twin pillars of programmable infrastructure and investment in their IT staff, including training.

Building out stateful applications with Kubernetes

Check out the webinar for additional insights and recommendations.