Welcome back to my series on the seven principles of data readiness. On the heels of my last post on how to prescriptively manage across the data lifecycle, we’re going to talk about how to automate and recalibrate—continuously.
So to start: how many of you have commissioned a health check or brought in an optimization service to streamline your environment and improve its performance after it has changed over time?
The answer is: most of the people reading this blog.
And yes, organizations are always paying consultants to optimize their infrastructure and how their environments operate. The reason is that environments change drastically, and when they do, so do performance characteristics. Bottlenecks shift and new challenges arrive with new business requirements, so the ability to reassess and continuously optimize is always important.
With that said, in the era of machine learning and artificial intelligence, understanding what your data and infrastructure environments look like enables you to collect all kinds of information. Why can’t we feed that same level of intelligence back into your platform?
This will not only help you continuously and automatically recalibrate the way you operate based on these new AI and ML technologies, but also start answering key questions with the predictive outcomes you expect from your data protection vendor, such as:
- Are my backup windows in compliance with my SLAs?
- Will the backup window be long enough to ensure all data is protected?
- Am I recovery-ready enough to meet the SLAs I require for my critical, non-critical, dev, test, and QA data?
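The questions above boil down to simple checks against backup telemetry. Here is a minimal sketch of what such a check could look like; the job fields, data tiers, and SLA windows are all hypothetical and not taken from any particular product:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class BackupJob:
    name: str
    tier: str              # hypothetical data tier, e.g. "critical", "dev", "qa"
    duration: timedelta    # how long the last backup actually ran
    completed: bool        # did the job finish successfully?

# Hypothetical SLA: maximum allowed backup window per data tier
SLA_WINDOWS = {
    "critical": timedelta(hours=2),
    "non-critical": timedelta(hours=8),
    "dev": timedelta(hours=12),
}

def sla_violations(jobs):
    """Return jobs that either failed or overran their tier's backup window."""
    return [
        job for job in jobs
        if not job.completed
        or job.duration > SLA_WINDOWS.get(job.tier, timedelta.max)
    ]
```

A failed job or one that exceeds its tier's window shows up immediately, which is the kind of compliance answer these questions are asking for.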
So what does it take to make that change, and why should it be a manual effort? When you bring in a consultant and act on their recommendations, the moment you resume doing business, the environment has already drifted out of its optimized state. This is why we need automated technology that recalibrates the environment and its data on a daily, hourly, or even minute-by-minute basis.
The ability to assess whether you’re meeting the critical outcomes defined by the business is key to recovery readiness. Job execution and, most importantly, infrastructure health are exactly what Commvault delivers to our customers through our Intelligent Data Services.
By combining artificial intelligence and machine learning with Commvault’s understanding of trends from monitoring, we can capture and track meaningful telemetry. We’re able to highlight when our customers may be incorrectly calibrated, whether for backup, recovery, or anywhere they start to see infrastructure issues outside the norm of typical day-to-day operation. Commvault’s solutions can pinpoint ways to further optimize, change, or fine-tune different scenarios to meet SLAs, automatically and intelligently.
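To make the "out of the norm" idea concrete: one simple way to flag deviations from a historical trend is a standard-deviation test on job durations. This is only an illustrative stand-in for trend-based anomaly detection, not a description of any vendor's actual algorithm:

```python
import statistics

def out_of_norm(history, latest, threshold=3.0):
    """Flag a job duration (in minutes) that deviates more than `threshold`
    standard deviations from its historical mean -- a simple stand-in for
    trend-based anomaly detection on monitoring telemetry."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # No historical variation: any change at all is anomalous.
        return latest != mean
    return abs(latest - mean) / stdev > threshold
```

A backup that normally takes about an hour but suddenly runs for four would be flagged, prompting the kind of automatic recalibration described above.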
In today’s world, where petabytes are quickly becoming exabytes and infrastructure no longer sits within the four walls of the data center, it’s more pertinent than ever to use these kinds of technologies to automatically manage backup and recovery success and to surface further opportunities by pointing to the changes that need to be made.
That’s what the future of automating and recalibrating continuously is all about: one of the core principles behind Intelligent Data Services’ role in delivering recovery readiness, and data readiness in general. It’s how we, as IT professionals, can manage the exabytes of data coming in the near future and continue to scale in our roles.