One Hour Cloud Migrations: A Step-by-Step Guide

Posted 1 November 2016 2:54 PM by John Fox



Ten-plus years ago, when equipment reached its end of life or a data center maxed out its capacity, businesses ordered new equipment or planned an expansion (or move) of the data center. A little more than five years ago, we would have been talking 'physical to virtual' in those same data centers. Today, with the attractiveness of the cloud, more businesses are looking to lift and shift these workloads to the cloud. This could be a single application or the phased migration of an entire data center.

Before any data is even slated for a migration to the cloud, careful planning needs to be done. Applications are no longer a single workload; they may consist of several workloads at a minimum. You need to consider all of the dependencies an application relies on that won't be coming along with it. Is the network in place, can you talk from on-premises to the cloud, and is the available bandwidth sufficient? Cutting over and bringing the application live is only step one. Application owners need to be part of the process, and they also need to confirm that everything is working as it should. The cloud offers platforms such as DBaaS that don't exist on-premises and may be worth considering as an alternative (a new thing to learn). Don't assume everything will go smoothly: test the migration prior to the actual cutover date. Lastly (as if this isn't enough), the cloud is rapidly changing with new offerings, which also means prices are in flux. What is most attractive today is most likely different than it was six months ago, and the same applies to six months from now.
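To make the "can we talk from on-premises to the cloud" question concrete, here is a minimal Python sketch of a pre-cutover connectivity check. The endpoint list is purely illustrative (the Azure blob hostname is a made-up example); substitute the storage and service endpoints your application will actually depend on.

```python
# A minimal sketch of a pre-migration connectivity check; the endpoints below
# are hypothetical examples, not the ones your workloads will use.
import socket

ENDPOINTS = [
    ("s3.amazonaws.com", 443),                   # example: AWS object storage
    ("myworkload.blob.core.windows.net", 443),   # example: a made-up Azure blob endpoint
]

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection from on-premises to the endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in ENDPOINTS:
        status = "reachable" if can_reach(host, port) else "UNREACHABLE"
        print(f"{host}:{port} -> {status}")
```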

The first step in a cloud migration is getting your data to the cloud. Sure, you could push it over the wire, but that could take hours to days and most likely isn't acceptable. Commvault natively writes to more than 40 providers without the need for any hardware devices, and nothing needs to be running in the cloud to do this. That allows us to do more with the data than just keep a dormant copy of it. You also need an incremental-forever policy, so you only go through the pain of moving the data in its entirety once and, going forward, only ship the bits that are unique. Let's face it: if you have a 10Gb connection to the cloud, you are on the autobahn. Finally, you need to efficiently manage the storage in the cloud: when data ages off, it is deleted and you stop paying for it.
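To illustrate what "incremental forever" buys you, here is a conceptual Python sketch (not Commvault's actual mechanism) that hashes fixed-size blocks and ships only the blocks that changed since the last cycle; the file name and block size are placeholders chosen for the example.

```python
# A conceptual sketch of incremental-forever change detection, assuming
# fixed-size blocks and a local hash index; this illustrates the idea only.
import hashlib
from pathlib import Path

BLOCK_SIZE = 4 * 1024 * 1024  # assumed 4 MiB blocks

def changed_blocks(path: Path, index: dict) -> list[tuple[int, bytes]]:
    """Return only the blocks whose hash differs from the previous run."""
    to_ship = []
    with path.open("rb") as f:
        offset = 0
        while block := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            if index.get((str(path), offset)) != digest:
                to_ship.append((offset, block))          # ship only the unique bits
                index[(str(path), offset)] = digest
            offset += len(block)
    return to_ship

if __name__ == "__main__":
    # "example.vmdk" is a placeholder path; point this at a real file to try it.
    sample = Path("example.vmdk")
    index: dict = {}
    if sample.exists():
        blocks = changed_blocks(sample, index)   # first run ships everything
        print(f"{len(blocks)} block(s) to send this cycle")
```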

We also need to look at how much data we are planning to move. We can go over the wire, and the math there is pretty simple: the amount of data divided by the speed of my connection equals how long it will take to get it there. However, what if I need it there sooner? That is where drive shipping comes into play. Most vendors support this, but AWS has made it simple: you can have their ruggedized Snowball appliance shipped to your data center.
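The over-the-wire math looks something like the sketch below; all of the figures (dataset size, link utilization, shipping turnaround) are assumptions plugged in purely for illustration.

```python
# A back-of-the-envelope comparison of over-the-wire transfer time versus
# drive shipping; the link speed, utilization, and shipping turnaround are
# illustrative assumptions, not measured values.

def transfer_days(data_tb: float, link_gbps: float, utilization: float = 0.7) -> float:
    """Days to move data_tb terabytes over a link_gbps link at the given utilization."""
    data_bits = data_tb * 1e12 * 8                 # terabytes -> bits
    effective_bps = link_gbps * 1e9 * utilization  # usable bits per second
    return data_bits / effective_bps / 86400       # seconds -> days

data_tb = 100                                      # assumed dataset size
for gbps in (1, 10):
    print(f"{data_tb} TB over {gbps} Gbps: ~{transfer_days(data_tb, gbps):.1f} days")

snowball_turnaround_days = 7                       # assumed round-trip shipping time
print(f"{data_tb} TB via drive shipping: ~{snowball_turnaround_days} days")
```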

So now that we have the data in the cloud, it is migration time. Commvault has, for some time, had the capability to recover VMware/Hyper-V workloads to AWS and Azure. While this is definitely a differentiator, the introduction of LiveSync from VMware to AWS/Azure takes it a leap forward and allows us to crush those RTO times. Prior to this functionality, when recovery time came it was a full recovery, and depending on the size of the workloads it could take quite some time. LiveSync allows us to seed the instances/VMs in AWS/Azure and only overlay the incremental changes at cutover (AWS and Azure function differently, so your mileage may vary). LiveSync is what allows us to call this session 'One Hour Cloud Migrations: A Step-by-Step Guide'. The RTO at migration time has now been collapsed from hours to possibly minutes. Once these workloads have been brought live and connectivity confirmed, it is time to send in the administrators to validate a successful migration.
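The general seed-then-increment pattern behind this kind of cutover can be sketched as follows; the function names here are hypothetical stand-ins for illustration, not Commvault's LiveSync API.

```python
# A simplified sketch of the seed-then-increment pattern behind a LiveSync-style
# cutover; every function here is a hypothetical placeholder.
import time

def seed_replica(vm: str) -> None:
    """One-time full copy of the VM into the target cloud, done well before cutover."""
    print(f"[seed] full replication of {vm} to cloud target")

def apply_incremental(vm: str) -> None:
    """Overlay only the changes made since the last sync."""
    print(f"[sync] applying incremental changes for {vm}")

def cutover(vm: str) -> None:
    """Final incremental plus power-on in the cloud; this is the short outage window."""
    apply_incremental(vm)
    print(f"[cutover] powering on {vm} in the cloud and confirming connectivity")

if __name__ == "__main__":
    vm = "app-server-01"          # hypothetical workload name
    seed_replica(vm)              # days or weeks ahead of the migration
    for _ in range(3):            # periodic syncs keep the replica warm
        apply_incremental(vm)
        time.sleep(1)             # stands in for the real sync interval
    cutover(vm)                   # only minutes of changes left to apply
```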

Unfortunately, the work isn't done just yet. These workloads need to be protected with the same SLAs they received on-premises. While the cloud offers robust redundancy and continuity, it is by no means a replacement for backup, since rare multiple-component failures can still lead to data loss. That is where Commvault's deep integration with AWS/Azure comes into play. This could be snapshot creation for quick recovery copies, extending those to object storage for long-term retention, replication, integration with DBaaS, or cloud-aware agents that cover the needs of the application. Yes, there are many tools, and that is how we can ensure that the needs of the application and its data are met.
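As a minimal illustration of what a quick recovery copy looks like at the platform level, here is a plain boto3 snippet that creates an EBS snapshot. This is a generic AWS example, not Commvault's integration, and the region, volume ID, and tag are placeholders.

```python
# A minimal illustration (plain boto3, not Commvault's integration) of creating
# an EBS snapshot as a quick recovery copy; IDs and tags are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")    # assumed region

response = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",                 # placeholder volume ID
    Description="post-migration recovery copy",
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [{"Key": "retention", "Value": "30d"}],
    }],
)
print("snapshot started:", response["SnapshotId"])
```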
