Migrating to the cloud for an enterprise that has been running workloads on-premises for years can be very daunting. To be successful, a migration plan needs to factor in many different aspects relating to people, processes, and technology. If you are designing the migration, you need guidance and best practices to help steer you through this process.
Getting started with the migration
Before you start your migration, you should gather some foundational understanding of your cloud provider, your current environment, and the different migration approaches:
1. Understand the differences between cloud platforms and your current environment. The source environment could be on-premises or a private hosting environment. These environments have a different operational model than a public cloud from a physical security, networking, power, hardware, and virtualization standpoint.
2. Identify the types of workloads that need to be migrated. I recommend you start your migration by classifying workloads as either legacy or cloud-native. Legacy workloads were developed without any consideration for cloud environments and have limited support for scaling resources such as disks and compute. As a result, these workloads can be difficult to modify and expensive to run and maintain. When designed following best practices, cloud-native workloads are natively scalable, portable, available, and secure. As a result, cloud-native workloads tend to increase developer productivity and agility, because developers can focus on the actual workloads rather than spending effort managing development and runtime environments.
3. Determine your organization’s maturity level with cloud technologies. When identified early, skill gaps can be addressed as part of the migration process through actions like self-study, training, or peer mentorship. You can measure your organization’s cloud adoption maturity using the cloud adoption frameworks linked below:
- AWS Cloud Adoption Framework
- Microsoft Cloud Adoption Framework for Azure
- The Google Cloud Adoption Framework
4. Familiarize yourself with the different migration approaches and their tradeoffs, because different workloads might require different approaches. Migrations can be divided broadly into three categories:
- Lift and shift. You migrate the workload, applying the least amount of changes.
- Improve and move. You modify parts of the workload to adopt cloud-native approaches as part of the migration.
- Rip and replace. You decommission the workload, and write a new workload, adopting a cloud-native approach.
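As a rough illustration, the choice between these three approaches can be sketched as a decision function. The workload attributes and rules below are illustrative assumptions, not a formal methodology:

```python
def choose_approach(workload):
    """Pick a migration approach for a workload.

    `workload` is a dict of illustrative boolean attributes; the rules
    here are assumptions standing in for a real assessment process.
    """
    if workload["end_of_life"]:
        # The workload will be rewritten anyway, so rebuild it cloud-native.
        return "rip and replace"
    if workload["easily_modified"] and workload["benefits_from_scaling"]:
        # Partial refactoring pays off during the move.
        return "improve and move"
    # Default: move the workload as-is with minimal changes.
    return "lift and shift"

print(choose_approach(
    {"end_of_life": False, "easily_modified": True, "benefits_from_scaling": True}
))
```

In practice the inputs would come from your assessment phase rather than hard-coded flags, but the shape of the decision is the same.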
The four phases of migration
Broadly speaking, the migration journey can be captured as a four-phase process: Assess, Plan, Deploy and Optimize. It’s easier to show this linearly, but it’s rarely so straightforward, with these phases often happening in parallel for different workloads.
Phase 1: Assess the workloads to be migrated
This phase builds on any pre-work that you’ve done, with a focus on taking an inventory of the workloads that you plan to migrate and their respective dependencies. Things to think about include (but are not limited to) hardware and performance requirements, users, licensing, compliance needs, and workload dependencies. Then map this information into an app catalog that summarizes it along some key axes, for example:
- Whether the workload has dependencies or is a dependency for other workloads
- How critical the workload is to the business
- How difficult it is to migrate the workload
The app catalog gives you a high-level view of the effort required to migrate your different workloads. To further gauge the risk and effort, conduct a proof of concept (POC) that tests the different use cases and requirements of a workload, with a focus on the more complicated workloads. This surfaces information early and reduces unknowns.

You also need to decide which workloads to migrate first. The answer varies from business to business depending on factors such as the business value of the workload, the complexity of the migration, and the availability requirements of the workload. To help guide this decision, it’s a good idea to bring together the subject matter experts for the different workloads and work through a jointly agreed list of factors. Succeeding with the first workload is key to the overall success of your migration journey: early success yields trust and goodwill, whereas early setbacks can derail entire migration projects.

Lastly, you should perform a total cost of ownership (TCO) calculation at this phase, giving the business visibility into what its cloud expenditure will look like after the migration compared to the existing environment. Each major cloud provider offers a pricing calculator for this purpose.
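To make the prioritization concrete, the app catalog can be reduced to a simple score per workload. The catalog entries, the 1-5 scales, and the weights below are hypothetical, a sketch of one way to shortlist a first workload:

```python
# Hypothetical app catalog: business value and migration difficulty are
# rated on a 1-5 scale; the scoring weights are assumptions.
catalog = [
    {"name": "reporting", "value": 3, "difficulty": 1, "is_dependency": False},
    {"name": "billing",   "value": 5, "difficulty": 4, "is_dependency": True},
    {"name": "wiki",      "value": 2, "difficulty": 1, "is_dependency": False},
]

def priority(workload):
    # Favor high value and low difficulty; deprioritize workloads that
    # other workloads depend on, since they are riskier to move first.
    score = workload["value"] - workload["difficulty"]
    if workload["is_dependency"]:
        score -= 2
    return score

first_candidates = sorted(catalog, key=priority, reverse=True)
print([w["name"] for w in first_candidates])  # highest-priority first
```

A real catalog would carry far more axes (compliance, licensing, hardware requirements), but even a crude score like this gives the expert discussion a starting point.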
Phase 2: Plan the foundation
The next phase is to plan the foundational pieces of the new cloud environment, which consist of but are not limited to:
1. Establishing user and service identities. How will users and service accounts be created and managed? You can read about the identity and access management features of the different cloud providers using the following links:
- Azure identity and access management
- AWS identity and access management
- GCP identity and access management
2. Designing a resource organization hierarchy. How are the different cloud resources structured hierarchically? Building blocks such as management groups (Azure), folders and projects (Google Cloud), and organizational units (AWS) let you set up a resource organization hierarchy. A properly designed resource organization simplifies access control and billing management. You can dive deeper into this topic using the resource hierarchy link.
3. Defining groups and roles for resource access. What are the different roles of the users who will be accessing your cloud environment? What permissions should these different roles have? Use tags and policies to make these assignments easier to manage.
4. Designing your network topology and connectivity. Into which regions will you deploy your application? Will there be connectivity back into the source environment? How many separate networks will you need to set up? The answers to these questions will feed into how you design your cloud network resources. It’s important to also apply security best practices by restricting access from unwanted users, applications, or IP ranges.
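To see why a well-designed hierarchy simplifies access control, here is a minimal sketch of access bindings inherited down a resource tree; the node names, roles, and principals are hypothetical:

```python
class Node:
    """One node in a resource hierarchy (org, folder, project, ...)."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.bindings = {}  # role -> set of principals granted that role here

    def grant(self, role, principal):
        self.bindings.setdefault(role, set()).add(principal)

    def effective_principals(self, role):
        # A grant at any ancestor is inherited by every descendant, which
        # is why one binding high in the tree can replace many low ones.
        principals = set(self.bindings.get(role, set()))
        if self.parent:
            principals |= self.parent.effective_principals(role)
        return principals

org = Node("org")
prod = Node("prod-folder", parent=org)
app = Node("billing-project", parent=prod)

org.grant("viewer", "auditors@example.com")   # org-wide read access
prod.grant("admin", "sre@example.com")        # admin on everything in prod

print(app.effective_principals("viewer"))     # inherited from the org node
```

Real cloud IAM systems add deny rules, conditions, and role definitions on top of this, but the inheritance principle is the same.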
Phase 3: Deploy the workloads
Once the foundation for your migration is in place, the next step is to determine the best approach to deploy your workloads to your cloud environment. You don’t need to take the same approach for all your workloads; however, the more standardized the process is, the more opportunity there is for cross-team learning and improvement of the deployment process. Examples of different deployment approaches are:
- Fully manual deployments. This approach is the simplest and quickest way to get your workload up and running, and can be performed directly from the console or shell. Although a manual deployment might be all right for some experimentation, I do not recommend this approach for production workloads because it is error prone, not repeatable, and tends to be poorly documented. Note that every cloud provider also offers native migration tooling covering scenarios such as compute, database, and container migration:
- Migrate for Compute Engine:
- Database migration solutions:
- VMware engine migration:
- Migrate to Azure VMware Solution with the help of HCX configuration.
- Migrate to Google Cloud VMware Engine. This can be done using HCX over an Interconnect in the backend, or by using a Layer 2 VPN with NSX configured for on-premises-to-GCVE communication.
- Migrate to VMware Cloud on AWS with the help of HCX configuration.
- Deploy using configuration management tools. Using configuration management (CM) tools such as Ansible, Chef, or Puppet provides a repeatable, automated, and controlled way to run your deployment. However, these tools are best suited for provisioning and configuration, and less suitable for workload deployments, because they require bespoke deployment logic to handle procedures such as zero-downtime deploys, blue-green deployments, or rolling updates, and they become difficult to manage and maintain over the long run.
- Deploy by using container orchestration tools. If your workloads are containerized, you can use Google Kubernetes Engine, Azure Kubernetes Service, or Amazon Elastic Kubernetes Service. Tools such as Anthos (for GKE) and Azure Migrate (for AKS) help you extract, migrate, and modernize applications to run natively in containers.
- Deploy automatically. An automated deployment is triggered by an action that results in a change to the workload, such as a commit to the source repository. Automated deployments let you streamline and standardize your deployment process, reducing human error. You can use tools such as Jenkins, Azure DevOps, SonarQube, Cloud Build, or Spinnaker to build an end-to-end automated deployment pipeline on top of your existing orchestration tools.
- Deploy by applying the infrastructure-as-code pattern. This is my recommended best practice, as it provides repeatability and traceability, which improves overall deployment velocity. You can implement this process using tools such as Terraform, or managed services such as Google Cloud Deployment Manager, AWS CloudFormation, and Azure Resource Manager (ARM) templates.
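The core idea behind infrastructure as code is that the desired state lives in version-controlled text and the tool computes the changes needed to reach it, much like `terraform plan`. A minimal sketch of that desired-state diffing, with hypothetical resource names:

```python
# Desired state, as it would be checked into version control.
desired = {
    "web-vm":  {"type": "vm", "size": "small"},
    "db-disk": {"type": "disk", "size_gb": 100},
}

# Actual state, as a (hypothetical) cloud API might report it.
actual = {
    "web-vm": {"type": "vm", "size": "tiny"},
    "old-vm": {"type": "vm", "size": "small"},
}

def plan(desired, actual):
    """Compute create/update/delete actions to reconcile actual to desired."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

print(plan(desired, actual))
```

Because the plan is derived rather than hand-written, every change is reviewable in version control before it is applied, which is where the repeatability and traceability come from.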
Phase 4: Optimize your environment
Once a basic deployment of your workloads is running and tested in your new cloud environment, you can start to improve on this foundation. This includes critical pieces that should be completed before cutting over live traffic, for example, training your team on the new cloud operational playbooks and ensuring that logging, monitoring, and alerting for these workloads are in place. You can do this using native tools such as Google Cloud’s monitoring and alerting solution, Amazon CloudWatch, or Azure Monitor. You can also use open-source monitoring tools such as Prometheus and Grafana to configure monitoring and alerting based on your requirements.
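Whatever tool you pick, alerting ultimately reduces to evaluating metrics against thresholds. A minimal sketch, with hypothetical metric names and thresholds standing in for the rules a tool like Prometheus would evaluate:

```python
# Hypothetical alert thresholds: fire when a metric exceeds its limit.
thresholds = {"cpu_percent": 80, "error_rate": 0.01}

def check_alerts(metrics):
    """Return a human-readable alert for each metric over its threshold."""
    alerts = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            alerts.append(f"{name}={value} exceeds {limit}")
    return alerts

print(check_alerts({"cpu_percent": 92, "error_rate": 0.002}))
```

Production alerting adds durations, severities, and routing on top of this, but defining the thresholds per workload is the part that must be done before cutover.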
Other aspects that you can optimize once the workload is serving production traffic include:
- Cost optimization with autoscaling
- Moving to managed workloads to reduce operational overhead
- Automating the deployment process
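As a back-of-the-envelope example of the autoscaling point above, compare a fleet fixed at peak size with one that follows demand; the instance counts and hourly rate are illustrative assumptions:

```python
# Illustrative numbers: a day with 18 quiet hours and a 6-hour peak.
hourly_rate = 0.10                     # assumed cost per instance-hour
peak_instances = 10                    # fixed fleet sized for peak load
hourly_demand = [2] * 18 + [10] * 6    # instances actually needed each hour

# Fixed provisioning pays for peak capacity around the clock.
fixed_cost = peak_instances * len(hourly_demand) * hourly_rate

# Autoscaling pays only for the instance-hours actually consumed.
autoscaled_cost = sum(hourly_demand) * hourly_rate

print(f"fixed: ${fixed_cost:.2f}/day, autoscaled: ${autoscaled_cost:.2f}/day")
```

With this traffic shape, autoscaling spends 96 instance-hours instead of 240 per day; the spikier the load, the larger the saving.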