Here’s a situation we recently encountered. A leading financial institution, with a typical mix of legacy, current and leading-edge applications, was in the middle of migrating to a public cloud. But the migration was full of unforeseen challenges, and they learned a hard lesson: migrating to a public cloud is far more complex than it first appears.
We see this all the time. If you’re like most organizations, you’re actively attempting to move your portfolio of applications off of legacy infrastructure and onto modern infrastructure. That doesn’t happen overnight; it’s a journey, typically involving multiple application teams, each with different skill sets and development methodologies. Most of the enterprise decision makers I speak with lay out a five-year plan for their migration. That plan covers the following categories of applications:
- Modern applications (10 percent of the application portfolio)
  - Microservice applications
  - Containerized applications
  - 12-factor applications
  - Cloud-native applications
  - Applications with CI/CD pipelines
- “The Dark Ages” applications (30 percent of the application portfolio)
  - Mission-critical applications that reside on mainframes
  - Applications purchased from a vendor that is no longer in business
  - Applications where the people who “know how they work” are no longer with your company
- “Everything Else” applications (60 percent of the application portfolio)
  - Client-server applications that run on bare metal
  - Applications that have been migrated to or started life on virtualization platforms
  - Applications slated for rewrite or replacement in the next five years
The first category is either already on a cloud platform, or could easily go there. The second category may never go to a cloud because migration is too risky. It’s the third category that’s interesting — and challenging. How do you modernize the “everything else” app category without adding complexity, challenges and costs?
Explore the ecosystem
At Rackspace, we help customers think about the entire ecosystem around the app as well as the app itself. To get a good sense of what the dependencies might be, we encourage customers to think beyond the application and ask questions, including:
- What is the architecture of the application; does it follow cloud-native principles for high availability or is it dependent on its current infrastructure to provide resiliency and high availability?
- What tools are being used to manage and enforce the security policies around the application?
- What tools are in use to manage access to the systems on top of which the application runs?
- What tools are in use by the operations team(s) responsible for deploying, managing and troubleshooting the application?
- How aware or unaware is the application of the underlying hardware topology and performance characteristics?
These questions highlight the crux of the challenge — during migration planning, it’s easy to overlook the need to consider current toolchains and processes that surround the application along with the application itself. Some toolchains, processes and dependencies aren’t well suited for a public cloud. Others can’t be feasibly recreated there. Others still could force you to make compromises or face technology limits you can’t accept.
What happens in the real world?
The financial institution mentioned above bumped into all of these issues. As they moved applications to the public cloud, they struggled to extend their network and security policies into the public cloud infrastructure.
Their toolchain was made up of physical firewalls, physical intrusion detection systems and physical encryption key management systems, none of which were easily extendable. They faced not only migrating their applications onto the new platform, but also adopting a completely new approach to security: evaluating new tools, technologies and processes, and ultimately training their existing staff on those tools or hiring new staff with new skills.
Migrating an app wasn’t the problem — but migrating the ecosystem around the app was going to drive up costs and complexity to the point where the migration was no longer in their best interest.
It’s a common problem: the tooling, processes and technology that secure applications are difficult to move to a public cloud. Inside these enterprises, an ecosystem of tools manages the security policies and procedures around applications. Dedicated physical devices such as firewalls, intrusion detection/prevention systems and hardware security modules (HSMs) enforce application security. When moving to a public cloud, none of these physical devices can accompany the application, which forces a radical rethink of security.
Generally, there are cloud-native equivalents for all of these physical security devices, but many are not made by the same vendors. This means the teams responsible for operating these functions would have to learn completely new toolchains for each of them. While these teams will have to learn new toolsets in the long term, doing all of this at once can be too big a leap and will hamper the success of a migration.
Unlike a public cloud, a private cloud can be deployed to work with these existing devices. A private cloud solution provides a much longer runway for the enterprise to evaluate and replace each of these physical devices with cloud-ready solutions.
One of the core design tenets of a public cloud is that an application must expect failure at the infrastructure level and must be designed in a way to mitigate these failures. If an application is actively rewritten to be “modern,” this design tenet is acceptable.
But if an enterprise is evaluating one of the “Everything Else” applications, there’s a fundamental disconnect. Most traditional apps can’t automatically cope with failure at the infrastructure level. This public cloud tenet will result in the wailing and gnashing of teeth, especially if the application is coming from a platform that allows non-resilient applications to have some degree of fault tolerance.
In contrast, a private cloud can be designed in a legacy-friendly way. One good example is the OpenStack capability of instance storage backed by Ceph, which provides some of the same live migration and virtual machine high availability features the application depended on in its previous state. This capability allows more applications to be migrated onto a private cloud, where they can continue to operate until it is time for them to be rewritten or replaced.
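As a concrete illustration, Ceph-backed instance storage is enabled through a handful of options in Nova’s libvirt driver. The excerpt below is a sketch, not a complete configuration; the pool name, Cephx user and secret UUID are placeholders you would replace with your own deployment’s values.

```ini
# /etc/nova/nova.conf — illustrative excerpt only.
# With images_type = rbd, instance disks live in the Ceph cluster rather
# than on the hypervisor's local disk, so a VM can be live-migrated to
# another host, or restarted elsewhere after a hardware failure.
[libvirt]
images_type = rbd
images_rbd_pool = vms                      # placeholder Ceph pool name
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova                            # placeholder Cephx user
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000  # placeholder
```

With shared storage in place, operators can move instances between hosts without downtime (for example via `nova live-migration`, or the equivalent `openstack server migrate` command in their client version).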
Another challenge that enterprises face when looking at migrating to a public cloud has to do with the application’s ability to access data residing on legacy systems within their data center(s).
Most existing applications have a built-in assumption of being extremely “close” to their data stores in terms of latency. When such an application is migrated to a public cloud, that assumption breaks down, because it is no longer “close” to its data. Assuming the application is not refactored during the migration, this can lead to outcomes such as:
- the application executing at a much slower speed
- the application being unable to handle the same level of transactions per second
- transactions aborting due to timeout errors
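A back-of-envelope calculation makes the problem concrete. The figures below are hypothetical — round-trip counts and latencies vary widely by application — but they show how per-query latency compounds when a chatty application makes many sequential calls to a distant data store:

```python
def max_tps(round_trips, latency_s, compute_s=0.005):
    """Upper bound on serial transactions per second when each
    transaction makes `round_trips` sequential data-store calls,
    plus a fixed amount of local compute time."""
    return 1.0 / (round_trips * latency_s + compute_s)

# Hypothetical figures: 20 sequential queries per transaction,
# 0.5 ms latency inside the data center vs. 30 ms to a distant
# public cloud region.
local = max_tps(20, 0.0005)   # roughly 67 TPS
remote = max_tps(20, 0.030)   # under 2 TPS
print(f"local: {local:.1f} TPS, remote: {remote:.1f} TPS")
```

A 60x increase in per-query latency translates almost directly into a collapse in throughput for a serial workload, which is exactly why the unrefactored application slows down, misses its TPS targets and starts hitting timeouts.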
Again, this is a situation where a private cloud is an ideal solution. An enterprise gains the agility of a cloud platform without the increase in data access latency, because the cloud can be deployed in the same data center as the legacy data stores. High-performance connectivity is much easier to design, deploy and control in a local data center than in a public cloud.
Finally, enterprises that operate in a number of countries can face another challenge: data sovereignty laws. Most of these laws state that personal data collected about a citizen of a country must be stored on systems that reside within the confines of said country.
These laws pose a challenge to enterprises looking to adopt public clouds. Some public cloud vendors won’t have infrastructure residing within the country where the data must be stored. A related issue is that data privacy protections vary from country to country, and you may be concerned with storing your data in a country where protections are relatively lax. The ability to have a private cloud in-country allows you to maintain compliance with data sovereignty laws while protecting access to your data.
So whatever happened to that leading financial institution? After struggling with these migrations for more than a year, they chose to implement an on-premises private cloud delivered as a service by Rackspace. Two years later this same financial institution has successfully migrated more than 200 workloads onto their Rackspace OpenStack Private Cloud platform with more migrations under way.
For more than 17 years, Rackspace has worked with numerous companies on their journey to modernize and transform their IT. Rackspace understands that this journey requires a tailor-made solution, leveraging existing capabilities when necessary, in order to ensure the best result. Using OpenStack, an open source cloud computing platform co-founded by Rackspace, we’ve been able to provide customers with a robust enterprise private cloud as a service solution, either in a Rackspace data center, customer data center, or third-party data center.
To learn more and ask questions about whether private cloud as a service might be a good fit for your organization, take advantage of a free strategy session with a private cloud expert — no strings attached. SIGN UP NOW.