“OpenStack is on the cusp of major adoption.” How many times have you heard a vendor or analyst say that or some variation of it in the past 12 months?
The fact is that many companies are still evaluating OpenStack and trying to determine how it should fit into their overall IT strategies. That should not be a surprise given the disruptive nature of a technology like cloud computing.
I’ve argued in the past that OpenStack is in the process of crossing the chasm from acceptance by early technology adopters to acceptance by the early majority/mainstream.
Over the past 12 months, I have spoken with a number of companies in the early majority camp that are currently conducting OpenStack evaluations or using it in small projects. Almost all of these users are current VMware customers with legacy application workloads. They understand that OpenStack is not something they can ignore, but they often struggle to understand its true value and how it should impact their current vSphere deployments. The questions they frequently ask include:
My approach to answering these questions generally focuses on bringing clarity to three areas:
Virtualization and Cloud
The starting point for me, when I speak with customers about vSphere and OpenStack, is to highlight the different design philosophies behind legacy infrastructures and cloud computing. Here I want to focus on legacy infrastructures built on virtualization technologies, such as the ESX hypervisor and vSphere, which came into prominence as a way to virtualize many smaller servers and consolidate them onto a few large ones. This worked very well since most servers were hosting applications with monolithic architectures, such as Oracle or Microsoft Exchange. Today, each instance of this type of legacy application is still typically encapsulated in a single virtual machine and grows by scaling up on a single physical server running the ESXi hypervisor.

High availability can be achieved by running a clustered version of the application, such as Oracle Real Application Clusters; however, this can be an expensive and overly complex solution, and most applications do not have such functionality. Most VMware shops instead run their application servers as virtual machines in vSphere clusters and depend on features such as vSphere HA and vMotion to provide infrastructure resiliency and redundancy. While these solutions work well, they also force certain architectural choices, such as a reliance on shared storage, that make scaling out the infrastructure difficult.
Cloud computing, however, was created to accommodate a different class of applications, such as MongoDB and Hadoop. Cloud platforms like OpenStack are designed for applications with a distributed architecture, where application components are spread across multiple physical or virtual servers. These applications are generally designed to grow by scaling out: as demand increases, capacity is expanded by adding more application instances and re-balancing workloads across them. Another design principle behind cloud platforms such as OpenStack is that, given the distributed nature of these applications, ownership of resiliency belongs to the applications and not to the underlying infrastructure. This approach is often misunderstood in the VMware space as a shortcoming and a sign of immaturity in the OpenStack platform; “lacking” features such as vSphere HA are seen as a warning sign that OpenStack is not ready for production use.
However, this reading misses the differing design principles of legacy and cloud. Distributed applications that run on cloud platforms (what the industry often terms “cloud native” applications) have lowered the barrier to building resiliency, both in cost and in usability. By moving application resiliency up the stack, cloud platforms remove the need for shared-everything architectural decisions, such as the use of shared storage. This makes commodity hardware a viable option for running a cloud platform and creates an architecture that enables rapid scaling out of the infrastructure. It is also the architecture best suited for next-generation, large-scale application environments where failure is expected and must be designed for at multiple layers, not simply at the infrastructure layer.
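To make the “resiliency up the stack” idea concrete, here is a minimal Python sketch, with hypothetical names and not tied to any real framework, of a client that scales out across application instances and fails over at the application layer rather than depending on infrastructure features such as vSphere HA:

```python
import itertools

class ResilientClient:
    """Round-robins requests across replicas and fails over on error.

    Illustrative sketch only: each "replica" is any callable standing in
    for a real RPC or HTTP call to one application instance.
    """

    def __init__(self, replicas):
        self.replicas = list(replicas)
        self._cycle = itertools.cycle(range(len(self.replicas)))

    def add_replica(self, replica):
        # Scaling out: add capacity and rebuild the rotation so the new
        # instance starts receiving a share of the traffic.
        self.replicas.append(replica)
        self._cycle = itertools.cycle(range(len(self.replicas)))

    def request(self, payload):
        # Resiliency lives here, in the application: try each replica at
        # most once, skipping instances that fail, before giving up.
        for _ in range(len(self.replicas)):
            replica = self.replicas[next(self._cycle)]
            try:
                return replica(payload)
            except ConnectionError:
                continue  # this instance is down; fail over to the next
        raise RuntimeError("all replicas failed")
```

As long as one instance is healthy, a request still succeeds with no shared storage and no hypervisor-level failover involved; that is the design trade the cloud model makes.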
Once a customer understands the differing design principles, we can talk about the different infrastructure consumption models. It is important to differentiate between virtualized infrastructure consumption and cloud consumption here as well.
Along with running bare-metal servers and a virtualization technology such as vSphere in their own data centers, companies can consume managed hosting offerings such as Rackspace’s Dedicated vCenter offering or VMware’s vCloud Hybrid Services offering. Both are built on VMware technologies and offer off-prem virtualization solutions to augment customers’ on-prem vSphere deployments. Both are designed and ideally suited for legacy applications that do not require rapid scaling and that rely on the virtualized infrastructure to provide application availability and resiliency.
In contrast, cloud consumption typically begins with public cloud usage and may later include private cloud deployments. In this space, the focus is on accommodating next-generation applications and being able to scale and to provision resources quickly, often using commodity hardware.
vSphere with OpenStack
It should be clear by now that one size does NOT fit all when it comes to building out infrastructure for different workloads. Rackspace has customers that, because of the distributed nature of their applications, have been able to move directly to our OpenStack-powered public cloud and/or our private cloud. However, most customers have legacy applications, often running on a bare-metal or virtualized infrastructure, that cannot be easily rewritten for a cloud platform such as OpenStack. For these customers, co-existence, not replacement, is the route they will take in adopting OpenStack within their bare-metal, vSphere-dominated data centers. This route typically forks into one of three paths:
Silos

This is the route customers choose most frequently. Typically, it means keeping existing legacy applications running in the vSphere environment while building new applications on a separate OpenStack cloud. While this is the least disruptive way to adopt OpenStack, it also perpetuates IT silos and adds the complexity and overhead of managing two completely distinct environments, since separate operations teams are often required.
Multi-Hypervisor Cloud

Another possible route is to leverage the work VMware has done to integrate vSphere into OpenStack. This is similar to the silos route in that legacy workloads continue running on vSphere while next-generation workloads run on a hypervisor such as KVM or Xen. In this case, however, OpenStack is used as the control plane to manage a multi-hypervisor cloud, consolidating cloud management while allowing applications to be hosted on the environment best suited for them. The primary drawback to this architecture is that the vSphere integration with OpenStack is very new and there are rough edges that still need to be refined, such as how resources are scheduled. Please note that while Rackspace constantly evaluates new technologies, we currently do not support integrating vSphere with the Rackspace Private Cloud.
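As a rough illustration of how scheduling in a multi-hypervisor OpenStack cloud can be steered, the Nova scheduler of this era can be told to honor image properties so that VMware-format images land on vSphere-backed compute nodes and others land on KVM. Treat this as a sketch, not a recipe; exact filter names and options vary by OpenStack release:

```ini
# /etc/nova/nova.conf (excerpt) -- a hedged sketch of image-property-based
# scheduling; verify filter names against your release's documentation.
[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ImagePropertiesFilter
```

Images would then be tagged with a matching property (for example, `glance image-update --property hypervisor_type=vmware <image-id>`), and the `ImagePropertiesFilter` places instances only on compute nodes whose hypervisor matches, which is one way to soften the scheduling rough edges mentioned above.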
Best-Fit Hybrid Solution
Given the current state of affairs, Rackspace has adopted a best-fit hybrid solution as the best and most mature route for OpenStack adoption. This is similar to the silos approach in that the control planes for vSphere and OpenStack remain separate, and the hybrid solution architecture ensures that applications are hosted on the infrastructure that best fits their needs. The goal here, however, is to keep the infrastructures separate but integrate the operations teams for the two environments so that they work together to build unified solutions. One key is to use technologies, such as Rackspace’s RackConnect, to tie these infrastructures together so each can leverage the other as appropriate. One use case is a multi-tier application with a distributed web tier running on an OpenStack-powered Rackspace Private Cloud, connected via RackConnect to an Oracle database running on a managed vSphere cluster. In this hybrid solution, the Rackspace Public Cloud can also be used to burst web-tier workloads out of the private cloud.
Want To Learn More?
Rackspace is committed to providing managed services for both OpenStack and VMware technologies to offer our customers a best-fit hybrid solution. A large part of that commitment is educating the VMware community on how it can use OpenStack to build a true cloud platform while continuing to leverage its VMware investment where appropriate. Throughout the year, Rackspace will sponsor several VMware User Group (VMUG) conferences where I will speak more in depth about vSphere with OpenStack. The table below lists the VMUG conferences at which I am scheduled to speak. I encourage VMware administrators, architects, and partners to attend these events and to participate in my sessions. The reality is that a growing number of companies are either deploying or evaluating OpenStack as their private cloud platform, and they will look for technologists who can help them understand how to leverage OpenStack, either as a replacement for or as a complement to their existing vSphere environments.
I look forward to hitting the road and continuing the work I’ve been doing as an official OpenStack Ambassador and a VMware vExpert educating technologists on the value and power of vSphere with OpenStack. More importantly, I am excited to meet as many folks as I can in the VMware and OpenStack communities. As always, I invite you all to engage with me and help me make what I present useful to as many people as possible.