Today at the OpenStack Summit in Tokyo, Rackspace announced Carina, a free public beta of a new service that allows you to create managed clusters for running containers in the cloud using the same tools you use today for running containers locally on your own computer. As the architect of Rackspace’s global container strategy, I’d like to explain what makes Carina different from the technical perspective, and share our philosophy behind the service.
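To give a feel for what "the same tools you use today" means in practice, here is a hedged sketch of a session against a Carina cluster. It assumes you have downloaded the cluster's credentials bundle, which includes a `docker.env` file that points the Docker client's environment variables (such as `DOCKER_HOST` and the TLS settings) at the cluster; the file and image names are illustrative:

```shell
# Point the local Docker client at the remote cluster by sourcing
# the environment file from the downloaded credentials bundle.
source docker.env

# From here on, the ordinary Docker CLI operates on the cloud cluster
# just as it would against a local daemon.
docker info
docker run --detach --publish 80:80 nginx
docker ps
```

The point is that no Carina-specific client is involved: the native Docker CLI and API are the interface.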

I am responsible for assembling the OpenStack containers team, which collaborated to design and architect OpenStack Magnum, initially released in January 2015. I serve as the OpenStack Magnum PTL, leading our impressive community of contributors. Before my efforts with Magnum, Rackspace was busy engineering Carina in preparation for the service we are now offering. In fact, many of the contributions Rackspace made to OpenStack Magnum were based on what we learned building Carina.

OpenStack allows cloud operators to create sets of physical servers, known as host aggregates, which the Nova scheduler matches against metadata on the Glance image and/or the Nova flavor. This feature allows us to offer different flavors for Linux hosts, Windows hosts, and now container hosts, each handled by a different host aggregate suited to a particular use. Today, we use Nova’s libvirt/lxc driver to create LXC containers on a specialized host aggregate designed for running containers rather than virtual machines. We expect to use the same approach to plug in Windows container hosts in the future.
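As a rough sketch of how this matching works on an OpenStack cloud (all names here are illustrative, and this assumes the `AggregateInstanceExtraSpecsFilter` scheduler filter is enabled), an operator might wire a container host aggregate to a flavor like this:

```shell
# Create a host aggregate for container hosts and tag it with metadata.
nova aggregate-create container-hosts nova
nova aggregate-add-host container-hosts compute-node-01
nova aggregate-set-metadata container-hosts container=true

# Create a flavor whose extra specs match that metadata, so the Nova
# scheduler places instances of this flavor only on the tagged hosts.
nova flavor-create container1 auto 2048 20 1
nova flavor-key container1 set aggregate_instance_extra_specs:container=true
```

Requests for the `container1` flavor then land only on hosts in the `container-hosts` aggregate, while other flavors land elsewhere.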

OpenStack Magnum introduces a new essential resource known as a bay that allows for secure multi-tenancy of containers on OpenStack clouds. Each bay is a grouping of Nova compute instances that run a Container Orchestration Engine (COE) such as Docker Swarm, Kubernetes, or Apache Mesos as a managed cluster. Your COE is responsible for running your containers on one or more of the hosts in the bay, and may manage them once they are started. For example, Kubernetes can automatically restart containers on an alternate host if a host crashes. Each COE has its unique properties that make it good at handling various use cases. Carina will use this capability to offer users a choice of COE.

Today we offer Docker Swarm, and will soon introduce Kubernetes, and potentially other COE options, depending on feedback from our users. New ones are surfacing quickly, so having a standard way of accommodating them is really important. You can create arbitrary numbers of bays, so your various development teams do not need to share them.
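To make the bay concept concrete, here is roughly what creating a Swarm bay looks like with the Magnum command-line client (the image, keypair, and network names are illustrative placeholders for values from your own cloud):

```shell
# Define a bay model: a reusable template describing the COE and the
# Nova resources its hosts are built from.
magnum baymodel-create --name swarmmodel \
  --image-id fedora-21-atomic-5 \
  --keypair-id mykey \
  --external-network-id public \
  --flavor-id m1.small \
  --coe swarm

# Create a bay from that model with two worker nodes. Magnum provisions
# the instances and installs the COE; once the bay is ready, its native
# Docker Swarm endpoint is yours to use directly.
magnum bay-create --name swarmbay --baymodel swarmmodel --node-count 2
```

Because the COE is a template parameter, standing up a Kubernetes bay alongside a Swarm bay is a matter of creating a second bay model with `--coe kubernetes`.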

Most hosted container services we have seen use virtual machines as the basis for the servers that run containers. Virtual machines have both advantages and disadvantages. They cause application code to execute more slowly than it would on bare metal. That’s bad.

They are easier to secure for multi-tenancy. That’s good! The virtual hardware interface is relatively simple compared to the Linux syscall interface. Using syscalls, applications can interact with the Linux kernel, and that interface is a wide attack surface that must be secured in order to prevent neighboring containers from escaping into the kernel, and to prevent potential unauthorized access between neighboring workloads. This is a tough job. Strictly speaking, the practice of using virtual machines for multi-tenant security isolation is lower risk from a security perspective because of their smaller attack surface.

Bare metal servers offer considerable performance benefits when compared to virtual machines. However, they require the addition of large increments of capacity because it’s not easy to share them. You end up adding a full server at a time. That’s a lot of hardware! It would be better if there were a way to scale up using smaller increments of capacity that more closely tracked the needs of your dynamic workload. This is why most cloud service providers use virtualization as the basis for their container services. They strike a compromise between these good and bad characteristics. Unfortunately that compromise means reduced performance.

Carina is different.

Its innovative approach offers applications access to bare metal hardware performance in increments that cost less than using virtual machines, and gives you a choice of what flavor types your bays are composed of. In today’s beta, Carina offers bare metal containers, isolated by additional security techniques in the server operating system to help keep them safe from each other.

In future releases, you will be able to run your container clusters on other flavor types, such as virtual machines or even full bare metal hosts, for the cases where those choices make the most sense. We will let you choose the best fit, because you know your application best. You can get full bare metal servers on the Rackspace Cloud today by using our OnMetal flavors. Once we announce support for the OpenStack Magnum API with Carina, you will make this choice by specifying the flavor type your container clusters are built on, including OnMetal flavors!

When selecting your flavor type, consider your use case and find the right balance between performance, security isolation, and cost. Carina’s initial public beta release offers containers as the first flavor type. We will add more over time. This table highlights the most important considerations:

Why do we care so much about basing Carina on OpenStack Magnum rather than just settling for what we have today? We have several reasons:

  • We believe users deserve a choice of what COE they run. If you plan to extensively customize your environment, you may really benefit from an imperative style of management that Docker Swarm offers, paired with the declarative style of deploying applications with tools like docker-compose, which allows you to group numerous containers that work together into a single deployment. Kubernetes does this in a different way, which allows less customization without changing the code in the COE. You might want to run multiple different COEs simultaneously for different workloads. With OpenStack Magnum, you can do that today.
  • Rackspace offers both a public cloud and managed private clouds, so we want to offer a consistent experience with the simplicity and performance of Carina, but for Rackspace Private Cloud users as well. You should be able to choose whether to deploy your application on a public or private cloud, and do it the same way on each. By basing Carina on OpenStack Magnum, we can offer this consistent experience on both public and private clouds.
  • You should be able to choose what hardware flavor type your COE runs on. Some workloads, such as dev/test or highly ephemeral workloads, may not be sensitive in terms of security isolation, but you may derive significant business benefit from being able to get bare metal performance that costs less than public cloud resources would.
  • Developers have strong preferences for what tools they want to use. We are convinced that offering the option for developers to use native tools and APIs is essential. You should not have to learn yet another proprietary API and another complex tool in order to run containers on Carina.

You might wonder why you shouldn’t just set up your own bare metal hosts, slap your own COE on top, and let your users start running containers there.

First, if you do, you will need to accept the single-tenant nature of your chosen COE. Second, you won’t be able to offer any of the choices detailed above that make OpenStack Magnum so compelling. Finally, your approach will really only work properly on a static pool of hosts. If you want that pool to scale up and down dynamically, the COE must be integrated with an IaaS. Doing that will open a Pandora’s box of questions: how do you connect your networks and your storage, how do you assign IP addresses, how do you sign and distribute TLS certificates so that your COE prohibits unauthorized use on public networks, how will you manage integration with your identity system, how will you account for usage on your system… you get the idea.

That list is long — and most of those potential challenges have nothing to do with containers. Using a solution like Carina allows you choice, with confidence that the underlying technology is coming from a diverse, thriving community of contributors working together toward one goal: making the best integration of IaaS and containers possible, with more choice and flexibility than you’d have with the DIY approach.

Carina offers a unique blend of simplicity, performance and affordability, which are only possible when using a different approach than other cloud providers. We are elated to have the opportunity to offer you the power of choice, and the confidence that your technology selections today will evolve with you as container technology changes over time.

Get Carina today for free.

Adrian served as a Distinguished Architect for Rackspace, focusing on cloud services. He cares deeply about the future of cloud technology, and important projects like OpenStack. He also is a key contributor for and serves on the editing team for OASIS CAMP, a draft standard for application lifecycle management. He comes from an engineering and software development background, and has made a successful career as a serial entrepreneur.
