Containers: Delivering Disruption From The Bottom Of The Stack

This is a guest post written and contributed by David Strauss, CTO at Pantheon, a Rackspace partner and all-in-one Drupal platform provider.

Right now every business and organization in the world is wrestling with how best to build, launch and run sites and applications on the Internet. It’s a problem everyone has, and the status quo of cloud services still leaves quite a lot to be desired.

But help is on the way. Innovation emerging from the bottom of the software stack — ongoing development down in the kernel — offers breakthrough potential for those willing to look beyond traditional Virtual Machines. Linux containers (LXC) are a way to run professional-grade multi-tenant architectures, where many applications share a single server, and are rapidly becoming “a thing.”

At Pantheon, we’re putting theory into practice, building a massive website platform — serving tens of thousands of sites and billions of pageviews a month — on the Rackspace Cloud using containers. Rather than going down the road of spinning up VMs on a per-customer basis, we provision large servers and use our software to safely co-locate many customer applications within them.

This gives us a huge leap in operational agility as well as efficiency. It allows us to provide a rich suite of tools for customers to build, launch and run sites without having to even think about servers. It allows us to put every customer instance in a web-scale configuration. No more outgrowing your VM and having to upsize, or go through the painful transition from vertical to horizontal scalability; it’s one platform from day one. All of this is only possible because containers let us stop equating sites with servers, and start (literally) thinking outside the box.

Historically, sharing a Linux server entailed all kinds of untenable compromises. In addition to the security concerns, there was simply no good way to keep one application from hogging resources and messing with the others. The classic “noisy neighbor” problem made shared systems the bargain-basement slums of the Internet, suitable only for small or throwaway projects.
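The kernel feature that tames the noisy neighbor is the control group (cgroup), the same mechanism containers build on. As a rough sketch (assuming a cgroup-v1 filesystem mounted at `/sys/fs/cgroup`; the group name, limits, and `$APP_PID` are all illustrative):

```shell
# Cap one tenant's memory and CPU weight so it can't starve the others.
# Paths follow the classic cgroup-v1 layout; requires root.
mkdir /sys/fs/cgroup/memory/tenant-a
echo $((256 * 1024 * 1024)) > /sys/fs/cgroup/memory/tenant-a/memory.limit_in_bytes

mkdir /sys/fs/cgroup/cpu/tenant-a
echo 512 > /sys/fs/cgroup/cpu/tenant-a/cpu.shares   # half the default weight of 1024

# Move the tenant's process into both groups; the limits now apply.
echo $APP_PID > /sys/fs/cgroup/memory/tenant-a/tasks
echo $APP_PID > /sys/fs/cgroup/cpu/tenant-a/tasks
```

Combined with namespaces for isolation, this is what lets many applications share one server without the bargain-basement trade-offs.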

Serious use-cases traditionally demanded dedicated systems. Over the past decade virtualization (in conjunction with Moore’s law) has democratized the availability of what amount to dedicated systems, and the result is hundreds of thousands of websites and applications deployed into VPS or cloud instances. It’s a step in the right direction, but still has glaring flaws.

Most of these websites are just piles of code sitting on a server somewhere. How did that code get there? How can it be scaled? Secured? Maintained? It’s anybody’s guess. There simply isn’t enough SysAdmin talent in the world to manage all these apps with anything close to best practices without a better model.

Containers are a whole new ballgame. Unlike VMs, you skip the overhead of running an entire OS for every application environment. There’s also no need to provision a whole new machine to have a place to deploy, meaning you can spin up or scale your application with orders of magnitude more speed and accuracy.

The pioneering engineers at Heroku were the first to take containers to market in a big way. Their multi-tenant platform put containers — or “dynos” in their lingo — on the map, forever changing the way Ruby on Rails developers looked at infrastructure. Other services are emerging utilizing similar architectures for a range of use-cases, and a number of open-source projects — Docker, CoreOS and Red Hat’s OpenShift — are advancing the cutting edge in ways that are accessible to all.
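To make the speed difference concrete, here is a minimal sketch using Docker (one of the projects named above). No guest OS boots; the container shares the host kernel, so there is a running environment in about a second. The image, container name, and port mapping are illustrative, not anyone’s production setup:

```shell
# Start an isolated web server environment in the background,
# mapping host port 8080 to the container's port 80.
docker run -d --name customer-site -p 8080:80 nginx

# The container is already up — no VM provisioning, no OS boot.
docker ps

# Tear it down just as quickly.
docker stop customer-site && docker rm customer-site
```

Compare that to waiting minutes for a fresh VM to provision and boot, and the agility gap becomes obvious.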

From the “everything old is new again” file, much of this technology has been present in other *nix flavors for quite some time. BSD “jails” and Solaris “zones” have been around for ages, but the development of mature tools in the Linux space is a game-changer: that’s where most of the software is written to run, so that’s where containers will do the most damage.

The advances in speed, efficiency and accuracy make it feasible to automate the full software lifecycle. That’s the paradigm shift. It’s inevitable that today’s bespoke web infrastructures — thousands of beautiful unique snowflakes, lovingly hand-crafted by a small army of SysAdmin and DevOps professionals — will be swept away and replaced by utility container platforms.

In fact, most “Web-scale” companies (e.g. Google and Facebook) are already using some variation on this process to manage their software. What’s new and exciting is that developers and business owners don’t need a crack squad of kernel hackers to utilize these kinds of tools.

While the power and complexity of full, unfiltered operating system access will always have a place in every developer’s toolbox — you can take my root when you pry it from my cold dead hands — another layer of abstraction is just what the doctor ordered when it comes to running software on the Internet at scale.

By standardizing and speeding up operational processes, containers allow infrastructure to be automated and scalable in software. They enable innovators like us to build meta-tools that abstract practical use-cases like websites away from the infrastructure, freeing up developers to focus on the application, where the strategic value is and the human effort truly belongs.

“Container” is actually the perfect name for a technology leading this change. They’re how the future ships.

Learn more about containers on the Rackspace Blog
Read Strauss’s Op-Ed in the Linux Journal on containers
Read more about how Pantheon uses containers for faster scaling, lower cost
