Deploying Kubernetes from Kubernetes: How We Deploy Our Clusters

Editor’s note: Datapipe was acquired by Rackspace in 2017.

What is Kubernetes?

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Applications run in containers within the Kubernetes ecosystem and can be managed with ease via CLI, API, or UI. Kubernetes aims to optimize runtime resource usage, relying on the kube-scheduler to place and orchestrate your containers effectively. That orchestration really comes to bat in Kubernetes' ability to scale and support containers in a distributed environment.
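As a quick illustration of that CLI experience (a minimal sketch; the deployment name `web` and the nginx image are just placeholders):

```shell
# Create a deployment from a container image
kubectl create deployment web --image=nginx

# Scale it out; the kube-scheduler places the new pods across the cluster
kubectl scale deployment web --replicas=3

# Inspect the running pods
kubectl get pods -l app=web
```

The same operations are available through the Kubernetes API and dashboard UI; the CLI is simply the most common entry point.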

Kubernetes encourages container-native application design, which is fast becoming a widely accepted methodology for running modern applications effectively in the cloud. With over a decade of hard lessons learned by Google engineers, Kubernetes proudly flaunts the battle scars needed to take on monumental orchestration tasks like those seen in the launch of Pokémon Go.

Of course, we encourage all readers to deploy the ecosystem themselves and form their own opinions, which brings up a great point…

What actually is the right way to deploy a Kubernetes cluster?

The story of Kubernetes Operations (kops)

Kubernetes Operations (kops) is an open-source command line application that manages Kubernetes deployment, installation, and upgrades, and is widely used in production. It originated from the need to standardize the way Kubernetes environments are deployed in the cloud. The project has been evolving since March 2016 and is now getting widespread attention with its v1.4.1 release, versioned in lockstep with Kubernetes itself.

This attention is well deserved: the tool supports flexible and intuitive command line arguments, verbose logging, and a state-sync model backed by remote storage, and it can even generate Terraform configurations for you.
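A minimal sketch of that workflow (the cluster name and S3 bucket below are placeholders):

```shell
# Remote state store: kops syncs cluster state to S3
export KOPS_STATE_STORE=s3://example-kops-state

# Create a cluster spec in the state store
kops create cluster --name=demo.example.com --zones=us-east-1a

# Either apply the changes directly...
kops update cluster demo.example.com --yes

# ...or generate a Terraform configuration instead
kops update cluster demo.example.com --target=terraform --out=.
```

The Terraform target is handy when your team already manages infrastructure through Terraform and wants cluster changes to flow through the same review process.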

My colleague at Datapipe and well-known Kubernetes contributor Chris Love recently put it best: kops is becoming the premier tool for installing, upgrading, and managing an HA, hardened, production Kubernetes deployment on AWS. But what about all the lessons we have been learning from the Kubernetes project itself and its containerized approach to application development?

As a platform engineer at Datapipe and an active participant in the Kubernetes community, I work daily to answer the tough questions. Could we make kops any better? Could we apply the same logic used to run all of our other microservices to the application that builds the backend for our microservices?

Why not? Let’s give it an API and run it in a container.

We are hard at work implementing an HTTP layer for the kops command line utility. This new feature is scheduled for release by the community shortly.

The new feature will support functional parity between the kops command line tool chain and the soon-to-be-released API. The application will be containerized and ready to rock in a Kubernetes environment.
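To make the idea concrete, here is a purely hypothetical sketch of what such an interaction could look like. The endpoint path, host, and payload fields below are invented for illustration; the API has not yet been released, so none of this is final:

```shell
# HYPOTHETICAL example only: the host, path, and JSON fields are
# placeholders, not a published API. Conceptually, creating a cluster
# over HTTP would mirror `kops create cluster` on the command line.
curl -X POST http://kops-api.example.com/clusters \
  -H "Content-Type: application/json" \
  -d '{"name": "demo.example.com", "zones": ["us-east-1a"]}'
```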

Seems like a great idea, but there is only one minor problem…

Where do we deploy our kops container to?

Without a Kubernetes environment to host the HTTP API, how can we use the microservice to deploy new Kubernetes installations?

kops will build itself

kops will create an environment to run itself in, and can then be deployed into the newly created Kubernetes environment.

Of course, we all remember how Stallman got GCC to compile itself in the late '80s, which is why this likely sounds familiar. In this case, kops is being engineered toward the same goal: self-hosting.

kops will be able to deploy a new Kubernetes installation using the traditional command line methodology. The newly created installation can then be bootstrapped into a kops deployment mode, in which case kops deploys itself into the newly created Kubernetes environment.
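Sketched as a sequence of steps (the cluster name and state bucket are placeholders, and the final `kubectl apply` manifest is hypothetical, since the self-hosting feature has not yet shipped):

```shell
# 1. Bootstrap a cluster the traditional way, from the CLI
export KOPS_STATE_STORE=s3://example-kops-state
kops create cluster --name=demo.example.com --zones=us-east-1a --yes

# 2. Once the cluster is up, deploy the containerized kops API into it
#    (hypothetical manifest: the image and spec are not yet published)
kubectl apply -f kops-api-deployment.yaml
```

From that point on, the kops running inside the cluster can manage further environments over HTTP.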

The user will now have a working Kubernetes installation, with a fully functional HTTP API to interact with kops.

Kubernetes and kops will then be able to deploy themselves into any environment kops currently supports, at the click of a button. The user will have full CRUD over Kubernetes environments and total control of each one. Environments will be deployable, scalable, destroyable, and manageable, just like our containerized application predecessors.

The design opens the door to more controversial topics. With support and adoption, kops as a service could eventually make its way into the Kubernetes API stack as a plugin. On deployment, once the master nodes are up, the Kubernetes cluster could finish building itself.

Now what?

It is hard to argue that the future of application design and deployment will exist without containers. One day, we hope to make it hard to argue that the future of orchestration design and deployment will exist without the orchestration layer managing itself.