In an ideal world, every OpenStack deployment would have a ‘staging’ version, so production-critical workloads can be tested against the next release. You should also have an R&D environment for more disruptive testing and staff training. But we know this isn’t always possible, so what’s the alternative?
The cost of implementing a new environment for testing is often prohibitive, in both time and money. So, how do you go about training new staff preparing for their Certified OpenStack Administrator (COA) certification? How do you test new OpenStack projects without impacting your current production system?
One option is to deploy OpenStack on OpenStack, giving you the option of running multiple versions on the same hardware. Now, it’s worth pointing out at this early stage that performance will be significantly lower than a production system running on dedicated hardware, so this approach is aimed only at functional testing and certainly can’t be used to run production workloads.
So assuming you want to deploy this type of testing environment, how do you go about it and what do you need?
In the first part of this blog series, I’m going to cover how to deploy the infrastructure on virtual machines (VMs) using Heat. My upcoming articles will continue to explore the implementation process in detail:
- Part 2: Configuring the infrastructure VMs using Ansible
- Part 3: Installing OpenStack using OpenStack-Ansible
- Part 4: Setting up OpenVPN and accessing your VMs
Deploying using Heat
The aim of this deployment is to mirror a production system where possible, while allowing for the limitations of nested virtualisation. As an OpenStack Specialist Architect, I have access to the Rackspace Public Cloud powered by OpenStack, so I leverage this for my own testing. My approach can easily be adapted to deploy onto your own OpenStack Private Cloud, or even onto an alternative public cloud.
You’ll need to prepare your environment by setting up networking and deploying a set of VMs which’ll simulate your physical servers in a real deployment. The easiest way to achieve this is by using Heat – click here for a copy of my Heat template.
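To give a feel for the shape of such a template before diving into the individual pieces, here’s a minimal sketch of its top-level layout – the parameter names and descriptions are illustrative assumptions, not copied from the actual template:

```yaml
heat_template_version: 2015-10-15

description: Framework for an OpenStack-on-OpenStack test lab

parameters:
  key_name:
    type: string
    description: Keypair used for SSH access to the infrastructure VMs
  image:
    type: string
    description: Image used for the infrastructure VMs
  flavor:
    type: string
    description: Flavor used for the infrastructure VMs

resources:
  # Networks, subnets, servers and volumes are declared here,
  # as described in the rest of this post
```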
My Heat template deploys the following networks, which mirror those used in a physical installation of OpenStack deployed with OpenStack-Ansible:
- Management: used by all hosts and by the containers running on the controllers; this is the central OpenStack management network.
- Storage: provides access to Swift object storage via Swift proxies running on controllers, and access to Ceph storage nodes.
- Storage replication: handles Swift and Ceph storage replication between nodes.
- Flat: enables provider networks without VLAN tagging, keeping it public cloud friendly.
- VXLAN: used by OpenStack when creating private networks using VXLAN, as VLAN tagged networks on public cloud aren’t easy to leverage.
In hindsight, a more suitable name for the VXLAN network might have been ‘tenant’ or ‘private’, as the network isn’t natively a VXLAN network at the public cloud layer. However, the virtual hypervisors use VXLAN when creating their own private networks, which run across this network.
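As a rough idea of how each of these networks is declared in Heat, a sketch for the management network might look like the following – the resource names are illustrative, and I’ve borrowed the CIDR from the OpenStack-Ansible documentation’s default management range:

```yaml
resources:
  management_net:
    type: OS::Neutron::Net
    properties:
      name: management

  management_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: management_net }
      cidr: 172.29.236.0/22   # OpenStack-Ansible's example management range
      enable_dhcp: true
```

The storage, storage replication, flat and VXLAN networks follow the same pattern, each with their own CIDR.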
This is where it gets interesting. For a physical install, the above networks would be used with bridges and possibly bonds to provide High Availability (HA). If you have full control of the underlying OpenStack installation you’re deploying on, you could create suitable networks which don’t have security layers – such as IP and MAC address filtering – to enable a seamless deployment. However, if running this deployment on a public cloud, you’ll need to implement an overlay network to bypass these security restrictions. You can learn more about advanced networking on public clouds here.
The trick is to deploy what I call the ‘encapsulation network’, followed by modifying the configuration of all VMs to enable VXLAN encapsulation of any network where containers are required to communicate with other containers or hosts. The flat, management, storage and VXLAN networks are encapsulated with a VXLAN tunnel – preventing the public cloud IP and MAC filtering from blocking traffic to/from containers. The configuration of the VXLAN tunnel is covered in the second part of this blog series.
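From Heat’s point of view, the encapsulation network is just another Neutron network and subnet – what makes it special is how the VMs use it. A sketch, with an illustrative name and CIDR:

```yaml
  encap_net:
    type: OS::Neutron::Net
    properties:
      name: encapsulation

  encap_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: encap_net }
      cidr: 10.0.0.0/24   # illustrative; carries all the VXLAN tunnel traffic
```

One design point worth bearing in mind: VXLAN adds roughly 50 bytes of header to every packet, so when configuring the tunnels you’ll need to allow for a correspondingly smaller MTU on the encapsulated interfaces than on the encapsulation network itself.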
The following VMs are deployed by the Heat template:
- ceph1-3: three Ceph storage nodes to provide block storage and Glance backend storage
- compute1-3: three compute nodes which’ll run QEMU
- console: admin VM providing direct access to the other VMs – it can host a desktop environment to provide a local browser
- controller1-3: three controller nodes mirroring a production configuration
- gw: Linux router and Squid proxy enabling provider network functionality and improved deployment speeds
- lb1: HAProxy load balancer fronting the three controller nodes
- swift1-3: three Swift storage nodes to provide object storage
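Each of these VMs is an OS::Nova::Server resource in the template. As a sketch – property values are illustrative, and the exact network attachments may differ in the real template – a controller might be declared like this:

```yaml
  controller1:
    type: OS::Nova::Server
    properties:
      name: controller1
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key_name }
      networks:
        # The management, storage, flat and VXLAN networks are carried as
        # VXLAN tunnels over this port, so they need no ports of their own
        - network: { get_resource: encap_net }
```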
To simulate a physical installation of OpenStack, some VMs have additional volumes assigned to them. The controller and compute nodes use volume groups, which OpenStack-Ansible expects to find on nodes with these roles. The Swift nodes have five volumes assigned; OpenStack-Ansible expects to see these mounted as individual partitions – keep an eye out for part two of this blog series, where I’ll cover the configuration of these partitions. Finally, the Ceph nodes also have five additional volumes assigned, but these don’t require pre-configuration, as the OpenStack-Ansible tooling handles this for you.
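The extra volumes are plain Cinder volumes attached to the relevant servers. For example, one of the five volumes on a Swift node might be declared like this (the names and size are illustrative):

```yaml
  swift1_volume1:
    type: OS::Cinder::Volume
    properties:
      name: swift1-volume1
      size: 50   # GB; illustrative size

  swift1_volume1_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: swift1 }
      volume_id: { get_resource: swift1_volume1 }
```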
To deploy the environment, simply use the Heat template “openstack-osa-framework.yaml” to create all of the infrastructure on your OpenStack cloud.
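If you keep your parameter values in a small environment file, the deployment becomes a single command. Both the file contents and the stack name below are illustrative:

```yaml
# env.yaml – parameter values for the stack (all values illustrative)
parameters:
  key_name: my-keypair
  image: Ubuntu 16.04 LTS (Xenial Xerus)
  flavor: 8 GB General Purpose v1
```

Then create the stack with `openstack stack create -t openstack-osa-framework.yaml -e env.yaml osa-lab`, and wait for it to reach `CREATE_COMPLETE` before moving on to the configuration in part two.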
Note that, at the time of publication, v2.0 of the git repo deploys the environment as described above. However, the repo is under active development, with Designate and Octavia due to be added soon, so keep an eye out for updates.