In the first part of this blog series, I covered why you might deploy OpenStack on top of OpenStack using Heat and Ansible. If you followed part one, you’ll have deployed 15 VMs using Heat, ready to be prepped for your OpenStack installation…
In this post, I’ll cover the roles of various playbooks and explain how they prepare the VMs for the installation of OpenStack.
We’ll be using Ansible to perform the VM configuration. I recommend installing the latest version of Ansible – this was initially developed with an Ansible 2.x release running on macOS High Sierra.
Clone the following repo to your deployment machine:
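For example (the URL below is a placeholder – substitute the actual repo linked above):

```shell
# Placeholder URL – replace with the repo for this series
git clone https://github.com/<your-account>/<repo-name>.git
cd <repo-name>
```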
Firstly, you’ll need to configure your deployment machine so Ansible can authenticate to your cloud. Ansible uses clouds.yaml to store this connection information – you’ll find a clouds_example.yaml file in the repo which can be used as a template to create your own clouds.yaml with the appropriate account settings.
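As a sketch, a minimal clouds.yaml follows the standard openstacksdk layout – every value below is a placeholder to swap for your own provider’s details:

```yaml
# clouds.yaml – all values are placeholders; use your cloud's auth details
clouds:
  mycloud:
    auth:
      auth_url: https://keystone.example.com:5000/v3
      username: deployer
      password: secret
      project_name: openstack-lab
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```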
You’ll also need to set up group_vars/general/vault.yml, using the vault_example.yml file as a template. This file stores your chosen password and allocates it to a user account, which is created once OpenStack is configured. It also contains settings to enable mail alerting which, given the long-running nature of the deployment, is useful for monitoring progress.
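A vault.yml might look something like the following – the variable names here are illustrative, so check vault_example.yml for the names the playbooks actually expect:

```yaml
# Illustrative only – the real variable names are in vault_example.yml
vault_user_password: "ChangeMe123!"
vault_mail_enabled: true
vault_mail_recipient: you@example.com
```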
- refer to the README.md on GitHub for more details about configuring and using clouds.yaml and vault.yml
- whilst both files are included in .gitignore to prevent them syncing back to GitHub, take care not to expose them if you create your own repo
Initially, deploy in phases
The site.yml file calls roles in the correct order, and in theory can be run in one hit using the following command:
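Assuming site.yml sits in the root of the cloned repo, that’s simply:

```shell
ansible-playbook site.yml
```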
However, it’s recommended you break the deployment down so you can test each phase in turn. Once you’re satisfied that each role works as expected, run them all via site.yml.
The gateway role sets the VM up as a simple Linux-based router, which will provide the final hop for Neutron networks within the OpenStack deployment. This ensures the VMs have outbound internet access and prevents the public cloud’s network restrictions from interfering with traffic.
The second function of this – and the reason it’s the first VM we deploy – is that it also acts as a proxy server for the underlying host VMs. A large amount of code needs to be pulled and installed on every VM – using a caching proxy can significantly reduce the time it takes to install the various components.
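Under the hood, a role like this typically enables IP forwarding and NAT – an illustrative sketch, not the repo’s actual tasks, and the interface name is an assumption:

```shell
# Enable routing, then masquerade traffic out of the external interface
# (eth0 here is an assumption – use the gateway VM's outward-facing NIC)
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```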
To deploy the gateway role, run the following command from the host where you installed Ansible and cloned the git repo, updated as per the instructions above:
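The playbook name below is an assumption – check the repo for the actual filename:

```shell
ansible-playbook gateway.yml
```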
OpenStack-Ansible expects to find volume groups on the controller and compute nodes, so this role sets the appropriate groups for each node.
Setting up volume groups entails running the following command from your host:
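The playbook name below is an assumption – check the repo for the actual filename. The comments sketch the LVM commands the role effectively runs; the device name is also an assumption:

```shell
ansible-playbook volume-group.yml

# Per node, the role performs the equivalent of:
#   pvcreate /dev/vdb
#   vgcreate cinder-volumes /dev/vdb
```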
NTP server / client
We need to ensure all host VMs are using a common time source and are in sync. Whilst the underlying public cloud infrastructure should take care of this, I believe it’s best practice to actively manage this. By default, controller1 is used as the NTP server – all other nodes leverage this as a time source.
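The resulting client configuration amounts to pointing each node at controller1 as its sole time source – a sketch of the relevant directive, whichever of ntpd or chrony the role templates out:

```
server controller1 iburst
```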
To configure the NTP server and clients, run the following commands from your deployment host:
```shell
ansible-playbook ntp-server.yml
ansible-playbook ntp-client.yml
```
Generate / fetch / distribute key
We need an SSH key on the deployment host, which by default is controller1. Controller1 uses this key to connect to the other nodes when running OpenStack-Ansible. We first generate a key on controller1, then fetch its public half before distributing it to all other nodes.
To create and distribute the SSH keys, run the following commands from your deployment host:
```shell
ansible-playbook generate-key.yml
ansible-playbook fetch-key.yml
ansible-playbook distribute-key.yml
```
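Roughly speaking, the three playbooks automate the equivalent of the following manual steps (illustrative only):

```shell
# On controller1: generate a passwordless keypair
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

# Then fetch ~/.ssh/id_rsa.pub back to the deployment host, and append
# it to ~/.ssh/authorized_keys on every other node
```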
Each Swift Node has five volumes associated with it, which you need to partition and mount within the host OS before OpenStack-Ansible can use them when deploying Swift.
To prepare the volumes on all swift nodes, run the following command from your deployment host:
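The playbook name below is an assumption – check the repo for the actual filename. The comments sketch the partition/format/mount steps the role effectively performs per volume; device and mount point names are assumptions:

```shell
ansible-playbook swift-volumes.yml

# For each of the five volumes, the role does the equivalent of:
#   parted -s /dev/vdc mklabel gpt mkpart primary xfs 0% 100%
#   mkfs.xfs -f /dev/vdc1
#   mount /dev/vdc1 /srv/vdc1    # plus a matching fstab entry
```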
The OSA prep role prepares each node for the installation of OpenStack. It installs the many required packages, gathers the various IP addresses allocated to each VM by the public cloud, and then configures the network bridges – rebooting only if required and waiting for all nodes to come back online.
This play relies on the various interfaces files found in the files/ folder, which configure networking for each node type depending on its role. Various update emails are sent during this process, as it can take some time to complete.
To prepare all the nodes for the installation of OpenStack run the following command from your deployment host:
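The playbook name below is an assumption – check the repo for the actual filename:

```shell
ansible-playbook osa-prep.yml
```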
You now have 15 nodes which have been prepared and are ready for the installation of OpenStack. Part three of this blog series will dive deeper into the process of deploying OpenStack onto these nodes.