Installing Rackspace Private Cloud In 20 Minutes Or Less

Being an application developer with a very limited attention span, my test of any new technology or product is how quickly I can write my “Hello World” application. The Rackspace Private Cloud, which is based on OpenStack, is intended for a full-fledged data center with a number of compute, storage and network requirements, but I was still hoping to install it in just a few minutes.

My “Hello World” exercise was defined as follows:

  • To be able to install the product from scratch
  • To try out some basic OpenStack CLI commands, such as keystone, glance and nova
  • To be able to spin up instance(s) using Horizon (the GUI-based interface for OpenStack)
  • To be able to log in to the instance(s)

If I had to cut some corners, then so be it — High Availability (HA) and Neutron were out the door (these are left as an exercise for the astute reader). A note of caution here: I was building something that I would never expect to see in a production environment.

The goal of the exercise was to be able to install a very simple instance of the Rackspace Private Cloud and to use the lessons learned to stand up a data center, with High Availability and advanced networking requirements on dedicated hardware as outlined in the Knowledge Center.

All you need is an account on the Rackspace Open Cloud to get started!

High Level Overview of the Installation Process

Rackspace Private Cloud and OpenStack are designed to “scale out” computing and network resources. To support this scale out for a set of OpenStack services running on multiple nodes, there are two primary distinguishing roles:

  • One or two Controller nodes that host the services.
  • Multiple Compute nodes that host the Virtual Machine (VM) instances.

The installation of the Controller node(s) and the Compute node(s) is done using Chef scripts from a Chef server.

Since our goal is to keep it as simple as possible, we will create two instances of Ubuntu 12.04 on the Rackspace Open Cloud and use one as the Chef server and the other as the Controller and Compute Node.

Overview of Steps

Here are the steps involved:

  1. Create two instances of Ubuntu 12.04 on the Rackspace Open Cloud
  2. Update /etc/hosts on both the server instances
  3. Install Chef server and cookbooks
  4. Tweak the Chef environment
  5. Deploy both the Controller and Compute services on one node (petcattle) using the “allinone” role
  6. Fire up a browser, connect to Horizon and OpenStack away

Step 1 – Create Two Ubuntu 12.04 Instances

You can create the Ubuntu instances on the Rackspace Open Cloud by providing the appropriate credentials. You will need a Rackspace Open Cloud account.

Create two nodes:

  1. A Chef server, named chef in the examples below.
  2. A Controller+Compute node, named petcattle in the examples below.

Pick “Ubuntu 12.04 LTS (Precise Pangolin)” for the image. For the flavor, 512MB should suffice for the Chef server. For the Controller+Compute node, a flavor of 2GB or higher is preferred.

Using the credentials provided as part of the server creation process, ssh as root into the Chef server as shown below when it is ready for use.

ssh root@

Step 2 – Update /etc/hosts with the IP address entries of the other server

Run the ifconfig command and note down the inet addr value for eth0.

root@chef:~# ifconfig
eth0      Link encap:Ethernet  HWaddr bc:76:4e:20:03:df  
          inet addr:  Bcast:  Mask:
          inet6 addr: 2001:4802:7800:1:9bc6:4c:ff20:3df/64 Scope:Global
          inet6 addr: fe80::be76:4eff:fe20:3df/64 Scope:Link
          RX packets:2012 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1415 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:166181 (166.1 KB)  TX bytes:195649 (195.6 KB)

eth1      Link encap:Ethernet  HWaddr bc:76:4e:20:03:e6  
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::be76:4eff:fe20:3e6/64 Scope:Link
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:468 (468.0 B)

lo        Link encap:Local Loopback  
          inet addr:  Mask:
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Likewise, note down the eth0 address on the server petcattle. We will use this IP address for the external interfaces (the keystone endpoint and so on), as we will see subsequently.

Add this network address for petcattle to the /etc/hosts file on chef so that it looks like the following example:

# Adding the IP address for petcattle	petcattle

Similarly, add the network address of chef to the /etc/hosts file on petcattle so that it looks like the following example:

# Adding the IP address for chef	chef
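If you prefer to script the lookup, a short shell sketch like the following can pull the eth0 address out of the ifconfig output; it assumes the Ubuntu 12.04 ifconfig output format shown above, and simply prints the hosts entry for you to append as root.

```shell
# Extract the IPv4 address of eth0 (Ubuntu 12.04 ifconfig prints it as
# "inet addr:10.x.x.x  Bcast:...  Mask:...").
IP=$(ifconfig eth0 2>/dev/null | awk '/inet addr/ {sub(/addr:/, "", $2); print $2}')
# Print the line to append to /etc/hosts (append it with >> /etc/hosts as root).
echo "${IP}	petcattle"
```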

Configure ssh on the chef server to be able to ssh to petcattle without prompting for a password. Generate a key pair with ssh-keygen, accepting the defaults:

ssh-keygen
Now copy the public key from the chef server to petcattle using the following command:

ssh-copy-id root@petcattle
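It is worth confirming that passwordless ssh actually works before moving on, since the knife bootstrap in step 5 depends on it. A quick sketch of a check (BatchMode makes ssh fail fast instead of prompting for a password):

```shell
# BatchMode prevents any password prompt, so this fails fast if key-based
# login is not set up; ConnectTimeout keeps it from hanging on a bad address.
if ssh -o BatchMode=yes -o ConnectTimeout=5 root@petcattle true 2>/dev/null; then
  echo "passwordless ssh to petcattle: OK"
else
  echo "passwordless ssh to petcattle: FAILED" >&2
fi
```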

Step 3 – Install Chef Server and Cookbooks

On the Chef server, execute the following commands:

curl -s -L | bash
curl -s -L | bash

Log out of the Chef server and log back in to include knife in the path.

Verify that knife is installed and configured by running:

knife client list

This should yield an output similar to the following:


Step 4 – Tweak the Chef Environment

Create a file named rpcs.json with the following content. Substitute the values for eth0 of petcattle as noted in step 2; in this case, those are the entries under osops_networks. The installation will also create a network interface named br100 that will assign addresses from the configured ipv4_cidr range to the VMs that will be created.

{
  "name": "rpcs",
  "description": "Environment for Rackspace Private Cloud (Grizzly)",
  "cookbook_versions": {},
  "json_class": "Chef::Environment",
  "chef_type": "environment",
  "default_attributes": {},
  "override_attributes": {
    "nova": {
      "libvirt": {
        "virt_type": "qemu"
      },
      "networks": [
        {
          "label": "public",
          "bridge_dev": "eth1",
          "dns1": "",
          "dns2": "",
          "num_networks": "1",
          "ipv4_cidr": "",
          "network_size": "255",
          "bridge": "br100"
        }
      ]
    },
    "mysql": {
      "allow_remote_root": true,
      "root_network_acl": "%"
    },
    "osops_networks": {
      "nova": "",
      "public": "",
      "management": ""
    }
  }
}

Upload the environment file with the following command.

knife environment from file rpcs.json
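If knife rejects the environment file, the usual culprit is a JSON syntax error. The stock Python on Ubuntu 12.04 can validate the file before you upload it:

```shell
# Validate rpcs.json with Python's built-in JSON parser before uploading.
if python -m json.tool rpcs.json > /dev/null 2>&1; then
  echo "rpcs.json: valid JSON"
else
  echo "rpcs.json: syntax error" >&2
fi
```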

Now, you’re ready to configure the server that will serve as both the Controller and the Compute node.

Step 5 – Install the Controller and Compute Roles

Configure the second server (petcattle) to be both a Controller and Compute host with the “allinone” role.

knife bootstrap petcattle -E rpcs -r 'role[allinone]'

This should run through all the recipes in the cookbooks and finally yield output similar to the following:

petcattle Chef Client finished, 316 resources updated

ssh to petcattle:

ssh petcattle

Verify that OpenStack is up by running the keystone and nova commands.

source openrc
keystone service-list

This should yield output similar to the following example:

|                id                |   name   |   type   |        description        |
| 635ceead9bb44a4cb457382301db951b |  cinder  |  volume  |   Cinder Volume Service   |
| f7a3cff141a84c17b8bcdc399f034c4a |   ec2    |   ec2    |  EC2 Compatibility Layer  |
| fb31e8ae88f34de182a11a93c658bd40 |  glance  |  image   |    Glance Image Service   |
| 2de6009762fe417aa2a11c271ecaf02f | keystone | identity | Keystone Identity Service |
| a914101228b04ee6bb96f929c3503ae7 |   nova   | compute  |    Nova Compute Service   |

The id is the UUID of the associated service.
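Given the table format above, a small loop can confirm that each core service is registered in Keystone; this is a sketch that assumes openrc has been sourced and simply greps the name column of the table:

```shell
# Check the name column of `keystone service-list` for each expected service.
for svc in cinder ec2 glance keystone nova; do
  if keystone service-list 2>/dev/null | grep -q "| *${svc} *|"; then
    echo "${svc}: registered"
  else
    echo "${svc}: missing" >&2
  fi
done
```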

nova service-list

The following example shows which services are up and running:

| Binary           | Host      | Zone     | Status  | State | Updated_at                 |
| nova-cert        | petcattle | internal | enabled | up    | 2013-08-29T17:00:19.000000 |
| nova-compute     | petcattle | nova     | enabled | up    | 2013-08-29T17:00:22.000000 |
| nova-conductor   | petcattle | internal | enabled | up    | 2013-08-29T17:00:18.000000 |
| nova-consoleauth | petcattle | internal | enabled | up    | 2013-08-29T17:00:20.000000 |
| nova-network     | petcattle | internal | enabled | up    | 2013-08-29T17:00:18.000000 |
| nova-scheduler   | petcattle | internal | enabled | up    | 2013-08-29T17:00:18.000000 |
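The State column (the sixth field when the table is split on “|”) can also be checked programmatically; no output means every service is up. A sketch against the table format shown above:

```shell
# Print any nova service whose State is not "up"; silence is good news.
nova service-list 2>/dev/null \
  | awk -F'|' 'NF > 6 && $6 !~ /State/ {gsub(/ /, "", $2); gsub(/ /, "", $6); if ($6 != "up") print $2, "is", $6}'
```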

Upload a bootable Ubuntu image to Glance with the following command.

glance image-create --disk-format qcow2 --container-format bare --name "Ubuntu 12.04.1 Precise (cloudimg)" --copy-from --is-public true

Run the following command:

glance image-list

This should yield output similar to the following example:

| ID                                   | Name                              | Disk Format | Container Format | Size      | Status |
| 4f721a9a-abd3-41b8-ad97-c06af86da089 | cirros-0.3.0-x86_64-uec-initrd    | ari         | ari              | 2254249   | active |
| 69e22230-287b-4873-96b0-ec2d411615e8 | cirros-0.3.0-x86_64-uec-kernel    | aki         | aki              | 4731440   | active |
| 59bfe3a9-40f4-4bb7-aea3-571d98b7db7b | cirros-image                      | ami         | ami              | 25165824  | active |
| fd3dd411-9111-4861-8fb0-2c1863ea457b | Ubuntu 12.04.1 Precise (cloudimg) | qcow2       | bare             | 253755392 | saving |

The ID is a UUID associated with the bootable image. The Status is shown as saving and will eventually change to active.
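Rather than re-running glance image-list by hand and scanning the table, the Status column (the seventh field) can be extracted directly; re-run this until it prints active. A sketch that assumes the image name used in the upload command above:

```shell
# Pull the Status column for the image out of the glance table output.
glance image-list 2>/dev/null \
  | awk -F'|' '/Ubuntu 12.04.1 Precise/ {gsub(/ /, "", $7); print "status:", $7}'
```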

Step 6 – Connect to OpenStack

Point your browser to the IP address of petcattle and, after accepting the browser warnings, log in with the username admin and the password secrete.

Click the Project tab, then Instances, and then the “+ Launch Instance” button. Fill in the launch dialog, selecting the Ubuntu image that was uploaded earlier.

Note down the network address of the VM that was created.

You can ssh to the instance you just created as in the following example. Substitute the correct network address, respond to the prompt with “yes”, and provide the password – cubswin:) – as-is.

ssh cirros@

Congratulations! You just completed the “Hello World” exercise!

Summary and Follow Up

The goal of this exercise was to get up and running very quickly, similar to devstack. This installation is highly restrictive, since it does not allow Internet access to the VMs that are instantiated. Just as you would never use devstack in production, you would never use this installation in a real-life scenario, but the knowledge gained here can be extrapolated to stand up a data center on dedicated hardware with High Availability and advanced networking requirements. The principles are the same, and you can scour the Knowledge Center to gain a better understanding and to help you further with these efforts.

