Advanced Networking on Public Clouds

How do you deploy complex applications that require VLANs or multiple network segments into the public cloud? How can you make containers communicate with other containers running on different virtual machines (VMs) when public cloud network security prevents this type of traffic?

If any of the above scenarios sound familiar, then read on…

Whilst attempting to set up an OpenStack test environment recently, where I needed containers to run and communicate across many different VMs, I realised the Public Cloud's network security was getting in my way. That security is there for good reason: to protect my data from other users. I needed a way to get around these barriers without compromising security.

I had several ‘host’ VMs that could communicate directly via different layer 2 networks. On these, I had a number of Linux Containers (LXC) which needed to communicate with all the other hosts and containers. Host-to-host traffic wasn’t a problem, and containers could communicate with the host they were running on, but they couldn’t break out to other hosts or containers.

The solution was quite simple – VXLAN. So what is VXLAN, and how did it help?

Virtual Extensible LAN (VXLAN) has been around since 2011 and is a method of encapsulating layer 2 network traffic in UDP packets carried over an existing layer 3 network. The solution was to connect all the host VMs with an ‘Encapsulation Network’, then encapsulate all the container-related traffic within this network. So, how did we achieve this?
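
Before answering that, it helps to see VXLAN in isolation. Here’s a minimal, throwaway sketch of a tunnel between two hosts; the interface name, VXLAN id and addresses are hypothetical and chosen purely for illustration, with eth2 standing in for the host’s underlay NIC:

# On host A (underlay address 10.240.0.2, for example):
ip link add vxlan-demo type vxlan id 42 group 239.0.0.42 dev eth2 dstport 4789
ip addr add 192.168.42.1/24 dev vxlan-demo
ip link set vxlan-demo up

# On host B, repeat with 192.168.42.2/24, then ping 192.168.42.1;
# the layer 2 frames travel inside UDP packets on the underlay network.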

My VMs were running Ubuntu 16.04 which has native support for VXLAN, so it was simply a case of updating the network configurations to enable VXLAN and encapsulate the required traffic. As my environment is used to run test installations of OpenStack using OpenStack-Ansible (OSA), I have many different networks configured on my Public Cloud deployment.

PublicNet and ServiceNet are default networks on our Rackspace Public Cloud. The Encapsulation Network is created specifically to carry all traffic encapsulated by VXLAN.  The remaining networks are specific to my OpenStack test installation and will be more thoroughly covered in a follow-on article.

Within my VMs, there are bridges created for each of the ‘Flat’, ‘Management’, ‘Storage’, ‘Replication’, and ‘VXLAN’ networks. Note that the VXLAN network is used within OpenStack and shouldn’t be confused with the ‘Encapsulation Network’, which also uses VXLAN technology.

By following a standard configuration approach, I’ve mirrored a typical deployment as closely as possible, then added the VXLAN encapsulation as an extra layer. The configuration of each VM needs to enable the Encapsulation Network, then create an encapsulated interface for every network, before finally mapping a bridge to each one.

This is the configuration of the Encapsulation Network interface. Whilst its address was initially allocated via DHCP by the Rackspace Public Cloud, it now needs a static IP, so I used Ansible automation to set it within the VM.

auto eth2
iface eth2 inet static
    address 10.240.0.2
    netmask 255.255.252.0

Then we created an interface to encapsulate the ‘management’ traffic, as follows:

auto encap-mgmt
iface encap-mgmt inet manual
    pre-up ip link add encap-mgmt type vxlan id 236 group 239.0.0.236 dev eth2 ttl 5 dstport 4789 || true
    up ip link set $IFACE up
    down ip link set $IFACE down
    post-down ip link del encap-mgmt || true

The key VXLAN settings are:

ip link add encap-mgmt type vxlan id 236 group 239.0.0.236 dev eth2 ttl 5 dstport 4789

id: the Segment ID, also known as the VXLAN Network Identifier or VNI (similar to a VLAN tag)
group: the multicast group, which uses a class D address (I set the last octet to match the third octet of the network being encapsulated, but any address from 224.0.0.0 to 239.255.255.255 can be used)
dev: the device to use, in this case eth2
ttl: needs to be greater than 1 for multicast and large enough to allow traversal of the data centre networking
dstport: 4789 is the default port for VXLAN traffic, as assigned by IANA
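
To confirm the interface came up with these parameters, the iproute2 details flag is handy (just an optional sanity check, not part of the configuration itself):

ip -d link show encap-mgmt
# look for: vxlan id 236 group 239.0.0.236 dev eth2 ... dstport 4789 ... ttl 5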

And finally, we create a bridge which uses the ‘encap-mgmt’ interface:

auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports encap-mgmt
    offload-sg off
    address 172.29.236.2
    netmask 255.255.252.0
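
Once everything is up, you can check that the bridge has picked up its encapsulated port (another optional sanity check; brctl comes from the bridge-utils package):

brctl show br-mgmt
# or, using iproute2 only:
ip link show master br-mgmt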

All the configuration was handled with Ansible, which took the IPs assigned by the Public Cloud and converted them into static IPs. The interfaces would need to be taken down and brought up again for the settings to take effect; however, as I was applying the configuration alongside a raft of updates and other changes, a full reboot of each VM was triggered, ensuring the new configuration was activated.
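
If a full reboot isn’t convenient, cycling the interfaces by hand should achieve the same result (a sketch, assuming the ifupdown stanzas above are in place and you’re not connected over eth2):

ifdown br-mgmt encap-mgmt eth2 || true
ifup eth2 encap-mgmt br-mgmt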

Once back online, the Encapsulation Network began carrying all the traffic for the various networks, wrapped in VXLAN tunnels. The tunnels enable the traffic to get past the cloud's network security restrictions whilst keeping the data safe from other users.
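
If you’re curious to see the encapsulation in action, capture on the underlay interface (4789 being the VXLAN port configured earlier):

tcpdump -ni eth2 udp port 4789
# each captured packet wraps an inner Ethernet frame from one of the encapsulated networks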

If you want to learn more about advanced networking on cloud platforms then reach out to our experts today, wherever you might be on your migration journey.

Keep an eye out for my upcoming blogs; the first of the series will be homing in on deploying infrastructure on VMs using Heat…
