These next two blogs will be a change from my usual vCNS-based posts, and will be my last for a while as I’m taking on an exciting new career in network virtualisation; I hope to post more blogs in the future.

 

This blog is going to show the lab testing I’m doing on Open vSwitch (OVS) and OpenFlow using the KVM, Xen and Citrix XenServer hypervisors. All labs will be running on VMware Workstation, using Ubuntu 12.04 Linux where I can.

Below is a diagram of the lab I have built. There are four hypervisors: two KVM, one Xen and one XenServer, each running Open vSwitch. The KVM hosts and the Xen host use libvirt for VM management and for attaching VMs to an Open vSwitch bridge. I’m running a newer version of libvirt (1.0.0) on Host1, as I was trying out libvirt’s native Open vSwitch support for VLANs, whereas the other hosts use Linux bridge compatibility mode with libvirt 0.9.8. The XenServer host is managed using Citrix XenCenter.
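As a side note, below is a minimal sketch of what libvirt’s native Open vSwitch support can look like on Host1: a libvirt network bound to the OVS bridge br0 (created in step 2 later on), with an optional portgroup carrying a VLAN tag. The network name “ovs-net” and the VLAN tag 50 are purely illustrative, and this assumes a libvirt build with OVS support (the virtualport element, plus VLAN portgroups in newer releases such as the 1.0.0 used here).

# Sketch only: a libvirt network bound to the OVS bridge br0.
# "ovs-net" and VLAN tag 50 are illustrative names, not from the lab.
cat > ovs-net.xml <<'EOF'
<network>
  <name>ovs-net</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='vlan-50'>
    <vlan>
      <tag id='50'/>
    </vlan>
  </portgroup>
</network>
EOF
sudo virsh net-define ovs-net.xml
sudo virsh net-start ovs-net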

I then have the POX OpenFlow controller with POXDesk installed on an Ubuntu 12.04 VM.

In this lab each host is connected to VMnet8 for simplicity, but each host could just as easily sit in a different layer 3 network, as GRE tunnels can cross layer 3 boundaries. To keep this lab simple, all hosts are on the same subnet.

Lab Diagram:

 

OVS blog lab.gif

What I wanted to achieve from this lab was to have different hypervisors sharing a common virtual switch, using an OpenFlow controller and a tunnelling method to virtualise the network. I could then use this lab for further OpenFlow testing beyond this blog.

During this blog I will take you through the below steps I took to build the lab. These assume you have already installed the host operating systems, the OpenFlow controller OS, the hypervisors and VM management tools, and Open vSwitch on each host. I would highly recommend Scott Lowe’s blog http://blog.scottlowe.org/2012/08/17/installing-kvm-and-open-vswitch-on-ubuntu/ for this, as it is out of the scope of this blog.

 

Steps:

  1. Build the OpenFlow controller and start it with required modules
  2. Add a Bridge on each OVS called br0
  3. Configure the bridge br0 fail_mode to “Secure”
  4. Connect each OVS “br0” to the OpenFlow controller
  5. Add a GRE tunnel between each OVS “br0” as described in the lab diagram.

Once the above steps are done we can verify that each OVS is connected to the OpenFlow controller, then boot our VMs on each hypervisor and test connectivity using ping. This assumes you have used tools such as virt-install or Citrix XenCenter to create the VMs and attach them to each OVS bridge “br0”. In XenCenter you will need to add a “Single-Server Private Network”, then find the bridge name on the XenServer host’s OVS using the command “ovs-vsctl show”; in my case the bridge was called xapi0. Note that this bridge is only present when a VM attached to it is powered on.
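For the KVM hosts, a virt-install command along the lines of the sketch below could be used to create a VM attached to an OVS-backed libvirt network such as the illustrative “ovs-net” shown earlier; the VM name, memory, disk path/size and ISO path are all placeholders.

# Sketch only: create a VM attached to the OVS-backed libvirt network.
# All names, sizes and paths here are placeholders, not from the lab.
sudo virt-install --name vm1 --ram 512 --vcpus 1 \
  --disk path=/var/lib/libvirt/images/vm1.img,size=4 \
  --network network=ovs-net \
  --cdrom /tmp/ubuntu-12.04-server-amd64.iso \
  --graphics vnc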

 


1. Build the OpenFlow controller and start it with required modules.

On the VM you have created to act as your OpenFlow controller, install git; this will be used to download the POX repository.

“sudo apt-get install git”

Now clone the POX repository into your home folder and check out the most current branch.

“git clone http://github.com/noxrepo/pox”

“cd pox; git checkout betta”

Now install the POXDesk GUI, which lets you view the topology, OpenFlow tables and so on in a web browser. Run the below from within the pox folder.

“git clone http://github.com/MurphyMc/poxdesk”

The POX OpenFlow controller is now built; there is a README file in the pox folder that shows how to start the POX controller and how to load modules.

We now want to start the POX OpenFlow controller with the required modules using the below command.

“./pox.py --verbose forwarding.l2_learning samples.pretty_log web messenger messenger.log_service messenger.ajax_transport openflow.of_service poxdesk openflow.topology openflow.discovery poxdesk.tinytopo”

 

The “forwarding.l2_learning” module makes the OpenFlow controller act as a learning bridge/switch: whenever an OVS bridge receives a packet that does not match any of its local flows it sends an OpenFlow Protocol “OFP” packet-in message up to the controller, which reactively installs the appropriate flows into that bridge. The other modules are used to discover the topology, make the logs look pretty and provide the POXDesk web user interface.
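If you want to watch these reactive flows appear on a bridge, you can dump its OpenFlow flow table from the host at any point (I will cover the flow table commands in more detail in part two of this blog):

# Dump the OpenFlow flow table on br0 (use xapi0 on the XenServer host).
# With a fail mode of secure and no traffic yet, the table will be empty.
sudo ovs-ofctl dump-flows br0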

Please note we are not using OpenFlow or spanning tree to prevent forwarding loops, as this is not something I have yet covered in OVS; instead, the GRE tunnels are set up in a way that intentionally prevents any forwarding loops.
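For reference only, and not used in this lab: Open vSwitch does ship a basic spanning tree implementation that can be enabled per bridge, should you ever build a looped topology.

# Reference only, not used in this lab: enable OVS's basic STP on a bridge.
sudo ovs-vsctl set bridge br0 stp_enable=true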

 

Pox Starting:

starting POX.png

2. Add a Bridge on each OVS called br0

Now that we have our POX OpenFlow controller running, we will add a bridge called br0 on each KVM host and the Xen host using the below command. (On the XenServer host we use the xapi0 bridge that XenServer creates, as described earlier.)

“sudo ovs-vsctl add-br br0”
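You can confirm the bridge has been created by listing the OVS bridges on each host:

# List all OVS bridges on this host; br0 should appear in the output.
sudo ovs-vsctl list-br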

 

3. Configure the bridge br0 fail_mode to “Secure”

So that we can prove each OVS bridge is using the POX OpenFlow controller and not performing local learning, we will set each bridge to a fail mode of “secure”. This means OVS will not fall back to local learning and will rely entirely on the POX OpenFlow controller to install flows. Use the below commands to set the fail mode to secure on each OVS bridge.

“sudo ovs-vsctl set-fail-mode br0 secure” for KVM/XEN hosts

“sudo ovs-vsctl set-fail-mode xapi0 secure” for XenServer host
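You can verify the setting by reading the fail mode back; the command should print “secure”:

# Query the configured fail mode (substitute xapi0 on the XenServer host).
sudo ovs-vsctl get-fail-mode br0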

 

4. Connect each OVS “br0” to the OpenFlow controller

We now want to connect each of our newly created OVS bridges to our POX OpenFlow controller. We are using plain TCP rather than TLS in this lab.

“sudo ovs-vsctl set-controller br0 tcp:192.168.118.129:6633” (substitute xapi0 for br0 on the XenServer host)

If we now run the command “sudo ovs-vsctl show” on each host we will see a single bridge, “br0” on KVM/Xen or xapi0 on XenServer, with a controller connected at tcp:192.168.118.129:6633 and “is_connected: true”. We can also see that the fail mode is “secure”.

 

Image showing OVS connected to POX:

ovs-vsctl showing ovs connected to POX.png
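If you want more detail than “ovs-vsctl show” gives, you can also dump the controller records straight from the OVS database, which include the connection target and the is_connected flag:

# Show all controller records in the OVS database, including the
# target (tcp:192.168.118.129:6633) and the is_connected status.
sudo ovs-vsctl list controller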

 

Also, if we go back to the terminal running POX, we can see each OVS bridge connecting to our OpenFlow controller.

 

Image of OVS connecting to POX:

OVS connecting to POX.png

 

As we have no physical interfaces such as eth0 connected to the OVS bridges, any VMs attached to a bridge br0 will only be able to communicate with VMs on the same OVS bridge/host. To enable layer 2 connectivity between each host’s OVS bridge br0 we could simply add the interface eth0 as an OVS port on each bridge, but that would not work if each host were in a different layer 2/3 domain. So we will configure a GRE tunnel between each OVS bridge “br0”, making sure we do not create any forwarding loops; i.e. we create a simple daisy chain of bridges, not a ring.

 

5. Add a GRE tunnel between each OVS “br0” as described in the lab diagram.

 

Note the type=gre setting on each interface; this tells OVS to create a GRE tunnel endpoint rather than look for an existing local device of that name.

For host1:

“sudo ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.118.146”

For host2:

“sudo ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.118.145”

“sudo ovs-vsctl add-port br0 gre1 -- set interface gre1 type=gre options:remote_ip=192.168.118.130”

For host3:

“sudo ovs-vsctl add-port br0 gre1 -- set interface gre1 type=gre options:remote_ip=192.168.118.146”

“sudo ovs-vsctl add-port br0 gre2 -- set interface gre2 type=gre options:remote_ip=192.168.118.133”

For host4:

“sudo ovs-vsctl add-port xapi0 gre2 -- set interface gre2 type=gre options:remote_ip=192.168.118.130”

 

On each host, if we now issue the command “sudo ovs-vsctl show” we will see the GRE tunnel configured: a new port named “gre0” with an attached interface “gre0” of type gre and a remote endpoint IP address.

 

Image showing OVS GRE tunnel:

ovs-vsctl showing tunnel.png
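You can also check that the tunnel has been given an OpenFlow port number on the bridge, which is how the controller will see it:

# Show the OpenFlow ports on br0; gre0 should be listed with a port number.
sudo ovs-ofctl show br0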

We now have four OVS bridges connected to each other using GRE tunnels, all of which are also connected to the same POX OpenFlow controller.

We can now start up a VM on each host and test connectivity between the VMs over the virtual network we have just created. The result should be that all VMs can reach each other as though they were attached to a single virtual switch, as below.

 

Image showing single virtual switch:

OVS blog lab 02.gif
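As a concrete but purely illustrative example, if the VMs were addressed 10.0.0.1 to 10.0.0.4 on the same subnet, a quick test from the VM on Host1 to the VM on Host4 would be:

# Illustrative only: from the VM on Host1, ping the VM on Host4
# across the GRE tunnels (substitute your own VM addresses).
ping -c 4 10.0.0.4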

If you take a look at the terminal running your POX OpenFlow controller, you will see log events for flows being installed, as below, resulting from the connectivity test we performed earlier.

 

Image showing Flows being Installed:

POX flows being install on terminal.png

We can now use the POXDesk module to view the topology. Open up Firefox and load the URL

 

http://192.168.118.129:8000

 

At the bottom of the page that loads, click on the link “/poxdesk”.

When the POXDesk GUI loads, click the “POX” buttons at the bottom left (TopologyViewer, Tableviewer and L2LearningSwitch) to view a graphical topology and the flows on each OVS bridge, as shown below:

poxdesk.png

 

That is the end of this part of the blog. At this stage we have a working OpenFlow/Open vSwitch lab running on different hypervisors, with connectivity between all VMs utilising GRE tunnels. In the second part of this blog I will show how you can use Wireshark and the OFP plug-in to decode OFP packets and make sense of them, as well as use CLI commands to view the flow tables at various points of an OVS and look at interface counters.

Thanks for reading and, as always, I am open to feedback.

 

Kind Regards

Kevin

Note: all comments and opinions expressed in this blog are my own and not my employer’s.