
This is part two of my blog post on my OpenFlow and Openvswitch lab.
This will be my last blog post for some time until I settle into my new role.

 

In my previous blog post http://communities.vmware.com/blogs/kevinbarrass/2013/03/13/mixed-hypervisor-with-openvswitch-and-openflow-network-virtualisation I showed how you can build a lab using several mixed hypervisors (KVM/XEN/XenServer), all using Openvswitch, and build a virtual network across all hosts using Openflow and GRE tunnels.

 

In this second part of the blog post I will show how I used Wireshark and the OpenFlow Protocol "OFP" dissector to decode the OFP packets and get an idea of what is happening, as well as view the flow tables on each Openvswitch "OVS".
You can find details of the OFP dissector on the website: http://www.noxrepo.org/2012/03/openflow-wireshark-dissector-on-windows/

To reduce the number of packets captured, I have cut the number of hosts from 4 to 2, with a single GRE tunnel, as shown in the lab diagram below:

OVS blog 2 hosts.gif

We will need to know the OVS port name to port number mapping on each host to be able to interpret the OFP messages; you can get this by typing the command "sudo ovs-dpctl show br0" on each host. In the case of my lab it gives the below port name to port number mappings.

 

Host1 OVS port name to port number mappings.

host1 show port name to port number.png

Host2 OVS port name to port number mappings.

host2 show port name to port number.png
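As an aside, if you prefer to see the port numbers as OpenFlow itself reports them, ovs-ofctl can query the bridge directly. In my simple lab the numbers lined up with the ovs-dpctl output, but it is worth checking on your own build:

"sudo ovs-ofctl show br0"

This lists each port with its OpenFlow port number, name and MAC address, along with the bridge capabilities.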


As the above lab is running on VMware Workstation using VMnet8 (NAT) it is easy to use Wireshark on the same computer running Workstation to capture all traffic on that network as it is flooded over VMnet8.
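If you would rather limit the capture to just the OpenFlow traffic, a capture filter of "tcp port 6633" in Wireshark works well, assuming the controller is listening on the default OpenFlow port 6633. On a Linux Workstation host you could achieve the same with tcpdump; the interface name and output file below are just examples:

"sudo tcpdump -i vmnet8 -w ofp-lab.pcap 'tcp port 6633'"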

What I will do during this lab test is send a ping from VM1 to VM2 and capture the OFP packets. I will then decode some of the OFP packets and try to explain what each one is doing. There is some duplication of OFP packets, i.e. one exchange for the ICMP echo-request and another for the ICMP echo-reply, so I will only decode the first packet of each flow.
Please bear in mind I'm very new to OVS and Openflow, so I may well have made mistakes in my interpretation of how this lab works and of the Openflow protocol/OVS. I would recommend building a similar lab, reading the Openflow Switch specification and having a play.

So in the minimal lab I have started up the POX Openflow controller as before, this time with just two OVSs connected. I have a Windows VM on each host/OVS, one with the IP address 172.16.0.1 and the second with the IP address 172.16.0.2. I then run a ping with a single ICMP echo-request from 172.16.0.1 to 172.16.0.2. Below is the Wireshark capture of the OFP packets related to both the ARP requests and the ICMP echo request/reply.
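For reference, the single echo-request was generated with an ordinary ping from the Windows VM, something like the below; if you want to see the ARP exchange again on a later run you may need to clear the VM's ARP cache first (the exact commands will depend on your guest OS and may need an elevated prompt):

"ping -n 1 172.16.0.2"

"arp -d 172.16.0.2"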

 

all OFP packets.png

 

From the above OFP packet capture:
A. OFP packets 197 and 224 in block A are related to VM01 on host1/OVS sending an ARP request for VM02's MAC address.
B. OFP packets 234 and 235 in block B are the OFP packets related to the ARP response from VM02 on host2/OVS.
C. OFP packets 239 and 240 in block C relate to the ICMP echo-request from VM01 on host1/OVS to VM02.

 

ARP request OFP Packet IN Host1.png

Now that we have started the ping from VM01 to VM02, VM01 first generates an ARP request for VM02's MAC address. This is received by host1's OVS, which has no matching flow for this packet, so it sends an OFP packet-in to the POX Openflow controller.

In the above decoded packet capture of the OFP packet-in for the ARP request:
A: This OFP packet is of type "packet-in" with a version of 0x01.
B: The buffer ID of the buffered packet that caused this OFP packet.
C: The OVS port the packet was received on, in this case port 2.
D: The reason the OFP packet was generated; in this case it was because no local flow entry matched this ARP request packet.
E: Frame data containing details of the received packet that can be used to construct a flow entry.
F: A summary of the OFP packet.

 

ARP request OFP Packet OUT Host1.png

The POX Openflow controller now receives the previous OFP packet and, using the forwarding.l2_learning module, makes a policy decision. In this case, as the ARP request is a broadcast, the controller instructs the OVS using an OFP packet-out to flood the packet out of all ports except those blocked by spanning-tree STP (not used in this lab) and the source OVS port.

In the above decoded packet capture of the OFP packet-out for the ARP request:
A: This OFP packet is of type "packet-out" with a version of 0x01.
B: The buffer ID of the buffered packet on the OVS that this packet-out relates to.
C: The OVS port the buffered packet was originally received on, in this case port 2.
D: The action type, in this case to output to a switch port.
E: The action to take, in this case to flood the packet in buffer ID 288 out of all ports except the input port and ports disabled by STP.
F: A summary of the OFP packet.

At this point the ARP request from VM01 is flooded out of host1/OVS and received by host2/OVS. Host2/OVS then goes through the above process with this ARP request, but I will not decode these packets as we have already examined a similar OFP exchange for the ARP request above.
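If you want to watch this packet-in/packet-out exchange live from the switch side rather than in Wireshark, ovs-ofctl can snoop the OpenFlow connection of a bridge and print the messages as they pass; run it on either host while repeating the ping:

"sudo ovs-ofctl snoop br0"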

 

ARP reply OFP Packet IN Host2.png

At this point VM02 has received the ARP request, and VM02 will send an ARP reply back directly to the MAC address of VM01. This is received by host2's OVS, which has no matching flow for this packet, so it sends an OFP packet-in to the POX Openflow controller.

In the above decoded packet capture of the OFP packet-in for the ARP reply:
A: This OFP packet is of type "packet-in" with a version of 0x01.
B: The buffer ID of the buffered packet that caused this OFP packet.
C: The OVS port the packet was received on, in this case port 3.
D: The reason the OFP packet was generated; in this case it was because no local flow entry matched this ARP reply packet.
E: Frame data containing details of the received packet that can be used to construct a flow entry.
F: A summary of the OFP packet.

ARP reply OFP Flow Mod host2.png

 

The POX Openflow controller receives the previous OFP packet from host2/OVS and, using the forwarding.l2_learning module, makes a policy decision. In this case, as the ARP reply is not a broadcast packet, instead of sending a packet-out to flood the packet the controller creates a specific flow entry and sends an OFP flow-mod to the OVS. The OVS installs this flow and then sends the buffered packet out according to it.

In the above decoded packet capture of the OFP flow-mod for the ARP reply:
A: This OFP packet is of type "flow-mod" with a version of 0x01.
B: The specific match details used to create the flow.
C: The idle timeout after which this flow is discarded when inactive, and the hard (max) timeout after which the flow is removed and the next packet of this flow is punted back to the POX Openflow controller, which can then decide whether or not to reinstall a matching flow on the OVS (see the ovs-ofctl example after this list).
D: The buffer ID of the buffered packet that caused this OFP packet.
E: The action type, in this case to output to a switch port.
F: The action to take, in this case to send all packets matching this flow, including the one in buffer ID 289, out of OVS port 1.
G: A summary of the OFP packet.
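To get a feel for the idle and hard timeouts in field C, you can install a flow by hand with ovs-ofctl and watch it age out of "sudo ovs-ofctl dump-flows br0". The below is purely illustrative; the match, port and timeout values are made up and are not what POX installs:

"sudo ovs-ofctl add-flow br0 idle_timeout=10,hard_timeout=30,in_port=3,actions=output:1"

With these values, idle_timeout would remove the flow after 10 seconds without a matching packet, while hard_timeout would remove it after 30 seconds regardless; once the flow is gone, the next matching packet is punted back to the controller again.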

As this ARP reply passes to host1/OVS, a similar flow will be installed by the POX Openflow controller onto host1/OVS, with the ARP reply eventually reaching VM01.

 

ICMP echo req OFP packet in host1.png

 

VM01 will then send the ICMP echo-request. As before, this echo-request reaches host1/OVS, and as there is no matching flow on host1/OVS, the OVS sends an OFP packet of type packet-in to the POX Openflow controller; the controller's flow-mod response is covered further below.

In the above decoded packet capture of the OFP packet-in for the ICMP echo-request:
A: This OFP packet is of type "packet-in" with a version of 0x01.
B: The buffer ID of the buffered packet that caused this OFP packet.
C: The OVS port the packet was received on, in this case port 2.
D: The reason the OFP packet was generated; in this case it was because no local flow entry matched this ICMP echo-request packet.
E: Frame data containing details of the received packet that can be used to construct a flow entry.
F: A summary of the OFP packet.

 

ICMP echo req OFP Flow Mod host1.png

 

The POX Openflow controller receives the previous OFP packet from host1/OVS and, using the forwarding.l2_learning module, makes a policy decision. The controller then creates a specific flow entry and sends an OFP flow-mod to the OVS. The OVS installs this flow and then sends the buffered packet out according to it.

In the above decoded packet capture of the OFP flow-mod for the ICMP echo-request:
A: This OFP packet is of type "flow-mod" with a version of 0x01.
B: The specific match details used to create the flow.
C: The idle timeout after which this flow is discarded when inactive, and the hard (max) timeout after which the flow is removed and the next packet of this flow is punted back to the POX Openflow controller, which can then decide whether or not to reinstall a matching flow on the OVS.
D: The buffer ID of the buffered packet that caused this OFP packet-in.
E: The action type, in this case to output to a switch port.
F: The action to take, in this case to send all packets matching this flow, including the one in buffer ID 290, out of OVS port 1.
G: A summary of the OFP packet.

The ICMP echo-request will then be tunnelled over to host2/OVS using GRE, and host2/OVS will go through the same process and have a similar flow installed by the POX Openflow controller. A similar process then happens in reverse for the ICMP echo-reply.

All the flows being installed by POX here are reactive flows, i.e. POX did not determine the full network topology and install proactive flows in advance; it reacts to each packet-in and then installs flows into the OVS bridge br0 that originated the OFP packet-in.

To view the flows installed by the Openflow controller into the OVS userspace, run the command "sudo ovs-ofctl dump-flows br0". To view the flows that have been installed into the OVS datapath for bridge br0, as a result of traffic matching a userspace flow, run the command "sudo ovs-dpctl dump-flows br0", which dumps the installed flows as shown in the screenshot below:

 

ovs-dpctl dump flows.png

 

You can also run the command "sudo ovs-dpctl show br0 -s" to get port statistics such as received/transmitted packets, as shown in the screenshot below:
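ovs-ofctl also has an OpenFlow-level equivalent that reports received/transmitted packet and byte counts per OpenFlow port, which is another quick way to confirm traffic is flowing where you expect:

"sudo ovs-ofctl dump-ports br0"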

get port counters.png


That is the end of my blog post on Mixed Hypervisor with Openvswitch and Openflow Network Virtualisation. I hope it was of some use, and as before I'm open to feedback and any corrections on anything I may have misinterpreted.

Thanks for reading.

Kind Regards
Kevin Barrass

These next two blogs are going to be a change from my usual vCNS-based blogs and will be my last blogs for a while as I’m taking on an exciting new career in network virtualisation, but I will hopefully post some more blogs in the future.

 

This blog is going to show the lab testing I’m doing on Openvswitch “OVS” and Openflow using the Hypervisors KVM, XEN and Citrix XenServer. All labs will be running on VMware Workstation using Ubuntu 12.04 Linux where I can.

Below is a diagram of the lab I have built. There are 4 hypervisors: 2 KVM, 1 XEN and 1 XenServer. Each hypervisor is running Openvswitch. The KVM hosts and the XEN host are running Libvirt for VM management and for attaching VMs to an Openvswitch bridge. I’m running a newer version of Libvirt, “version 1.0.0”, on Host1 as I was trying out native Openvswitch support in Libvirt for VLANs, whereas the other hosts use Linux bridge compatibility with Libvirt version 0.9.8. The XenServer host is managed using Citrix XenCenter.

I then have the POX OpenFlow Controller with POXDesk installed on an Ubuntu 12.04 VM.

Even though in this lab each host is connected to VMnet8 for simplicity, each host could be in a different layer 3 network, as GRE can be used across layer 3 boundaries; to keep this lab simple all hosts are on the same subnet.

Lab Diagram:

 

OVS blog lab.gif


What I wanted to achieve from this lab was to have different Hypervisors using a common Virtual Switch using an Openflow controller and a tunnelling method to virtualise the network. I could then use this lab to do some tests on Openflow outside of this blog.

During this blog I will take you through the below steps I took to build the lab. These assume you have already installed the hosts, the Openflow controller OS, the hypervisors and VM management tools, as well as Openvswitch on each host, as that is out of the scope of this blog; I would highly recommend Scott Lowe’s blog http://blog.scottlowe.org/2012/08/17/installing-kvm-and-open-vswitch-on-ubuntu/ for this.

Steps:

1. Build the OpenFlow controller and start it with required modules

2. Add a Bridge on each OVS called br0

3. Configure the bridge br0 fail_mode to “Secure”

4. Connect each OVS “br0” to the Openflow controller

5. Add a GRE tunnel between each OVS “br0” as described in the lab diagram.

Once the above steps are done we can verify each OVS is connected to the Openflow controller, then boot our VMs on each hypervisor and test connectivity using ping. This assumes you have used tools such as virt-install or Citrix XenCenter to create the VMs and attach them to each OVS bridge “br0”. In XenCenter you will need to add a “Single-Server Private Network” and then find the bridge name on the XenServer host’s OVS using the command “ovs-vsctl show”; in my case the bridge was called xapi0. This bridge will only be present when a VM is attached to it and powered on.

1. Build the OpenFlow controller and start it with required modules.

On the VM you have created to act as your Openflow controller, install git; this will be used to download the POX repository.

“sudo apt-get install git”

Now clone the POX repository into your home folder and check out the current branch.

“git clone http://github.com/noxrepo/pox”

“cd pox; git checkout betta”

Now, to install the POXDesk GUI, which is used to view the topology and Openflow tables etc. from a web browser, run the below from within the pox folder.

“git clone http://github.com/MurphyMc/poxdesk”

The POX openflow controller is now built; there is a README file in the pox folder that shows how to start the POX controller and how to load modules.

We now want to start the POX Openflow controller with required modules using the below command.

“./pox.py --verbose forwarding.l2_learning samples.pretty_log web messenger messenger.log_service messenger.ajax_transport openflow.of_service poxdesk openflow.topology openflow.discovery poxdesk.tinytopo”

 

The “forwarding.l2_learning” module will be used to make the Openflow controller act as a learning bridge/switch by installing reactive flows into each OVS bridge that sends an Openflow Protocol “OFP” packet up to the controller after receiving a packet that does not match any local flows. The other modules are used to discover the topology, make the logs look pretty and provide the POXDesk web user interface etc.

Please note we are not using Openflow or spanning tree to prevent forwarding loops, as this is not something I have yet covered in OVS; the GRE tunnels are intentionally set up in a way that prevents any forwarding loops.
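If you just want the learning-switch behaviour and are not bothered about the topology viewer or POXDesk, a minimal start line such as the below should be enough; the full command above is what I used for this lab:

“./pox.py --verbose forwarding.l2_learning samples.pretty_log”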

 

Pox Starting:

starting POX.png

2. Add a Bridge on each OVS called br0

Now that we have our POX Openflow controller running, we will add a bridge called br0 on each KVM host and the XEN host using the below command.

“sudo ovs-vsctl add-br br0”
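To confirm the bridge was created on each host you can list the bridges known to that OVS; on the KVM/XEN hosts this should just return br0:

“sudo ovs-vsctl list-br”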

 

3. Configure the bridge br0 fail_mode to “Secure”

So that we can prove the OVS bridge br0 is using the POX Openflow controller and not performing local learning, we will set each OVS bridge to a fail mode of “secure”, which means the OVS will not fall back to local learning and will rely on the POX Openflow controller to install flows in br0. Use the below commands to set the fail mode to secure on each OVS bridge.

“sudo ovs-vsctl set-fail-mode br0 secure” for KVM/XEN hosts

“sudo ovs-vsctl set-fail-mode xapi0 secure” for XenServer host
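You can read the setting back to check it took effect; ovs-vsctl has a matching get command which should print “secure”:

“sudo ovs-vsctl get-fail-mode br0”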

 

4. Connect each OVS “br0” to the Openflow controller

We now want to connect each of our newly created OVS bridges “br0” to our POX Openflow controller. We are not using TLS in this lab, simply TCP.

“sudo ovs-vsctl set-controller br0 tcp:192.168.118.128:6633”

If we now run the command “sudo ovs-vsctl show” on each OVS we will see a single bridge, “br0” on KVM/XEN or xapi0 on XenServer, with a controller connected at IP 192.168.118.129 on TCP port 6633 and the connection status showing true. We can also see that the fail mode is “secure”.
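The controller target can also be read back on its own, which is handy if you want to script a quick check across all of the hosts:

“sudo ovs-vsctl get-controller br0”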

 

Image showing OVS connected to POX:

ovs-vsctl showing ovs connected to POX.png

 

Also, if we go back to the terminal running our POX Openflow controller, we can see each OVS bridge connecting to it.

 

Image of OVS connecting to POX:

OVS connecting to POX.png

 

As we have not connected any physical interfaces such as “eth0” to the OVS bridges br0, any VMs attached to each bridge br0 will only be able to communicate with VMs on the same OVS bridge/host. To enable layer 2 connectivity between each host’s OVS bridge br0 we could simply add the interface eth0 as an OVS port on each bridge br0, but that would not work if each host was in a different layer 2/3 domain. So we will configure GRE tunnels between the OVS bridges “br0”, making sure we do not create any forwarding loops, i.e. we create a simple daisy chain of bridges, not a ring.

 

5. Add a GRE tunnel between each OVS “br0” as described in the lab diagram.

 

For host1:

“sudo ovs-vsctl add-port br0 gre0 -- set interface gre0 options:remote_ip=192.168.118.146”

 

For host2:

“sudo ovs-vsctl add-port br0 gre0 -- set interface gre0 options:remote_ip=192.168.118.145”

“sudo ovs-vsctl add-port br0 gre1 -- set interface gre1 options:remote_ip=192.168.118.130”

 

For host3:

“sudo ovs-vsctl add-port br0 gre1 -- set interface gre1 options:remote_ip=192.168.118.146”

“sudo ovs-vsctl add-port br0 gre2 -- set interface gre2 options:remote_ip=192.168.118.133”

 

For host4:

“sudo ovs-vsctl add-port xapi0 gre2 -- set interface gre2 options:remote_ip=192.168.118.130”

 

If we now issue the command “sudo ovs-vsctl show” on each host, we will see the GRE tunnel configured; in this case a new port named “gre0” with an interface “gre0” attached, of type gre, with a remote endpoint IP address.
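You can also inspect a single tunnel interface directly rather than reading the whole “ovs-vsctl show” output; the options column includes the remote_ip we configured. For example, on host1:

“sudo ovs-vsctl list interface gre0”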

 

Image showing OVS GRE tunnel:

ovs-vsctl showing tunnel.png

We now have 4 OVS bridges connected to each other using GRE tunnels, all of which are also connected to the same POX Openflow controller.

We can now start up a VM on each host and test connectivity between VMs over the virtual network we have just created. The result should be that all VMs have connectivity to each other as though they were all on a single virtual switch, as below.
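As a concrete example, using the VM addressing from part two of this blog (172.16.0.1 and 172.16.0.2), a simple ping from one VM to another across hosts is enough to exercise the GRE tunnels and the controller; substitute whatever addresses your VMs are using:

“ping 172.16.0.2”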

 

Image showing single virtual switch:

OVS blog lab 02.gif

If you take a look at the terminal running your POX Openflow controller you will see log events, as below, for the flows being installed during the connectivity test we performed earlier.

 

Image showing Flows being Installed:

POX flows being install on terminal.png

We can now use the POXDesk module to view the topology. We will open up Firefox and load the URL

 

http://192.168.118.129:8000

 

At the bottom of the page that loads click on the link “/poxdesk”

When the POXDesk GUI loads you can then click on the bottom-left “POX” buttons TopologyViewer, Tableviewer and L2LearningSwitch to view a graphical topology and the flows on each OVS bridge, as shown below:

poxdesk.png

 

That is the end of this part of the blog. At this stage we have a working Openflow/Openvswitch lab running on different hypervisors, with connectivity between all VMs utilising GRE tunnels. In the second part of this blog I will show how you can use Wireshark and the OFP plug-in to decode the OFP packets and make sense of them, as well as use CLI commands to view the flow tables at various points in an OVS and look at interface counters etc.

Thanks for reading and as always open to feedback.

 

Kind Regards

Kevin

Note: all comments and opinions expressed in this blog are my own and not my employer’s.