I am a poor network engineer, used to fighting armies of cables and evil devices, now trying to make his way through a new world made of what we, who come from the underworld and the deepest caves of the OSI model, used to be allergic to: software programming, or anything above layer 4. The first step into the SDN world, and towards accepting the paradigm shift of replacing wires and hardware with software networking devices, is to understand some of the popular available technologies.
Today I will cover the basics of Open vSwitch, a full multilayer, open-source virtual switch (and perhaps the most popular free tool) used to bring more flexibility into the software-defined datacenter fabric. Open vSwitch extends the rudimentary (even primitive) traditional networking tools found in Linux, which have been used for ages in virtualized environments, and offers a much easier and more centralized model for deploying VLANs, GRE tunnels, VXLANs, basic QoS traffic shaping, IPsec, LACP and many other features (the complete list is here: http://openvswitch.org/features/). But before your heart starts pumping faster, be aware of the two major drawbacks of OVS I have found so far: the lack of good documentation and the Linux kernel.
Sadly, Open vSwitch is poorly documented, and it can be a real struggle to deploy it in complex scenarios. Well, that's the cost of open source (you see? nothing is free). The real downside of OVS lies in its own birthplace: the fat, slow and bloated Linux kernel (and here I am quoting Linus Torvalds himself). True, Linux offers undeniable advantages over other systems, but a virtual switch will never outperform a dedicated, purpose-built piece of networking hardware, except perhaps when connecting VMs running in the memory of the same hypervisor. This is something we must keep in mind in order to build truly resilient, redundant, well-designed virtual networks. Why Open vSwitch then? Because it simplifies the systems/network administrator's life, and in the end you will love it, with all its defects.
To test this environment I have used VirtualBox to create 3 Linux virtual machines inside an Ubuntu physical host.
The figure above shows our starting environment. Nothing new here: just a couple of VMs attached to the physical host's eth0 interface, receiving their IP configuration via DHCP from the Internet.
First off, we will install Open vSwitch as a package. OVS has many options, but we will install only the basic tools:
# apt-get install openvswitch-switch
Once installed, we will create our first bridge. Please note the terminology, as it can be misleading when working with OVS, especially for those of us who come from the Cisco world. Open vSwitch creates "bridges" instead of switches. We can have multiple bridges inside a physical host and even bind them together.
Let's create a bridge called bridge0 and bring it up (bridge0 will be shown as a standard Linux interface):
# ovs-vsctl add-br bridge0
# ifconfig bridge0 up
Now we will attach the eth0 interface to bridge0. Please be aware that you will lose Internet connectivity immediately after this step, so make sure you have an alternative network path to the physical box or direct access to the system's console. The reason for this is very simple: under normal circumstances the IP stack of the physical host is configured to reach the Internet through eth0 (by running route -n you can see the output interface of the default route). After connecting bridge0 to eth0, this interface will no longer be directly available, and we will need to update our IP configuration to use bridge0 as the Internet-facing interface.
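A quick way to confirm which interface carries the default route, before and after the change, is the iproute2 equivalent of route -n (the 192.168.1.1 gateway in the comment is just an example address, not from our setup):

```shell
# Show the default route and its output interface
ip route show default
# Before the change you would see something like: "default via 192.168.1.1 dev eth0"
# Once the IP has been migrated to the bridge it should say "... dev bridge0"
```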
# ovs-vsctl add-port bridge0 eth0
To recover Internet connectivity, delete the IP address from eth0:
# ifconfig eth0 0
and run a DHCP discover from bridge0:
# dhclient bridge0
If everything went OK, you should see a new IP address on bridge0.
Now our network diagram looks like this:
Note that both VMs still keep their IP addresses, as their DHCP leases have not yet expired. However, they won't be able to reach each other until we attach them to the new bridge. To do this, we will add 2 new TAP interfaces to our physical server, one for each VM (if you have 50 VMs, you will require 50 TAP interfaces).
To create the new TAP devices, just summon the good old ip command:
# ip tuntap add mode tap vmport1
# ip tuntap add mode tap vmport2
# ifconfig vmport1 up
# ifconfig vmport2 up
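If you really do have 50 VMs, typing these commands by hand gets old fast. Here is a small sketch (the make_vmports helper is my own, not part of any OVS tooling) that prints the same commands for N ports; pipe its output to sudo sh to actually run them:

```shell
# Hypothetical helper: print the commands needed to create and bring up
# N tap interfaces named vmport1..vmportN.
make_vmports() {
  n=$1
  i=1
  while [ "$i" -le "$n" ]; do
    echo "ip tuntap add mode tap vmport$i"
    echo "ifconfig vmport$i up"
    i=$((i + 1))
  done
}

# Preview the commands for 2 ports; run them with: make_vmports 2 | sudo sh
make_vmports 2
```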
Now, when running ifconfig, you should see both TAP devices up and running.
Note: you may also see other interfaces, but they're not related to our configuration.
Great. Now we have our bridge0 and two TAP interfaces called vmport1 and vmport2 running on our Linux box. What's next? Since our networking abilities haven't been lost since the old days when we used to spend hours cabling racks, you may correctly guess that we now need to attach both vmport1 and vmport2 to bridge0. This can easily be done with:
# ovs-vsctl add-port bridge0 vmport1
# ovs-vsctl add-port bridge0 vmport2
Let's take a look at our network diagram and at what we have done so far:
Both interfaces are now attached to bridge0. This can be checked by running ovs-vsctl show.
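In case the screenshot is hard to read, the output of ovs-vsctl show should look roughly like this (the UUID and ovs_version values below are placeholders and will differ on your system):

```
a2c1f3b0-0000-0000-0000-000000000000
    Bridge "bridge0"
        Port "bridge0"
            Interface "bridge0"
                type: internal
        Port "eth0"
            Interface "eth0"
        Port "vmport1"
            Interface "vmport1"
        Port "vmport2"
            Interface "vmport2"
    ovs_version: "2.0.2"
```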
We can think of ovs-vsctl show as an alternative to show interfaces status on Cisco switches. Let's analyse this output before going further with our configuration. The first line is the UUID (Universally Unique Identifier) of our environment, as defined in RFC 4122. Then we have one bridge called "bridge0" which has 4 ports: eth0, bridge0, vmport1 and vmport2. If more bridges are added, they will be shown here. Each of these ports also has an interface with the same name. What's the difference between ports and interfaces? If you have ever worked with EtherChannels (also called NIC bonding) you will clearly understand the difference. A port is an entity which may contain multiple physical interfaces (I am calling them physical instead of virtual just to mark the difference, even though both vmport1 and vmport2 are, indeed, virtual). A clear example would be an EtherChannel called Port1 which contains the interfaces FastEthernet0/1, FastEthernet0/2 and FastEthernet0/3. The same concept applies to Open vSwitch with LACP. Don't worry, we won't go that deep today 😉
Another thing which can be somewhat confusing is the port called bridge0. When we created our bridge and named it bridge0, a port with the same name was automatically created, containing an interface also called bridge0 (this is what you see when running ifconfig) of type internal. The idea of this internal interface is that if you ever want to configure an IP address on the bridge (the same way as an SVI on a Cisco switch), you can do it here. The "internal" type exists for legacy compatibility with the Linux kernel bridge utilities.
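For instance, a minimal SVI-style sketch of putting a static address on the bridge's internal interface (the 192.168.1.50/24 address is just an assumption for illustration; in our setup dhclient already configured bridge0 for us):

```shell
# Assign a static IP to the bridge's internal interface (like an SVI)
ifconfig bridge0 192.168.1.50 netmask 255.255.255.0 up
# Or, with iproute2:
# ip addr add 192.168.1.50/24 dev bridge0
```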
Let's go back to our topology. We have a bridge connected to the Internet and two TAPs. Now we will configure VM1 to use vmport1 as its bridged adapter and VM2 to use vmport2:
On VM1 (Debian1):
On VM2 (Debian2):
Now our network diagram looks like this:
Both VM1 and VM2 are now able to surf the Internet and ping each other:
Now we can start playing in our virtual environment adding VLANs, QoS and much more. We will review these concepts in a later post.
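As a small teaser, putting a port into a VLAN in OVS is a one-liner; the tag value 10 below is just an example (this uses the standard ovs-vsctl tag column for access ports):

```shell
# Make vmport1 an access port in VLAN 10
ovs-vsctl set port vmport1 tag=10
# Remove the tag again
ovs-vsctl remove port vmport1 tag 10
```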
Note: these changes are not persistent across reboots. Here's a small script to reload the whole configuration after a reboot:
#!/bin/sh
#### OVS boot parameters ####
echo "Loading vPorts"
# Load as many virtual interfaces as you have
ip tuntap add mode tap vmport1
ip tuntap add mode tap vmport2
ifconfig vmport1 up
ifconfig vmport2 up
echo "Deleting IP address from eth0"
ifconfig eth0 0
echo "Starting bridge0"
ifconfig bridge0 up
echo "Restarting Open vSwitch..."
# It is important to restart the service to load the interfaces into the OVS configuration.
service openvswitch-switch restart
echo "Restarting Open vSwitch [OK]"
# Reacquire an IP address on bridge0, as we did manually earlier
dhclient bridge0
- Save this file as /etc/ovs-persistent.sh
- Make it executable: chmod +x /etc/ovs-persistent.sh
- Edit /etc/rc.local and add the line sh /etc/ovs-persistent.sh (just before "exit 0")
For debugging, check the logs under /var/log/openvswitch/.
More information: http://openvswitch.org/