The configuration of most VM parameters (CPU, memory, storage, ...) is quite straightforward, but the network part might be a bit more difficult.
In order to have something much cooler than just NATed virtual interfaces (the default), I've decided to build an Openflow-based network with Openvswitch (vBridge1 and 2) on each host (see picture below).
As it is of no importance for the first part which controller to use, and I didn't want to spend time configuring flow forwarding rules manually, I've used the Openvswitch controller (ovs-controller) in hub mode (the -H / --hub command-line option). For basic connectivity it is more than sufficient, and it can be changed later on to a full-blown Openflow controller like ODL or POX.
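Starting the controller itself is a one-liner. Here is a minimal sketch, assuming it runs on host B and listens passively on TCP port 6633, the same host and port used in the set-controller commands below (on newer Openvswitch releases the same tool ships as ovs-testcontroller):
# on host B: act as a plain hub and accept switch connections on TCP 6633
ovs-controller --hub ptcp:6633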
Openvswitch configuration
There are loads of articles and guidelines on how to configure Openvswitch, so I'll just summarize the steps taken. port1 and port2 should be replaced by whichever interface(s) should be used as interconnect interface(s).
Create vbridge on Host A
ovs-vsctl add-br vbridge1
ovs-vsctl add-port vbridge1 port1
ovs-vsctl add-port vbridge1 port2
ovs-vsctl set-controller vbridge1 tcp:hostB:6633
Create vbridge on Host B
ovs-vsctl add-br vbridge2
ovs-vsctl add-port vbridge2 port1
ovs-vsctl add-port vbridge2 port2
ovs-vsctl set-controller vbridge2 tcp:hostB:6633
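At this point a quick sanity check doesn't hurt; on each host, ovs-vsctl show lists the bridges, their ports and the configured controller:
# on each host: list bridges, their ports and the configured controller
ovs-vsctl show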
As the VM network interfaces will be automatically created and attached to the vbridge by libvirt when the VM starts, we can move on to the main part of the configuration.
Libvirt VM profile configuration
All the profiles libvirt uses are stored in /etc/libvirt/qemu (or in another directory if you use a different hypervisor). They can also be edited via the CLI with virsh edit guest_VM.
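For example (the domain name gentoo is only an illustration; use whatever virsh list --all shows for your VM):
virsh list --all     # find the domain name
virsh edit gentoo    # open the domain XML in $EDITOR
virsh dumpxml gentoo # or just inspect the currently defined XML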
A typical template or profile generated by a GUI tool defines a network card like this:
<interface type='network'>
  <mac address='00:00:00:00:00:00'/>
  <source network='default'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x00' function='0x0'/>
</interface>
In order to use Openvswitch instead, the part above has to be modified as follows (with vbridge replaced by the bridge name on that host, e.g. vbridge1):
<interface type='bridge'>
  <source bridge='vbridge'/>
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x00' function='0x0'/>
</interface>
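Note that an edit made with virsh edit only takes effect after the domain is restarted; a sketch, again with the hypothetical domain name gentoo:
virsh shutdown gentoo   # or: virsh destroy gentoo, for a hard stop
virsh start gentoo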
To use it as a VLAN-tagged interface, the following line can be added (although it is unnecessary here, as flow separation would be done by the controller):
<vlan><tag id='1024'/></vlan>
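The same tagging can also be applied on the switch side once the VM is running; a sketch, assuming the VM port ends up with the name configured in the next step (gentoo-veth0):
# mark the VM port as an access port in VLAN 1024
ovs-vsctl set port gentoo-veth0 tag=1024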
In order to keep some system in all the generated interfaces, it is a good idea to give them distinguishable names. The following line creates the interface with the configured name:
<target dev='gentoo-veth0'/>
More details on the available options can be found on the libvirt website.
Guest VM configuration
As guests need to recognize the devices that the hypervisor offers them (in order to use them), it is also necessary to provide the drivers for the virtio network device by compiling in the appropriate kernel features:
Processor type and features --->
    [*] Linux guest support --->
        [*] Enable Paravirtualization code
        [*] KVM Guest support (including kvmclock)
Device Drivers --->
    Virtio drivers --->
        <*> PCI driver for virtio devices
    [*] Network device support --->
        <*> Virtio network driver
After rebooting the VM, the network interfaces will be available for use.
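A quick way to confirm that the guest actually sees the virtio NIC (interface names vary between distributions):
lspci | grep -i virtio  # the virtio network device should appear on the guest's PCI bus
ip link                 # and a matching network interface should be listed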
Setup validation
First, let's see if the configured bridges contain the interfaces we configured:
ovs-ofctl show vbridge1
The output should list possible actions that the bridge supports and interfaces that are part of it. This would be important when configuring the flows manually.
Now let's see if the forwarding works. I've configured some IP addresses on the interfaces of each VM as well as on the local interfaces of each vbridge to have some source/destination to ping.
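Roughly like this; a sketch, assuming a 10.0.0.0/24 range (the addresses match the flow dump below, the /24 prefix and the guest interface name eth0 are assumptions):
# on host A: bring up the bridge's local port and give it an address
ip link set vbridge1 up
ip addr add 10.0.0.1/24 dev vbridge1
# inside one of the VMs
ip addr add 10.0.0.11/24 dev eth0
ip link set eth0 up
# back on host A: generate some traffic
ping 10.0.0.11
With the ping running, the flows pushed by the controller can be dumped: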
ovs-ofctl dump-flows vbridge1
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=3.629s, table=0, n_packets=3, n_bytes=294, idle_timeout=60, idle_age=1, priority=0,icmp,in_port=LOCAL,vlan_tci=0x0000,dl_src=00:10:18:c1:89:94,dl_dst=52:54:00:db:63:44,nw_src=10.0.0.1,nw_dst=10.0.0.11,nw_tos=0,icmp_type=8,icmp_code=0 actions=FLOOD
 cookie=0x0, duration=3.626s, table=0, n_packets=3, n_bytes=294, idle_timeout=60, idle_age=1, priority=0,icmp,in_port=2,vlan_tci=0x0000,dl_src=52:54:00:db:63:44,dl_dst=00:10:18:c1:89:94,nw_src=10.0.0.11,nw_dst=10.0.0.1,nw_tos=0,icmp_type=0,icmp_code=0 actions=FLOOD
As we are using a simple hub controller, the action for the switch is to flood the packets to all ports (except the incoming one), but you can see all the match conditions and actions listed in each flow. The in_port numbers are the ones displayed by the first validation command.
The same should be observed on the other host, except that the in_port numbers might be different.
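That is, on host B the equivalent check would be (using the bridge name from the setup above):
ovs-ofctl dump-flows vbridge2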
There are other commands that can show statistics about the Openflow tables or the status of ports:
ovs-ofctl dump-tables vbridge1
ovs-ofctl dump-ports vbridge1
But the most useful command for debugging a switch or a controller is the monitor command:
ovs-ofctl monitor vbridge1 watch:
This command displays events that are coming to the bridge or are being sent to the controller. So whenever the VM decides to talk, or something wants to talk to the VM, this command would show it.
There are many options to filter the list of events displayed in real time, but for a small deployment like this one, the command above is good enough.
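If you also want to watch the raw Openflow conversation between the bridge and its controller (not strictly needed for this setup), ovs-ofctl has a snoop subcommand for that:
# print the OpenFlow messages exchanged between vbridge1 and the controller
ovs-ofctl snoop vbridge1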