
Tuesday, October 21, 2014

Importance of free training appliances/software

I take every opportunity I can to keep my technical skills up to date, from reading articles to getting virtual versions of the software/appliance and playing with it. While major vendors provide technical documentation and various papers for free, there are still some who hide their documentation behind an authentication wall.

But that's not the prime concern, as engineers prefer something tangible, something to play with.
And here comes the idea of virtual labs, which some vendors provide for money, or rental labs from various training and certification companies. But all that still costs money, and it is not very viable for home use to learn a technology or get acquainted with a management interface.

With the recent advancements in virtualized environments came network and security virtual appliances, which can be repurposed for training or testing. Some vendors offer 30-day evaluations, but honestly, in the real world people have to deal with many tasks, and there is rarely a continuous 30-day window in which one can focus on evaluating the product and perform all the tests and analysis. Such evaluations are also not always well planned, with all the test cases identified beforehand, so some cases might be missed and the PoC has to be rebuilt to cover them later.

So when do engineers actually need or want testing/training VMs?

  • when selecting new network/security elements for purchase (e.g. shortlisting for evaluation)
  • when evaluating the compatibility of various elements/vendor systems
  • when preparing for a new job position
  • when testing new management/automation/monitoring software
  • when developing code for the above
  • when preparing for certifications
  • for collection purposes

All of these reasons would benefit a vendor that provides free evaluation VMs in the following ways:

  • the customer's engineers already know the product(s), so there is no barrier to selling different products (or products from a different vendor)
  • compatibility can be evaluated without shipping try-and-buy equipment, and problems would be identified (and resolved) quite early, especially if some bounty program existed
  • with more engineers familiar with the products available on the market, customers might grow/expand faster (=> more equipment can be purchased and deployed)
  • the same applies to NMS/SIEM/automation software configuration
  • developing code in one's free time is also not very easy if one needs a Cisco Nexus 9000 to actually test it
  • although certifications are also a source of income, lowering the cost of lab preparation might motivate more people to pursue them
  • some people collect stamps, while others might collect VMs; but really it's more about being prepared for one of the reasons above

There are more benefits to mention, but this should be enough to make any product manager or marketing director stop for a while and think about it.
Some vendors have already done it, choosing this strategy to win the hearts and minds of engineers who otherwise would not have the opportunity to find out how good these products are.

Conclusion

Dear Vendor,
whether you want to introduce a new product or gain a larger market share, providing free VMs of your products (with limited performance, of course) might bring you more advantages than risks.
Even when you are having problems recruiting good pre-sales or professional services engineers, the availability of free VMs for preparation or training on your products would widen your hiring pool in the long run.
It is also worth mentioning that engineers who know your products (and like them) can indirectly become supporters or enablers of potential sales opportunities wherever they work.
So please consider this when adjusting your product portfolios and creating your marketing strategies.
Respectfully yours,
                         Security Engineer

Saturday, February 15, 2014

Arista EOS VM in libvirt environment

To have another type of device in my OpenFlow lab, I decided to give Arista EOS a try.
First, let's get the software needed to build the VM from www.aristanetworks.com/support/download:
the boot image (aboot-xxx-veos.iso) and the flash drive (eos-xxx-veos.vmdk).
To access the download area, registration is required, but the software can be downloaded without any support contract or license ID (unlike with other vendors).

Libvirt configuration

Creating the VM profile (either via the GUI or a text editor) is quite straightforward if you keep the following constraints in mind:

Network

Forums recommend at least 4 network interfaces, which should use the e1000 driver. The virtio driver doesn't work (despite the fact that drivers exist and the kernel creates the device), and rtl8139 doesn't work either (the module is not compiled in).
The first configured interface becomes the Management1-0 interface, and the other ones become Ethernet1, 2 and 3.
It was a bit confusing at first, as the Ethernet interfaces can be enumerated in a different order (e.g. the 3rd one may be the second defined), so it is better to compare the MAC addresses to be sure the interfaces are configured correctly.
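One way to do that comparison (assuming the libvirt domain is named vEOS1; substitute your own name) is to list the MACs libvirt assigned, in definition order, and match them against what EOS reports:

```shell
# List the domain's NICs in the order they are defined, with their MAC addresses
# (domain name "vEOS1" is an assumption; use your own)
virsh domiflist vEOS1

# On the switch side, compare with the hardware addresses EOS assigned:
#   vEOS1>show interfaces | include Hardware
```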

Disk

The configuration should have only an IDE controller (you have to remove the SATA controller or else it won't boot).
The flash drive image is configured as a disk and the Aboot ISO image as a DVD or CD drive (raw format).
The DVD is booted first and then loads the flash drive disk, so the boot order should have the DVD first.
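These constraints can also be expressed in a single virt-install invocation instead of hand-editing the XML; this is an illustrative sketch (the name, paths and NIC count are assumptions, not values from the Arista documentation):

```shell
# Sketch: IDE bus for the flash drive, Aboot ISO as CD, e1000 NICs, CD boots first
virt-install --name vEOS1 --ram 1024 --vcpus 1 \
  --disk path=/var/lib/libvirt/images/eos-veos.vmdk,bus=ide,format=vmdk \
  --disk path=/var/lib/libvirt/images/aboot-veos.iso,device=cdrom,bus=ide \
  --network network=default,model=e1000 \
  --network network=default,model=e1000 \
  --network network=default,model=e1000 \
  --network network=default,model=e1000 \
  --boot cdrom,hd --noautoconsole
```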

Memory

Arista recommends 1 GB of memory, which works quite well (and, as it seems, nearly all of it is used):

[admin@vEOS1 ~]$ free
             total       used       free     shared    buffers     cached
Mem:        991516     951588      39928          0     115064     480736
-/+ buffers/cache:     355788     635728
Swap:            0          0          0

vEOS openflow configuration

Log in as admin (there is no password by default).
The CLI is very similar to Cisco's, so to perform any configuration you have to enter config mode.

So let's configure the interface that talks to the controller:

vEOS1(config)#interface ethernet 1
vEOS1(config-if-Et1)#no switchport
vEOS1(config-if-Et1)#ip address 10.0.0.100/24

It's important to note that this can't be the management interface, which on hardware is normally not part of the switching plane.
Next, let's bind the switch to the controller and allow it to use the other interfaces:

vEOS1(config)#openflow
vEOS1(config-openflow)#controller tcp:10.0.0.10:6633
vEOS1(config-openflow)#bind ethernet 2
vEOS1(config-openflow)#bind ethernet 3

And now we can admire the result of our "hard" work:

vEOS1>show openflow
OpenFlow configuration: Enabled
DPID: 0x000052540021d910
Description: vEOS1
Controllers:
  configured: tcp:10.0.0.10:6633
  connected: tcp:10.0.0.10:6633
  connection count: 1
  keepalive period: 10 sec
Flow table state: Enabled
Flow table profile: full-match
Bind mode: interface
  interfaces: Ethernet2, Ethernet3
IP routing state: Disabled
Shell command execution: Disabled
Total matched: 42 packets
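As a side note, the DPID above looks like the switch's KVM-assigned MAC address (52:54:00:21:d9:10) zero-padded into a 64-bit value, a common OpenFlow convention. A quick sketch of the conversion (my own illustration, not an Arista tool):

```python
def mac_to_dpid(mac: str) -> str:
    """Zero-pad a 48-bit MAC address into a 64-bit OpenFlow datapath ID string."""
    return "0x%016x" % int(mac.replace(":", ""), 16)

print(mac_to_dpid("52:54:00:21:d9:10"))  # -> 0x000052540021d910, the DPID above
```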


Thursday, February 6, 2014

Deploying Openvswitch in Libvirt

In the age of virtualization, there are many solutions that provide centralized VM management. Some are very expensive when it comes to licences (VMware), others are too big for smaller deployments (OpenStack). Libvirt is a small enough solution that can be managed via CLI, GUI or API.
The configuration of most VM parameters (CPU, memory, storage, ...) is quite straightforward, but the network part can be a bit more difficult.
In order to have something much cooler than just NATed virtual interfaces (the default), I've decided to build an OpenFlow-based network with Open vSwitch (vbridge1 and 2) on each host (see the picture below).


As it is of no importance for the first part which controller to use, and I didn't want to spend time configuring flow forwarding rules manually, I've used the Open vSwitch reference controller (ovs-controller) in hub mode (the -H / --hub command-line option). For basic connectivity it is more than sufficient, and it can be changed later to a full-blown OpenFlow controller like ODL or POX.

Openvswitch configuration

There are loads of articles and guidelines on how to configure Open vSwitch, so I'll just summarize the steps taken. port1 and port2 should be replaced by whichever interface(s) should be used as the interconnect interface(s).

Create vbridge on Host A

ovs-vsctl add-br vbridge1
ovs-vsctl add-port vbridge1 port1
ovs-vsctl add-port vbridge1 port2

ovs-vsctl set-controller vbridge1 tcp:hostB:6633

Create vbridge on Host B

ovs-vsctl add-br vbridge2
ovs-vsctl add-port vbridge2 port1
ovs-vsctl add-port vbridge2 port2

ovs-vsctl set-controller vbridge2 tcp:hostB:6633
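If you re-run these steps (e.g. from a provisioning script), the --may-exist flag makes the bridge and port creation idempotent; a sketch for host A:

```shell
# Re-runnable variant: --may-exist avoids errors if the bridge/port already exists
ovs-vsctl --may-exist add-br vbridge1
ovs-vsctl --may-exist add-port vbridge1 port1
ovs-vsctl --may-exist add-port vbridge1 port2
ovs-vsctl set-controller vbridge1 tcp:hostB:6633
```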

As the VM network interfaces are automatically created and attached to the vbridge by libvirt when the VM starts, we can move on to the main part of the configuration.

Libvirt VM profile configuration

All the profiles libvirt uses are stored in /etc/libvirt/qemu (or elsewhere if you use a different hypervisor).
It is also possible to edit them via the CLI by using virsh edit guest_VM.

A usual template or profile generated by a GUI that defines a network card looks like this:

    <interface type='network'>
      <mac address='00:00:00:00:00:00'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x00' function='0x0'/>
    </interface>

In order to use Open vSwitch instead, the part above has to be modified as follows:

    <interface type='bridge'>
      <source bridge='vbridge'/>
      <virtualport type='openvswitch'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x00' function='0x0'/>
    </interface>

To use it as a VLAN-tagged interface, the following line can be added (although it is unnecessary here, as flow separation is done by the controller):

      <vlan><tag id='1024'/></vlan>

In order to have some kind of system in all the generated interfaces, it is a good idea to give them distinguishable names. The following line creates the interface with the configured name:

      <target dev='gentoo-veth0'/>

There are more details on the available choices on the libvirt website.

Guest VM configuration

As guests need to recognize the devices the hypervisor offers them (in order to use them), it is also necessary to provide the drivers for the virtio network device by compiling in the appropriate kernel features:

Processor type and features  --->
    [*] Linux guest support --->
        [*] Enable Paravirtualization code
        [*] KVM Guest support (including kvmclock)
Device Drivers  --->
    Virtio drivers  --->
        <*>   PCI driver for virtio devices
    [*] Network device support  --->
        <*>   Virtio network driver

After rebooting the VM, the network interfaces will be available for use.
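To double-check inside the guest that the virtio bits actually made it into the running kernel, something like the following works (the config file path varies per distribution, so treat this as a sketch):

```shell
# If the kernel config is installed alongside the kernel image:
grep -E 'CONFIG_VIRTIO_(PCI|NET)' /boot/config-"$(uname -r)"   # expect =y (or =m)

# Or check the PCI bus for the virtio NIC the hypervisor exposes:
lspci | grep -i virtio
```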

Setup validation

First, let's see if the configured bridges contain the configured interfaces:

ovs-ofctl show vbridge1

The output should list the actions the bridge supports and the interfaces that are part of it, with their port numbers. These numbers will be important when configuring flows manually.
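For example, if the hub controller were not running, two static flows would already give basic forwarding between ports 1 and 2 (the port numbers here are illustrative; use the ones reported by ovs-ofctl show):

```shell
# Forward anything arriving on port 1 out of port 2, and vice versa
ovs-ofctl add-flow vbridge1 in_port=1,actions=output:2
ovs-ofctl add-flow vbridge1 in_port=2,actions=output:1
```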

Now let's see if the forwarding works. I've configured some IP addresses on the interfaces of each VM, as well as on the local interface of each vbridge, to have a source/destination to ping.

ovs-ofctl dump-flows vbridge1

After executing the ping, this command should list two entries (one for the sent and one for the received packets):

NXST_FLOW reply (xid=0x4):

 cookie=0x0, duration=3.629s, table=0, n_packets=3, n_bytes=294, idle_timeout=60, idle_age=1, priority=0,icmp,in_port=LOCAL,vlan_tci=0x0000,dl_src=00:10:18:c1:89:94,dl_dst=52:54:00:db:63:44,nw_src=10.0.0.1,nw_dst=10.0.0.11,nw_tos=0,icmp_type=8,icmp_code=0 actions=FLOOD

 cookie=0x0, duration=3.626s, table=0, n_packets=3, n_bytes=294, idle_timeout=60, idle_age=1, priority=0,icmp,in_port=2,vlan_tci=0x0000,dl_src=52:54:00:db:63:44,dl_dst=00:10:18:c1:89:94,nw_src=10.0.0.11,nw_dst=10.0.0.1,nw_tos=0,icmp_type=0,icmp_code=0 actions=FLOOD

As we are using a simple hub controller, the switch's action is to flood the packets to all ports (except the incoming one), but you can see all the match conditions and actions listed in the flow. The in_port numbers are the ones displayed by the first validation command.
The same should be observed on the other host, except that the in_port numbers might differ.

There are other commands which show statistics about the OpenFlow tables or the status of ports:

ovs-ofctl dump-tables vbridge1
ovs-ofctl dump-ports vbridge1

But the most useful command for debugging a switch or a controller is the monitor command:

ovs-ofctl monitor vbridge1 watch:

This command displays events that arrive at the bridge or are being sent to the controller. So whenever the VM decides to talk, or something wants to talk to the VM, this command will show it.
There are many options to filter the list of events displayed in real time, but for a small deployment like this one, the command above is good enough.