LGI Setup
Objective:
The test bed diagram demonstrates the PoC for developing the test environment and for simulating the customer infrastructure within which the test cases for the SDN controllers will be executed.
Accomplishment:
Both the development of the test environment and the simulation of the customer infrastructure such as CSR – in particular the DHCP servers – are realized using a well-known virtualization concept of the Linux kernel, Linux namespaces, with which Linux containers are created.
Linux Containers:
A Linux container is created by isolating the required Linux namespace(s), thereby allowing applications to execute inside these isolated namespaces.
In our context, most of the containers are created by isolating the Linux network namespace, i.e. giving the container a separate network stack. As a consequence, all applications executing in this namespace use that separate network stack – in particular a TCP/IP stack distinct from the host's.
This network stack has its own virtual network interface with its own unique IP address, separate from the host's network interface and IP address.
To a layperson this gives the impression of a separate host, because a host in a networked environment is uniquely identified by its IP address.
A network application such as sshd running in the newly created container, with its separate network stack, can be used to log in from a remote host.
A user working remotely can then reach this machine via two IP addresses: the host's IP address and the container's IP address.
Stats: We can create hundreds of Linux containers with network namespace isolation, each with its own IP address, and run an sshd server in every one of them. This means we can simulate hundreds of hosts on a single machine with a modest hardware configuration (a dual-core processor and 4 GB of RAM).
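The isolation described above can be seen in a one-line sketch. This assumes util-linux's unshare with unprivileged user-namespace support; on the real test bed the namespaces are created as root with the ip utility instead.

```shell
# Enter a fresh network namespace and list its interfaces: only the loopback
# device appears, i.e. a network stack fully separate from the host's.
unshare --user --map-root-user --net ip link show
```

The host's interfaces (enp1s0 and so on) are invisible from inside; any interface and IP address the namespace needs must be created and assigned explicitly.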
This is the profundity of Linux namespaces.
Anything in this world that is simple in its design and usage, yet profound in its application, becomes popular.
Such is the idea of Linux namespaces and their profound application – Linux containers – which we are using in our test setup.
Abstract:
With this we are convinced that Linux containers with an isolated network namespace can be treated as separate hosts.
We will create 2 sets of 4 + 1 containers, each set configured to be in a separate broadcast domain, i.e. a VLAN, say vlan 10 and vlan 20. One container in each VLAN will run a DHCP server to dynamically provide IP addresses to all the other containers within that VLAN.
Test engineers will then log in to the containers to execute test cases.
Detailed Description:
This test bed consists of both physical and virtual components; to be more specific, there are few physical components and many virtual ones.
To begin with, let me introduce to the reader the physical and virtual components in this test bed.
1. GNU/Linux of Ubuntu 16.04 64-bit version is used on Host machines (Desktops or Laptops).
(Here after this host machine is referred as Linux host)
a. Linux Host 1
b. Linux Host 2
c. Linux Host 3
2. Managed Switch
a. Layer 2 switch
3. Three Ethernet cables, marked as blue lines in the diagram.
The virtual components are:
1. 2 OVS Bridges
2. 4 Linux Bridges
3. 20 virtual cables each having a pair of virtual interfaces.
4. 8 Linux containers with only network namespace isolation, sharing the other namespaces with the host system.
5. 2 Linux containers (C10 and C20) with both network namespace and mount namespace isolation, sharing the other namespaces with the host system.
With the prior approval of the manager or domain head, the physical components must be procured from IT.
1. Physical interface enp1s0 of Linux Host 1 is connected to the port 1 of the Managed Switch
2. Physical interface enp7s0 of Linux Host 2 is connected to the port 2 of the Managed Switch
3. Physical interface enp1s0 of Linux Host 3 is connected to the port 11 of the Managed Switch
Unlike physical components, virtual components cannot be procured; they must be created using tools provided by the OS distribution, in this case Ubuntu 16.04.
Before using the tools, let us look at the software packages that provide the tools required for our usage.
• iproute2 (contains the ip utility)
• bridge-utils (contains the brctl utility)
• openvswitch-switch (contains the ovs-vsctl utility)
• net-tools (contains the ifconfig utility)
• iputils-ping (contains ping)
• isc-dhcp-client (contains dhclient)
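A quick sanity check (a sketch, not part of the original setup scripts) confirms that the tools shipped by these packages are on the PATH; install the corresponding package with apt-get if anything is reported MISSING.

```shell
# Check each tool provided by the packages listed above.
for tool in ip brctl ovs-vsctl ifconfig ping dhclient; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING"
    fi
done
```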
Interconnecting virtual components is all about plugging the virtual ports to the Linux bridge or ovs
bridge or Linux containers.
1. Connecting a port to the OVS bridge is accomplished using the ovs-vsctl command
a. $ ovs-vsctl add-port <bridge name> <port name>
2. Connecting a port to the Linux bridge is accomplished using the brctl command
a. $ brctl addif <bridge name> <port name>
3. Connecting a port to the Linux container is accomplished using the ip command
a. $ ip link set <veth port> netns <container name>
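The plugging commands above can be exercised end to end in a throwaway unprivileged user+network namespace, so no root is needed to try them. The names br10, v100 and v101 are illustrative; on the real test bed the same commands run as root, and brctl addif corresponds to iproute2's ip link set ... master.

```shell
unshare --user --map-root-user --net sh -c '
  ip link add br10 type bridge                # a Linux bridge (brctl addbr br10)
  ip link add v100 type veth peer name v101   # one virtual cable, two ends
  ip link set v100 master br10                # plug one end into the bridge (brctl addif)
  ip link show v101                           # the free end would be moved into a
                                              # container: ip link set v101 netns <name>
'
```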
Before we go to the details of wiring the virtual network components, let us understand the role of
the physical and virtual components
1. Linux hosts: to house the virtual hosts (the Linux containers) and the virtual network elements (the Linux bridges and OVS bridges).
2. Managed switch: to interconnect the two hosts and to carry VLAN traffic from one host to the other. It also interconnects the management host with the two Linux hosts housing the Linux containers.
3. Linux bridges: to group the Linux containers belonging to the same VLAN.
4. OVS bridges: to enforce the VLAN separation of the Linux containers through access and trunk port configuration.
5. Pairs of virtual ports: used to wire the virtual network elements together.
The sequence of commands to execute the above steps is described in the script file host_1_setup_1.sh. The sequence of commands to undo them is described in the script file host_1_unset.1.sh.
6. Creating 2 virtual cables, each having a pair of virtual ports, to interconnect the 2 containers hosting the DHCP servers and the OVS bridge. {(v300, v301), (v302, v303)}
7. Creating 2 virtual cables, each having a pair of virtual ports, to interconnect the Linux bridges and the OVS bridge. {(veth10, veth11), (veth12, veth13)}
8. Creating 3 virtual cables, each having a pair of virtual ports, to interconnect the OVS bridge and the Linux host (global namespace). {(v10, v11), (v12, v13), (v14, v15)}
9. Configure ports {v12, veth10, v300} as vlan10 access ports
10. Configure ports {v14, veth12, v302} as vlan20 access ports
11. Configure port {v1} as a vlan 10 access port
12. Configure the physical port enp7s0 as a trunk port carrying vlan1, vlan10 and vlan20 tagged frames
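Steps 9–12 boil down to setting the VLAN tag on each access port and the trunk list on the physical port. A dry-run sketch (port names taken from the steps above; replace echo with sudo in run() to apply for real on a host running OVS):

```shell
# Print the ovs-vsctl commands instead of executing them.
run() { echo "+ $*"; }

run ovs-vsctl set port v12 tag=10               # vlan10 access ports
run ovs-vsctl set port veth10 tag=10
run ovs-vsctl set port v300 tag=10
run ovs-vsctl set port v14 tag=20               # vlan20 access ports
run ovs-vsctl set port veth12 tag=20
run ovs-vsctl set port v302 tag=20
run ovs-vsctl set port enp7s0 trunks=1,10,20    # trunk: vlan1, vlan10, vlan20
```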
The sequence of commands to execute the above steps is described in the script files host_2_setup_1.sh and host_2_setup_2.sh. The sequence of commands to undo them is described in the script files host_2_unset.1.sh and host_2_unset.2.sh.
With this, the setup is ready for us to launch the applications in the respective containers.
Launching of Applications
Disclaimer:
Every time a virtual interface is created and attached to a container, Linux assigns it a different MAC address, so the DHCP server will hand out a new IP address. If containers are frequently created and deleted, there is a high chance of depleting the DHCP server's address pool. Hence it is recommended that the lease time of the IP addresses assigned by the DHCP server be kept as short as possible and that the lease information be cleaned up regularly.
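The short-lease recommendation can be expressed as a dhcpd.conf fragment for the ISC DHCP server. The lease times and the subnet/range values below are illustrative, not taken from the actual setup:

```
# dhcpd.conf fragment -- keep leases short so that frequently recreated
# containers do not exhaust the address pool (values are illustrative).
default-lease-time 120;     # seconds a lease is held by default
max-lease-time 300;         # upper bound a client may request

subnet 10.0.10.0 netmask 255.255.255.0 {
    range 10.0.10.50 10.0.10.150;
}
```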
Diagnostics
The following diagnostics will help you examine the anatomy and physiology of the test bed:
1. $ ip a show (lists all the bridge ports, virtual ports and physical ports with their addresses)
2. $ sudo ovs-vsctl show (shows all the OVS bridges and their ports)
3. $ ping <ip-address> (helps in checking connectivity to a remote host)
4. $ sudo tcpdump -i <interface-name> (captures the packets flowing through the named interface)
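The same diagnostics can be run from inside a container's network namespace with ip netns exec. A dry-run sketch (the container name c1 and the address 10.0.10.1 are illustrative; replace echo with sudo in run() to execute for real):

```shell
# Print the diagnostic commands instead of executing them.
run() { echo "+ $*"; }

run ip netns exec c1 ip a show              # addresses as seen inside the container
run ip netns exec c1 ping -c 3 10.0.10.1    # connectivity from the container's stack
```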
Following are the steps involved in installing Docker and starting a Docker container:
1. Install Docker (the convenience script needs root privileges)
$ curl -fsSL https://get.docker.com -o /tmp/install.sh
$ chmod a+x /tmp/install.sh
$ sudo /tmp/install.sh
2. Load Ubuntu's dhcp image from the tarball (the link is supplied in the Reference section)
$ docker load -i ubuntu_dhcpd.tar.gz
Reference: