LGI Setup

Fig: LGI Test Bed.

Test Bed Description:

Objective:

The Test Bed diagram demonstrates the PoC to develop the test environment and to simulate the
customer infrastructure within which the test cases for SDN controllers will be executed.

Accomplishment:

Both the development of the test environment and the simulation of customer infrastructure such as
CSR - in particular the DHCP servers - are accomplished using the well-known virtualization concept of
the Linux kernel, the Linux namespace, with which Linux containers are created.

Linux Containers:
A Linux container is created by isolating the required Linux namespace(s), thereby allowing
applications to execute in these isolated namespaces.
In our context, most of the containers are created by isolating the Linux network namespace,
meaning a separate network stack is isolated; as a consequence, all applications executing in
this namespace use a separate network stack - in particular a TCP/IP stack that is
distinct from the host's network stack.


This network stack has its own virtual network interface with its own unique IP address,
separate from the host's network interface and the host's IP address.
To a layperson, it gives the impression of a separate host, because a host in a networked
environment is uniquely identified by its IP address.
A network application like sshd running in the newly created container, with its separate network
stack, can be used to log in from a remote host.
A user working remotely can then access this machine through two IP addresses: the host's
IP address and the container's IP address.
Stats: We can create hundreds of Linux containers with network namespace isolation, each having
its own IP address, and run an sshd server in every one of them. This means hundreds of hosts can
be simulated on a single machine with a modest hardware configuration (dual-core processor and
4 GB RAM).
This is the profundity of the Linux namespace.
Anything that is simple in its design and usage, yet profound in its application, becomes
popular.
So it is with the idea of Linux namespaces and their profound application - Linux containers - which
we are using in our test setup.
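
As a quick illustration, the following is a minimal sketch of creating one such container by hand;
the names ns1, v0 and v1 and the 10.0.0.0/24 addresses are hypothetical, chosen only for this example:

$ sudo ip netns add ns1                            # new network namespace (our "container")
$ sudo ip link add v0 type veth peer name v1       # virtual cable with two ends
$ sudo ip link set v1 netns ns1                    # plug one end into the namespace
$ sudo ip addr add 10.0.0.1/24 dev v0              # address the host end
$ sudo ip link set v0 up
$ sudo ip netns exec ns1 ip addr add 10.0.0.2/24 dev v1
$ sudo ip netns exec ns1 ip link set v1 up
$ sudo ip netns exec ns1 ip link set lo up
$ ping -c 1 10.0.0.2                               # the "new host" answers from its own stack

Any application launched with ip netns exec ns1 ... now sees only this isolated network stack.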

Abstract:
With this, we are convinced that Linux containers with an isolated Linux network namespace can be
viewed as separate hosts.
We will create 2 sets of 4 + 1 containers, each set configured to be in a separate broadcast domain,
a.k.a. a VLAN, say VLAN 10 and VLAN 20. One container in each VLAN will run a DHCP server to provide
IP addresses dynamically to all the other containers within that VLAN.
Test engineers will then log in to the containers to execute test cases.

The whole effort and exercise in this document is to do the following:

1. Demonstrate the creation of Linux containers within two GNU/Linux host machines.
2. Group them into separate VLANs.
3. Execute a DHCP server in one container of each VLAN.
4. Allow the containers within a VLAN to dynamically fetch IP addresses from the corresponding
DHCP server executing in the same VLAN.
5. Allow users to log in to the containers from the host machine to execute test cases.

Detailed Description:
This test bed consists of both physical and virtual components; to be more specific, there are
few physical components and many virtual components.
To begin with, let me introduce the reader to the physical and virtual components in this test bed.

Physical Components in this setup:

1. Ubuntu 16.04 64-bit GNU/Linux is used on the host machines (desktops or laptops).
(Hereafter each such machine is referred to as a Linux host.)
a. Linux Host 1
b. Linux Host 2
c. Linux Host 3
2. Managed Switch
a. Layer 2 switch
3. Three Ethernet cables, marked as blue lines.


Virtual Components in this Setup:

1. 2 OVS Bridges
2. 4 Linux Bridges
3. 20 virtual cables each having a pair of virtual interfaces.
4. 8 Linux containers with only network namespace isolation, sharing the other namespaces with
the host system.
5. 2 Linux containers (C10 and C20) with both network namespace and mount namespace
isolation, sharing the other namespaces with the host system.

How to get Physical Components:

With the prior approval of the manager or domain head, the physical components must be
procured from IT.

Inter-Connecting Physical Components:

1. Physical interface enp1s0 of Linux Host 1 is connected to the port 1 of the Managed Switch
2. Physical interface enp7s0 of Linux Host 2 is connected to the port 2 of the Managed Switch
3. Physical interface enp1s0 of Linux Host 3 is connected to the port 11 of the Managed Switch

How to get Virtual Components:

Unlike physical components, virtual components cannot be procured; they must be created using
tools provided by the OS distribution, in this case Ubuntu 16.04.
Before using the tools, let us look at the software packages that provide the tools required
for our usage:
• iproute2 (contains the ip utility)
• bridge-utils (contains the brctl utility)
• openvswitch-switch (contains the ovs-vsctl utility)
• net-tools (contains the ifconfig utility)
• iputils-ping (contains ping)
• isc-dhcp-client (contains dhclient)

Creating Virtual Components


1. An OVS bridge is created using the ovs-vsctl command
a. $ sudo ovs-vsctl add-br <bridge-name>
2. A Linux bridge is created using the brctl command
a. $ sudo brctl addbr <bridge-name>
3. A virtual cable having a pair of virtual ports is created using the ip command
a. $ sudo ip link add <veth-port1> type veth peer name <veth-port2>
4. Linux containers with network namespace isolation are created using the ip command
a. $ sudo ip netns add <container-name>
5. Linux containers with both network namespace isolation and mount namespace
isolation are created using Docker
a. Please find below the steps involved in installing Docker and launching a Docker
container with the dhcp package.
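
For example, the components for VLAN 10 on Linux Host 1 could be created as follows (a sketch
using the names from the sections below):

$ sudo ovs-vsctl add-br br0                          # OVS bridge
$ sudo brctl addbr br1                               # Linux bridge for vlan 10
$ sudo ip link add v100 type veth peer name v101     # cable: Linux bridge <-> container
$ sudo ip netns add CR11                             # container (network namespace)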


Interconnecting Virtual components

Interconnecting virtual components is all about plugging the virtual ports into the Linux bridge,
the OVS bridge, or the Linux containers.
1. Connecting a port to the OVS bridge is accomplished using the ovs-vsctl command
a. $ sudo ovs-vsctl add-port <bridge-name> <port-name>
2. Connecting a port to the Linux bridge is accomplished using the brctl command
a. $ sudo brctl addif <bridge-name> <port-name>
3. Connecting a port to the Linux container is accomplished using the ip command
a. $ sudo ip link set <veth-port> netns <container-name>
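
Continuing the sketch above, wiring container CR11 to br1 and br1 to br0 would look like the
following; which veth end plugs where is an assumption inferred from the VLAN steps in the next
section:

$ sudo ip link set v101 netns CR11        # container end of the container cable
$ sudo brctl addif br1 v100               # bridge end of the container cable
$ sudo brctl addif br1 veth1              # Linux-bridge end of the bridge-to-OVS cable
$ sudo ovs-vsctl add-port br0 veth0       # OVS end (becomes a vlan 10 access port)
$ sudo ip link set v100 up
$ sudo ip link set veth0 up
$ sudo ip link set veth1 up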

Before we go into the details of wiring the virtual network components, let us understand the roles
of the physical and virtual components:
1. Linux Hosts: To house the virtual hosts, namely Linux containers, and the virtual
network elements, namely Linux bridges and OVS bridges.
2. Managed Switch: To interconnect the two hosts and to carry VLAN traffic from one host to
the other. It also interconnects the management host with the two Linux hosts
housing the Linux containers.
3. Linux Bridges: To logically group the Linux containers belonging to one VLAN.
4. OVS Bridges: To enforce the VLAN separation by tagging and switching the frames.
5. Pairs of virtual ports: Used to wire the virtual network elements together.

Setting up Virtual Network components on Linux Host 1

1. Create 1 OVS bridge named br0
2. Create 2 Linux bridges named br1 and br2
3. Create 4 Linux containers named CR11, CR12, CR21 and CR22
4. Create 4 virtual cables, each having a pair of virtual ports, to interconnect containers and
Linux bridges. {(v100,v101), (v102,v103), (v200,v201), (v202,v203)}
5. Create 2 virtual cables, each having a pair of virtual ports, to interconnect Linux bridges and
the OVS bridge. {(veth0, veth1), (veth2, veth3)}
6. Create 3 virtual cables, each having a pair of virtual ports, to interconnect the OVS bridge and
the Linux host, i.e. the global namespace. {(v0,v1), (v2,v3), (v4,v5)}
7. Configure ports {v2, veth0} as VLAN 10 access ports
8. Configure ports {v4, veth2} as VLAN 20 access ports
9. Configure port {v1} as a VLAN 1 access port
10. Configure the physical port enp1s0 as a trunk port carrying VLAN 1, VLAN 10 and VLAN 20 tagged frames

The sequence of commands to execute the above 10 steps is described in the script file
host_1_setup_1.sh, and the sequence of commands to undo them is described in the script file
host_1_unset.1.sh. A sketch of the VLAN-related steps (7-10) follows below.
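
For illustration, the VLAN configuration of steps 7-10 might look as follows; this is a sketch
only, assuming the ports were already added to br0, and the authoritative sequence is in
host_1_setup_1.sh:

$ sudo ovs-vsctl set port v2 tag=10                    # step 7: vlan 10 access ports
$ sudo ovs-vsctl set port veth0 tag=10
$ sudo ovs-vsctl set port v4 tag=20                    # step 8: vlan 20 access ports
$ sudo ovs-vsctl set port veth2 tag=20
$ sudo ovs-vsctl set port v1 tag=1                     # step 9: vlan 1 access port
$ sudo ovs-vsctl add-port br0 enp1s0 trunks=1,10,20    # step 10: trunk to the switch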

Setting up Virtual Network components on Linux Host 2

1. Create 1 OVS bridge named br0
2. Create 2 Linux bridges named br1 and br2
3. Create 2 Linux containers named C10 and C20 hosting the DHCP servers.
4. Create 4 Linux containers named CR13, CR14, CR23 and CR24
5. Create 4 virtual cables, each having a pair of virtual ports, to interconnect containers and
Linux bridges. {(v104, v105), (v106, v107), (v204, v205), (v206, v207)}


6. Create 2 virtual cables, each having a pair of virtual ports, to interconnect the 2 containers
hosting the DHCP servers and the OVS bridge. {(v300, v301), (v302, v303)}
7. Create 2 virtual cables, each having a pair of virtual ports, to interconnect Linux bridges and
the OVS bridge. {(veth10, veth11), (veth12, veth13)}
8. Create 3 virtual cables, each having a pair of virtual ports, to interconnect the OVS bridge and
the Linux host, i.e. the global namespace. {(v10, v11), (v12,v13), (v14,v15)}
9. Configure ports {v12, veth10, v300} as VLAN 10 access ports
10. Configure ports {v14, veth12, v302} as VLAN 20 access ports
11. Configure port {v11} as a VLAN 1 access port
12. Configure the physical port enp7s0 as a trunk port carrying VLAN 1, VLAN 10 and VLAN 20 tagged frames

The sequence of commands to execute the above 12 steps is described in the script files
host_2_setup_1.sh and host_2_setup_2.sh, and the sequence of commands to undo them is described in
the script files host_2_unset.1.sh and host_2_unset.2.sh. A sketch of attaching the DHCP containers
to the OVS bridge follows below.
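
Since C10 and C20 are started as Docker containers with --net=none (see the Docker steps below),
the container end of each virtual cable must be moved into the container's network namespace
through its PID. A minimal sketch for C10, assuming it runs as the Docker container dhcp-server-1:

$ sudo ovs-vsctl add-port br0 v300 tag=10                 # OVS end, vlan 10 access port
$ pid=$(docker inspect -f '{{.State.Pid}}' dhcp-server-1)
$ sudo ip link set v301 netns $pid                        # move the container end
$ sudo nsenter -t $pid -n ip link set v301 name eth0      # rename it to eth0 inside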

Configuration of Static IP addresses

It is essential to configure static IP addresses for a few interfaces:

1. Configuring management IP addresses
a. 172.16.1.1 on physical interface enp2s0 on Linux Host 3
b. 172.16.1.10 on virtual interface v1 on Linux Host 1
c. 172.16.1.20 on virtual interface v11 on Linux Host 2
2. Configuring IP addresses on the Linux containers hosting the DHCP servers
a. 192.168.1.1 on virtual interface eth0 on Linux container C10
b. 192.168.2.1 on virtual interface eth0 on Linux container C20
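
A sketch of these assignments using the ip utility; the /24 prefix length is an assumption:

On Linux Host 1:
$ sudo ip addr add 172.16.1.10/24 dev v1
Inside container C10 (e.g. from its docker shell):
# ip addr add 192.168.1.1/24 dev eth0
# ip link set eth0 up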

With this, the setup is ready for us to launch the applications in the respective containers.

Launching of Applications

1. Launch the DHCP server application (/usr/sbin/dhcpd) in Linux container C10
2. Launch the DHCP server application (/usr/sbin/dhcpd) in Linux container C20
3. Execute the DHCP client on all the containers except C10 and C20 to dynamically obtain IP
addresses for their virtual interface eth0
4. Execute the ssh server (/usr/sbin/sshd) on all the containers except C10 and C20
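
A minimal sketch of these launches, assuming the names used above:

Inside C10 (docker shell):
# dhcpd eth0                                    # serve the vlan 10 pool on eth0
On the host, for each of the other containers, e.g. CR11:
$ sudo ip netns exec CR11 dhclient eth0         # obtain an address from the DHCP server
$ sudo ip netns exec CR11 /usr/sbin/sshd        # start the ssh daemon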

Logging into the Containers


1. Note down the IP addresses of all the containers before logging in.
2. Log in to a container using the ssh command (ssh username@ip-address)
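
For example; the address 192.168.1.101 and the user name tester are hypothetical:

$ sudo ip netns exec CR11 ip addr show eth0     # note the assigned address
$ ssh tester@192.168.1.101                      # log in to CR11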

Disclaimer:

Every time a virtual interface is created and attached to a container, Linux assigns it a
different MAC address, so the DHCP server assigns it a new IP address. With frequent
creation and deletion of containers, there is a high chance of depleting the DHCP server's
address pool. Hence it is recommended that the lease time of the IP addresses assigned by the
DHCP server be kept as short as possible and that the lease information be purged
periodically.
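
A short lease time can be configured in the server's /etc/dhcp/dhcpd.conf; a minimal sketch,
with the 300/600 second values being assumptions:

default-lease-time 300;    # seconds granted when the client asks for no specific time
max-lease-time 600;        # upper bound on what a client may request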


Diagnostics

The following are a few diagnostics which will help you examine the anatomy and physiology of
the test bed:
1. $ ip a show (Lists all the bridge ports, virtual ports and physical ports together with their
addresses and state)
2. $ sudo ovs-vsctl show (Shows the configuration of all the OVS bridges and their ports)
3. $ ping <ip-address> (Helps in figuring out the connectivity to a remote host)
4. $ sudo tcpdump -i <interface-name> (Captures the packets flowing through the named interface)

The following are the steps involved in installing Docker and starting the Docker containers:

1. Install Docker
$ curl https://get.docker.com > /tmp/install.sh
$ chmod a+x /tmp/install.sh
$ /tmp/install.sh

# Add the user to the docker group
$ sudo usermod -aG docker $USER

2. Load Ubuntu's dhcp image from the tarball (the link is supplied in the Reference section)
$ docker load -i ubuntu_dhcpd.tar.gz

3. Verify the image in the local repository
$ docker images    # You should see the image tagged dhcpd in the REPOSITORY ubuntu

4. Create a Docker container named dhcp-server-1 from the above image
$ docker run -it --name=dhcp-server-1 --net=none ubuntu:dhcpd /bin/bash

5. Create a Docker container named dhcp-server-2 from the above image
$ docker run -it --name=dhcp-server-2 --net=none ubuntu:dhcpd /bin/bash

6. The following configuration (steps 7 and 8) has to be done within both containers.

7. Create the file /var/lib/dhcp/dhcpd.leases in both containers
# touch /var/lib/dhcp/dhcpd.leases

8. Edit the DHCP configuration file /etc/dhcp/dhcpd.conf in both containers to adjust the
pool profile. A sample pool profile named LGI is already present in the configuration;
a sketch of such a profile is shown below.
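
For reference, a pool profile for the VLAN 10 subnet might look like the following sketch; the
address range and options are assumptions for illustration, and the LGI profile shipped in the
image takes precedence:

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;   # pool handed out to the vlan 10 containers
    option routers 192.168.1.1;          # C10's own interface address
}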


Reference:

1. Link to download ubuntu_dhcpd.tar.gz:
https://tataelxsi-my.sharepoint.com/personal/prabhunath_g_tataelxsi_co_in/_layouts/15/guestaccess.aspx?guestaccesstoken=o21LgDstoQwd4P100EiBq%2fwxHWE3HTFd%2bugENiwbba0%3d&docid=2_103a082df9f454ed0a585a40e7f84b5a5&rev=1
