Cluster Access
SSH Access
Accessing your cluster
There are several ways to access the nodes you have just created.
1. Primary Method (Recommended)
2. Secondary Method
1. Primary Method
To access the nodes in the lab, go to the Access Devices tab and click on the respective node shown in the Lab Topology image. For example, click on the neutron-openstack-ctrl node shown in the Lab Topology image.
Note:
If you face any issues while accessing the lab nodes, please refer to the Secondary Method described below.
2. Secondary Method
You can access the nodes in the cluster in different ways depending on your OS.
All the nodes in your cluster can be reached via SSH using the public IP address, the port number, and username/password authentication.
After provisioning the lab, go to the Access Devices tab, click on the Lab Details section, and make a note of the IP address, port numbers, username, and password of all the nodes in the lab.
Example:
password: criterion
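On macOS or Linux you can connect directly from a terminal; a minimal sketch, with placeholders to be replaced by the IP address and port you noted from Lab Details:
ssh -p <port> ubuntu@<public-ip>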
To log in to the nodes using a private key, you need an SSH client such as PuTTY. You will also require PuTTYgen. You can download both PuTTY and PuTTYgen from the links below:
Download PuTTY
Download PuTTYgen
Provide the IP address of the node you want to log in to and save the session. Click on Data and enter the auto-login username as ubuntu.
Then click on Session and save the session again. Now you can log in to your node with the password.
password: criterion
Horizon Access
username: admin
password: Chang3M3
Launch
VXLAN Networking
Introduction
About Lab
The focus of this lab is to create a VXLAN tunnel between two nodes: Neutron-Openstack-Ctrl1 and Neutron-Service-lb1. Along the way, the lab will also help you understand Linux network namespaces.
Creating a VXLAN tunnel requires bridges, which can be Linux bridges or OVS bridges. This lab will show you, step by step, how a VXLAN tunnel can be created using OVS bridges. We will also create two namespaces on each node and check the final connectivity between them through the VXLAN tunnel.
The namespaces we create will be called Tom and Jerry. Please refer to the following diagram to understand the topology.
As shown, we will create Tom and Jerry on both Neutron-Openstack-Ctrl1 and Neutron-Service-lb1, connected to the OVS bridge br-test. The br-test bridge on Neutron-Openstack-Ctrl1 and the br-test bridge on Neutron-Service-lb1 will talk to each other through the VXLAN tunnel. We will also create veth pairs to connect the namespaces to the bridge br-test.
Exercise-I
Objective:
Note: If you face any issues, please refer to the SSH Access tab under Cluster Access for more information.
Similarly, in another session, SSH into neutron-service-lb1 using the same steps and keep the session active. We will need it later.
ovs-vsctl show
ip link list
Let's bring the vnet interfaces up (they have just been created and are still down):
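A minimal sketch, assuming the two veth pairs are vnet0/vnet1 and vnet2/vnet3:
ip link set vnet0 up
ip link set vnet1 up
ip link set vnet2 up
ip link set vnet3 up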
Let's attach ports vnet0 and vnet2 (the host-side ends of the veth pairs) to the bridge we created, br-test:
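A sketch of the attach commands:
ovs-vsctl add-port br-test vnet0
ovs-vsctl add-port br-test vnet2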
Exercise-II
Objective:
1. Creating namespaces
Let's create two namespaces, tom and jerry:
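A minimal sketch:
ip netns add tom
ip netns add jerry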
Helpful Commands:
3. Assigning IP
Let's assign the IP 10.1.1.1/24 to vnet1 in tom and to vnet3 in jerry:
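A sketch that moves the namespace-side ends into their namespaces and assigns the addresses:
ip link set vnet1 netns tom
ip link set vnet3 netns jerry
ip netns exec tom ip addr add 10.1.1.1/24 dev vnet1
ip netns exec tom ip link set vnet1 up
ip netns exec jerry ip addr add 10.1.1.1/24 dev vnet3
ip netns exec jerry ip link set vnet3 up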
Exercise-III
Objective:
ifconfig eth0
ovs-vsctl show
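The flow rules below reference a tunnel port numbered 10 and set the tunnel ID per flow, so the VXLAN port on br-test would be created along these lines (the remote IP of neutron-service-lb1 is a placeholder; substitute the address you noted earlier):
ovs-vsctl add-port br-test vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=<neutron-service-lb1-ip> options:key=flow ofport_request=10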
vi vxlan-flows-1.txt
table=0,in_port=1,actions=set_field:100->tun_id,resubmit(,1)
table=0,in_port=2,actions=set_field:200->tun_id,resubmit(,1)
table=0,actions=resubmit(,1)
table=1,tun_id=100,ip,nw_dst=10.1.1.1,actions=output:1
table=1,tun_id=200,ip,nw_dst=10.1.1.1,actions=output:2
table=1,tun_id=100,ip,nw_dst=10.1.1.2,actions=output:10
table=1,tun_id=200,ip,nw_dst=10.1.1.2,actions=output:10
table=1,tun_id=100,arp,nw_dst=10.1.1.1,actions=output:1
table=1,tun_id=200,arp,nw_dst=10.1.1.1,actions=output:2
table=1,tun_id=100,arp,nw_dst=10.1.1.2,actions=output:10
table=1,tun_id=200,arp,nw_dst=10.1.1.2,actions=output:10
table=1,actions=drop
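The flow file can then be applied to the bridge with:
ovs-ofctl add-flows br-test vxlan-flows-1.txt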
Exercise-IV
Objective:
a. Create a bridge
e. Create namespaces
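The steps on this node mirror Exercises I and II on the controller; a condensed sketch of steps a and e, assuming the same names:
ovs-vsctl add-br br-test
ip netns add tom
ip netns add jerry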
ovs-vsctl show
table=0,in_port=1,actions=set_field:100->tun_id,resubmit(,1)
table=0,in_port=2,actions=set_field:200->tun_id,resubmit(,1)
table=0,actions=resubmit(,1)
table=1,tun_id=100,ip,nw_dst=10.1.1.2,actions=output:1
table=1,tun_id=200,ip,nw_dst=10.1.1.2,actions=output:2
table=1,tun_id=100,ip,nw_dst=10.1.1.1,actions=output:10
table=1,tun_id=200,ip,nw_dst=10.1.1.1,actions=output:10
table=1,tun_id=100,arp,nw_dst=10.1.1.2,actions=output:1
table=1,tun_id=200,arp,nw_dst=10.1.1.2,actions=output:2
table=1,tun_id=100,arp,nw_dst=10.1.1.1,actions=output:10
table=1,tun_id=200,arp,nw_dst=10.1.1.1,actions=output:10
table=1,actions=drop
Now, let's check the ping across hosts from the neutron-openstack-ctrl1 node.
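For example, from the tom namespace (10.1.1.2 is the address assigned to the namespaces on the remote node):
ip netns exec tom ping -c 4 10.1.1.2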
Troubleshoot
The following commands will help you understand the configuration that was done:
ovs-vsctl show
ip netns list
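Dumping the flows also shows per-flow n_packets counters, which helps you verify which rules are being hit:
ovs-ofctl dump-flows br-test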
VIM-Openstack
Introduction
About Lab
The focus of this lab is to build basic familiarity with OpenStack. Exercises include uploading an image, creating a network, creating a VM, and reaching the VM.
In this lab we will interact with OpenStack using the CLI; the next lab will help you interact with OpenStack using HORIZON.
In OpenStack, VMs can be instantiated using images available in the GLANCE image store. Exercise I will help you download a CirrOS image and upload it to GLANCE. You can upload any image using the same steps.
To create different isolated networks, OpenStack uses the NEUTRON service. Access to the private network is provided by a router, which also carries a public network interface. In Exercise II we will create a new private network and attach it to one of the interfaces of a router, which we will create as well.
Apart from IP reachability, NEUTRON provides a firewall: the default security group blocks everything, so rules allowing the required protocols/IPs/ports must be added before the VM can be reached. We can also create an SSH keypair to SSH into the VM once it is instantiated. Exercise III will take you through these aspects of OpenStack and launch a VM as well. We will also check where the router lives and test reachability from the router to the VM.
OpenStack provides floating IPs so that VMs can be reached directly from the public network, which we will explore in Exercise IV.
Finally, Exercise V will help you trace a packet's path from the VM to the router, reusing parts of the earlier exercises.
Do not skip any steps, as the next lab depends on them.
Exercise-I
Objective:
If it is not available, go to the Access Devices tab and click on the neutron-openstack-ctrl1 node present under the Topology tab.
Note: If a new SSH session to the OpenStack controller is launched, please make sure to source openrc_admin before performing the exercises.
wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
source openrc_admin
4. Uploading Image
The following command will upload the image cirros-0.3.1-x86_64-disk.img to Glance.
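A sketch of the upload command; the image name cirros matches what the later Tacker templates expect, and --visibility public is an assumption:
glance image-create --name cirros --disk-format qcow2 --container-format bare --visibility public --file cirros-0.3.1-x86_64-disk.img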
--container-format => format of the container, which specifies the additional metadata required for the virtual machine; bare means no container.
glance image-list
Exercise-II
Objective:
Launch
Username: admin
Password: Chang3M3
To create a tenant network named net0, add the details shown in the following image and submit.
We will attach the subnet 10.0.0.0/24 to the tenant network net0 that we are creating.
Subnet Name: subnet0
Network Address: 10.0.0.0/24
IP Version: IPv4
Gateway IP: 10.0.0.1
Add an IP pool from 10.0.0.20 to 10.0.0.250 and an optional DNS name server of 8.8.8.8, and make sure Enable DHCP is checked.
Verify,
3. Create a Router
To make the tenant network reachable, we will create a router and add both the tenant network and the public network to it.
Select subnet0
Similarly,
Exercise-III
Objective:
If it is not available, go to the Access Devices tab and click on Topology, then click on the neutron-openstack-ctrl1 node in the topology image to access it.
Note: If a new SSH session to the OpenStack controller is launched, please make sure you source the openrc_admin file before performing any exercises.
nova keypair-list
Note:
If you already added security rules while performing Exercise 2 of the Nova lab, please skip Step 3 (Add the Security Group rules).
To allow the required traffic to reach the VM, add the following security group rules to Nova.
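If you prefer the CLI, a sketch using the legacy nova client, assuming the default security group (ICMP for ping, TCP 22 for SSH):
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0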
Note:
Please use the previously launched Horizon dashboard. If it is not available, launch the Horizon dashboard again as we did in the last exercise and make the necessary changes.
Go to Manage Rules
Similarly,
4. Launching a VM
To launch a new VM, go to Project > Compute.
As the following image shows, Test1's status is Active and it received the IP address 10.0.0.11.
ip netns list
Helpful Commands:
Check router:
To remove the router:
neutron router-gateway
Exercise-IV
Objective:
1. Creating floating IP
Execute the following exercise on neutron-openstack-ctrl1.
Floating IPs are essential so that VMs created in the tenant network are reachable from the public network.
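A sketch using the legacy neutron client, assuming the external network is named public (substitute your lab's public network name):
neutron floatingip-create public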
2. Associate floating IP
Associate the previously created floating IP with the instance Test1.
If it is not available, go to the Access Devices tab and click on Topology, then click on the neutron-openstack-ctrl1 node in the topology image to access it.
Note: If a new SSH session to the OpenStack controller is launched, please make sure you source the openrc_admin file before performing any exercises.
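A CLI sketch of the association, using the legacy nova client (replace the placeholder with the floating IP created above):
nova floating-ip-associate Test1 <floating-ip>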
Exercise-V
Objective:
tcpdump -i eth0
As shown in the following picture, the change in n_packets will help you identify which flow gets hit. Please log in to the VM Test1 that we created earlier in the exercise and send exactly 4 ping packets from the VM to the router using:
ping -c 4 10.0.0.1
The yellow boxes show the packet counts before the ping was sent and the green boxes show the counts after the pings were sent.
7. Create RouterA and RouterB for the tenant networks and attach private and public interfaces to the respective routers.
ip netns list
10. Ping the two VMs from the router namespaces and verify they are reachable.
11. Dump the flows to see how many VXLAN tunnels are present in the system (steps 10 and 11 are sketched below).
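A sketch of steps 10 and 11, assuming the usual qrouter-<uuid> namespace names reported by ip netns list and OpenStack's standard br-tun tunnel bridge:
ip netns exec qrouter-<router-uuid> ping -c 4 <vm-ip>
ovs-ofctl dump-flows br-tun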
This lab focuses on giving you hands-on experience with Tacker using OpenStack HORIZON. We will validate Tacker's basic functionality as a VNFM and an NFVO.
Tacker can be split into two parts: a generic Virtual Network Function Manager (VNFM) and an NFV Orchestrator (NFVO). The NFVO is responsible for end-to-end service orchestration, which involves all interactions with the VIM such as resource allocation and network creation, while the VNFM manages the lifecycle of the VNF, such as health monitoring and fault recovery.
We are using br-mgmt as a flat network. A flat network is a network that does not provide any segmentation options; a traditional L2 Ethernet network is a flat network. Any instances attached to this network see the same broadcast traffic and can contact each other without requiring a router. The br-mgmt network is configured as part of the lab bring-up and must be of the Flat network type, in contrast to net2 and net3, which we created as the VXLAN type. The NFVO communicates with the cloud controller while the VNFM communicates with the VNFs, which is why the VNFM requires direct access to the VNFs; the flat network provides that access.
Launch
Username: admin
Password: Chang3M3
After logging in, click on Admin, then click on Instances under System, which will list Test1.
Click the tick box for Cirros (second image) and click on Delete Images to delete this image.
Networks will list all the networks in OpenStack along with the ones that we created.
HORIZON can also show Network Topology. Click on Project -> Network -> Network Topology.
Tacker requires a Virtual Infrastructure Manager (VIM) to instantiate VNFs. We will register OpenStack as
our VIM in Exercise I.
Exercise II will demonstrate orchestrating a VNF using a TOSCA template, and Exercise III will check the VNFM's fault recovery.
Exercise-I
Objective:
Registering VIM
2. Checking VIMs
Verify whether any VIMs are registered under NFV -> NFV Orchestration -> VIM Management.
It should be empty.
3. Registering VIM
Click on Register VIM and fill in the configurations as shown:
Password: Chang3M3
Exercise-II
Objective:
1. Onboard VNF
In HORIZON, go to NFV -> VNF Management -> VNF Catalog and click on Onboard VNF.
Configure as shown:
Tacker uses a Topology and Orchestration Specification for Cloud Applications (TOSCA) template in YAML format to orchestrate and manage a VNF.
In the TOSCA YAML we will create 3 VDUs in the VNF and verify 3 NICs for each VM.
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
metadata:
  template_name: sample-tosca-vnfd
topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      properties:
        image: cirros
        flavor: m1.tiny
        availability_zone: nova
        mgmt_driver: noop
        config: |
          param0: key1
          param1: key2
    CP11:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1
    CP12:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL2
        - virtualBinding:
            node: VDU1
    CP13:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL3
        - virtualBinding:
            node: VDU1
    VDU2:
      type: tosca.nodes.nfv.VDU.Tacker
      properties:
        image: cirros
        flavor: m1.tiny
        availability_zone: nova
        mgmt_driver: noop
        config: |
          param0: key1
          param1: key2
    CP21:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU2
    CP22:
      type: tosca.nodes.nfv.CP.Tacker
      requirements:
        - virtualLink:
            node: VL2
        - virtualBinding:
            node: VDU2
    CP23:
      type: tosca.nodes.nfv.CP.Tacker
      requirements:
        - virtualLink:
            node: VL3
        - virtualBinding:
            node: VDU2
    VDU3:
      type: tosca.nodes.nfv.VDU.Tacker
      properties:
        image: cirros
        flavor: m1.tiny
        availability_zone: nova
        mgmt_driver: noop
        config: |
          param0: key1
          param1: key2
    CP31:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU3
    CP32:
      type: tosca.nodes.nfv.CP.Tacker
      requirements:
        - virtualLink:
            node: VL2
        - virtualBinding:
            node: VDU3
    CP33:
      type: tosca.nodes.nfv.CP.Tacker
      requirements:
        - virtualLink:
            node: VL3
        - virtualBinding:
            node: VDU3
    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: br_mgmt
        vendor: Tacker
    VL2:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net2
        vendor: Tacker
    VL3:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net3
        vendor: Tacker
Here net2 and net3 are the networks created in Exercise V of the VIM-OpenStack lab (30.0.0.0/24 and 40.0.0.0/24).
2. Launching VNF
Before launching the first VNF we will associate the heat_stack_owner role with the admin user. Go to the SSH session of neutron-openstack-ctrl1 and run the following command:
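A sketch of the role assignment, assuming the admin user and project used throughout this lab:
openstack role add --project admin --user admin heat_stack_owner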
Go to NFV -> VNF Management -> VNF Manager and click on Deploy VNF.
Configure as shown:
Refresh the page after some time; the status of VNF1 will change to ACTIVE.
Check the instances under Admin in Horizon for the VNF that came up with 3 VMs.
Exercise-III
Objective:
Make the VNFM monitor the VNF and check whether the VNFM recovers the VNF from a fault.
Monitoring VNF
Let's create a VNF that is monitored using ping and re-launched in case of ping failure.
Go to NFV -> VNF Management -> VNF Catalog and click on Onboard VNF. Give the name VNFD2 and use the following configuration for the TOSCA YAML.
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
metadata:
  template_name: sample-tosca-vnfd
topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      properties:
        image: cirros
        flavor: m1.tiny
        availability_zone: nova
        mgmt_driver: noop
        config: |
          param0: key1
          param1: key2
        monitoring_policy:
          name: ping
          parameters:
            monitoring_delay: 120
            count: 3
            interval: 1
            timeout: 2
          actions:
            failure: respawn
    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1
    CP2:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL2
        - virtualBinding:
            node: VDU1
    CP3:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL3
        - virtualBinding:
            node: VDU1
    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: br_mgmt
        vendor: Tacker
    VL2:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net2
        vendor: Tacker
    VL3:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net3
        vendor: Tacker
Go to VNF Manager and click on Deploy VNF. Give the VNF name as VNF2, select VNFD2 as the VNF Catalog and VIM0 as the VIM name, and click on Deploy VNF.
Password: cubswin:)
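One way to trigger a fault, as a sketch, is to log in to the VDU's console with the CirrOS credentials above and bring the management interface down so the ping monitor fails:
sudo ifconfig eth0 down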
The VNF is re-spawned.