
HOL-SDC-1620

Table of Contents
Lab Overview - HOL-SDC-1620 - OpenStack with VMware vSphere and NSX
  Lab Guidance
Module 1: Introduction to VMware Integrated OpenStack - (30 Minutes)
  Intro to OpenStack and VIO
  VIO architectural components: How they tie into OpenStack
  Basic OpenStack operations: Instance, network, security, storage
Module 2: VIO Networking - (60 Minutes)
  Introduction
  Environment Setup
  Basic Virtual Networking
  Security Groups & Micro-Segmentation
  Advanced Networking
  Environment Clean-Up
Module 3: Advanced OpenStack Operations - (30 Minutes)
  Environment Setup
  CLI Tools: Nova, Neutron, Cinder
  Working with Glance Image Catalogs
  API Consumption: Heat Templates
Module 4: Operationalizing VIO - (60 Minutes)
  Overview of OpenStack Operations
  Troubleshooting Scenario with Log Insight and vRealize Operations


Lab Overview - HOL-SDC-1620 - OpenStack with VMware vSphere and NSX


Lab Guidance
VMware Integrated OpenStack (VIO) is a VMware-supported OpenStack
distribution prepared to run on top of an existing VMware infrastructure. VIO
empowers any VMware administrator to easily deliver and operate an enterprise
production-grade OpenStack cloud on VMware components. This means that you can
take advantage of great vSphere features like HA, DRS, and VSAN for
your OpenStack cloud, and also extend and integrate it with other VMware management
components like vRealize Operations and vRealize Log Insight.
In this Hands-on Lab we provide an introduction to VIO, demonstrate how it
integrates with NSX and vRealize Operations, and let you take the reins in building out
virtual networks with micro-segmentation while providing granular visibility and
reporting.
If your goal is to complete the lab from start to finish, it makes the most sense to
complete the modules in order. Otherwise, each module is designed so that it can be
taken without completing the other modules. For example, if your interest is VIO
Networking, then you do not need to complete Module 1 before trying Module 2.
We have prepared a README.txt file on the ControlCenter Desktop for users with
International keyboards and for copying long strings of text.
Lab Module List:

Module 1: Introduction to VMware Integrated OpenStack (VIO) (30 minutes)
Module 2: VIO Networking (60 minutes)
Module 3: Advanced OpenStack Operations (30 minutes)
Module 4: Operationalizing VIO (60 minutes)

Lab Captains:

Marcos Hernandez (Module 3)
Ed Shmookler (Module 2)
Jonathan Cham (Module 4)
Hadar Freehling (Module 1)

This lab manual can be downloaded from the Hands-on Labs Document site found here:
http://docs.hol.pub/catalog/
This lab may be available in other languages. To set your language preference and have
a localized manual deployed with your lab, you may utilize this document to help guide
you through the process:
http://docs.hol.vmware.com/announcements/nee-default-language.pdf


Activation Prompt or Watermark


When you first start your lab, you may notice a watermark on the desktop indicating
that Windows is not activated.
One of the major benefits of virtualization is that virtual machines can be moved and
run on any platform. The Hands-on Labs take advantage of this benefit, running the
labs out of multiple datacenters. However, these datacenters may not have identical
processors, which triggers a Microsoft activation check through the Internet.
Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft
licensing requirements. The lab that you are using is a self-contained pod and does not
have full access to the Internet, which Windows requires to verify the activation.
Without full access to the Internet, this automated process fails and you see this
watermark.
This cosmetic issue has no effect on your lab. If you have any questions or concerns,
please feel free to use the support made available to you, either at VMworld in the
Hands-on Labs area, in your Expert-led Workshop, or online via the survey comments, as
we are always looking for ways to improve your hands-on lab experience.

Disclaimer
This session may contain product features that are currently under
development.
This session/overview of the new technology represents no commitment from
VMware to deliver these features in any generally available product.
Features are subject to change, and must not be included in contracts,
purchase orders, or sales agreements of any kind.
Technical feasibility and market demand will affect final delivery.
Pricing and packaging for any new technologies or features discussed or
presented have not been determined.
These features are representative of feature areas under development. Feature
commitments are subject to change, and must not be included in contracts, purchase
orders, or sales agreements of any kind. Technical feasibility and market demand will
affect final delivery.


Module 1: Introduction to VMware Integrated OpenStack - (30 Minutes)


Intro to OpenStack and VIO


This lab will explore VMware Integrated OpenStack.

What is OpenStack?
OpenStack is open source software that delivers a framework of services for API-based
infrastructure consumption. The OpenStack framework requires hardware- or software-based
infrastructure components and management tools to build a functional
OpenStack cloud. The "plug-in" architecture of OpenStack services enables various
vendors (such as VMware) to integrate their infrastructure solutions (such as vSphere
and NSX) to deliver an OpenStack cloud.
The next section is for those who have no exposure to OpenStack or VIO. If you
are already familiar with OpenStack, feel free to skip ahead to the following
section.

OpenStack is a Cloud API Layer in a Cloud Technology Stack
A typical cloud technology stack consists of the following major components:
1. Hardware infrastructure
2. Software infrastructure (or virtualization layer)
3. A cloud API layer that enables consumption and orchestration of the underlying
cloud infrastructure
4. A cloud management layer that provides governance, resource planning, financial
planning, etc., and potentially manages multiple underlying cloud fabrics
5. Applications running on top of the cloud infrastructure
In a non-cloud datacenter model, an application owner would contact one or more
datacenter administrators, who would then deploy the application on the application
owner's behalf using software infrastructure tools (e.g., VMware vSphere) to deploy the
application workloads on top of physical compute, network, and storage hardware.
OpenStack is a software layer that sits on top of the software infrastructure and enables
an API based consumption of infrastructure. OpenStack enables a "self-service" model in
which application owners can directly request and provision the compute, network, and
storage resources needed to deploy their application.
The primary benefits of self-service are increased agility, from application owners
having "on demand" access to the resources they need, and reduced operating expenses,
by eliminating manual, repetitive deployment tasks.


OpenStack components
OpenStack splits infrastructure delivery functions into several different services. Each of
these services is known by its project code name:

Nova: Compute service.
Neutron: Network service (formerly called "Quantum").
Cinder: Block Storage service.
Glance: Image service.
Keystone: Identity service.
Horizon: Web GUI.

OpenStack services orchestrate and manage the underlying infrastructure and expose
APIs for end users to consume the resources. OpenStack's strength is a highly
customizable framework, allowing those deploying it to choose from a number of
different technology components, and even customize the code themselves.

Nova
OpenStack Compute (Nova) is a cloud computing fabric controller, which is the main
part of an IaaS system. It is designed to manage and automate pools of computer
resources and can work with widely available virtualization technologies, as well as bare
metal and high-performance computing (HPC) configurations.


Neutron
OpenStack Networking (Neutron, formerly Quantum) is a system for managing networks
and IP addresses. OpenStack Networking ensures the network is not a bottleneck or
limiting factor in a cloud deployment, and gives users self-service ability, even over
network configurations.
OpenStack Networking provides networking models for different applications or user
groups. Standard models include flat networks or VLANs that separate servers and
traffic. OpenStack Networking manages IP addresses, allowing for dedicated static IP
addresses or DHCP. Floating IP addresses allow dynamic traffic rerouting to any
resources in the IT infrastructure, so users can redirect traffic during maintenance or in
case of a failure.
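Under the hood, Horizon and the CLI tools drive these operations through Neutron's REST API. As a minimal sketch (not part of this lab's steps), the following shows the shape of the JSON bodies a client would POST to `/v2.0/networks` and `/v2.0/subnets` to create a tenant network; the network name and CIDR here are illustrative assumptions, not values from the lab.

```python
import json

# Body for POST /v2.0/networks -- creates an empty tenant network.
network_body = {
    "network": {
        "name": "TestNet",       # tenant-visible network name (illustrative)
        "admin_state_up": True,  # bring the network up immediately
    }
}

# Body for POST /v2.0/subnets -- gives the network an IPv4 address range.
subnet_body = {
    "subnet": {
        "network_id": "<network-uuid>",  # returned by the network create call
        "ip_version": 4,
        "cidr": "10.0.0.0/24",           # illustrative address range
        "enable_dhcp": True,             # Neutron (or NSX) serves DHCP leases
    }
}

print(json.dumps(network_body))
```

Floating IPs and routers described above are created the same way, against the `/v2.0/floatingips` and `/v2.0/routers` endpoints.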

Cinder
Cinder is the Block Storage service for OpenStack. It is designed to present storage
resources to end users that can be consumed by the OpenStack Compute project (Nova),
using either a reference implementation (LVM) or plugin drivers for other storage. The
short description of Cinder is that it virtualizes pools of block storage devices and
provides end users with a self-service API to request and consume those resources
without requiring any knowledge of where their storage is actually deployed or on what
type of device.

Glance
OpenStack Image Service (Glance) provides discovery, registration, and delivery
services for disk and server images. Stored images can be used as a template. It can
also be used to store and catalog an unlimited number of backups. The Image Service
can store disk and server images in a variety of back-ends, including OpenStack Object
Storage. The Image Service API provides a standard REST interface for querying
information about disk images and lets clients stream the images to new servers.
Like the other OpenStack modules, Glance is updated with each OpenStack release.
When Glance is integrated with VMware vSphere, instances deployed from its images run
as ordinary vSphere VMs, so vSphere features such as vMotion, High Availability (HA),
and Distributed Resource Scheduler (DRS) remain available. vMotion is the live
migration of a running VM from one physical server to another without service
interruption. This enables a dynamic, automated, self-optimizing datacenter, allowing
hardware maintenance on underperforming servers without downtime.
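As a rough illustration of the REST interface mentioned above, this is the shape of the JSON body a client would POST to Glance's v2 `/v2/images` endpoint to register a vSphere-compatible image. The image name and visibility are illustrative assumptions; the `vmdk` disk format matches the vSphere-consumable images used later in this lab.

```python
import json

# Body for POST /v2/images (Glance v2) -- registers image metadata; the
# actual disk bytes are uploaded in a follow-up PUT to /v2/images/{id}/file.
image_body = {
    "name": "debian-base",        # illustrative image name
    "disk_format": "vmdk",        # vSphere-consumable disk format
    "container_format": "bare",   # no additional container wrapping
    "visibility": "public",       # admins typically publish shared images
}

print(json.dumps(image_body))
```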


Keystone
OpenStack Identity (Keystone) provides a central directory of users mapped to the
OpenStack services they can access. It acts as a common authentication system across
the cloud operating system and can integrate with existing backend directory services
like LDAP. It supports multiple forms of authentication, including standard username and
password credentials and token-based system logins. Additionally, the catalog provides a
queryable list of all of the services deployed in an OpenStack cloud in a single registry.
Users and third-party tools can programmatically determine which resources they can
access.
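Every OpenStack API call begins with Keystone. As a sketch, this is the v3 password-authentication request body a client would POST to `/v3/auth/tokens`; the resulting token is returned in the `X-Subject-Token` response header and passed to the other services. The user, domain, and project names here are illustrative.

```python
import json

# Body for POST /v3/auth/tokens (Keystone v3 password authentication).
auth_body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "admin",                 # illustrative user
                    "domain": {"name": "Default"},
                    "password": "VMware1!",
                }
            },
        },
        # Scoping the token to a project makes it usable against Nova,
        # Neutron, Cinder, and Glance on that project's behalf.
        "scope": {"project": {"name": "admin", "domain": {"name": "Default"}}},
    }
}

print(json.dumps(auth_body))
```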

Horizon
OpenStack Dashboard (Horizon) provides administrators and users a graphical interface
to access, provision, and automate cloud-based resources. The design accommodates
third party products and services, such as billing, monitoring, and additional
management tools. The dashboard is also brandable for service providers and other
commercial vendors who want to make use of it. The dashboard is one of several ways
users can interact with OpenStack resources. Developers can automate access or build
tools to manage resources using the native OpenStack API or the EC2 compatibility API.

VMware Integrated OpenStack components


VIO is made up of two main building blocks: the VIO Manager and the OpenStack
components. It is packaged as an OVA file that contains the Manager server and an
Ubuntu Linux virtual machine to be used as the template for the different OpenStack
components.

OpenStack components
The OpenStack services in VMware Integrated OpenStack are deployed as a distributed
highly available solution formed by the following components:
OpenStack controllers. Two virtual machines running Horizon Dashboard, Nova
(API, scheduler and VNC) services, Keystone, Heat, Glance, and Cinder services in
an active-active cluster.
Memcached cluster.
RabbitMQ cluster, for messaging services used by all OpenStack services.
Load Balancer virtual machines, an active-active cluster managing the internal
and public virtual IP addresses.
Nova Compute machine, running the n-cpu service.
Database cluster. A three node MariaDB Galera cluster that stores the OpenStack
metadata.
Object Storage machine, running Swift services.


DHCP nodes. These nodes are only required if NSX is not selected as the provider
for Neutron.

More details about VIO


As mentioned previously, VIO is a production grade OpenStack deployment based on a
reference architecture developed through customer best practices and the VMware
Network Systems Business Unit internal cloud. It is designed to be highly available
through the use of vSphere capabilities like HA and DRS, and through the use of
redundant components. The core OpenStack services are deployed as follows:
Controller - Controller VMs expose the core OpenStack service APIs and run the
schedulers. The Nova, Neutron, Glance, Cinder, and Keystone services run here. VIO
deploys two controllers in an Active/Active configuration.
Database - Database VMs are used by the OpenStack services to store metadata. VIO
deploys three MariaDB databases with Galera cluster services configured as Active/
Passive/Passive. Data is fully replicated between the databases.
Memcached - Memcached VMs are used as a distributed in-memory key-value store
for database call results. Memcached is easily scaled out. VIO deploys two Memcached
VMs.
RabbitMQ - OpenStack communications within a service and between services are
message based. VIO deploys RabbitMQ as the messaging service, running in two
VMs.
Load Balancers - Both internal management communication and external API access
are load balanced across two HAProxy load balancer VMs. VIO configures the API service
identity endpoints using Virtual IP addresses (VIPs).
Nova Compute - The Nova Compute nodes are the worker bees of an OpenStack cloud.
They handle launching and terminating instances and must scale out as the cloud
resources increase. VIO starts with a single Nova Compute node and adds new nodes for
each vSphere Cluster added to the OpenStack cloud.
NSX - It is important to note that while VIO will configure the Neutron networking
service, it does not do any configuration of the underlying virtualized networking
components. It is an out-of-band exercise to ensure that either NSX or the vDS has
previously been set up, with the appropriate physical networks for the planned
environment. You will enter configuration information from that setup as part of VIO
cluster creation, but you are not reconfiguring that environment.

Installation requirements
To be able to successfully deploy VMware Integrated OpenStack you will need at least
the following:


One management cluster with two to three hosts, depending on the hardware
resources of the hosts.
One Edge cluster. As with any NSX for vSphere deployment, it is recommended to
deploy a separate cluster to run all Edge gateway instances.
One compute cluster to be used by Nova to run instances. One ESXi host is
enough, but again this depends on how many resources are available and what
kind of workloads you want to run.
A management network with at least 15 static IP addresses available.
An external network with a minimum of two IP addresses available. This is the
network where the Horizon portal will be exposed and that will be used by the
tenants to access the OpenStack APIs and services.
A data network, only needed if NSX is going to be used. The tenant logical
networks will be created on top of this network; the management network could be
used, but a separate network is recommended.
NSX for vSphere. It has to be set up prior to VIO deployment if the NSX plugin is
going to be used with Neutron.
A Distributed Port Group. If DVS-based networking is chosen, a vSphere port group
tagged with VLAN 4095 must be set up. This port group will be used as the
data network.
The hardware requirements are around 56 vCPUs, 192 GB of memory, and 605 GB of
storage. To that, add the resources required by NSX for vSphere, such as the NSX
Manager, the three NSX Controllers, and the NSX Edge pool.


VIO architectural components: How they tie into OpenStack
VMware Integrated OpenStack (VIO) Architecture
VIO is based atop VMware's Software Defined Data Center infrastructure. With purpose
built drivers for each of the major OpenStack services, VIO optimizes consumption of
Compute, Storage and Network resources. VIO also includes OpenStack specific
management extensions for the vCenter Client, vCenter Operations Manager and
LogInsight to allow use of existing tools to operate and manage your OpenStack cloud.

Nova Compute Integration


The vCenter Driver exposes compute resources to the OpenStack Nova service through
vCenter API calls. Resources are presented as cluster-level abstractions. The Nova
scheduler chooses the vSphere cluster for new instance placement, and vSphere DRS
handles the actual host selection and VM placement. This design enables OpenStack
instance VMs to be treated by vSphere like any other VM. Services like DRS, vMotion,
and HA are all available.

Cinder and Glance Integration


The VMDK driver exposes storage resources to the OpenStack Cinder service as block
devices through datastore/VMDK abstractions. This means that any vSphere datastore,
including VSAN, can be used as storage for boot and ephemeral OpenStack disks.
Glance images may also be stored as VMDKs or OVA files in datastores.


Neutron Networking Integration


The NSX driver supports both the vSphere Virtual Distributed Switch (vDS) and NSX for
true software defined networking. Customers can leverage their existing vDS to create
provider networks that can isolate OpenStack tenant traffic via VLAN tagging. They can
also take advantage of NSX to provide dynamic creation of logical networks with private
IP overlay, logical routers, floating IPs and security groups, all enabled across a single
physical transport network.

Management Integration
The vSphere Web Client has been extended to include OpenStack-specific metadata to
allow searching by terms appropriate to your VMs (tenant, flavor, logical network,
etc.). VIO also includes a vSphere Client plugin for managing OpenStack consumption of
vSphere resources. Management packs for vCOPS and Log Insight allow for OpenStack-specific
monitoring based on metadata extracted from the OpenStack services.


Basic OpenStack operations: Instance, network, security, storage
VIO Plugin access
In this part of the lab, we will explore using OpenStack and how it leverages the VIO
plugin to deploy VMs.

Checking lab status


You must wait until the Lab Status shows Ready before you begin. If you receive an
error message, please end the lab and deploy another.


Login to vCenter
Click on the Google Chrome browser icon.
Login with the following credentials:
1. Username: administrator@vsphere.local
2. Password: VMware1!
OR
3. You can also click on the Use Windows session authentication checkbox.
4. Click on Login.

VMware Integrated OpenStack Plugin


1. Click on the VMware Integrated OpenStack plugin icon


VIO plugin
Here you can find the information about the VIO deployment, its state, and other
important settings.
1. Click on the Monitor Tab

VIO plugin - Monitor


As you can see, the VIO deployment information is displayed here.


VIO plugin - Manage - syslog


1. Click on the Manage tab
2. Click on Settings
Here you can find the syslog server settings. We will be reviewing these logs later in this
lab.
You are free to explore some of the other settings within the Manage tab. We will not be
performing any activity within these sections during this lab.

VIO Deployment validation


1. Click on the OpenStack Deployment Icon on the far right of your screen


VIO Deployment status


Make sure your OpenStack deployment is showing as Running. If the Status is not
Running, you may need to restart the lab.

Let's Start Using OpenStack Horizon


We will now start using OpenStack by logging into the Horizon portal (not to be
confused with the VMware Horizon EUC products). Horizon provides a web portal for both
administrators and users. Administrators can use the UI for common tasks such as
creating users, managing their quotas, checking infrastructure usage, etc. In Horizon,
cloud administrators have a different view than cloud users. While cloud
administrators can see and manage all infrastructure resources, cloud users can only
see inventories created by them.
We will start with an orientation of the Horizon Web UI for cloud administrators and then
switch to a cloud user view later.
1. Click on the Login - VIO bookmark in your browser bar (https://vio.corp.local). This
opens a new browser tab.


Login to OpenStack Horizon


1. User Name: admin
2. Password: VMware1!


OpenStack Admin Overview


Upon initially logging in as 'admin', note the following key areas:
1. At the top is a drop-down menu that allows an admin to switch views to a specific
user. For example, if an admin wants to see what resources are visible to a particular
user, they can select the user from the drop-down list. For now, please ensure that the
drop-down has 'admin' as the selected user.
2. There is a 'Project' tab. Every user in OpenStack belongs to a project (more on
this in the next section). An admin belongs to an 'admin' project that is created by
default. A project contains all the instances, volumes, and other inventories created by
all users belonging to the project.
3. Since we have logged in as admin, you will note an 'Admin' tab at the bottom.

Hypervisors
1. Click on the Hypervisors tab within the Admin Panel.
2. Here you can see the Hypervisors that OpenStack is managing.


Notice that there is only a single hypervisor shown. The reason behind this is that
OpenStack sees each vSphere Cluster as a single hypervisor where workloads can be
placed. This allows for key vSphere features like DRS, HA and vMotion to still be used in
the background without confusing OpenStack.
Please Note: The resources of this hypervisor represent the resources of the vSphere
cluster. In this case, the two ESX hosts combined, and the shared datastore. The
memory shown is less than the combined total of the hosts because ESX reserves some
memory for operations.

Flavors
1. Click on the Flavors tab under the Admin panel.
2. Flavors represent the different options users will have in terms of what size a VM
they deploy. The cloud administrator can define what flavors are supported in an
OpenStack deployment, and cloud users can then select from the set of flavors
exposed to them.


Images
1. Click on the Images tab under the Admin Panel.
2. Here is the list of all images that will be available to tenants to choose from when
they look to create a virtual machine. Cloud administrators will typically upload a
variety of "public" images to be made available to their cloud users. Cloud users
are able to further extend this set of images with their own custom images.
3. Please Note: For simplicity, we have already uploaded a single Debian Linux
image for use in this lab. The VMDK disk format indicates that it can be used with
vSphere.


Network Topology
Now, we will look at the current network topology that has been setup with OpenStack.
1. Click on the Project panel (top of left side margin).
2. Select the Network tab under the Project panel.
3. Select the Network Topology tab under the Network panel.
4. For this lab, we have pre-created an 'External-Shared' network and a TestNet
network.

The two networks represent a tenant network (TestNet), and a provider network
(External-Shared). When you have multiple clients, they would each get their own
tenant network, but all would share the provider network as a gateway to external
resources such as the Internet or any corporate systems.


Projects
In OpenStack, users are grouped in containers called projects. These projects are
effectively tenants in the OpenStack cloud environment. Each project has an assigned
quota of compute, network and storage resources that are shared by all users in that
project. Projects are isolated from each other, that is, users in one project can't see the
users and resources of other projects. Users must be associated with at least one
project, though they may belong to more than one.
1. Select the Admin tab on the left-hand navigation bar.
2. Click the Identity Panel tab under Admin.
3. Click the Projects tab under the Identity Panel.
4. Click the Create Project button.


Project Adam
You will need to provide the project a name.
1. Name: Adam-project
2. Make sure the Enabled box is checked
3. Click on the Project Members tab


Add user to the project


1. Under the Project Members tab, click on the "+" button next to admin to add the
user to the project.
2. Then click the Create Project button.
Note: The admin account needs to be added to the project in order to pull instance
metadata from the vSphere Web Client.
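For reference, Horizon's Create Project dialog corresponds to a Keystone v3 call. A minimal sketch of the request body POSTed to `/v3/projects` is shown below; the description text is an illustrative assumption.

```python
# Body for POST /v3/projects (Keystone v3) -- what the Create Project
# dialog submits on your behalf.
project_body = {
    "project": {
        "name": "Adam-project",
        "description": "Tenant for user Adam",  # optional, illustrative
        "enabled": True,                        # the Enabled checkbox
    }
}
```

Adding a member as done above is a separate role assignment, made with a call of the form `PUT /v3/projects/{project_id}/users/{user_id}/roles/{role_id}`.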


Create a second project


Repeat the steps you took to create the previous project.
1. Call this project: Susan-project.
2. Make sure to add the admin user to the project
You should have two new projects in your project window


Quotas for susan-project


Quotas are used to set operational limits on the resources assigned to a project. By
implementing quotas, OpenStack cloud administrators can predictably allocate capacity
to tenants and prevent one tenant from monopolizing shared resources.
1. Find the Susan-project in the list.
2. Click on the More button in the pull-down menu under the Projects tab and
select Modify Quotas.


Quota editing
Optional: Explore the default quotas for this tenant. Click Save when done.
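The same quota changes can be made against the Nova API. As a hedged sketch, this is the shape of the body sent with `PUT /v2.1/os-quota-sets/{project_id}`; the limits shown are illustrative values, not the lab defaults.

```python
# Body for PUT /v2.1/os-quota-sets/{project_id} (Nova) -- the API-side
# equivalent of Horizon's Modify Quotas dialog for compute resources.
quota_body = {
    "quota_set": {
        "instances": 10,   # max concurrent instances (illustrative)
        "cores": 20,       # max vCPUs across all instances
        "ram": 51200,      # max RAM, in MB
    }
}
```

Neutron and Cinder keep their own quota sets, updated through their own APIs.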

Creating a New User


We will now create a user for the previously created project.


1. Click the Admin tab.
2. Select the Identity Panel tab.
3. Select the Users tab.
4. Click Create User to display the user menu.


Creating user Adam


Enter or select the following data into the fields to create a user named Adam.
1. Username: Adam
2. Email: adam@raindrop.com
3. Password: VMware1!
4. Primary Project: Adam-Project
5. Role: Leave as default _member_
6. Click the Create User button
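Behind the form above sits another Keystone v3 call, sketched here as the body of `POST /v3/users`. The `default_project_id` field corresponds to the Primary Project selection and is a placeholder here; the `_member_` role is granted through a separate role-assignment call rather than in this body.

```python
# Body for POST /v3/users (Keystone v3) -- what the Create User form submits.
user_body = {
    "user": {
        "name": "Adam",
        "email": "adam@raindrop.com",
        "password": "VMware1!",
        # Placeholder: resolved from the project-create response.
        "default_project_id": "<Adam-project-uuid>",
        "enabled": True,
    }
}
```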


Creating user Susan


Now, create a new user named Susan. Follow the steps taken previously in creating the
Adam user, but with the following info.
1. Username: Susan
2. Email: susan@raindrop.com
3. Password: VMware1!
4. Primary Project: Susan-Project
5. Role: Leave as default _member_
6. Click the Create User button


User created...now sign out


You should now have your two new users, along with the built-in accounts, in the Users
panel. We now need to sign out as Admin and sign in with another user.
1. Click on Sign Out

Creating User's Instance


An instance is OpenStack's terminology for a virtual machine. From Horizon, users
can provision instances and attach them to existing OpenStack networks. In this section,
we will illustrate the process of creating instances from OpenStack.


Login as Adam
Now that you have logged out as admin, you will need to login as user Adam to create
your new Instance.
Log into the Horizon Web UI, this time using the following credentials:
1. User Name: Adam
2. Password: VMware1!


User Overview
From the overview section, you are shown how much of the user's current quota limits
have been used.
Since we haven't done anything yet, all categories show 0 resources used except for
Security Groups. One security group is used by the Internal Shared network available
to all users for the purposes of this lab. We will revisit networking in greater detail later
on.


Launch an Instance
1. Click on the Instances tab, on the left hand side.
2. Then click on the Launch Instance button.


Launch Instance settings


Under the Details tab, fill in the following fields.
1. Instance Name: HOL
2. Flavor: From the pulldown menu select m1.small
3. Instance Count: 2
4. Instance Boot Source: From the pulldown menu select Boot from Image
5. Image Name: From the pulldown menu select ubuntu-14.04-server-amd64 (860,8
MB)
6. Click on the Access & Security tab


Launch Instance - Access & Security


1. Ensure that default is selected under Security Groups.
2. Click on the Networking tab.

Launch Instance Networking


1. Click on the + button next to the TestNet network.
2. Click the Launch button to create the instance.
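The Launch Instance wizard gathers all of these choices into a single Nova call. As a sketch, this is the shape of the body POSTed to `/v2.1/servers`; the UUIDs are placeholders that the wizard resolves from the flavor, image, and network you selected.

```python
# Body for POST /v2.1/servers (Nova) -- the request behind the Launch button.
server_body = {
    "server": {
        "name": "HOL",
        "flavorRef": "<m1.small-flavor-uuid>",      # placeholder UUID
        "imageRef": "<ubuntu-14.04-image-uuid>",    # placeholder UUID
        "min_count": 2,  # Instance Count: 2 in the dialog
        "max_count": 2,
        "networks": [{"uuid": "<TestNet-network-uuid>"}],
        "security_groups": [{"name": "default"}],
    }
}
```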


Instances built
Now you can view the instances being built.
1. When the build is complete, click on the top Instance to view details of the build.


Instance Details
Here you can find more details about the instance such as the IP provisioned and the
unique ID that Openstack provisioned for this instance. When done, go back the
instance table view.


Instance options
1. Click on the More button for the first Instance.
Here you can find all the options that are available to you.

Overview of Instances
Now, let's go back to the overview screen and see how it has been updated.
1. Click on the Overview link on the left side of the page.
You can now see that graphs have been updated to reflect the new instances that have
been created.


Volumes
Why do we need volumes at all? In OpenStack, the instance you have provisioned
already has a local disk, but this disk will not persist if the instance is terminated.
Imagine a workload where 1-hour of computation needs to occur at the end of each
business day. Ideally, you would like to only spin up the instance when necessary for
1-hour per day. However, if you were only using a local disk, you would lose any data
you generated between runs. This is where volumes come in: they are persistent storage that can be attached and detached on-demand to any running VM.
1. Click on the Volumes tab within the Project pane on the left-hand side of the
screen.
2. Click the Create Volume button
This will start your creation of a persistent volume


Create volume
Fill in the following:
1. Volume Name: data-volume1
2. Size (GB): 10
3. Click on Create Volume


New Volume
Please wait as your volume is deployed. Wait until the status changes to Available.
1. Click on the More button
2. Select Edit Attachments
We will now attach the volume to an Instance


Attach Volume to an Instance


1. Select the top Instance in your list
2. Click Attach Volume


Volume attached
Now you return to the Volumes page.
Wait for the Volume to show In-Use.
Once the Volume is attached, you will see /dev/sdb as the attach point.
Please remember the Instance that has the volume attached to it. You will need this info
in the next step.

Start console through OpenStack


1. Click on Instances
2. Select the Instance that has the volume attached and click on the More dropdown
menu
3. Select Console


Log into VM
1. Click on the link to show only the console
2. Press Enter to be taken to the login prompt
Note: If the screen is not blue and is showing an SMBus controller error, please give the image more time to boot.


Login to the VM
Login with the following
username: root
password: vmware

View Disk Details


At the command prompt, type in the following command:
1.1.1

df -h

You will notice that the second hard drive is not showing up in the listing of devices
attached on the VM. This is because /dev/sdb has not yet been scanned, formatted or
mounted.

Format and mount new volume


Run the following command at the prompt to have the OS rescan for attached disk
devices:
1.1.2

echo "- - -" > /sys/class/scsi_host/host2/scan

There is a space between each "-". Once you see output indicating it found /dev/sdb, press Enter to get a new prompt.


NOTE: If nothing shows up, try replacing host2 with host0 or host1.
Now that you have found the new drive, you need to format the disk.
1.1.3

mkfs -t ext3 /dev/sdb

1.1.4 Type y when prompted to format the drive


Create Mount Point and Mount Volume


Now make a directory on the new drive:
1.1.5

mkdir /mnt/persistent-data

Then mount the drive:


1.1.6

mount /dev/sdb /mnt/persistent-data

Lastly, let's check to see if the drive is mounted.


1.1.7

df -h

You should now see /dev/sdb in your list.

Test File
To test that persistent volumes are working, we will create a file on both the persistent and the non-persistent volume. Then we will attach the persistent volume to a different instance.
Make sure to press Enter after each command.
1.1.8

echo "Hello non-persistent World" > /root/test-file1.txt

1.1.9

echo "Hello persistent World" > /mnt/persistent-data/test-file2.txt


Edit Volumes
Now that we have formatted the drive and created the test files, we will detach the volume from our instance and attach it to the other instance.
Click the back button on your browser to get out of the full screen console and return to the Volumes screen in OpenStack.
1. Click on the More button and select Edit Attachments

Detach Volume
1. Click on the Detach Volume
This will detach the volume from your existing Instance and allow you to attach it to
another.


Confirmed Detach Volume


1. Click on Detach Volume when prompted to confirm.

Volume status change


Now you will notice that the Attached To field is empty.

Volume available to attach


Now you will attach the Volume to the other instance and test to see if the file is there.
1. Click on the More button and select Edit Attachments


Change volume attachment


1. This time select the lower Instance in your pull down list.
2. Click Attach Volume

Volume attached
Now you see that your volume is attached and ready to use on the other Instance. You
will also notice the mounting point of this drive.


Start console through OpenStack


1. Click on Instances
2. Select the Instance that now has the volume attached and click on the More
dropdown menu
3. Select Console


Log into VM
1. Click on the link to show only the console
2. Press Enter to be taken to the login prompt


Login to the VM
Login with the following
username: root
password: vmware


Rescan for new volume


Run the following command at the prompt to have the OS rescan for attached disk
devices:
1.2.1

echo "- - -" > /sys/class/scsi_host/host2/scan

There is a space between each "-". Once you see output indicating it found /dev/sdb, press Enter to get a new prompt.
NOTE: If nothing shows up, try replacing host2 with host0 or host1.
The drive was already formatted on the first instance, so this time you only need to create a mount point and mount it. Type the following:
1.2.2

mkdir /mnt/persistent-data

1.2.3

mount /dev/sdb /mnt/persistent-data

1.2.4

df -h

Look for file on Volume once we mount it


Now to check if the file we created is still there.
1.2.5

ls /root

Nothing there. The test-file1.txt file was written to the first instance's local (non-persistent) disk, so it is not available on this instance.
1.2.6

ls /mnt/persistent-data


Here you will see the files on the volume, including the test-file2.txt file we created. To see its content, enter the following command:
1.2.7

cat /mnt/persistent-data/test-file2.txt

Your output should show Hello persistent World.


Click the back button on your browser to end the full screen console.

vCenter client and OpenStack


Now, we will go into the vCenter Client and see what information is shared between
OpenStack and vCenter.
Go to your vCenter Client tab in your browser, or open a new tab and click on vCenter.
1. Click on one of your Instances whose name starts with HOL

Metadata within vCenter for OpenStack


Scroll down to the OpenStack VM section


There you can see the information that was found in OpenStack Horizon is available
within vCenter.


Shell VM for Cinder Volume VMDK


Notice there is a VM in the inventory that is in a powered-off state and has a name starting with "data-volume".
1. Click on this VM name in the inventory and view the Summary tab.
Notice in the 'VM Hardware' window, this VM has a single hard disk with a size of 10 GB
that matches the size of the Cinder volume we created. This is a "shell" VM to house the
10 GB VMDK corresponding to the Cinder volume in scenarios when the volume is not
attached to any "real" running VM.


Cleaning up Instances
We will now remove the instances used in this module.
Return to your OpenStack Horizon webpage and log in as the user Adam, if you were logged out.
1. Click on the Instance tab
2. Select both instances
3. Click on Terminate Instances

Deleting Instances
You should now see the task for each instance change to Deleting.
Once the task is done, no Instances should be seen in your table.


Verify VM Deletion in vCenter


Return to vCenter and verify that the instances have now been removed. Notice that the data-volume shadow VM is still there.

Conclusion
This concludes Module 1 of our lab.
You may continue now to Module 2.


Module 2: VIO Networking - (60 Minutes)


Introduction
In the traditional model of networking, users attach VMs to existing networks, which are mostly hardware-defined. However, relying on hardware-defined, pre-existing networks makes a private cloud inflexible, hinders scalability, and doesn't support the majority of cloud use cases. Cloud users need the flexibility to create network topologies and modify network access policies to suit their applications.
In most IaaS/SaaS environments, services such as Web, Application and Database Servers are all required to run on different L2 networks. Additionally, while Web Servers need to be accessible from the internet, Application and Database Server VMs need to block internet access. These types of customized network topologies and network access controls are provided by VMware NSXv through the OpenStack Neutron plug-in available with VMware VIO.


VIO Architecture with NSXv Focus


VMware VIO supports two deployment options for networking. One option utilizes the vDS with the more traditional VLAN-backed port-groups, and the other uses VMware NSXv and VXLANs. In this module we will be focusing on the VIO + NSXv model and its features. With that said, this module assumes the lab user has some background and a basic understanding of VMware NSXv and/or has taken other NSXv-related HOLs.
Some of the many benefits of VIO + NSXv include:
Programmatic provisioning of network and security services can result in greater agility and visibility for your network and security infrastructure, in addition to simplified operation and lower CapEx.
Advanced security and multi-tenancy (Micro-Segmentation).
Advanced virtualized network services with massive scale and throughput (routing, security groups, QoS).
Integration with third-party network services such as load balancers and firewalls (e.g., Arista, f5, and more).

NSXv Architecture with VIO Consumption


VMware NSXv brings many benefits when we compare it to a traditional OpenStack networking configuration relying on VLANs:


Scale
Scale-Out Cloud Infrastructure: 10,000 VMs (per vCenter), thousands of tenants,
1,000 hypervisors supported.
Very High throughput: 20 Gbps per hypervisor (with 2x10Gbps NIC bonding).
Optimized traffic path, thanks to distributed L3 and Security.
Management and Operations
HA of all the management services.
HA of all the network services.
Management and Monitoring tools (statistics, port monitoring, port mirroring, port
connection tools, seamless upgrade).
Advanced Network Services

Static Routing.
L2 Bridging (logical with physical).
Access Control List on Logical Router.
QoS.
Optimization of Broadcast/Multicast Traffic.

In summary, NSX offers:


A centralized control plane, highly available and scalable (Controller Cluster).
A management plane interface to configure and provision the environment (NSX Manager). This management plane, as the figure shows, is also the API entry-point into the NSX domain. It can be leveraged by Cloud Management Platforms (CMPs) for integration and automation. Examples of these CMPs are: VMware Integrated OpenStack (VIO), vRealize Automation (vRA) and vCloud Director (vCD).
A scale-out cluster of Layer 3 gateways (NSX Edge Services Gateway).
An encapsulation protocol for network virtualization (VXLAN), which delivers high-performance, vendor-independent transport on any physical fabric architecture.
This module will give you an overview of key NSXv benefits that empower cloud users to
realize custom network topologies and control various aspects of network access.
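To make the VXLAN point above concrete: each L2 frame is wrapped in a UDP packet carrying an 8-byte VXLAN header whose 24-bit VXLAN Network Identifier (VNI) distinguishes logical networks, which is why VXLAN scales to roughly 16 million segments versus 4,094 VLANs. The following is our own illustrative sketch of that header layout (per RFC 7348), not NSX code:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    # First 32-bit word: flags byte 0x08 (VNI-present bit) followed by
    # 24 reserved bits. Second word: the 24-bit VNI, then a reserved byte.
    return struct.pack("!II", 0x08000000, vni << 8)

hdr = vxlan_header(5001)
print(len(hdr))  # 8 -- VXLAN adds only an 8-byte header on top of UDP
```

The small, fixed overhead is part of what lets the encapsulation run at the high per-hypervisor throughput quoted above.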


Overview Of Module Objectives


Module 2 is broken down into three main sections:
1. Basic Virtual Networking
In this section, we focus on building out a few instances (OpenStack lingo for VMs) that connect into a couple of virtual networks, with a logical router providing connectivity between virtual networks and an external path out. We also demonstrate how configurations made in Horizon Dashboard get translated via the Neutron Plugin into NSX in vCenter.
2. Security Groups & Micro-Segmentation
In this section, we focus on creating and understanding Security Groups and also
implement Micro-Segmentation. VIO together with NSX provides not only a Distributed
Firewall feature-set but also Micro-Segmentation where a Security Group policy can be
used to allow or disallow access between instances on the same L2 network. This
feature has become very important and increasingly popular in setting appropriate
security boundaries without having to rely on traditional L2 boundaries.
3. Advanced Networking
In this section, we focus on setting up Static Routing, Enabling/Disabling NAT and
Distributed Routing. Most of the Advanced Networking section has to be completed via
Neutron CLI since Horizon Dashboard does not yet have workflows for them. We also switch between CLI and NSX in vCenter to demonstrate that the Neutron Plugin is properly mapping commands over to NSX. Distributed Routing is probably the most interesting feature in this section since no other vendor can do this today.


Environment Setup
The objective of this section is to provide steps to get web browser tabs opened to the
appropriate portal pages in preparation for the rest of the module.

Check Lab Status


You must wait until the Lab Status is at Ready before you begin. If you receive an
error message, please end the lab and redeploy another.

Clean Up (If Necessary)


If you are starting this module and have previously completed other modules of this lab, please make sure to delete and remove any artifacts that may be left over. While the modules in this lab are related to one another and arranged in an intuitive chronological order, they are also designed to be autonomous and self-contained, and do not build on one another. This means you do not need to take Module 1 in order to take Module 2, etc.

Launch Web Browser


Click to launch the Google Chrome web browser icon located on your HOL desktop.

vSphere Web Client


1. Click on the vCenter Web Client bookmark to open the vSphere Web Client in
a new tab. (It may already be open.)
2. Type your Username: administrator@vsphere.local
3. Type your Password: VMware1! (case-sensitive)


4. Click the Login button.


Please Note: The first time you log in to the vSphere Web Client may take a bit longer, in some cases up to a minute.

Create New Tab


Click to create a new web browser tab.

Horizon Dashboard
1. Click on the Login - VIO bookmark to open the Horizon Dashboard login portal.
2. Type your User Name: admin
3. Type your Password: VMware1! (case-sensitive)


4. Click the Sign In button.


Please Note: The first time you log in to the Horizon Dashboard may take a bit longer, in some cases up to a minute.
Congratulations HOL user, you have successfully completed this section!!!


Basic Virtual Networking


The objective for this section is to create a few instances that connect to a couple of virtual networks, tied together with a router for external connectivity.

View Current Network


First let's see what logical networks already exist.
1. Click on Project pane.
2. Click on Network sub-pane.
3. Click on Networks.
We can see that two networks have already been pre-created for us. The first is an
External Network which has a special designation and will serve as our gateway out of
OpenStack. The second is a regular logical network named TestNet.


Create Network (Virtual)


Click the + Create Network button to start the workflow.

Network Name
1. Create a network named "web-tier".
2. Confirm the Admin State checkbox is ticked. (If this checkbox is not ticked, the network will not forward packets.)
3. Click the Next button.


Subnet and Network Address


1. Confirm that the Create Subnet checkbox is ticked.
2. Type in "web-subnet" for the Subnet Name field.
3. Type in "11.0.0.0/24" for the Network Address field.
4. Click the Next button.


Provide Subnet Detail


The Subnet Detail tab offers us the opportunity to configure DHCP, DNS Name Servers
or Host Routes.
1. Confirm that Enable DHCP checkbox is ticked.
2. Type in "11.0.0.10,11.0.0.19" as the IP range for the Allocation Pools field.
3. Click the Create button to complete this step.
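Behind the scenes, Neutron's DHCP service simply hands out leases from the allocation pool you typed in, so the pool must fall inside the subnet's CIDR. As a quick sanity check of the values used in this lab, here is a small helper using Python's standard ipaddress module (the helper function is our own illustration, not part of OpenStack):

```python
import ipaddress

def pool_in_subnet(cidr: str, pool_start: str, pool_end: str) -> bool:
    # A DHCP allocation pool is valid only if both endpoints fall inside
    # the subnet and the range is not inverted.
    net = ipaddress.ip_network(cidr)
    start = ipaddress.ip_address(pool_start)
    end = ipaddress.ip_address(pool_end)
    return start in net and end in net and start <= end

# The "web-subnet" values from this lab: ten addresses for instances.
print(pool_in_subnet("11.0.0.0/24", "11.0.0.10", "11.0.0.19"))  # True
```

Note that the pool deliberately leaves 11.0.0.1 free; we will see later that the logical router claims that address as the subnet gateway.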


Confirm Network Creation


You should now see your newly created "web-tier" network in the list of available
networks. It is already in the ACTIVE state.
You can easily add more subnets or completely delete the existing network by first
clicking on More and choosing the corresponding action.
For now, click on the "web-tier" link to get all the details regarding this network
segment.


Network Detail
This Network Detail view allows you to add/delete Subnets or Edit Ports. You can also
come back here later to see a new port being added when an attachment is made with
a Logical Router.
This view also allows you to see the ID tied to the network. This ID is useful when
troubleshooting and will help you directly correlate the entry to the NSX logical switch in
vCenter.
Note the ID above for this network. (Your lab may differ since the ID is randomly
generated)


Compare ID with NSX in vCenter


1. Swap tabs to the vSphere Web Client to see how this OpenStack Network appears in NSX.
2. Click the Network & Security icon.


NSX Logical Switches


Click on the Logical Switches menu item in the Navigator window to see the list of NSX logical switches.
Note that the ID from Horizon Dashboard matches the ID of the Logical switch created
by NSX. NSX receives these configurations through API calls from Horizon Dashboard via
the Neutron plugin.


Network Topology
1. Swap tabs back to Horizon Dashboard.
2. Click on Network Topology.
You should see your newly created "web-tier" logical network which isn't connected to
anything yet.
Please Note: The "External-Network" was pre-created in your lab by OpenStack admins and shared with all Projects to provide external connectivity to your applications.


Create Logical Router


We now need to create a Logical Router to route traffic from "web-tier" to the
"External-Network". All the VM's connected on the "web-tier" logical network will be
using this router as the default gateway.
Click on Create Router button.

Complete Logical Router


1. Type in "tenant-router" for the Router Name field.
2. Click Create Router button.

Confirming New Router


You should have seen a light green popup message with a Success message saying:
"Router tenant-router was successfully created." As you can see in the Network Topology
view, your router is not connected to anything yet.
Please Note: There is a bug in the Icehouse release of OpenStack where newly created
routers may not correctly show the networks they are connected to in the network
topology view. Since VIO currently utilizes Icehouse we are affected by this OpenStack bug as well. We were even affected by this during the development of this manual and
had to add/delete the ports several times to the router to get the correct screenshots.
This bug will be addressed in the next release of VIO since the next release will be based
on the Kilo release of OpenStack.

Set Router Gateway


1. Click on Routers menu item to see the list of created routers.
2. Click on Set Gateway button.


Connect External Network


1. Select "External-Network" from the External Network drop-down field.
2. Click the Set Gateway button.
You should see a message saying: "Success: Gateway interface is added."

Connect Router To Logical Network


Click "tenant-router" to view the Router Details.


Add Router Interface


Click on + Add Interface button.

Select Subnet
1. Select "web-tier: 11.0.0.0/24 (web-subnet)" in the Subnet drop-down field.
2. Click the Add interface button.
A message saying "Success: Interface added 11.0.0.1" will appear shortly.


Confirm Router and Network Attachment


Now let's navigate to the Network Topology view to see how the newly created "web-tier" network looks attached to our "tenant-router".
1. Click on Project pane.
2. Click on Network sub-pane.
3. Click on Network Topology.
If everything was completed correctly you should see the "tenant-router" connected to
two networks:
"External-Network"
"web-tier"
Please Note: There is a bug in the Icehouse release of OpenStack where newly created
routers may not correctly show the networks they are connected to in the network
topology view. Since VIO currently utilizes Icehouse we are affected by this OpenStack
bug as well. We were even affected by this during the development of this manual and
had to add/delete the ports several times to the router to get the correct screenshots.
This bug will be addressed in the next release of VIO since the next release will be based
on the Kilo release of OpenStack.


Build One More Network


Now let's take everything we have learned so far and create a new "app-tier" network and attach it to the same "tenant-router", just as we did with the "web-tier" network. Below you will find all the details needed to create the new "app-tier" network.

Network Name: app-tier


Subnet Name: app-subnet
Network Address: 12.0.0.0/24
DHCP Allocation Pool: 12.0.0.10,12.0.0.19
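Before creating the second network, it is worth noting why these particular values work: the two tenant subnets must not overlap for one router to route between them, and Neutron by default assigns the first usable host address of each subnet to its gateway port. A small illustration using Python's standard ipaddress module (our own sketch, not OpenStack code):

```python
import ipaddress

web = ipaddress.ip_network("11.0.0.0/24")  # web-subnet from this lab
app = ipaddress.ip_network("12.0.0.0/24")  # app-subnet from this lab

# Non-overlapping CIDRs mean the shared router has an unambiguous
# route to each tier.
print(web.overlaps(app))   # False

# The first usable host of each subnet becomes the gateway, which is
# why the router interfaces come up as 11.0.0.1 and 12.0.0.1.
print(next(web.hosts()))   # 11.0.0.1
print(next(app.hosts()))   # 12.0.0.1
```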

Confirm New Network


Now let's navigate to see what networks have been created and whether the new "app-tier" network is ACTIVE.
1. Click on Project pane.
2. Click on Network sub-pane.
3. Click on Networks.
If everything was completed correctly you should see a new network created in the
ACTIVE status.


Confirm Router Attachment to New Network


Now let's navigate to see what networks the "tenant-router" is connected to:
1. Click on Project pane.
2. Click on Network sub-pane.
3. Click on Routers.
If everything was completed correctly you should see a new interface created with a
"12.0.0.1" IP Address with Admin State of UP.

Confirm in Topology View


Now let's navigate to the Network Topology view to see what networks the "tenant-router" is connected to:
1. Click on Project pane.
2. Click on Network sub-pane.
3. Click on Network Topology.
If everything was completed correctly you should see the "tenant-router" connected to
three networks:
"External-Network"
"web-tier"
"app-tier"
Please Note: There is a bug in the Icehouse release of OpenStack where newly created
routers may not correctly show the networks they are connected to in the network
topology view. Since VIO currently utilizes Icehouse we are affected by this OpenStack
bug as well. We were even affected by this during the development of this manual and
had to add/delete the ports several times to the router to get the correct screenshots.
This bug will be addressed in the next release of VIO since the next release will be based
on the Kilo release of OpenStack.


Build a Few Instances


Now let's navigate to the Image view to see what images are currently available to us. A basic Ubuntu image has been pre-created for us. Let's use this image to create several instances.
1. Click on Project pane.
2. Click on Compute sub-pane.
3. Click on Images in menu.
4. Click the Launch button to create an instance from the current image.

Instance Details
1. Type "WebSvr1" into the Instance Name field. (Leave everything else as default.)
2. Click over to the Networking tab.


Instance Networking
1. Drag the "web-tier" network from the Available networks field over to the
Selected Networks field to connect the instance to the "web-tier" network.
2. Click on the Launch button.

Confirm Instance Creation


Once the instance creation is kicked off, the screen will shift to the Instances view and if
everything worked correctly you should see the newly created "WebSvr1" instance in
the Running state under the Power State column.


View Network Topology With New Instance


Now lets navigate to the Network Topology view to see our new instance connected to
the "web-tier" network. If everything is completed correctly, your lab topology should
match the screenshot above.
1. Click on Project pane.
2. Click on Network sub-pane.
3. Click on Network Topology.
4. Click the view instance details link to see additional details for the instance.


Instance Details View


A new screen will pop up with instance details such as name, ID, specs, IP addresses, security group membership, etc. This view is really useful for drilling down into which security groups a particular instance is a member of, or if you need to correlate a specific instance to its VM in vCenter. (Similar to what we did by matching the ID of a network from Horizon to NSX in vCenter.)


Build Additional Instances


Let's take what we have learned in building instances and create a few more to test network connectivity and for our security section. Please use the following details to create additional instances:
Create another instance called "WebSvr2" attached to the "web-tier" network, just like the previous example.
Create another instance called "AppSvr1", but this time attached to the "app-tier" network.
If completed correctly, your Network Topology should closely reflect the screenshot in this step. Let's take a look:
1. Click on Project pane.
2. Click on Network sub-pane.
3. Click on Network Topology.
4. Click on Normal to switch the view to a more detailed view with names and IP addresses.

Test Connectivity
Let's navigate to the Instances view so that we can open a console to "WebSvr1" and test connectivity.
1. Click on Project pane.
2. Click on Compute sub-pane.


3. Click on Instances menu item.


4. Click on the More button for "WebSvr1".
5. Click on Console in the opened menu.
Note the various IP addresses assigned to our three instances by DHCP for the "web-tier" and "app-tier" networks.
"WebSvr1": IP Address: 11.0.0.11 ; Default Gateway: 11.0.0.1
"WebSvr2": IP Address: 11.0.0.12 ; Default Gateway: 11.0.0.1
"AppSvr1": IP Address: 12.0.0.11 ; Default Gateway: 12.0.0.1


Login To Console
You may have to bypass certificate checking by clicking on "Proceed Anyway."
Once inside the console window, authenticate to "WebSvr1" using:
Login: root
Password: vmware

Ping Instance
First check the IP address of the "WebSvr1" instance you've selected (Should be
11.0.0.11):
ifconfig
Ping the other test instance IP address (11.0.0.12 in our example).
Please Note: If you forgot the IP addresses of your test instances, you can find them on the Instances list in the Horizon Dashboard or in the previous step.
ping -c 2 11.0.0.12
You can also ping the "tenant-router" which happens to be the instances' default
gateway.
ping -c 2 11.0.0.1
We have not mentioned Security Groups yet (that section comes next); however, it is important to understand that the default policy for Security Groups allows all instances to have reachability within the same project.


Let's test that and ping the "AppSvr1" instance from our current "WebSvr1" console
window.
ping -c 2 12.0.0.11
These connectivity tests validate that due to the current default Security Group settings,
all three instances can communicate across both L2 and L3 boundaries.
Close the console window.
Please Note: Some of our pings show a (DUP!) at the end of the string; this is due to the nested nature of the HOL environment. You would not see this in a normal production environment.
Congratulations HOL user, you have successfully completed this section!!!


Security Groups & Micro-Segmentation


Security Groups are sets of IP filter rules that are applied to an instance. VIO together with NSX provides not only a Distributed Firewall feature-set but also Micro-Segmentation, where a Security Group policy can be used to allow or disallow access between instances on the same L2 network. This feature has become very important and increasingly popular in setting appropriate security boundaries without having to rely on traditional L2 boundaries.
For more information on NSX Micro-Segmentation please consider taking the HOL-SDC-1603 lab.
All projects in OpenStack have a default security group. Let's review our current rule set.

Security Groups Objectives


In this section our objective is to create a set of security groups that limit communication between instances and tiers, following common security practices for multi-tier applications. We have three instances that we will be using to demonstrate connectivity: two instances on the "web-tier" named "WebSvr1" and "WebSvr2", along with one instance on the "app-tier" named "AppSvr1".
To demonstrate micro-segmentation, we will first limit traffic between our two web server instances that reside on the same L2 network called "web-tier". Most security professionals would agree that there is no real benefit for web servers to communicate with one another. The primary responsibility of the web server is to take requests from the outside and deliver content from the application server back to the original requestor.
To demonstrate the distributed firewall across L2 boundaries, we will limit traffic to the "app-tier" so that only traffic to/from the "web-tier" is permitted. No other traffic will be allowed to the "app-tier".

Assign Floating IP
Typically web servers need to be reachable from the outside and are often provided an external address with NAT. In OpenStack this feature is called a Floating IP. In order for us to test connectivity from our desktop CONTROLCENTER, we need to make sure both of our web servers have a reachable address.
To add a Floating IP we need to navigate to our Instances view and add it to both of
our web servers "WebSvr1" and "WebSvr2".
1. Click on Project pane.
2. Click on Compute sub-pane.
3. Click on Instances.


4. Click the More button.


5. Click on Associate Floating IP menu item.

Allocate A Floating IP
Since we have not allocated any Floating IPs, the workflow provides an option for us.
Click on the + button.


Allocate A Floating IP Part 2


1. Select "External-Network" from the drop-down menu.
2. Click the Allocate IP button.
Note how the allocation process selects an available IP from a pre-defined quota on the
"External-Network" network range.

Assigning Floating IP To Instance


1. Select the newly allocated Floating IP from the IP Address drop-down menu. (In
our example "192.168.100.101")
2. Select the "WebSvr1: 11.0.0.11" Instance from the Port to be associated drop-down menu.
3. Click the Associate button.


Confirm Floating IP Creation


If completed correctly you should see the Floating IP address appear as an additional
address for the "WebSvr1" instance under the IP Address column.

Create Additional Floating IP


Let's take what we have learned so far with Floating IPs and create an
additional Floating IP for the "WebSvr2" instance on our own.
If you have completed this step correctly, your lab should closely resemble the
screenshot above.


Access & Security


To view the current rules of the default Security Group, we must navigate to the Access & Security
view.
1. Click on Project pane.
2. Click on Compute.
3. Click on Access & Security.
4. Click the Manage Rules button.

Detailed View Of default Security Group


As you can see, all traffic leaving the instance is allowed (Egress) but only traffic from
the same Security Group (default) will be allowed to enter the Instances (Ingress).


Create Security Group


Let's navigate back to the Access & Security view so that we can create a new Security
Group.
1. Click on Project pane.
2. Click on Compute.
3. Click on Access & Security.
4. Click the + Create Security Group button.

Security Group Name


1. Type "web-sg" in the Name field.
2. Type "Web Server Security Group" in the Description field.
3. Click the Create Security Group button.

Manage Rules
If the creation of the Security Group was correctly completed, you should see a new
Security Group named "web-sg" in the list.


Click on the Manage Rules button to modify the ruleset.

Delete Existing Rules


1. Tick the checkbox to select all current rules.
2. Click the Delete Rules button to delete all existing rules.

Confirm Delete Rules


Click the Delete Rules button to confirm deletion of all existing rules.
VIO with NSX has an implicit deny in its firewall, so unless communication is explicitly
allowed, it will be blocked. By deleting all existing rules in the "web-sg" Security Group we
effectively block all communication in, out, and between any instances that are
members of the "web-sg" Security Group.
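The evaluation model described above (allow only traffic that matches a rule; deny everything else) can be sketched in a few lines of illustrative Python. This is a conceptual model, not VIO or NSX code, and the rule fields are simplified.

```python
# Illustrative security-group evaluation: a packet is allowed only if
# some rule matches it; otherwise the implicit deny applies.
def evaluate(packet, rules):
    for rule in rules:
        if (rule["direction"] == packet["direction"]
                and rule["source_group"] == packet["source_group"]):
            return "ALLOW"
    return "DENY"  # implicit deny: no matching rule means blocked

web_sg_rules = []  # we just deleted every rule from "web-sg"
pkt = {"direction": "ingress", "source_group": "web-sg"}
print(evaluate(pkt, web_sg_rules))  # -> DENY
```

With an empty rule set, every packet falls through to the implicit deny, which is exactly the state "web-sg" is in right now.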


Create Another Security Group


Let's take what we have learned in creating Security Groups and create another
Security Group named "app-sg" the exact same way as "web-sg". Make sure to delete
all existing rules inside the newly created "app-sg" Security Group.
Create Security Group:
Name: "app-sg"
Description: "Application Server Security Group"
Delete all existing rules
If completed correctly, your lab should closely resemble the screenshot above.

Modify web-sg Security Group


Click on the Manage Rules button for "web-sg" so we can start adding some new
rules.


Add New Rules


Our next step is to create new rules that allow traffic from the outside to
reach members of "web-sg", followed by communication between "web-sg" members and "app-sg" members.
To summarize what we are doing:

Outside <--> web-sg via Floating IP (Allowed By Rule)
web-sg <--> app-sg (Allowed By Rule)
web-sg <--> web-sg (Implicitly Blocked)
app-sg <--> app-sg (Implicitly Blocked)
Outside <--> app-sg (Implicitly Blocked)

Click the + Add Rule button.


Add web-sg Ingress Rule


1. Select ALL ICMP from the Rule drop-down menu.
2. Select Ingress from the Direction drop-down menu.
3. Select CIDR from the Remote drop-down menu.
4. Select 0.0.0.0/0 from the CIDR drop-down menu.
5. Click the Add button.

Please Note: In a production deployment this would typically be restricted to ports 80/
443 for web servers; however, in our lab it is easier to use ICMP for testing.
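As an aside, the 0.0.0.0/0 CIDR selected above matches every possible IPv4 source address, which is why this rule admits ICMP from anywhere. Python's standard ipaddress module can confirm this (an illustration, not a lab step):

```python
import ipaddress

# 0.0.0.0/0 is the all-zeros prefix of length zero: every IPv4 address
# falls inside it, so a rule scoped to this CIDR matches all sources.
anywhere = ipaddress.ip_network("0.0.0.0/0")

print(ipaddress.ip_address("192.168.100.1") in anywhere)  # -> True
print(anywhere.num_addresses)  # -> 4294967296 (i.e. 2**32)
```

In production you would narrow this to the specific prefixes that should reach the web tier.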


Add New web-sg Rule


Click the + Add Rule button to add another rule.


Add Egress web-sg Rule


With this rule we allow communication from the "web-sg" to the "app-sg" Security Group.
1. Select ALL ICMP from the Rule drop-down menu. (We use ICMP since it is easy to test connectivity using ping.)
2. Select Egress from the Direction drop-down menu.
3. Select Security Group from the Remote drop-down menu.
4. Select "app-sg" from the Security Group drop-down menu.
5. Select IPv4 from the Ether Type drop-down menu.
6. Click the Add button.
Please Note: In a production deployment, the web server would be using a custom
application server port to access content. In our lab we are using ICMP to keep things
simple.

Confirm Rule Creation


You can see we have successfully created both rules for the "web-sg" Security Group.


Click on Access & Security menu item to go back to the list of Security Groups.

Modify app-sg Security Group


Click on Manage Rules button to add a new rule to the "app-sg" Security Group.


Add Rules for app-sg Security Group


Let's take what we have learned with Security Groups so far and create the last rule for
our "app-sg" on our own.
This new rule will only allow members of "app-sg" to receive traffic from the members
of "web-sg" Security Group.
Create an Ingress rule for ALL ICMP Traffic from "web-sg" Security Group.
If you correctly configured this rule, your lab should closely resemble what's in the
screenshot above.

Confirm Security Groups in vCenter


1. Swap tabs to the vSphere Web Client to see how these newly created Security
Groups in OpenStack appear in NSX.
2. Click the Network & Security icon.


Security Group Mapping In vCenter with NSX Firewall


Click on Firewall in the Networking & Security Menu to bring up the NSX FW rule set.
Note how Neutron Security Group rules are mapped to dedicated NSX Distributed
Firewall rules, organized into sections. Here we can see our "web-sg" and "app-sg"
Security Groups with all of their rules. At the bottom we see the implicit deny.


Add Instance To Security Group


Let's navigate back to the Instances view so that we can join our instances with our
newly created Security Groups.
1. Click on Project pane.
2. Click on Compute sub-pane.
3. Click on Instances.
4. Click the More button for "AppSvr1".
5. Click the Edit Security Groups menu item.


Edit Instance
Click on the - button to remove the default Security Group from the instance.


Edit Instance Part 2


1. Click on the + button to add the "app-sg" Security Group to the Instance.
2. Click on the Save button.

Add The Other Instances to their Security Group


Let's take what we have learned so far with adding instances to Security Groups and
add both "WebSvr1" and "WebSvr2" as members of the "web-sg" Security Group,
exactly as we did with "AppSvr1" in the last few steps.

Test Connectivity
Let's navigate to the Instances view so that we can open a console to "WebSvr1" and
test connectivity.
1. Click on Project pane.
2. Click on Compute sub-pane.
3. Click on Instances menu item.
4. Click on More button for "WebSvr1".
5. Click on Console in the opened menu.

Note the various IP addresses assigned to our three instances by DHCP for the "web-tier" and "app-tier" networks.
"WebSvr1": IP Address: 11.0.0.11 ; Default Gateway: 11.0.0.1


"WebSvr2": IP Address: 11.0.0.12 ; Default Gateway: 11.0.0.1


"AppSvr1": IP Address: 12.0.0.11 ; Default Gateway: 12.0.0.1

Login To Console
You may have to bypass certificate checking by clicking on "Proceed Anyway."
Once inside the console window, authenticate to "WebSvr1" using:
Login: root
Password: vmware

Ping Instance
First check the IP address of the "WebSvr1" instance you've selected (Should be
11.0.0.11):


ifconfig
Ping the other "WebSvr2" instance IP address (11.0.0.12 in our example above).
The expected behavior is that pings should be blocked between "WebSvr1" and
"WebSvr2" on the same L2 network due to our policy in Security Groups and with NSX
providing Micro-Segmentation capabilities.
Please Note: If you forgot the IP addresses of your test instances, you can find them
on the Instances list in the Horizon Dashboard.
ping -c 1 11.0.0.12
You can also ping the "AppSvr1" where the expected behavior is to allow pings since
they are being sourced from our "WebSvr1". If you recall our policy in the "app-sg"
Security Group allowed ICMP traffic to and from the "web-sg" Security Group.
ping -c 1 12.0.0.11


Open Command Prompt


Open a new window of Command Prompt conveniently located in your Windows
taskbar.


Ping Floating IP
Let's try pinging both of the Floating IPs that we assigned to "WebSvr1" and
"WebSvr2" at the beginning of this section.
In our lab screenshot example:
"WebSvr1" Floating IP address mapped to 192.168.100.101
"WebSvr2" Floating IP address mapped to 192.168.100.102
Please Note: The Floating IP mappings can vary from lab to lab. To find your
mapping, navigate back to the Instances view found under Compute
in the Horizon Dashboard navigation pane.
Test connectivity for "WebSvr1".
ping 192.168.100.101
Test connectivity for "WebSvr2".
ping 192.168.100.102
All of these connectivity tests validate that the objectives we set at the start of this
section were met.

Outside <--> web-sg via Floating IP (Allowed By Rule)
web-sg <--> app-sg (Allowed By Rule)
web-sg <--> web-sg (Implicitly Blocked)
app-sg <--> app-sg (Implicitly Blocked)
Outside <--> app-sg (Implicitly Blocked)

Close the console window.


Congratulations HOL user, you have successfully completed this section!!!


Advanced Networking
In this section, we focus on setting up Static Routing, enabling/disabling NAT, and
Distributed Routing. Most of the Advanced Networking section has to be completed via the
Neutron CLI, since the Horizon Dashboard does not yet have workflows for these features. We also
switch between the CLI and NSX in vCenter to demonstrate that the Neutron Plugin is
properly mapping commands over to NSX. Distributed Routing is probably the most
interesting feature in this section, since no other vendor offers it today.

Create A Static Route


The current desktop (CONTROLCENTER) you are on has Python for Windows pre-installed, along with the appropriate environment variables set, so that we can use the
Command Prompt to issue Neutron CLI commands.
Open a new window of Command Prompt conveniently located in your Windows
taskbar.


Create A Static Route Part 2


Currently the Horizon Dashboard does not support creating routes of any kind. However,
Neutron does support static routes via the Neutron CLI, and this is fully supported by NSX.
Let's create an arbitrary static route via the Neutron CLI and see how it maps to NSX in
vCenter.
Let's see what routers we presently have via Neutron CLI.
2.1.1 neutron router-list
Now, let's add a static route via Neutron CLI.

2.1.2 neutron router-update tenant-router --routes type=dict list=true destinat


Let's confirm the route was added and see if there are any other routes present.
2.1.3 neutron router-show tenant-router
Please Note: In this step we will not actually test connectivity for the static route,
so the destination and next-hop addresses are unrelated to the lab and chosen at random.
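Conceptually, a static route tells the router which next hop to use for a destination prefix, with the most specific (longest) matching prefix winning. The sketch below is illustrative Python only, not router code; the prefixes and next hops are arbitrary, just like the lab's random route.

```python
import ipaddress

# Illustrative static routing table: destination prefix -> next hop.
routes = {
    "0.0.0.0/0": "192.168.100.1",   # default route (arbitrary example)
    "10.20.0.0/16": "11.0.0.254",   # a static route (arbitrary example)
}

def next_hop(dst):
    """Longest-prefix match over the static routing table."""
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(p), nh) for p, nh in routes.items()
               if addr in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.20.5.5"))  # -> 11.0.0.254 (more specific prefix wins)
print(next_hop("8.8.8.8"))    # -> 192.168.100.1 (falls to default route)
```

The `router-update --routes` command above populates exactly this kind of table on the NSX Edge, as we will confirm next in vCenter.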


Navigate to NSX under vCenter


1. Swap tabs to the vSphere Web Client to see how the newly created Static Route
appears in NSX.
2. Click the Network & Security icon.

Inspect NSX Edge Details


1. Click on NSX Edges in the Navigator Menu.
2. Double-Click on an NSX Edge whose name starts with "shared..." to open its
settings.


NSX Edge Static Routing


1. Click on the Manage tab.
2. Click on Routing section.
3. Click on Static Routes.
Note that our static route created via Neutron CLI is mapped in the Static Route list
within NSX.
Let's navigate back to the Command Prompt and clear all static routes via Neutron
CLI to clean up.
2.2.1 neutron router-update tenant-router --routes action=clear

Disable SNAT (Source NAT)


Navigate back to the Command Prompt window so that we can issue Neutron CLI
commands.
List all current logical routers:
2.3.1 neutron router-list
List specific details for "tenant-router":
2.3.2 neutron router-show tenant-router
In VIO 1.0, SNAT is enabled by default and is the primary method for traffic to get in and out of
your OpenStack environment. By disabling SNAT you effectively block traffic originating
inside the tenant network from reaching the outside.
To disable SNAT, issue the following Neutron CLI command:
2.3.3 neutron router-gateway-set --disable-snat tenant-router External-Network


***IMPORTANT*** If you disable SNAT as part of this exercise, you must issue the
command in the "Re-Enable SNAT" step below to re-enable it; otherwise everything you have
built in Module 2 will no longer have external connectivity.
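Conceptually, SNAT rewrites the source address of packets leaving the tenant network to the router's external address, so that replies can find their way back. The following is an illustrative Python sketch, not VIO code; the external address is a hypothetical example.

```python
# Illustrative SNAT: outbound packets get their private source address
# rewritten to the router's external uplink address. With SNAT disabled,
# the private source would leak out unchanged and replies could not return.
SNAT_ENABLED = True
EXTERNAL_IP = "192.168.100.3"  # hypothetical router uplink address

def egress(packet):
    """Apply source NAT to a packet leaving the tenant network."""
    if SNAT_ENABLED:
        packet = dict(packet, src=EXTERNAL_IP)
    return packet

pkt = {"src": "11.0.0.11", "dst": "8.8.8.8"}
print(egress(pkt)["src"])  # -> 192.168.100.3
```

Floating IPs (DNAT) and SNAT are complementary: SNAT covers outbound-initiated traffic for instances that have no dedicated external address.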

Navigate to NSX in vCenter


1. Swap tabs to the vSphere Web Client to see how the SNAT change appears in NSX.
2. Click the Network & Security icon.


Inspect NSX Edge Details


1. Click on NSX Edges in the Navigator Menu.
2. Double-Click on an NSX Edge whose name starts with "shared..." to open its
settings.

NAT in NSX under vCenter


1. Click on the Manage tab.
2. Click on NAT.
We can see that the default SNAT entry no longer exists on the NSX-mapped ESG router,
since we disabled it via the Neutron CLI in the prior step. The only remaining NAT entry is VIO's internal
DNAT used for LB purposes.


Re-Enable SNAT (Source NAT)


***IMPORTANT*** If you disabled SNAT as part of this exercise, you must issue the
command below to re-enable it; otherwise everything you have
built in Module 2 will no longer have external connectivity.
List specific details for "tenant-router":
2.4.1 neutron router-show tenant-router
To re-enable SNAT, cut & paste the network ID string (unique to every lab) of your
"External-Network" into the command below:

2.4.2 neutron router-update tenant-router --external_gateway_info type=dict net

Distributed Routing Overview


Distributed Routing is unique because it enables each vSphere Compute host to perform
L3 routing in the kernel at line rate. The DLR is configured and managed like one logical
router chassis, where each host is like a logical line card. Because of that the DLR works
well as the device handling the East-West traffic in your virtual network. We want this
traffic to have low latency and high throughput, so it just makes sense to do this as
close to the workload as possible, hence the DLR. Since VIO is tightly integrated with
NSX via the Neutron Plugin we are able to take advantage of great NSX features such as
these within an OpenStack environment that otherwise would not be possible.
For more information on Distributed Logical Routing (DLR) in NSXv you can also take a
look at HOL-SDC-1603 lab.
Let's see how it works with VIO!


DLR Objectives and Topology


In the next few steps we will build a sample Logical Topology using the Neutron CLI and
Horizon Dashboard.
The sample topology is shown above:
2-tier Application ("web-tier" and "app-tier").
All inbound TCP traffic to "WebSvr1" and "WebSvr2" is allowed on external
routable IP (Floating IP).
ALL ICMP traffic allowed between "web-sg" to "app-sg" Security Groups.
(Created in prior section and will be reused here)
Use Distributed Routing for optimized East-West communications (CLI-provisioned,
as it is not yet supported by the Horizon Dashboard).
Use Centralized Routing for North-South connectivity.

Create Distributed Router


The current desktop (CONTROLCENTER) you are on has Python for Windows pre-installed, along with the appropriate environment variables set, so that we can use the
Command Prompt to issue Neutron CLI commands.
Open a new window of Command Prompt conveniently located in your Windows
taskbar.


Create Distributed Router Part 2


Create a distributed router named "dist-router" via Neutron CLI. (This operation is
currently not supported in the Horizon Dashboard.)
Type the following syntax into the Command Prompt:
2.5.1 neutron router-create dist-router --distributed True


Disassociate Floating IP's


Navigate to the Instance view so that we can disassociate the current Floating IP
mappings.
1. Click on Project pane.
2. Click on Compute sub-pane.
3. Click on Instances menu item.
4. Click on More button for "WebSvr1".
5. Click the Disassociate Floating IP menu item.

This action is necessary so that we can re-map the "web-tier" from the "tenant-router" to the newly created "dist-router". If we did not disassociate the Floating IP
mappings, the system would give us an error when we try to disconnect the port
from "tenant-router".


Detach web-tier & app-tier From tenant-router


Click on "tenant-router" to display detailed view of ports.

Select Interfaces To Delete


1. Click the Delete Interface button for the "web-tier" interface.
2. Click the Delete Interface button for the "app-tier" interface.


Add Networks To Distributed Router


Click on "dist-router" to bring up the detailed router view.

Add Interface
Click the + Add Interface button.


Add web-tier and app-tier As Interfaces


1. Select "Web-tier: 11.0.0.0/24 (web-subnet)" from the Subnet drop-down menu.
2. Click the Add Interface button.
Make sure to do this for "app-tier" as well.
Please Note: Only VXLAN-backed networks are supported on the DLR (no VLAN
support).


Set Distributed Router Gateway


We need to make sure to add a gateway so that the router can communicate to the
outside world.
1. Click on Project pane.


2. Click on Network sub-pane.


3. Click on Routers menu item.
4. Click on the Set Gateway button.

Connect External-Network
1. Select "External-Network" from the External Network drop-down menu.
2. Click the Set Gateway button.
This action will trigger the selection of an NSX ESG for centralized routing services. A
SNAT rule will automatically be created as well on this ESG. The Topology Dashboard will
display both NSX routers as a single Neutron Router.


Confirming Attachments To dist-router


If the last few steps were completed correctly, you should see the interfaces connected to
your "dist-router", and your view should closely resemble the screenshot above.


Confirming In Topology View


In this topology view you can see all the networks connected to the newly created
distributed router. Note that the distributed router appears as a single entity, even though
on the backend in NSX it is really two: a Logical Router connected to an ESG called the
PLR.


Adding Back Floating IP's


Navigate to the Instance view so that we can associate Floating IP mappings back
under the "dist-router".
1. Click on Project pane.
2. Click on Compute sub-pane.
3. Click on Instances menu item.
4. Click on More button for "WebSvr1".
5. Click the Associate Floating IP menu item.

Make sure to do this for the other "WebSvr2" instance as well.


It is important to add back the Floating IP's for the web servers so that they have an
external address that we can ping.


Attach Floating IP's For Both WebSvr1 & WebSvr2


1. Select "192.168.100.101" from the IP Address drop-down menu for "WebSvr1".
2. Click the Associate button.
Make sure to do this for "WebSvr2" as well.


Confirming Floating IP Attachments


If done correctly you should see an additional IP Address for "WebSvr1" and
"WebSvr2" created.

Navigate to vCenter
1. Swap tabs to the vSphere Web Client to see how the newly created distributed
router appears in NSX.
2. Click the Network & Security icon.

Distributed Router Configured in NSX under vCenter


You can see that VIO has launched a logical router called "dist-router" and also an ESG
called "dist-router-plr". The Logical Router scales out East-West, while the ESG acts as
the North-South gateway or Provider Logical Router (PLR). Let's drill down on both
routers starting with the Logical Router first.
Double-Click on "dist-router-...ID" to view settings.

Logical Router Interfaces


You can see the three networks we attached in Horizon Dashboard. One External and
two Internal.
Click the Networking & Security back button in the Navigator window to return to the
previous screen.


View the NSX ESG


Double-Click on the "dist-router-plr...ID" to view its settings.

ESG Interfaces
1. Click on the Manage tab.
2. Click on Settings option.
3. Click on Interfaces to view the connected DLR and External connection.
Remember, in the Horizon Dashboard the DLR and this ESG appear as a single entity, but in
reality they are two separate devices, following NSX suggested best practices for deployment.

ESG NAT Settings


Click on the NAT option.


Note all the NAT entries that were configured to allow external connectivity. Here we
can also see the Floating IPs.


Test Connectivity
Let's navigate to the Instances view so that we can open a console to "WebSvr1" and
test connectivity.
1. Click on Project pane.
2. Click on Compute sub-pane.
3. Click on Instances menu item.
4. Click on More button for "WebSvr1".
5. Click on Console in the opened menu.

Note the various IP addresses assigned to our three instances by DHCP for the "web-tier" and "app-tier" networks.
"WebSvr1": IP Address: 11.0.0.11 ; Default Gateway: 11.0.0.1
"WebSvr2": IP Address: 11.0.0.12 ; Default Gateway: 11.0.0.1
"AppSvr1": IP Address: 12.0.0.11 ; Default Gateway: 12.0.0.1


Login To Console
You may have to bypass certificate checking by clicking on "Proceed Anyway."
Once inside the console window, authenticate to "WebSvr1" using:
Login: root
Password: vmware

Ping Instance
First check the IP address of the "WebSvr1" instance you've selected (Should be
11.0.0.11):
ifconfig
Ping the other "WebSvr2" instance IP address (11.0.0.12 in our example above).
The expected behavior is that pings should be blocked between "WebSvr1" and
"WebSvr2" on the same L2 network due to our policy in Security Groups and with NSX
providing Micro-Segmentation capabilities.
Please Note: If you forgot the IP addresses of your test instances, you can find them
on the Instances list in the Horizon Dashboard.
ping -c 1 11.0.0.12
You can also ping the "AppSvr1" where the expected behavior is to allow pings since
they are being sourced from our "WebSvr1". If you recall our policy in the "app-sg"
Security Group allowed ICMP traffic to and from the "web-sg" Security Group.


ping -c 1 12.0.0.11

Open Command Prompt


Open a new window of Command Prompt conveniently located in your Windows
taskbar.

Ping Floating IP
Let's try pinging both of the Floating IPs that we assigned to "WebSvr1" and
"WebSvr2" at the beginning of this section.
In our lab screenshot example:
"WebSvr1" Floating IP address mapped to 192.168.100.101
"WebSvr2" Floating IP address mapped to 192.168.100.102


Please Note: The Floating IP mappings can vary from lab to lab. To find your
mapping, navigate back to the Instances view found under Compute
in the Horizon Dashboard navigation pane.
Test connectivity for "WebSvr1".
ping 192.168.100.101
Test connectivity for "WebSvr2".
ping 192.168.100.102
All of these connectivity tests validate that our original objective in our sample topology
has been achieved. We now have a distributed router that is able to scale east-west and
resides on all Compute Hosts with two networks and three instances attached.
Close the console window and Command Prompt window.
Congratulations HOL user, you have successfully completed this section!!!


Environment Clean-Up
Did you know that the HOL 1620 vPod is almost 1TB in size?
In order to continue having a smooth experience for the remaining modules, we kindly
ask that you follow these steps to save on some resources.

Terminate All Instances


Navigate to the Instances view.
1. Click on Instances in the menu under Compute.
2. Tick the checkbox next to each Instance we created in this module.
**IMPORTANT** Do not tick the "TestVM" instance.
3. Click the Terminate Instances button.

Router Detail View


Navigate to the Routers view and Click on "dist-router".

Disconnect Router Ports


1. Tick the checkbox next to all router ports we created, except for the "External-Network" port.


2. Click the Delete Interfaces button.


Let's do the same for the other "tenant-router" if needed.

Clear Router Gateway


1. Click the Clear Gateway button for "dist-router".
2. Click the Clear Gateway button for "tenant-router".


Terminate All Routers


Navigate to the Routers view.
1. Click on Routers in the menu under Network.
2. Tick the checkbox to select all routers.
3. Click the Delete Routers button.

Terminate All Switches


Navigate to the Networks view.
1. Click on Networks in the menu under Network.
2. Tick the checkbox to select only the networks we created as part of this module.
**IMPORTANT** Do not tick the "TestNet" and "External-Network" networks.
3. Click the Delete Networks button.
Congratulations HOL user, you have successfully completed this section and
Module 2.


Module 3: Advanced
OpenStack Operations (30 Minutes)


Environment Setup
The objective of this section is to provide steps to get web browser tabs opened to the
appropriate portal pages in preparation for the rest of the module.

Check Lab Status


You must wait until the Lab Status is at Ready before you begin. If you receive an
error message, please end the lab and redeploy another.

Clean Up (If Necessary)


If you are starting this module and have previously completed other modules of this lab,
please make sure to delete and remove any artifacts that may be left over. While the
modules in this lab are related to one another and presented in an intuitive chronological
order, they are also designed to be autonomous and self-contained, and do not build on
one another; you do not need to take Module 1 in order to take Module 2, and so on.

Launch Web Browser


Click to launch the Google Chrome web browser icon located on your HOL desktop.

vSphere Web Client


1. Click on the vCenter Web Client bookmark to open the vSphere Web Client in
a new tab. (It may already be open.)
2. Type your Username: administrator@vsphere.local
3. Type your Password: VMware1! (case-sensitive)


4. Click the Login button.


Please Note: The first time you login to vSphere Web Client takes a bit longer and in
some cases up to a minute.

Create New Tab


Click to create a new web browser tab.

Horizon Dashboard
1. Click on the Login - VIO bookmark to open the Horizon Dashboard login portal.
2. Type your User Name: admin
3. Type your Password: VMware1! (case-sensitive)


4. Click the Sign In button.


Please Note: The first time you login to Horizon Dashboard takes a bit longer and in
some cases up to a minute.
Congratulations HOL user, you have successfully completed this section!!!


CLI Tools: Nova, Neutron, Cinder


The OpenStack Community offers a set of bundled CLI binaries packaged with the
OpenStack project clients. These clients utilize Python API libraries to interact with their
corresponding project APIs. It is expected that a universal OpenStack client will
eventually replace the individual ones (in fact, the Keystone client has been deprecated
in favor of the universal client). In the meantime, cloud users can install the existing
clients and use them to simplify operations and configuration tasks.
In this section, we will use the following clients (which have been pre-installed in your
ControlCenter Windows jumphost):
Nova - Compute API and extensions
Neutron - Networking API
Cinder - Block Storage API and extensions
The Glance and Heat CLI clients are also installed, but we won't be using them. There
are sections dedicated to these projects, with several advanced operations that we will
be running later.

Basic Nova and Neutron CLI operations


Nova is the compute project in OpenStack; it provides self-service access to
scalable, on-demand compute resources. Refer to Module 1 of this lab (HOL-SDC-1620) for additional information on Nova. Let's run a few Nova CLI commands to
get you familiarized with the CLI tools. First, open a Windows command prompt from
your ControlCenter. As stated earlier, all CLI tools and Python libraries have been pre-loaded on your jumphost, and the environment variables have been set with the
OpenStack Admin credentials, so you can run commands directly as the "admin" tenant.

Display Running Nova Instances


From the command prompt, type the following command:


3.1.1 nova list


(Be patient.)
This will display a list of the running and failed instances currently owned by the admin
tenant. Please note that the screenshot may differ from what you see on your side,
depending on whether or not you are accessing this module directly.

Display List of Available Flavors and Images


Run the following commands to display a list of available flavors and images:
3.2.1 nova flavor-list
3.2.2 nova image-list
Flavors are virtual hardware templates in OpenStack, which define the RAM, disk, and
number of cores used when launching instances. OpenStack images can often be thought of
as "virtual machine templates." Later on, we will run some advanced image-manipulation
operations using Glance.
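Conceptually, a flavor is just a named bundle of sizing attributes that the scheduler matches against host capacity. The sketch below is illustrative Python, not Nova code; the m1.small values shown are the common stock defaults, not read from this lab.

```python
# Illustrative representation of a Nova flavor: a named hardware template.
m1_small = {"name": "m1.small", "vcpus": 1, "ram_mb": 2048, "disk_gb": 20}

def fits(flavor, host_free):
    """Could a host with these free resources run an instance of this flavor?"""
    return (host_free["vcpus"] >= flavor["vcpus"]
            and host_free["ram_mb"] >= flavor["ram_mb"]
            and host_free["disk_gb"] >= flavor["disk_gb"])

print(fits(m1_small, {"vcpus": 4, "ram_mb": 8192, "disk_gb": 100}))  # -> True
```

This is roughly the check the Nova scheduler performs (among many other filters) when you boot an instance with a given flavor.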

Display Available Networks and Security Groups


We will switch gears a little bit and use a Neutron CLI command to display the networks
that are available to the admin tenant. Module 2 of this lab covers advanced Neutron
operations, which include Neutron CLI options in more depth. Type the following
commands:
3.3.1 neutron net-list
3.3.2 neutron security-group-list
Note the UUID for the TestNet network, 5b5246be-b175-4d48-bc6e-ae98002fb210
(your output may differ from the screenshot above). You will see multiple "default"
Security Groups. OpenStack creates one default Security Group per tenant and the
admin tenant sees them all. We will be using the one with UUID
89afc049-dd19-4798-b43c-86f70a9ec4d3.


Launch Multiple Nova Instances (bulk operation)


Again, from the command prompt, type the following command to launch 2 instances
simultaneously on the "TestNet" Neutron network while using the default Glance image
in the catalog. Both instances will have the prefix "MyVM" in their names, followed by a
random UUID:

3.4.1 nova boot --num-instances 2 --image ubuntu-14.04-server-amd64 --flavor m1


Please make sure you are using the default Security Group with UUID
89afc049-dd19-4798-b43c-86f70a9ec4d3.
Notice the command syntax follows this format:

nova boot --num-instances=NUMBER --image IMAGE --flavor FLAVOR --nic net-id=NET

List the New Running Instances


Type nova list again to display the current status of your Instances, as well as the IP
address assigned to them by Neutron DHCP.
3.5.1 nova list


Verify the new running Instances in Horizon


In Horizon, running Instances will also be displayed under the Project tab. For instructions
on how to access Horizon, please refer to Module 1 of this lab.

Create a Neutron Network


Module 2 covers the Neutron + NSX integration in great detail, but we thought we would
revisit some basic Neutron operations in case you are accessing this module directly.
From the Windows command prompt, type the following command to create a Neutron
network called "MyNet":
3.6.1 neutron net-create MyNet
A Neutron network created in this manner will map to an NSX logical switch (a VXLAN-backed port group). The use of VXLAN overlays gives OpenStack tenants access to self-service network operations without the need to interact with the physical network (i.e. no requirement for VLAN pre-provisioning or maintenance). Overlays also enable greater scalability and better utilization of your private cloud infrastructure.

Create a Neutron Subnet


The net-create command only provisions a L2 segment for the tenant with no L3 identity. For tenants to be able to launch instances on this Neutron network, it is necessary to create its corresponding Neutron subnet. Run the following command to create a subnet on the 192.168.10.0/24 range, with 192.168.10.254 as the default gateway. DHCP will be enabled by default on this subnet and will be provided in the back end by an NSX Edge Services Gateway (ESG):
3.7.1 neutron subnet-create --gateway 192.168.10.254 MyNet 192.168.10.0/24
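The subnet-create command accepts more options than the lab uses. As a sketch, a variant that reserves part of the range and sets a DNS server can be assembled as a string and reviewed before running; the pool boundaries and DNS address below are hypothetical values, not part of this lab:

```shell
# Build the command as a string so it can be inspected before execution.
# The allocation pool and DNS server values are illustrative assumptions.
cmd="neutron subnet-create --name MySubnet \
--gateway 192.168.10.254 \
--allocation-pool start=192.168.10.10,end=192.168.10.200 \
--dns-nameserver 192.168.10.3 \
MyNet 192.168.10.0/24"
echo "$cmd"
```

With an allocation pool like this, addresses outside the pool (for example .201-.253) stay free for statically configured devices on the same segment.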

List your Newly Created Network and Subnet


Type the following command to display the newly created Neutron constructs:
3.8.1 neutron net-list
3.8.2 neutron subnet-list
You are ready to launch VMs on this network. Remember, we did all this using self-service workflows and never had to call the network guy!

Create a Persistent Volume Using Cinder CLI


Cinder is the persistent block storage service in OpenStack. Tenants can create persistent volumes and attach them to instances on demand. Let's create a simple Cinder volume, 1 GB in size, called MyVolume. Notice that while we use the Nova CLI to accomplish this task, the back end communicates with Cinder to honor the provisioning request. Important: take note of the volume UUID, since we will need it later.
3.9.1 nova volume-create --display_name MyVolume 1

Obtain the Target Instance UUID


Once the Cinder volume is available, you can attach it to any running instance owned by the admin tenant. Run nova list to extract the UUID of a running VM (we will use the TestVM in this example):
3.10.1 nova list

Attach the Volume to the Instance


Run this command to attach the volume to the instance in question (the volume UUID will be different):
3.11.1 nova volume-attach 804c8244-88d8-4290-949b-cd0c8bc65dd2 UUID auto
The first UUID identifies the target instance, while the second identifies the Cinder volume. The auto parameter tells Nova to attempt to automatically assign a device identifier to the volume within the guest. Notice the device id, /dev/sdb. This is the path you would use in the Guest OS to mount the volume.
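To actually use the volume, it must be formatted and mounted inside the guest. The sketch below prints the guest-side steps as a dry run rather than executing them, since they must run inside the instance (and mkfs is destructive, so double-check the device name first); the mount point /mnt/myvolume is an arbitrary choice:

```shell
# Guest-side steps for a newly attached, empty volume at /dev/sdb.
# Printed as a dry run; run them inside the instance, not on the jumphost.
steps='mkfs.ext4 /dev/sdb
mkdir -p /mnt/myvolume
mount /dev/sdb /mnt/myvolume'
# Prefix each step with sudo and print the resulting plan.
plan=$(printf '%s\n' "$steps" | sed 's/^/sudo /')
echo "$plan"
```

After mounting, anything written under /mnt/myvolume lives on the Cinder volume rather than the instance's ephemeral disk.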
Your volume is now attached and whatever data you store on it will survive instance
recycling (destroy/create/re-attach operations). This completes this section of Module 3.


Working with Glance Image Catalogs


An OpenStack cloud without any images is like a physical server without an operating system (not so useful!). To support rapid provisioning, VMs are instantiated from a pre-built operating system image (for vSphere administrators, a very good analogy is the VM Template from which we clone). VIO / OpenStack provides an Image Service (Glance) for storage and management of OpenStack images. There are several administration options; both UI- and CLI-based options are illustrated in the upcoming lab exercises.

OpenStack - vSphere integration


VIO populates uploaded images to a designated vSphere datastore (shown in the
diagram). OpenStack supports many image formats, but we will only focus on the most
common with the main objective being to get up and running as quickly as possible.

Using an Existing Image


VIO is bundled with an existing image built from Ubuntu 14.04, a lightweight Linux distribution that can be used to learn the basics of consuming existing images. During VIO installation a Glance datastore is selected, which is where the initial image resides. All future images uploaded to Glance will sit here as well.


There are two common ways to retrieve information about existing images:
1. Horizon (Web UI)
2. Glance (CLI tools)

Browsing the Image Catalog with Horizon


Public images are accessible to all projects (tenants), and are typically provided by an
OpenStack administrator. The pre-bundled image is public and can be found using
Horizon by navigating to Project > Images.

Browsing the Image Catalog with the Glance CLI


Using the Python CLI from a Windows command prompt, run the following command:
3.12.1 glance image-list
It may seem like a lot of work just to accomplish the same thing with the CLI, but the upfront work will pay HUGE dividends later. A variety of OpenStack APIs are only available to users who are familiar with the command line tools. In other words, even if you think of them as an Option B, they will at times be necessary. And for those who are used to the CLI, these are a great entry point to automating repeatable infrastructure activities (e.g. provisioning entire topologies with a single script).

Image Conversion and Creation


Why convert?
There is a large number of pre-built OpenStack images available, but many exist in the qcow2 image format, which is not ESXi-friendly without conversion. The following exercises are a good example of how migration works from a pre-existing development cloud where other hypervisors (and therefore qcow2 images) are in use.
After conversion, we will upload (or import) the image to the Glance repository. The behavior of the instance is exactly the same, but it is now running on vSphere with the ability to leverage all of the underlying platform technology (think HA, vMotion, DRS, and so on).

Convert a QCOW Image Using QEMU


The qemu-utils package has been installed on your ControlCenter workstation. This package includes the qemu-img utility, which will be used for the image conversion.
Change directory to "Desktop\HOL Files", where the source image resides, and then use the qemu-img utility with the following syntax to convert an existing qcow2 image to VMDK:

3.13.1 qemu-img convert -p -O vmdk -f qcow2 trusty-server-cloudimg-amd64-disk1.


Once converted (100% completed), there should be a file called ubuntu.vmdk in the "HOL Files" directory (path below).
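Because the printed command above is cut off at the page edge, here is the full form assembled as a string for review. The source filename's .img extension and the ubuntu.vmdk output name are assumptions that match the later import steps:

```shell
# Full qemu-img invocation: -p shows progress, -f names the source format,
# -O the output format. Built as a string here so it can be reviewed;
# the .img extension on the source file is an assumption.
src="trusty-server-cloudimg-amd64-disk1.img"
cmd="qemu-img convert -p -f qcow2 -O vmdk $src ubuntu.vmdk"
echo "$cmd"
```

Running the assembled command requires qemu-img and the source file to be present, as they are on the lab's ControlCenter.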


Import the image file into vSphere


In VIO 1.0, creating a Glance image that can boot correctly in vSphere requires the image to incorporate all the necessary and compatible metadata. The safest way to guarantee this is to import the converted VMDK into vSphere to create a reference VM. After that, we can export the disk as an OVA that can easily be used in VIO to create the Glance image.
We understand the process described above is cumbersome, and for that reason we have developed a utility in Glance, glance image-import, that facilitates the direct creation of an image without having to import a disk into vSphere and export it to OVA. This utility, however, is not available in VIO 1.0. VIO 2.0 incorporates it, but this lab is based on VIO 1.0, so we must take the long route to image creation.
Let's import the converted qcow2-to-VMDK disk into vSphere. Log into the vSphere Web Client and navigate to the datastore browser. Upload the converted ubuntu.vmdk file to one of the available datastores.


Create a VM from the VMDK


Navigate to Host and Clusters and create a new VM on the Management and Edge
Cluster. You can name it Glance-VM.

Create a VM from the VMDK (continued)


Use default settings in the VM creation wizard and ensure that the guest operating system is correct (Ubuntu Linux, 64-bit).

Create a VM from the VMDK (continued)


Under Customize Hardware, please ensure you remove the New Hard Disk and instead attach an existing hard disk. To do this, click on New Device, select Existing Hard Disk, and pick the ubuntu.vmdk file from the appropriate datastore. Complete the VM creation process.

Create a VM from the VMDK (continued)


Once the VM is created, right-click on it and navigate to Template > Export OVF Template. Name it Glance-VM and place it in the HOL Files directory. Please make sure the format is Single File (OVA).

Using the Glance CLI to Create the Image


Once the conversion in the previous step has completed successfully, we switch gears and go back to using the glance (CLI) client we installed earlier. The screenshot above is truncated; use the CLI command specified below.


Images can be created using Horizon (Web UI), but we want to ensure the image is
properly tuned for ESXi, and this needs to be done with the CLI tool. The image creation
syntax is shown below (long command, mind the line break):

3.14.1 glance image-create --name My-Ubuntu-HOL-Image --disk-format vmdk --cont

Description of the Image Metadata


Most of the arguments built into the OVA are metadata used to help vCenter create a virtual machine with the right specifications:
vmware_adaptertype ide creates a VM hard disk of type IDE.
vmware_disktype sparse, combined with the IDE adapter type, ensures the disk is thin provisioned.
The options above should not be changed when a disk has been converted to VMDK using qemu-img. When we later bring in VMDKs from an existing vSphere deployment, other possibilities will be introduced.
vmware_ostype populates the Guest OS option in vSphere.
Finally, the container format can be used to support standards such as OVA. With a single-disk import we use bare (which really just means the container format is unused).
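Putting the metadata together, here is a full-form sketch of the create command assembled for review (the lab's printed command is truncated at the page edge). The vmware_ostype value ubuntu64Guest and the --file path are assumptions for illustration:

```shell
# Assembled for review rather than executed; the --property keys are the
# VMware-specific metadata described above. ubuntu64Guest and the file
# path are assumed values, not confirmed by the lab text.
cmd="glance image-create --name My-Ubuntu-HOL-Image \
--disk-format vmdk --container-format bare --is-public True \
--property vmware_adaptertype=ide \
--property vmware_disktype=sparse \
--property vmware_ostype=ubuntu64Guest \
--file ubuntu.vmdk"
echo "$cmd"
```

Once created, glance image-list should show My-Ubuntu-HOL-Image alongside the pre-bundled image.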


Creating a Test Instance Using the New Image


Using the Nova CLI, launch an instance called "GlanceVM" from the new image:

3.15.1 nova boot --image My-Ubuntu-HOL-Image --flavor m1.small --nic net-id=5b5

Verify Instance Creation


Run nova list to verify the instance booted correctly. If you see an error status message, please ignore it; this lab is resource-constrained and we sometimes run out of space in the compute cluster when placing new instances.
3.16.1 nova list

Summary: What Did vSphere Just Do?


vSphere did more than just create a VM and power it on. Here are the steps:
1. The VMDK is copied from the Glance datastore to the ESXi cluster's compute datastore. If the original image was sparse, the VMDK on the destination datastore will be larger (the same size as the virtual disk).
2. The VMDK is cached on the local datastore. Any future instances spawned from this image will use linked clones, and thus be provisioned almost instantaneously.
3. A shadow VM is created per cached image. These show up as managed VMs in vCenter, with a naming convention similar to meta-<uuid>.
4. The new instance is created as a linked-clone VM using the correct meta-<uuid> for a replica disk.
5. The new VM is powered up.


This completes the advanced Glance section.


API Consumption: Heat Templates


Heat provides a mechanism for orchestrating OpenStack resources through the use of
modular templates. Heat uses YAML to describe the infrastructure for a cloud
application.
A Stack is a group of connected cloud resources (instances, volumes, networks, etc.).


Heat Template Structure


A Heat template uses YAML to describe the infrastructure for a cloud application in a text file that is readable and writable by humans, can be checked into version control, diffed, etc. Infrastructure resources that can be described include servers, floating IPs, volumes, security groups, users, and more. All of this is saved in a Heat Orchestration Template (HOT) for repeated deployments. Other formats exist (like JSON).
Heat is compatible with the AWS CloudFormation template format from Amazon. The Topology and Orchestration Specification for Cloud Applications (TOSCA) is still a work in progress; for now you can translate TOSCA templates to HOT using https://github.com/stackforge/heat-translator.
Heat also provides an autoscaling service that integrates with Ceilometer, so you can include a scaling group as a resource in a template.
Templates can also specify the relationships between resources (e.g. this volume is connected to this server). This enables Heat to call out to the OpenStack APIs to create all of your infrastructure in the correct order and completely launch your application.
Heat manages the whole lifecycle of the application: when you need to change your infrastructure, simply modify the template and use it to update your existing stack. Heat knows how to make the necessary changes. It will also delete all of the resources when you are finished with the application.
Heat primarily manages infrastructure, but the templates integrate well with software configuration management tools such as Puppet, Chef or Ansible.
You'll find a Hello World example here:
https://github.com/openstack/heat-templates/blob/master/hot/hello_world.yaml
Parameters can be: string, number, comma_delimited_list, json or boolean.
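The pieces described above can be seen in a minimal HOT sketch. The image and flavor names below are the ones used elsewhere in this lab; the template itself is illustrative rather than one shipped with the lab files:

```yaml
heat_template_version: 2013-05-23

description: Minimal HOT - boots one Nova instance on an existing network.

parameters:
  net_id:
    type: string
    description: UUID of an existing Neutron network (e.g. TestNet)

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-14.04-server-amd64
      flavor: m1.small
      networks:
        - network: { get_param: net_id }

outputs:
  server_ip:
    description: First IP address assigned to the instance
    value: { get_attr: [my_server, first_address] }
```

Launching this stack with a net_id parameter would boot the server and report its IP as a stack output, which is the same pattern the larger sample template in the next exercise uses at greater scale.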


Exploring a Sample Heat Template


Open Windows Explorer and navigate to Desktop > HOL Files. Right-click on the file named "sample_heat_template.json" and select "Edit with Notepad++".


Exploring a Sample Heat Template (continued)


Take your time and explore the template structure. This particular HOT builds a single-tier application with a router connected to an external network (source NAT enabled by default) and two VMs behind the router.


Browse the HOT from Horizon


Important: Before proceeding, please make sure you terminate any instance that is in
"scheduling" or "spawning" state from previous exercises. You can do this from Project >
Compute > Instances.
From the Horizon UI, log in as the admin tenant and:
1. Navigate to Project > Orchestration > Stacks.
2. Click on "Launch Stack".
3. Select "File" as Template Source.
4. Navigate to Desktop > HOL Files to select the "sample_heat_template".
5. Click "Next".


Launch the Stack


1. Enter "MyStack" under Stack Name (no spaces).
2. Click "Rollback On Failure".
3. Enter a password for the admin user: VMware1!
4. Click "Launch".


Verify the Stack Has Been Successfully Created


There are multiple ways to verify that the stack was successfully launched and the
components were successfully created:
1. Navigate to Project > Network > Network Topology and verify the single-tier app has been created, with the instances and the router properly placed on the network.
2. Navigate to Project > Orchestration > Stacks and verify that the stack you
launched shows "Complete".


Understanding the Stack Sequence


You can display a sequence of actions executed by your HOT by navigating to Project >
Orchestration > Stacks, clicking on your HOT and then selecting the "Events" tab.
Generally, the HOT will follow a sequence similar to the manual workflow that you would
use if you were to build the application by interacting with the individual APIs.
This concludes the Heat automation section.


Module 4: Operationalizing VIO - (60 Minutes)


Overview of OpenStack Operations


The purpose of the next section is to provide an overview of the troubleshooting and management tools that are available for your VIO deployment. There are many components within OpenStack that must be managed, and in order to quickly diagnose and troubleshoot issues, the right tools should be in place to support your operational teams. We will be diving into Log Insight, vRealize Operations, and the native capabilities of the VIO vCenter plugin.

Operationalizing OpenStack
OpenStack is by nature a collection of different open source projects pulled together to provide a common platform for deploying compute, storage and network. Because of the distributed nature of the platform, it can be complex and fragile at times. The OpenStack community has published a guide (http://docs.openstack.org/openstack-ops/content/) that covers the different pieces of OpenStack and the operational aspects of supporting an OpenStack environment. What you will quickly realize is that the documentation provides a lot of insight into what should be checked, how to check it, where to find logs, etc. However, it makes very few suggestions about tooling (for the obvious reason of remaining unbiased). Regardless of which tools you choose, you absolutely need an infrastructure health management tool and a logging tool at the very minimum. In this section we will discuss the benefits of vRealize Log Insight and vRealize Operations Manager and why these tools have been designed to help simplify and manage large, complex environments like OpenStack.


vRealize Log Insight for OpenStack


vRealize Log Insight is a real-time log management platform focused on delivering high-performance search across physical, virtual, and cloud environments. Log Insight is extremely intuitive to use, and its integration with the VMware suite of solutions makes capturing logs extremely easy.
Specifically as it relates to OpenStack, a dedicated OpenStack management pack (there are 30+ management packs) can be downloaded for free; it enables operators to view OpenStack-relevant information within a handful of pre-created dashboards. This makes Log Insight immediately useful out of the box for whatever application you want to collect logs from.
OpenStack is log-heavy: each service has a handful of logs, and correlating all the information across the different services is extremely painful without a centralized logging service. Having a logging mechanism in place when managing an OpenStack environment is a must.

vRealize Operations Manager for OpenStack


Similar to Log Insight, vRealize Operations Manager plays a crucial role in managing an OpenStack environment. Part of managing OpenStack is keeping a close eye on the infrastructure health of your cloud. Are you close to running out of memory? CPU? Storage? Do you have network/storage I/O issues? How do you manage 50K VMs? Are parts of the OpenStack infrastructure overcommitted and performing poorly as a result? Are there any anomalies? Are the services up and running? As you can tell, there is a tremendous amount of information to collect to get the real health of your environment. However, you want that information in digestible form. You don't want to be collecting and viewing 50,000 CPU, memory, storage and network metrics; that would be impossible. vRealize Operations Manager simplifies this by collecting all the data but rolling it up into a health score and explaining why.
The OpenStack management pack for vRealize Operations offers pre-created dashboards to quickly view the health of the environment, all the way up to the services running within the OpenStack infrastructure. Are my Keystone services running? Is nova-compute running?
So many questions can be answered through vRealize Operations Manager. Together, Log Insight and vRealize Operations are the foundation for keeping your OpenStack cloud healthy, so you can sleep easy at night and keep your users happy.
Let's start some labs!


Troubleshooting Scenario with Log Insight and vRealize Operations


vRealize Operations is a platform that allows you to automate IT operations, manage performance and gain visibility across physical and virtual infrastructure. There is a large ecosystem around vRealize Operations, and the management packs relevant to VIO are the OpenStack management pack and the NSX-vSphere management pack. We will get an overview of these two management packs.

Before we Start the Section - Administrative Tasks


Before we begin, let's set up a quick scenario for this troubleshooting section.
1. Click the Windows icon
2. Click on Putty


Log into OpenStack Management Server (oms.corp.local)


1. Select the viouser@oms.corp.local preconfigured session
2. Click Load


Type in the password and run log.sh


1. Type in the following for the password:
VMware1!
2. Run log.sh by typing
4.1.1 ./log.sh
at the command prompt. There should be no output message.
3. Exit the Putty window and let's go on to the overview!

vRealize Operations and Log Insight Overview


Click on Google Chrome to launch the browser.
1. Click on vRealize Operations on the toolbar


vRealize Operations Login


Login as
user: admin
password: VMware1!

Go to Dashboards List
Click on the Home button if it's not already the default.


View the Different Dashboards


1. Click on Dashboard List
2. Choose OpenStack
3. Click on OpenStack Controllers


OpenStack Controllers Dashboard


Once you are on the OpenStack Controllers Dashboard, you will see the different
services that are being monitored.
1. Left-click once on the button next to OpenStack Compute Services.
2. You should see the compute services currently running in the environment appear below. nova-api appears twice (it runs on both controller01 and controller02), while nova-compute appears once because it only runs on the compute node (compute01). Alerts will be generated, and the severity depends on how many of the services go down. For example, if nova-api loses one of its services, it will alert with an immediate severity level. If all of the services are down, it will raise a critical alert.

View OpenStack Management Services Health


1. Click on the OpenStack Management Services icon. The Controller Service
Topology will appear on the right hand side and the service metrics will appear
below.


The service topology is extremely useful for providing visibility into how and where the services are running. You can see all the different services running on the controllers.
2. You can zoom in to get better granularity. The central node in the middle is the management service IP, which is the internal API endpoint for all OpenStack services.
3. The Controller Service Metrics show all the different services that are currently running.


Log into VIO and launch a VM


Open a new tab or browser window.
1. Click on VIO shortcut
2. Login as
User Name: admin
Password: VMware1!


Launch VM
1. Click on Project
2. Click on Images
3. Click Launch next to ubuntu-14.04-server-amd64


Launch VM
1. Name your VM
2. Click on Networking Tab


Choose Your Network and Launch


1. Click on the plus symbol next to TestNet
2. Click Launch


Everything worked, right?


After launching your VM and waiting anxiously for it to deploy successfully, you will see the following error: "No valid host was found".
What in the world does that mean? It could mean many different things, but the gist is that the nova scheduler was unable to place the VM on any host. Are we out of resources? Is there enough RAM? Enough CPU? Are the hosts up and running? As you can see, the errors generated by OpenStack are sometimes vague.
Let's start troubleshooting. Different operators have different approaches. Some take the "follow the logs" approach, looking up the UUID to see where the request got stuck. Others might quickly review the infrastructure, or look at their entire environment as a whole to see if anything has gone down.
The first thing we should do is click on the Instance Status to see if there are any obvious issues.


Take a Quick Look at Instance Status


1. Left-click on the HOLVM link to see the status and any details about the error message.


Instance Overview
The Instance Overview provides details about the status of the instance. You can see that the error message is not very descriptive, as it just says NoValidHost(reason=""). Basically, OpenStack is not providing us a reason for the error. Maybe there is no hypervisor available in OpenStack? Let's check real quick.


Hypervisor Dashboard
Sometimes, to see what nova-compute is reporting back, we check the Hypervisors panel to see if there are enough resources to launch the instance.
1. Click on the Admin tab.
2. Click on Hypervisors.
There are no issues here: nothing is being used and there should be plenty of storage. Hmm... time to check the logs.


Open a new tab or window and go to Log Insight


1. Open a new tab or window and click on Log Insight icon
2. Login as
Username: admin
Password: VMware1!


Log Insight OpenStack Dashboard


When you log into Log Insight, it should take you straight to the OpenStack dashboard. The OpenStack Overview dashboard is part of a content pack that is freely available for download. The content pack is a default set of pre-created dashboards that provide visibility into the different OpenStack services.
1. If you don't see the OpenStack dashboard, click on the dropdown menu shown above and choose OpenStack.
2. Once the dashboard opens, it should show the default OpenStack dashboard. Click on the warning bubble in the nova service column. We just ran into an issue launching an instance, and the best guess is to look at any warnings in the nova service.
3. If you don't see anything, it could be because the default setting only shows the last 5 minutes of logs. Perhaps more than 5 minutes have already passed; update the interface to show the last hour by clicking on the dropdown shown in 3.


Click on Interactive Analytics


1. Left-click on the warning bubble in the nova column and a menu should pop up. Click on Interactive Analytics.


View the results of the Interactive Analysis


The Interactive Analytics interface allows you to conduct deep-dive analysis of the logging data and correlate events across the logs.
Once you enter the panel, you will see the log data itself. You should see "Setting instance to ERROR state" at the end of the log entry.
So what next? We know we have an error, but how do we troubleshoot the issue? From here, let's track the instance UUID.


Follow the Instance UUID with Log Insight 1


Let's follow the UUID of the instance across the different services to see if we can find the root cause.
1. Remove the current filters by clicking on the X next to "text", "openstack_component", and "openstack_severity".

Follow the Instance UUID with Log Insight 2


In the same log event, next to the word "instance", drag (from left to right) to highlight the entire alphanumeric string.
1. A popup window should appear; click on "Contains <uuid>". This should bring up a refreshed page with the new results. Make sure the log event contains "Setting instance to ERROR state" so you know you have the right instance UUID.


Resetting the logs


You can always go back to just the nova warning events by clicking on the circle for the warning events. This will load only the warning-level log entries.


Follow the Instance UUID with Log Insight 3


This will bring up a new page of log events. You can scroll through the events carefully, but you probably won't see anything that makes the problem completely obvious. Take a look at the different log messages that appear.
After you have looked through them, there is something else we can try: tracking the request ID to see where the request failed. To do this:
1. Delete the current filter by clicking on the "x" next to the text field.
2. Scroll through the log events and find the one that states "Setting instance to ERROR state".


Follow the Request ID with Log Insight 1


1. Find the text after nova.scheduler.driver where it starts with [req-XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX]. Left-click and drag to highlight the entire string. Make sure you highlight the req-ID in the same log event as "Setting instance to ERROR state".
2. A popup should appear; left-click on "Contains 'req...'".

View Request ID Results


A request ID is an identifier created for each API request. The value can be used to track down problems and troubleshoot. Since we filtered by the request ID, we can follow its status. Scroll through the different events that have taken place. The only event that appears to have valuable information is the one that says:
Filter ComputeFilter returned 0 hosts
This means that the nova-scheduler, which is responsible for placing VMs onto the compute nodes, was unable to find any available compute resources. From here, the natural next step is to figure out why the scheduler returned 0 hosts.
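The same request-ID filtering can be reproduced from a shell against raw nova logs. Here is a sketch over two hypothetical log lines (the request IDs, timestamps and messages are invented for illustration; Log Insight performs this filtering across all nodes for you):

```shell
# Two hypothetical nova log lines; filtering on the request ID isolates
# every event belonging to a single API call.
logs='2015-10-01 12:00:01 nova.scheduler.driver [req-aaaa] Filter ComputeFilter returned 0 hosts
2015-10-01 12:00:02 nova.compute.api [req-bbbb] Instance build started'
match=$(printf '%s\n' "$logs" | grep 'req-aaaa')
echo "$match"
```

On a real controller the equivalent would be grepping the request ID across the nova log files, then reading the surviving lines in timestamp order.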


Well, that's strange. Last time you checked, all the infrastructure was available and you had plenty of resources, right? Let's quickly look at the infrastructure view of our environment. Good thing we have vRealize Operations!


vRealize Operations Part 2


From Log Insight, we were able to determine that there might be something wrong with our compute infrastructure. Remember that the ComputeFilter returned 0 hosts, so let's take a look at our compute infrastructure in OpenStack.
1. If you still have the vRealize Operations tab open, click on it; otherwise click on vRealize Operations on the toolbar.
2. Login as
user: admin
password: VMware1!


OpenStack Compute Infrastructure


1. Click on Dashboard List
2. Click on OpenStack
3. Click on OpenStack Compute Infrastructure


OpenStack Compute Infrastructure Dashboard


1. Click on the OpenStack Compute Infrastructure icon
After clicking on OpenStack Compute Infrastructure, it seems that everything is green.
There are plenty of compute resources and no issues with the infrastructure.
Network and storage I/O look good, with no contention, so let's look at other parts of the
environment. Perhaps the services?
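Service health can also be cross-checked from the command line. As a hedged sketch (not one of the lab steps), the standard nova client can show which compute services and hypervisors the scheduler currently sees; this assumes the client is installed and admin credentials have been sourced on the management server.

```shell
# Sketch only: CLI cross-check of compute health, assuming the nova client
# and admin credentials are available on the OpenStack management server.
# The guard skips the calls where the client is not installed.
if command -v nova >/dev/null 2>&1; then
    nova service-list        # per-service Status (enabled/disabled) and State (up/down)
    nova hypervisor-list     # hypervisors the scheduler can place instances on
else
    echo "nova client not installed here; run this on the OpenStack management server"
fi
```

A nova-compute service showing State "down" in this output corresponds directly to the scheduler's "Filter ComputeFilter returned 0 hosts" behavior we saw in the logs.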


OpenStack Controller Dashboard


Let's check out the management services under OpenStack Controllers
1. Click on Dashboard List
2. Click on OpenStack
3. Click on OpenStack Controllers


OpenStack Controllers
Red? That's not good! Let's click on the red badge.
1. Left click on the red badge under OpenStack Compute Services
2. Details about the Compute Services will appear. First, we see the message "All nova-compute services are unavailable". Click on this link.
NOTE: Depending on timing, the icon may still be green. You should still see the alert in
the bottom right of the page.


Recommendations for nova-compute services


The recommendations indicate that we should "Restart any compute services that are
down".
It looks like nova-compute crashed, so let's start it up again. We found the
problem!


Showing Alerts
Before we restart the nova-compute service, note that there is another way we could have
seen this alert: by left-clicking on the Alerts button.
1. Click on the Alerts button

The Alerts Dashboard


1. Sort by most recent. This should be the default, but if it is not, click on
"Created On" to sort.
2. You should notice several alerts indicating that all nova-compute services are unavailable


OpenStack Tenants
Another mechanism to view the error is the OpenStack Tenants Dashboard.
1. Click on Dashboard List
2. Click on OpenStack
3. Click on OpenStack Tenants


Tenant Issues
You will see that a tenant alert has been generated. This dashboard can be used to
track issues that users experience while using OpenStack.
Let's go ahead and restart nova-compute in the next step.


Restart nova-compute
Either close or minimize the vRealize Operations window.
1. Click on the Windows icon on the bottom left hand corner
2. Click PuTTY


Restart nova-compute services


1. Left click on oms.corp.local
2. Click Open


Restart nova-compute services


1. Type the password:
VMware1!
2. Type the following:
ssh compute01 'sudo service nova-compute restart'
3. You should see "stop: Unknown instance". This tells us that the service was not
running and that it has now been started.
You can now repeat launching an instance, and it should work.
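As a hedged follow-up sketch (assuming the same lab topology, where the oms node can ssh to compute01), you can confirm the service actually came back before retrying the instance launch; the reachability guard below just reports when run outside the lab instead of failing.

```shell
# Sketch only: verify that nova-compute is running again on the compute node.
# Hostname and service manager match this lab; adjust for other environments.
if ssh -o BatchMode=yes -o ConnectTimeout=5 compute01 true 2>/dev/null; then
    ssh compute01 'sudo service nova-compute status'
else
    echo "compute01 not reachable from this machine"
fi
```

If the status reports the service as running, the scheduler should once again find a valid host the next time you launch an instance.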

Summary
Some of you may be thinking to yourself, "Well, the first thing I would have checked
would have been the nova-compute service, and I wouldn't have had to look at the
logs at all!" While you may be right in this specific case in hindsight, many errors are NOT
infrastructure related. For example, if a configuration file were wrong, if some
metadata in the image were incorrect, or if the instance were launched with strange
flags that caused no hosts to be found, checking whether services are up would not
have helped. Through this exercise, we teach you how to fish: for future problems,
you can walk through whatever troubleshooting framework you prefer, leveraging the
tools at hand to accelerate the process.


Conclusion
Thank you for participating in the VMware Hands-on Labs. Be sure to visit
http://hol.vmware.com/ to continue your lab experience online.
Lab SKU: HOL-SDC-1620
Version: 20151005-072241
