
 

LAB 4: Deploying a 3-Tier Application in an OpenStack Cloud with Cisco's ACI
This lab will show you the interaction between your OpenStack private cloud and Cisco's Application Centric Infrastructure (ACI). We assume that the integration has already been completed and that OpenStack successfully talks to the ACI controller (APIC-DC). To better understand how OpenStack operations relate to APIC-DC configurations, verification is done on both the OpenStack and APIC-DC controllers.

In this lab we do not use the Horizon dashboard to configure your OpenStack private cloud; instead, we guide you through OpenStack's CLI to define networks and routers and to spin up VMs.

A typical example for an ACI environment is a 3-tier application. Here, we simulate Web, App and DB applications running on three different VMs in three different networks. These networks are interconnected using routers. You will learn how this simple OpenStack environment relates to APIC-DC EPGs, contracts and endpoints.
 
Objectives
• Verify the integration of ACI into OpenStack
• Understand OpenStack's CLI to create networks, routers and instances
• Create a 3-tier application environment in OpenStack
• Understand how the OpenStack configuration of networks, routers and instances relates to ACI terminology
• Verify that your application is configured on both ACI and OpenStack and is operational
 
Estimated  Time  
90  minutes  

 
 

 
Virtual  Environment  

 
Physical  Environment  

 
 
 

 
 

Neutron  to  APIC  Object  Mapping  


Neutron Object → APIC Object Mapping → Description

• Project → Tenant (fvTenant): The project is directly mapped to a Cisco APIC tenant.
• Network → EPG (fvAEPg) and Bridge Domain (fvBD): Network creation or deletion triggers both EPG and bridge domain configuration. The Cisco ACI fabric acts as a distributed Layer 2 fabric, allowing networks to be present anywhere.
• Subnet → Subnet (fvSubnet): The subnet is a 1:1 mapping.
• Security Group and Rule → (not mapped): Security groups are fully supported as part of the solution. However, these resources are not mapped to Cisco APIC but are instead enforced through iptables, as they are in traditional OpenStack deployments.
• Router → Contract (vzBrCP), Subject (vzSubj), Filter (vzFilter): Contracts are used to connect EPGs and define routed relationships. The Cisco ACI fabric also acts as the default gateway; the Layer 3 agent is not used.
• Network: external → Outside: An outside EPG, including the router configuration, is used.
• Port → Static path binding (fvRsPathAtt): When a virtual machine is attached, a static EPG mapping is used to connect a specific port and VLAN combination on the top-of-rack (ToR) switch.

 
 

Table of Contents

VERIFY APIC AND OPENSTACK INTEGRATION
PREPARE OPENSTACK CLI ENVIRONMENT
CREATE 3-TIER NETWORK ENVIRONMENT
    CREATE 3 TENANT NETWORKS
        OpenStack Configuration and Verification
        ACI Verification
        Summary
    CREATE ROUTERS BETWEEN NETWORKS
        OpenStack Configuration and Verification
        ACI Verification
LAUNCH AND MANAGE 3-TIER APPLICATION INSTANCES
    INSTANTIATE THREE VMS (WEB, APP, AND DB)
        OpenStack Configuration and Verification
        ACI Verification
    VERIFY 3-TIER ARCHITECTURE
INTER-TENANT COMMUNICATION
    TENANT A CONFIGURATION TASKS
    TENANT B CONFIGURATION
    SUMMARY
 

 
 

Verify  APIC  and  OpenStack  integration  


 
For the purpose of this lab we pre-configured the ACI/OpenStack environment and completed the necessary steps to integrate both. The following section is used to verify the integration.

STEP 1: Verify from the APIC GUI that the integration is successful. Log in to the APIC controller at 10.22.44.221 using the following credentials:
tenantXX/tenantXX (where XX is your POD number)
 
STEP 2: Verify the physical domain created for the OpenStack servers and the associated VLAN pool.

 
 
STEP 3: Verify the VLAN pool has been created and associated with the physical domain for the OpenStack servers.

NOTE: The VLAN range is defined in the [ml2_type_vlan] section of /etc/neutron/plugins/ml2/ml2_conf.ini. The physical domain "openstack" is associated with the VLAN pool.
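For reference, a typical [ml2_type_vlan] entry looks like the following (the physical network name and VLAN range shown here are illustrative only; the actual values in your lab may differ):

[ml2_type_vlan]
network_vlan_ranges = physnet1:2000:2100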

 
 

 
 
STEP 4: Verify the Attachable Access Entity Profile has been created and associated with the physical domain "openstack".

 
 
STEP 5: Verify the interface profile created for the leaf ports connected to the OpenStack nodes.

The interface profile "_openstack_pprofile-101" is created for the controller node and compute node 1, connected to TOR1 on port 1/18 and port 1/17, respectively.
 

 
 

 
 
The interface profile "_openstack_pprofile-102" is created for compute node 2, connected to TOR2 on port 1/17.

 
 
Verify that the interface policy group created for OpenStack is associated with the "openstack" attachable access entity profile.

 
 
 
 
 

 
 

Verify that a switch profile has been created for each leaf switch connecting to an OpenStack compute node.

 
 
STEP 6: Verify the shared network is created in the common tenant by following the steps below:
1. Select Tenants
2. Select the "common" tenant
3. Expand Private Networks; the private network "_openstack_shared" has been created
4. Expand Filters; these are the default filters.
 

 
 

 
 

Prepare  OpenStack  CLI  environment  


To use the OpenStack CLI after connecting to the controller via SSH, we first have to set up the CLI environment. To do so, follow the steps below:
STEP 1: SSH into the OpenStack controller (IP 10.22.44.224) with the credentials provided (cisco/cisco123).
STEP 2: You should find a keystonerc_tenantXX file in /home/cisco/tenants. Here, XX relates to the lab POD you have been assigned. For example:
[cisco@aci-controller tenants]# more keystonerc_tenant01
export OS_USERNAME=tenant01
export OS_TENANT_NAME=tenant_01
export OS_PASSWORD=tenant01
export OS_AUTH_URL=http://10.22.44.224:5000/v2.0/
export PS1='[\u@\h \W(keystone_tenant_01)]\$ '
[cisco@aci-controller tenants]#
This file contains a set of environment settings required to execute OpenStack CLI commands for a particular user within a tenant. Once you have sourced this file, no further credentials are required for any CLI command.
STEP 3: Execute ". keystonerc_tenantXX"; you should see that the bash prompt changes. This is useful as it shows which credentials are active in the environment you are using:
[cisco@aci-controller tenants]# . keystonerc_tenant01
[cisco@aci-controller tenants(keystone_tenant_01)]#
NOTE:  Make  sure  the  prompt  reflects  the  Lab  ID  assigned  to  you!  
STEP 4: To verify your environment is set up correctly, execute "nova list" to show the currently configured instances. You should not see any instances running at this point.
[cisco@aci-controller tenants(keystone_tenant_01)]# nova list
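With no instances running yet, the command should return an empty table, roughly like the following (column widths may vary):

+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+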

 
 

Create 3-Tier Network Environment


 
In this section we will go through the following tasks:
1. Create three networks in OpenStack (tenXX_net01, tenXX_net02, tenXX_net03)
2. Create two routers interconnecting all three networks (tenXX_rtr_net01_net02, tenXX_rtr_net02_net03)
3. Verify the OpenStack configuration
4. Verify the APIC-DC configuration as pushed by the OpenStack APIC plug-in

As mentioned earlier, the first three tasks will be performed using OpenStack's CLI. For task 4 we will use the GUI of the APIC-DC controller.
 

 
Create  3  Tenant  Networks    
OpenStack  Configuration  and  Verification  
The neutron CLI is used to set up the three networks. To create a network you first define the network itself and then define a subnet that gets attached to the network.
The following commands will be used in this section:
neutron net-create
neutron subnet-create
neutron net-list
neutron subnet-list
 
To create the three networks and their attached subnets, follow these steps:
STEP 1: First, create all three networks as follows:
neutron net-create tenXX_net01
neutron net-create tenXX_net02
neutron net-create tenXX_net03
   
XX is the lab POD number assigned to you.
 

 
 

Verify that the three networks were created successfully by executing:


neutron net-list
 
You should see output similar to the example below:
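For example (the IDs shown here are illustrative; yours will differ, and the subnets column is still empty at this point):

+--------------------------------------+-------------+---------+
| id                                   | name        | subnets |
+--------------------------------------+-------------+---------+
| fffcab74-8c59-43fc-8a6e-5d16bce3436b | ten01_net01 |         |
| 1c2d3e4f-1111-4222-8333-444455556666 | ten01_net02 |         |
| 7a8b9c0d-5555-4666-8777-888899990000 | ten01_net03 |         |
+--------------------------------------+-------------+---------+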
 
After successfully creating the networks, we now create the subnets required for each.
 
STEP 2: One subnet needs to be created (and attached) per network. Use the following three commands to create the subnets:
 
SUBNET  01:  
neutron subnet-create --name tenXX_subnet01 tenXX_net01 10.10.XX.0/24
--gateway 10.10.XX.100
 
SUBNET  02:  
neutron subnet-create --name tenXX_subnet02 tenXX_net02 20.20.XX.0/24
--gateway 20.20.XX.100
 
SUBNET  03:  
neutron subnet-create --name tenXX_subnet03 tenXX_net03 30.30.XX.0/24
--gateway 30.30.XX.100
 
Verify that the three subnets were created successfully and are assigned to the previously created networks by executing the following commands:
 
neutron subnet-list
neutron net-list
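You can also inspect a single network or subnet in detail to confirm the association, for example (XX again is your POD number):

neutron net-show tenXX_net01
neutron subnet-show tenXX_subnet01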
 
Before we continue with creating the two routers to interconnect the networks defined for the 3-tier architecture, we will verify the configuration pushed to the APIC-DC controller.
 

 
 

ACI  Verification  
Networks in OpenStack relate to bridge domains in ACI. To verify that all three bridge domains were created successfully, log in to the APIC controller (https://10.22.44.221 with the credentials provided), navigate to "Tenants" -> "All Tenants" and verify that the tenant "_openstack_tenantXX" has been created.

If you don't see your tenantXX on the first page of the "All Tenants" window, you can do a quick search by typing your tenant name in the search box.
 

 
 
Double-click your tenant name "_openstack_tenantXX" on the "All Tenants" page to enter the tenant. Expand "Networking" -> "Bridge Domains". You should see three bridge domains corresponding to the three networks created by OpenStack. Expand each bridge domain and verify the network and subnets.
 

 
 

 
 

Three bridge domains should be shown (tenXX_net01 to tenXX_net03). If you click on one of the bridge domains and expand it, you can verify that the subnet is correctly configured.
 

 
 
After verifying that the network and subnet configuration is correct, make sure the physical domain has been properly associated with the EPGs. Navigate to "Tenants" -> "_openstack_tenantXX" -> "Application Profiles" -> "EPG tenXX_net03" -> "Domain". You can verify this for the EPGs created for all three networks.
 

 
 

 
 

Summary
In this lab section we guided you through creating networks and attaching subnets. We also showed how the APIC plug-in drives the configuration on the APIC-DC and how OpenStack networks and subnets relate to ACI concepts.

Create  Routers  between  Networks  


 
To configure the 3-tier environment we have to manually define the router ports, to avoid IP address clashes with the default gateways of the subnets used.
The following procedure is used to configure the required ports and the router between the WEB and APP networks:
1. Create a port in the Web network tenXX_net01 with fixed IP 10.10.XX.1
2. Create a port in the App network tenXX_net02 with fixed IP 20.20.XX.1
3. Create a router tenXX_rtr_net01_net02
4. Add the two ports created in steps 1 and 2 to the router created in step 3
The following procedure is used to configure the required ports and the router between the APP and DB networks:
1. Create a port in the App network tenXX_net02 with fixed IP 20.20.XX.2
2. Create a port in the DB network tenXX_net03 with fixed IP 30.30.XX.1
3. Create a router tenXX_rtr_net02_net03
4. Add the two ports to the router
The following sections guide you through creating the ports and routers and attaching the ports accordingly.

OpenStack  Configuration  and  Verification  

Creating  Ports  and  Router  between  WEB  and  APP  Network  


STEP 1: Log in to the controller using the provided credentials for user "cisco".
STEP 2: Make sure your environment is configured properly by sourcing the correct keystonerc_tenantXX file. Refer to the section "Prepare OpenStack CLI Environment" for details.
STEP 3: On the CLI, execute the following commands in the order given:
neutron port-create tenXX_net01 --fixed-ip ip_address=10.10.XX.1
neutron port-create tenXX_net02 --fixed-ip ip_address=20.20.XX.1
neutron router-create tenXX_rtr_net01_net02
neutron router-interface-add tenXX_rtr_net01_net02 port=<port-id for tenXX_net01>
neutron router-interface-add tenXX_rtr_net01_net02 port=<port-id for tenXX_net02>
 
To find the port ID of the ports you created, use the following command:
 
neutron port-list
 
This  will  give  you  an  output  similar  to  the  one  shown  below:  
 

 
 

[root@aci-controller ~(keystone_admin)]# neutron port-list


+--------------------------------------+------+-------------------+----------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+----------------------------------------------------------+
| 07c99e95-bd23-4ab7-b0fd-75571bdc18c4 | | fa:16:3e:22:80:cb | {"subnet_id": "1ea1cd04-7a45-4a5a-a204-a69c281498f1",
"ip_address": "100.1.1.2"} |
| 09c367e2-2321-427a-9c22-0723a580169d | | fa:16:3e:b0:6e:2d | {"subnet_id": "d8a9ee04-934b-4d86-889f-8a991461804b",
"ip_address": "40.40.2.2"} |
| 0f712968-52c7-47d9-9e7e-e11f49885373 | | fa:16:3e:e6:32:21 | {"subnet_id": "12a01932-0f19-4456-b615-b3769f35eb48",
"ip_address": "20.20.30.2"} |
| f54ffb8b-c7b7-47b3-9790-fb727acf994d | | fa:16:3e:b8:00:63 | {"subnet_id": "27c08b11-4481-4780-970d-c6cbfdb67250",
"ip_address": "30.30.30.101"} |
| fd8f8d50-6fb7-4e54-9f3d-4552c9f24187 | | fa:16:3e:11:db:43 | {"subnet_id": "12a01932-0f19-4456-b615-b3769f35eb48",
"ip_address": "20.20.30.101"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------+

 
The ID in the first field identifies your port. Use this ID for the "neutron router-interface-add" command.

STEP 4: After creating the ports and the router and adding the interfaces to the router, you can verify the configuration using the following commands:
 
neutron port-list
neutron router-list
neutron net-list
neutron subnet-list
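In addition, "neutron router-port-list" shows only the ports attached to a given router, which is a quick way to confirm that both interfaces were added, for example:

neutron router-port-list tenXX_rtr_net01_net02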
 

Creating  Ports  and  Router  between  APP  and  DB  Network  


STEP 1: On the CLI, execute the following commands in the order given:
neutron port-create tenXX_net02 --fixed-ip ip_address=20.20.XX.2
neutron port-create tenXX_net03 --fixed-ip ip_address=30.30.XX.1
neutron router-create tenXX_rtr_net02_net03
neutron router-interface-add tenXX_rtr_net02_net03 port=<port-id for tenXX_net02>
neutron router-interface-add tenXX_rtr_net02_net03 port=<port-id for tenXX_net03>

 
 

ACI  Verification    
After creating the two routers to interconnect the three networks, you can now verify the configuration applied to the APIC controller. A router in OpenStack corresponds to a contract in ACI. The contracts are displayed under "Tenants" -> "_openstack_tenantXX" -> "Application Profiles" -> "Application EPGs".

Click on one of the contracts to verify that the filter "os_filter" has been created.
 

 
 
A filter is used to allow traffic. You can verify that the filter was created successfully by navigating to "Tenants" -> "All Tenants" -> "common" -> "Security Policies" -> "Filters".
 

 
 
The   filter   is   created   in   the   “common”   tenant   and   therefore   can   be   shared   by   all   other  
tenants.  

 
 

Launch and Manage 3-Tier Application Instances


Instantiate  three  VMs  (Web,  App,  and  DB)  
OpenStack  Configuration  and  Verification  
In this section we will instantiate the three VMs for our 3-tier environment. To spin up the VMs, nova is used as follows:

STEP 1: Log in to the controller using the provided credentials for user "cisco".
STEP 2: Make sure your environment is configured properly by sourcing the correct "keystonerc_tenantXX" file. Refer to the section "Prepare OpenStack CLI Environment" for details.
STEP 3: To start a VM use the "nova boot" command. Before you can execute this command you have to collect some details, such as the image ID and the network ID:
• "nova image-list": This command shows the images available to nova on your controller node. Here, select the CirrOS image, which is a very small Linux distribution that can be used for testing.
 
[cisco@aci-controller tenants(keystone_tenant01)]$ nova image-list
+--------------------------------------+--------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------------+--------+--------+
| 2e166325-b460-4397-a87a-77c63fcb4fb9 | CirrOS 0.3.1 | ACTIVE | |
+--------------------------------------+--------------+--------+--------+
[cisco@aci-controller tenants(keystone_tenant01)]$

 
• "neutron net-list | grep tenXX_net01": The "neutron net-list" command shows all networks created in Neutron. Here we filter the output to show only the tenXX_net01 network used for this VM.
 
cisco@aci-controller tenants(keystone_tenant01)]$ neutron net-list | grep
ten01_net01
| fffcab74-8c59-43fc-8a6e-5d16bce3436b | ten01_net01 | c036107f-604f-4545-
af09-67ce73efe806 10.10.32.0/24 |

 
STEP 4: Now that you have the required IDs, execute the following command to spin up the first VM:

nova boot --image <image id> --flavor m1.tiny --nic net-id=<net id> <VM-Name>

[cisco@aci-controller tenants(keystone_tenant32)]$ nova boot --image 2e166325-b460-4397-a87a-


77c63fcb4fb9 --flavor m1.tiny --nic net-id=fffcab74-8c59-43fc-8a6e-5d16bce3436b web
+--------------------------------------+-----------------------------------------------------+
| Property | Value |
+--------------------------------------+-----------------------------------------------------+

 
 

| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000014 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | uMEaBH85ccbt |
| config_drive | |
| created | 2015-01-14T02:16:22Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 6e277507-70e4-410d-acd1-c7809b13bb51 |
| image | CirrOS 0.3.1 (2e166325-b460-4397-a87a-77c63fcb4fb9) |
| key_name | - |
| metadata | {} |
| name | web |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | 83e4ba34bd564e57973280f9213e392b |
| updated | 2015-01-14T02:16:23Z |
| user_id | 857053ab32c14cd792dcc7c971ad9086 |
+--------------------------------------+-----------------------------------------------------+
[cisco@aci-controller tenants(keystone_tenant32)]$

STEP 5: Verify that the VM was created successfully by executing "nova list".
STEP 6: Repeat steps 1 to 5 using the details below for the APP and DB VMs:

VM Name   Flavor    Image ID                Network ID
APP       m1.tiny   ID for CirrOS image     ID for tenXX_net02
DB        m1.tiny   ID for CirrOS image     ID for tenXX_net03
 
Use "nova image-list" and "neutron net-list | grep tenXX_net0Y" to find the image ID and network ID.
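The resulting boot commands for the two remaining VMs follow the same pattern as for the "web" instance; the values in angle brackets below are placeholders for the IDs you looked up:

nova boot --image <cirros image id> --flavor m1.tiny --nic net-id=<tenXX_net02 id> app
nova boot --image <cirros image id> --flavor m1.tiny --nic net-id=<tenXX_net03 id> db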
 
To conclude this step, execute "nova list". You should now see three VMs (web, app, db), similar to the output below:
 
[cisco@aci-controller tenants(keystone_tenant32)]$ nova list
+--------------------------------------+------+--------+------------+-------------+-----------
------------+
| ID | Name | Status | Task State | Power State | Networks
|
+--------------------------------------+------+--------+------------+-------------+-----------
------------+
| 6e638294-c4e2-4fbd-be8a-e530aeb6b37a | app | ACTIVE | - | Running |
t32_net2=20.20.32.102 |
| 6c41ffa3-c96c-4be0-96c5-e1f115a2bf60 | db | ACTIVE | - | Running |
t32_net3=30.30.32.11 |
| 6e277507-70e4-410d-acd1-c7809b13bb51 | web | ACTIVE | - | Running |
t32_net1=10.10.32.103 |
+--------------------------------------+------+--------+------------+-------------+-----------
------------+

 
 

[cisco@aci-controller tenants(keystone_tenant32)]$

 
Log in to the Dashboard to see that the three VM instances have been created and are in Active status.

 
 
Verify on which compute node each instance is running.
Click "Admin" -> "System Panel" -> "Instances":

 
 

ACI  Verification    
Now log in to the APIC controller at https://10.22.44.221 with "tenantXX/tenantXX", where XX is your pod number. Verify that the endpoints are learned for each EPG.

Click "Tenants" -> "_openstack_tenantXX" -> "Application Profiles" -> "3-Tier" -> "Application EPGs" -> "EPG-tXX-net1"; in the right panel, click "OPERATIONAL". You will see that two endpoints are learned via the ports.

In this example, the endpoint with IP 10.10.32.103 is the "web" instance, which is learned from port 1/17 on TOR1. The IP address 10.10.32.102 belongs to the OpenStack DHCP agent for this network and is learned from port 1/18 on TOR1, which is connected to the OpenStack controller. The encap VLAN ID matches the VLAN ID assigned to this network by OVS.
 

 
Verify 3-Tier Architecture
To verify that our 3-tier environment has been configured correctly, we will send pings between the three VMs.

Log in to the dashboard and select the instances via "Project" -> "Compute" -> "Instances". Select one of the VMs and click on the instance name. You will be directed to the instance details.

Familiarize yourself with the configuration of the instance you previously configured using the CLI. This overview provides some important details such as the VM's IP, its security parameters and related compute and storage details.
 
Continue  by  clicking  on  “Console”.  
 
You  should  see  something  similar  to:    
 

 
 

 
 
Log in to the CirrOS-based VM using its default credentials (cirros/cubswin:)).
 
NOTE: If you are having difficulties typing in the console, click once outside the console area. This will activate the console and you should now be able to type.
 
After a successful login, verify that the VM has received an IP address from the DHCP agent and can ping its direct gateway. Below are examples for VM02 and "tenXX_net02". If you logged in to VM01 you should see similar output, just with a different subnet range (10.10.XX.0/24).
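A minimal check from the VM console could look like the following (shown here for a VM in tenXX_net02; the interface name eth0 and the gateway address follow the subnet plan defined earlier):

$ ip addr show eth0
$ ping -c 4 20.20.XX.100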
 

 
 
To verify that the three VMs can talk to each other, try to ping the other VMs' IP addresses. If unsure, log in to the other VMs and verify the IP addresses assigned.
 

 
 

 
 
Ping the "app" instance at 20.20.32.102.
 

 
 
On the "web" instance, verify that the MAC address for the gateway 10.10.32.100 is the fabric-wide default gateway MAC for this bridge domain (BD).
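A simple way to check this from the CirrOS console is to ping the gateway once and then display the ARP table (a sketch; the gateway entry only appears after traffic has been sent to it):

$ ping -c 1 10.10.32.100
$ arp -n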
 

 
 
You can verify this on the APIC-DC controller in the "Tenants" -> "_openstack_tenantXX" -> "Networking" -> "Bridge Domains" -> "t32_net1" section:

 
 

 
 
Similarly, log in to the consoles of "app" and "db" to verify their connectivity.

 
 

Inter-Tenant Communication


 
In this section we will guide you through the configuration necessary to enable inter-tenant communication on the APIC-DC. You should have a functioning environment in both OpenStack and the APIC-DC before starting this task.

You will work together with your peer, with one of you acting as TenantA and the other as TenantB.

The tasks are broken down between TenantA and TenantB.
 
TenantA tasks:
1. In OpenStack, add security group rules to allow ingress/egress ICMP
2. In APIC, create one contract that allows ping
3. In APIC, export the contract to TenantB
4. In APIC, make the EPG for the "web" instance provide the "allow_ping" contract

TenantB tasks:
1. In APIC, verify the imported contract
2. In APIC, select the EPG for the "web" instance and add a consumed contract interface, selecting the imported contract
3. From the "web" instance, ping the "web" instance of TenantA
 

 
 

Tenant  A  Configuration  Tasks  


 
STEP 1: Log in to the OpenStack Horizon GUI with the credentials provided for your lab environment.
STEP 2: Navigate to "Project" -> "Compute" -> "Access & Security" and click on "Manage Rules".

STEP 3: Click "+Add Rule" to add a new ICMP rule. For this example we define a "Custom ICMP Rule" for both ingress and egress. You could also define an "All ICMP" rule instead. An example rule configuration should look similar to the figure below:

 
 
If you choose to create a "Custom ICMP Rule", make sure you add one for both directions (ingress and egress). "-1" acts as a wildcard for "Type" and "Code", allowing all ICMP packet types and codes.
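If you prefer the CLI over Horizon, equivalent rules can be added with the neutron client (a sketch, assuming the rules are added to the tenant's "default" security group):

neutron security-group-rule-create --direction ingress --protocol icmp default
neutron security-group-rule-create --direction egress --protocol icmp default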

 
 

STEP 4: After successfully creating the ICMP rules, you should now have two additional rule entries in the default security group.

 
 
STEP 5: Even though we allowed ICMP ingress and egress for TenantA's VMs, they should not be able to ping TenantB's VMs yet.

 
 
Here, Tenant 32 is not able to ping Tenant 30's VM.
 
To enable inter-tenant communication on the ACI side, a contract is required that permits pings between TenantA and TenantB.
 
STEP 6: To create the "allow_tenXX_ping" contract, go to TenantA, click "Security Policies" -> "Contracts", right-click and select "Create Contract".

 
 

 
 
On the "Create Contract" screen:
• In the "Name" box, type "allow_tenXX_ping", where XX is the Pod number of TenantB
• In the "Scope" drop-down box, select "Global"
• In the "Subjects" area, click "+" to add a subject/filter
 

 
 
On the "Create Contract Subject" screen:
• In the Name box, type allow_ping
• In the Filter Chain window, click "+"
• In the drop-down window, select common/icmp
• Click Update
• Click OK
 

 
 

 
 
The "allow_tenXX_ping" contract is now created.

 
 
Right-click "Contracts" and select "Export Contract".

 
 

 
 
 
On the "EXPORT CONTRACT" screen:
• In the Name box, type "export_to_tenXX"
• In the Global Contract drop-down window, select "allow_tenXX_ping"
• In the Tenant drop-down window, select "_openstack_tenantXX" for TenantB
• Click Submit

 
 
 
Still in TenantA, add the provided contract:
• Under Application EPGs, expand the EPG for the "web" instance, tenXX_net1
• Right-click "Contracts" and select "Add Provided Contract"
 

 
 

 
 
On the "ADD PROVIDED CONTRACT" screen:
• In the Contract drop-down window, select the contract "allow_tenXX_ping"

 
 

 
 

Tenant  B  Configuration  
 
STEP 1: Log in to the APIC controller with TenantB's credentials and select the Tenants tab.
STEP 2: Verify that the contract was imported from TenantA.

 
STEP 3: TenantB can now consume the contract provided by TenantA. To do so, perform the following:

Expand "Application Profiles" -> "3-Tiers" -> "Application EPGs" -> "EPG tenXX_net1", which is the EPG for the "web" instance. Right-click the EPG name and select "Add Consumed Contract Interface".

 
 
On the "ADD CONSUMED CONTRACT INTERFACE" screen:
• In the "Contract Interface" drop-down window, select the imported contract "allow_tenXX_ping"
• Click "Submit"

 
 

 
Now  verify  that  TenantB’s  “web”  instance  can  ping  TenantA’s  “web”  instance.    
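From the console of TenantB's "web" instance, the check is simply (substitute the IP address of TenantA's "web" instance):

$ ping -c 4 <TenantA web instance IP>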
 

 
Summary  
The Cisco APIC is the unifying point of automation and management for the Application Centric Infrastructure (ACI) fabric. It optimizes the application lifecycle for scale and performance and supports flexible application provisioning across physical and virtual resources. Integrating OpenStack with ACI provides a solution that enables next-generation cloud deployments, driving business agility, lowering operational costs and avoiding vendor lock-in.
 
 
 
 
 
