STEP 3: Verify the VLAN pool has been created and associated with the physical domain for the OpenStack servers.
STEP 4: Verify the Attachable Access Entity Profile is created and associated with the Physical Domain “openstack”.
STEP 5: Verify the Interface Profile created for the leaf ports connected to the OpenStack nodes.
The interface profile “_openstack_pprofile-101” is created for the controller node and compute node 1, which are connected to TOR1 on port 1/18 and port 1/17, respectively. The interface profile “_openstack_pprofile-102” is created for compute node 2, which is connected to TOR2 on port 1/17.
Verify the Interface Policy Group created for openstack is associated with the openstack Attachable Entity Profile.
Verify the switch profile is created for each leaf switch connecting to an OpenStack compute node.
STEP 6: Verify the shared network is created in Tenant Common by following the steps below:
1. Select Tenants.
2. Select the “Common” Tenant.
3. Expand Private Networks; the private network “_openstack_shared” is created.
4. Expand the filters; these are default filters.
Create 3 Tenant Networks
OpenStack Configuration and Verification
The neutron CLI environment is used to set up three networks. To create a network, you first define the network itself and then define a subnet that gets attached to the network. The following commands will be used in this section:
neutron net-create
neutron subnet-create
neutron net-list
neutron subnet-list
To create the three networks and their attached subnets, follow these steps:
STEP 1: First, create all three networks as follows:
neutron net-create tenXX_net01
neutron net-create tenXX_net02
neutron net-create tenXX_net03
XX defines the LAB POD you have been assigned.
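Each network also needs a subnet attached to it. The exact address ranges for your pod come from the lab handout; purely as a sketch (the 10.10.XX.0/24, 20.20.XX.0/24, and 30.30.XX.0/24 ranges and the subnet names below are assumptions based on the sample outputs later in this guide), the subnet creation would look like:
neutron subnet-create tenXX_net01 10.10.XX.0/24 --name tenXX_subnet01
neutron subnet-create tenXX_net02 20.20.XX.0/24 --name tenXX_subnet02
neutron subnet-create tenXX_net03 30.30.XX.0/24 --name tenXX_subnet03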
ACI Verification
Networks in OpenStack relate to Bridge Domains in ACI. To verify all three Bridge Domains were created successfully, log in to the APIC controller (https://10.22.44.221, with the credentials provided), navigate to “Tenants” -> “All Tenants”, and verify the tenant “_openstack_tenantXX” is created. If you don’t see your tenantXX on the first page of the “All Tenants” window, you can do a quick search by typing your tenant name in the search box.
Double-click your tenant name “_openstack_tenantXX” on the “All Tenants” page to go into the tenant domain. Expand “Networking” -> “Bridge Domains”. You should see three bridge domains corresponding to the three networks created by OpenStack. Expand each bridge domain and verify the network and subnets.
After verifying the network and subnet configuration is correct, make sure the physical domain has been properly associated with the EPGs. Navigate to “Tenants” -> “_openstack_tenantXX” -> “Application Profiles” -> “EPG tenXX_net03” -> “Domain”. You can verify this for the EPGs created for all three networks.
Summary
In this lab section we guided you through creating networks and attaching subnets. We also showed how the APIC plug-in drives the configuration on APIC-DC and the relationship between OpenStack networks and subnets and ACI concepts.
The ID in the first field identifies your port. Use this ID for the “neutron router-interface-add” command.
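As a reference, the command shape is as follows (the router and port identifiers are placeholders, not values from the lab handout):
neutron router-interface-add <router-name-or-id> port=<port-id>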
STEP 4: After creating the ports, the router, and the interfaces on the router, you can verify the configuration using the following commands:
neutron port-list
neutron router-list
neutron net-list
neutron subnet-list
ACI Verification
After creating the two routers to interconnect the three networks, you can now verify the configuration applied to the APIC controller. A router in OpenStack corresponds to a contract in ACI.
The contracts are displayed under “Tenant” -> “_openstack_tenantXX” -> “Application Profiles” -> “Application EPGs”. Click on one of the contracts to verify that the filter “os_filter” has been created.
A filter is used to allow traffic. You can verify the filter was created successfully by going to “Tenant” -> “All Tenants” -> “Common” -> “Security Policies” -> “Filters”. The filter is created in the “common” tenant and can therefore be shared by all other tenants.
• “neutron net-list | grep tenXX_net01”: The “neutron net-list” command shows you all networks created in Neutron. Here we filter the output to show only the tenXX_net01 network we use for this VM.
[cisco@aci-controller tenants(keystone_tenant01)]$ neutron net-list | grep ten01_net01
| fffcab74-8c59-43fc-8a6e-5d16bce3436b | ten01_net01 | c036107f-604f-4545-af09-67ce73efe806 10.10.32.0/24 |
STEP 4: Now that you have the required IDs, you can execute the following command to spin up the first VM:
nova boot --image <image id> --flavor m1.tiny --nic net-id=<net id> <VM-Name>
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000014 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | uMEaBH85ccbt |
| config_drive | |
| created | 2015-01-14T02:16:22Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 6e277507-70e4-410d-acd1-c7809b13bb51 |
| image | CirrOS 0.3.1 (2e166325-b460-4397-a87a-77c63fcb4fb9) |
| key_name | - |
| metadata | {} |
| name | web |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | 83e4ba34bd564e57973280f9213e392b |
| updated | 2015-01-14T02:16:23Z |
| user_id | 857053ab32c14cd792dcc7c971ad9086 |
+--------------------------------------+-----------------------------------------------------+
[cisco@aci-controller tenants(keystone_tenant32)]$
STEP 5: Verify that the VM got created successfully by executing “nova list”.
STEP 6: Repeat Steps 1 to 5 using the details below for the APP and DB VM:
VM Name   Flavor    Image-ID              Network-ID
APP       m1.tiny   ID for Cirros Image   ID for tenXX_net02
DB        m1.tiny   ID for Cirros Image   ID for tenXX_net03
Use “nova image-list” and “neutron net-list | grep tenXX_net0Y” to figure out the Image-ID and Network-ID.
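Putting the IDs together, a sketch of the two boot commands looks like this (the values in angle brackets are placeholders for the IDs you just looked up):
nova boot --image <cirros image id> --flavor m1.tiny --nic net-id=<tenXX_net02 id> app
nova boot --image <cirros image id> --flavor m1.tiny --nic net-id=<tenXX_net03 id> db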
To conclude this step, execute “nova list”. You should now see three VMs (web, app, db) similar to the output below:
[cisco@aci-controller tenants(keystone_tenant32)]$ nova list
+--------------------------------------+------+--------+------------+-------------+-----------------------+
| ID                                   | Name | Status | Task State | Power State | Networks              |
+--------------------------------------+------+--------+------------+-------------+-----------------------+
| 6e638294-c4e2-4fbd-be8a-e530aeb6b37a | app  | ACTIVE | -          | Running     | t32_net2=20.20.32.102 |
| 6c41ffa3-c96c-4be0-96c5-e1f115a2bf60 | db   | ACTIVE | -          | Running     | t32_net3=30.30.32.11  |
| 6e277507-70e4-410d-acd1-c7809b13bb51 | web  | ACTIVE | -          | Running     | t32_net1=10.10.32.103 |
+--------------------------------------+------+--------+------------+-------------+-----------------------+
[cisco@aci-controller tenants(keystone_tenant32)]$
Log in to the Dashboard to see that the three VM instances are created and in active status. Verify on which compute node each instance is running. Click “Admin” -> “System Panel” -> “Instances”:
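If you prefer the CLI over the dashboard, the hosting compute node is also visible in the OS-EXT-SRV-ATTR:host field of “nova show”; for example, using the “web” instance from the earlier output:
nova show web | grep OS-EXT-SRV-ATTR:host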
ACI Verification
Now log in to the APIC controller at https://10.22.44.221 with “tenantXX/tenantXX”, where XX is your pod number. Verify the end points are learned for each EPG.
Click “Tenant” -> “_openstack_tenantXX” -> “Application Profiles” -> “3-Tier” -> “Application EPGs” -> “EPG-tXX-net1”. In the right panel, click “OPERATIONAL”; you will see two end points learned via the ports. In this example, the end point with IP 10.10.32.103 is the “web” instance, which is learned from port 1/17 on TOR1.
The IP address 10.10.32.102 belongs to the OpenStack DHCP agent for this network and is learned from port 1/18 on TOR1, which is connected to the OpenStack controller. The Encap VLAN ID matches the VLAN ID assigned to this network by OVS.
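If you want to cross-check the tag on the compute node itself, one option (not part of the lab handout, shown here only as a sketch) is to inspect the OVS integration bridge:
sudo ovs-vsctl show    # look for the "tag:" value on the instance's port on br-int
The tag shown for the instance's port should match the Encap VLAN displayed in APIC.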
Verify 3-Tier Architecture
To verify our 3-Tier environment has been configured correctly, we will send pings between the three VMs. Log in to the dashboard and select Instances via “Project” -> “Compute” -> “Instances”.
Select one of the VMs and click on the instance name. You will be directed to the Instance Details. Make yourself familiar with the configuration of the instance you previously configured using the CLI. This overview provides some important details such as the VM’s IP, its security parameters, and related compute and storage details.
Continue by clicking on “Console”. You should see something similar to:
Log in to the cirros-based VM using its default credentials cirros/cubswin:).
NOTE: If you are having difficulties typing in the console, click once outside the console area. This activates the console, and you should now be able to type.
After a successful login, verify that the VM has received an IP address from the DHCP agent and can ping its direct gateway.
Below are examples for VM02 and “tenXX_net02”. If you logged in to VM01 you should see similar output, just with a different subnet range (10.10.10.0/24).
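Inside the cirros VM these checks are plain shell commands; a minimal sketch, assuming the tenXX_net02 gateway follows the same .100 pattern as the 10.10.32.100 gateway mentioned below:
ifconfig eth0              # confirm the DHCP-assigned address
ping -c 3 20.20.XX.100     # ping the default gateway (address is an assumed example)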
To verify the three VMs can talk to each other, try to ping the other VMs’ IP addresses. If unsure, log in to the other VMs and verify the IP addresses assigned. Ping the “app” instance at 20.20.32.102.
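For example, from the console of the “web” instance:
ping -c 3 20.20.32.102     # "app" instance address from the sample output above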
On the “web” instance, verify that the MAC address for the gateway 10.10.32.100 is the fabric-wide default MAC for this BD.
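A quick way to check this from the VM’s console is the ARP cache; a minimal sketch (10.10.32.100 is the gateway from this example):
ping -c 1 10.10.32.100     # make sure an ARP entry for the gateway exists
arp -a                     # compare the gateway's MAC with the BD MAC shown in APIC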
You can verify that on the APIC-DC controller in the “Tenants” -> “_openstack_tenantXX” -> “Networking” -> “Bridge Domain” -> “t32_net1” section:
Similarly, log in to the consoles of “app” and “db” to verify their connectivity.
STEP 3: Click “+Add Rule” to add a new ICMP rule. For this example we define a “Custom ICMP Rule” for both Ingress and Egress. You could also define an “All ICMP” rule instead. An example rule configuration should look similar to the figure below:
If you choose to create a “Custom ICMP Rule”, make sure you add one for both directions (Ingress and Egress). “-1” can be seen as a wildcard for “Type” and “Code”, allowing all ICMP packet types and codes.
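If you prefer the neutron CLI over the dashboard, a roughly equivalent sketch (applied to the project’s “default” security group) would be:
neutron security-group-rule-create --protocol icmp --direction ingress default
neutron security-group-rule-create --protocol icmp --direction egress default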
STEP 4: After successfully creating the ICMP rules, you should now have two additional rule entries in the Default Security Group.
STEP 5: Even though we allowed ICMP ingress and egress for VMs of TenantA, they should not yet be able to ping TenantB’s VMs. Here, Tenant 32 is not able to ping Tenant 30’s VM. To enable inter-tenant communication on the ACI side, a contract is required that allows pings between TenantA and TenantB.
STEP 6: To create the “Allow_Ping” contract, go to TenantA, click “Security Policies” -> “Contracts”, right-click, and select “Create Contract”.
On the “Create Contract” screen:
• In the “Name” box, type in “allow_tenXX_ping”, where XX is the pod number of TenantB
• In the “Scope” drop-down box, select “Global”
• In the “Subjects” window area, click on “+” to add a subject/filter
On the “Create Contract Subject” screen:
• In the Name box, type allow_ping
• In the Filter Chain window, click “+”
• In the drop-down window, select common/icmp
• Click Update
• Click OK
Now the “allow_tenXX_ping” contract is created. Right-click on “Contract” and select “Export Contract”.
On the “EXPORT CONTRACT” screen:
• In the Name box, type “export_to_tenXX”
• In the Global Contract drop-down window, select “allow_tenXX_ping”
• In the Tenant drop-down window, select “_openstack_tenantXX” for TenantB
• Click Submit
Still in TenantA, add the “Provided Contract”:
• Under Application EPGs, expand the EPG for the “web” instance, tenXX_net1
• Right-click “Contract” and select “Add Provided Contract”
On the “ADD PROVIDED CONTRACT” screen:
• In the Contract drop-down window, select the contract “allow_tenXX_ping”
Tenant B Configuration
STEP 1: Log in to the APIC controller with TenantB’s credentials and select the Tenant tab.
STEP 2: Verify that the contract got imported from TenantA.
STEP 3: TenantB can now consume the contract provided by TenantA. To do that, expand “Application Profiles” -> “3-Tiers” -> “Application EPGs” -> “EPG tenXX_net1”, which is the EPG for the “web” instance. Right-click the EPG name and select “Add Consumed Contract Interface”.
On the “ADD CONSUMED CONTRACT INTERFACE” screen:
• In the “Contract Interface” drop-down window, select the imported contract “allow_tenXX_ping”
• Click “Submit”
Now verify that TenantB’s “web” instance can ping TenantA’s “web” instance.
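For example, from the console of TenantB’s “web” instance (10.10.32.103 is TenantA’s “web” address in the earlier sample output; substitute the address you verified for your pods):
ping -c 3 10.10.32.103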
Summary
The Cisco APIC is the unifying point of automation and management for the Application Centric Infrastructure (ACI) fabric; it optimizes the application lifecycle for scale and performance, and supports flexible application provisioning across physical and virtual resources. Integrating OpenStack with ACI provides a solution that enables next-generation cloud deployments, driving business agility, lowering operational costs, and avoiding vendor lock-in.