
Modular Layer 2 In OpenStack Neutron
Robert Kukura, Red Hat
Kyle Mestery, Cisco
1. I’ve heard the Open vSwitch and Linuxbridge Neutron Plugins are being deprecated.
2. I’ve heard ML2 does some cool stuff!
3. I don’t know what ML2 is but want to learn about it and what it provides.
What is Modular Layer 2?
A new Neutron core plugin in Havana
• Modular
o Drivers for layer 2 network types and mechanisms -
interface with agents, hardware, controllers, ...
o Service plugins and their drivers for layer 3+
• Works with existing L2 agents
o openvswitch
o linuxbridge
o hyperv
• Deprecates existing monolithic plugins
o openvswitch
o linuxbridge
Motivations For a Modular Layer 2 Plugin
Before Modular Layer 2 ...
[Diagram: each Neutron Server runs exactly one monolithic core plugin: the Open vSwitch Plugin OR the Linuxbridge Plugin OR another vendor plugin.]
Before Modular Layer 2 ...
[Diagram: a developer next to the Neutron Server and a "Vendor X Plugin" box thinks: "I want to write a Neutron Plugin. But I have to duplicate a lot of DB, segmentation, etc. work. What a pain. :("]
ML2 Use Cases
• Replace existing monolithic plugins
o Eliminate redundant code
o Reduce development & maintenance effort
• New features
o Top-of-Rack switch control
o Avoid tunnel flooding via L2 population
o Many more to come...
• Heterogeneous deployments
o Specialized hypervisor nodes with distinct network
mechanisms
o Integrate *aaS appliances
o Roll new technologies into existing deployments
Modular Layer 2 Architecture
The Modular Layer 2 (ML2) Plugin is a
framework allowing OpenStack Neutron to
simultaneously utilize the variety of layer 2
networking technologies found in complex
real-world data centers.
What’s Similar?
ML2 is functionally a superset of the monolithic
openvswitch, linuxbridge, and hyperv plugins:
• Based on NeutronDBPluginV2
• Models networks in terms of provider attributes
• RPC interface to L2 agents
• Extension APIs
What’s Different?
ML2 introduces several innovations to achieve
its goals:
• Cleanly separates management of network types from
the mechanisms for accessing those networks
o Makes types and mechanisms pluggable via drivers (see the registration sketch after this list)
o Allows multiple mechanism drivers to access same
network simultaneously
o Optional features packaged as mechanism drivers
• Supports multi-segment networks
• Flexible port binding
• L3 router extension integrated as a service plugin
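
Type and mechanism drivers are discovered through stevedore entry points, so adding a driver is mostly a packaging exercise. A sketch of the relevant setup.cfg fragment: the vlan entry mirrors the in-tree driver, while the "vendor_x" mechanism driver is invented for the example:

[entry_points]
neutron.ml2.type_drivers =
    vlan = neutron.plugins.ml2.drivers.type_vlan:VlanTypeDriver
neutron.ml2.mechanism_drivers =
    vendor_x = vendor_x.ml2.mech_driver:VendorXMechanismDriver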
ML2 Architecture Diagram
[Diagram: inside the Neutron Server, the ML2 Plugin sits behind the API Extensions and contains two driver managers. The Type Manager loads the GRE, VLAN, and VXLAN TypeDrivers; the Mechanism Manager loads the Arista, Cisco Nexus, Hyper-V, L2 Population, Linuxbridge, Open vSwitch, and Tail-f NCS MechanismDrivers.]
Multi-Segment Networks
[Diagram: one network spanning three bridged segments: VXLAN 123567, VLAN 37 on physnet1, and VLAN 413 on physnet2, with VM 1, VM 2, and VM 3 attached.]
● Created via multi-provider API extension (see the request sketch after this list)
● Segments bridged administratively (for now)
● Ports associated with network, not specific segment
● Ports bound automatically to segment with connectivity
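
As an illustration, the network in the diagram could be created with a request along these lines (a sketch of the multi-provider extension's request body; segment values are taken from the diagram above):

POST /v2.0/networks
{
    "network": {
        "name": "multi-segment-net",
        "segments": [
            {"provider:network_type": "vxlan",
             "provider:segmentation_id": 123567},
            {"provider:network_type": "vlan",
             "provider:physical_network": "physnet1",
             "provider:segmentation_id": 37},
            {"provider:network_type": "vlan",
             "provider:physical_network": "physnet2",
             "provider:segmentation_id": 413}
        ]
    }
}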
Type Driver API

from abc import ABCMeta, abstractmethod

class TypeDriver(object):
    # Abstract interface implemented by each network type driver.
    __metaclass__ = ABCMeta

    @abstractmethod
    def get_type(self):
        pass

    @abstractmethod
    def initialize(self):
        pass

    @abstractmethod
    def validate_provider_segment(self, segment):
        pass

    @abstractmethod
    def reserve_provider_segment(self, session, segment):
        pass

    @abstractmethod
    def allocate_tenant_segment(self, session):
        pass

    @abstractmethod
    def release_segment(self, session, segment):
        pass
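
To make the contract concrete, here is a minimal sketch of a driver for unsegmented flat networks, assuming segment dicts keyed by 'network_type' / 'physical_network' / 'segmentation_id' (real drivers like the in-tree flat and VLAN drivers also validate physical networks and track allocations in the database):

class FlatTypeDriver(TypeDriver):
    # Sketch: flat networks need no segmentation ID and no allocation.

    def get_type(self):
        return 'flat'

    def initialize(self):
        # A real driver would load its configuration here.
        pass

    def validate_provider_segment(self, segment):
        if segment.get('segmentation_id') is not None:
            raise ValueError("flat segments carry no segmentation_id")

    def reserve_provider_segment(self, session, segment):
        # Nothing to reserve; flat segments are not tracked here.
        pass

    def allocate_tenant_segment(self, session):
        # Flat networks are not allocatable as tenant networks.
        return None

    def release_segment(self, session, segment):
        pass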
Mechanism Driver API

from abc import ABCMeta, abstractmethod, abstractproperty

class MechanismDriver(object):
    # Abstract interface implemented by each mechanism driver.
    # *_precommit calls run inside the DB transaction;
    # *_postcommit calls run after it commits.
    __metaclass__ = ABCMeta

    @abstractmethod
    def initialize(self):
        pass

    def create_network_precommit(self, context):
        pass
    def create_network_postcommit(self, context):
        pass
    def update_network_precommit(self, context):
        pass
    def update_network_postcommit(self, context):
        pass
    def delete_network_precommit(self, context):
        pass
    def delete_network_postcommit(self, context):
        pass

    def create_subnet_precommit(self, context):
        pass
    def create_subnet_postcommit(self, context):
        pass
    def update_subnet_precommit(self, context):
        pass
    def update_subnet_postcommit(self, context):
        pass
    def delete_subnet_precommit(self, context):
        pass
    def delete_subnet_postcommit(self, context):
        pass

    def create_port_precommit(self, context):
        pass
    def create_port_postcommit(self, context):
        pass
    def update_port_precommit(self, context):
        pass
    def update_port_postcommit(self, context):
        pass
    def delete_port_precommit(self, context):
        pass
    def delete_port_postcommit(self, context):
        pass

    def bind_port(self, context):
        pass
    def validate_port_binding(self, context):
        return False
    def unbind_port(self, context):
        pass

class NetworkContext(object):
    # Read-only access to the network being operated on.
    __metaclass__ = ABCMeta

    @abstractproperty
    def current(self):
        pass

    @abstractproperty
    def original(self):
        pass

    @abstractproperty
    def network_segments(self):
        pass
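
To illustrate the precommit/postcommit split, a hypothetical driver that notifies an external controller might look like this sketch (the controller client and its methods are invented for the example):

class SketchMechanismDriver(MechanismDriver):
    def initialize(self):
        # Hypothetical client for an external controller.
        self.client = ExternalControllerClient()

    def create_port_precommit(self, context):
        # Runs inside the DB transaction: validate or record state,
        # but do not make blocking external calls here.
        pass

    def create_port_postcommit(self, context):
        # Runs after commit: external calls are safe; raising an
        # exception here causes the plugin to clean up the port.
        self.client.notify_port_created(context.current)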
Port Binding
• Determines values for port’s binding:vif_type and binding:capabilities attributes and selects segment
• Occurs when binding:host_id set on port without existing valid binding
• ML2 plugin calls bind_port() on registered MechanismDrivers, in order listed in config, until one succeeds or all have been tried
• Driver determines if it can bind based on:
o context.network.network_segments
o context.current['binding:host_id']
o context.host_agents()
• For L2 agent drivers, binding requires live L2 agent on port’s host that:
o Supports the network_type of a segment of the port’s network
o Has a mapping for that segment’s physical_network if applicable
• If it can bind the port, driver calls context.set_binding() with binding details (see the sketch after the PortContext definition below)
• If no driver succeeds, port’s binding:vif_type set to BINDING_FAILED

class PortContext(object):
    # Read-only access to the port being bound, plus binding helpers.
    __metaclass__ = ABCMeta

    @abstractproperty
    def current(self):
        pass

    @abstractproperty
    def original(self):
        pass

    @abstractproperty
    def network(self):
        pass

    @abstractproperty
    def bound_segment(self):
        pass

    @abstractmethod
    def host_agents(self, agent_type):
        pass

    @abstractmethod
    def set_binding(self, segment_id, vif_type, cap_port_filter):
        pass
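
Putting these rules together, an agent-based driver's bind_port() looks roughly like this sketch (modeled loosely on the Havana Open vSwitch agent driver; the segment filtering and bridge-mapping checks are simplified):

def bind_port(self, context):
    # Try each segment of the port's network until one can be bound.
    for segment in context.network.network_segments:
        if segment['network_type'] not in ('local', 'flat', 'vlan'):
            continue
        # Look for a live L2 agent of the right type on the port's host.
        for agent in context.host_agents('Open vSwitch agent'):
            # A real driver also checks the agent's bridge mappings
            # against the segment's physical_network.
            if agent['alive']:
                context.set_binding(segment['id'],
                                    'ovs',  # binding:vif_type
                                    True)   # cap_port_filter
                return
    # Returning without set_binding() means this driver could not bind.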
Havana Features
Type Drivers in Havana
The following are supported segmentation
types in ML2 for the Havana release:
● local
● flat
● VLAN
● GRE
● VXLAN
Mechanism Drivers in Havana
The following ML2 MechanismDrivers exist in
Havana:
● Arista
● Cisco Nexus
● Hyper-V Agent
● L2 Population
● Linuxbridge Agent
● Open vSwitch Agent
● Tail-f NCS
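
Type and mechanism drivers are enabled in the ML2 configuration file; a representative Havana ml2_conf.ini fragment looks like this (values are illustrative):

[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vlan,gre
mechanism_drivers = openvswitch,cisco_nexus

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:199

[ml2_type_gre]
tunnel_id_ranges = 1:1000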
Before ML2 L2 Population MechanismDriver
“VM A” wants to talk to “VM G.” “VM A” sends a broadcast packet, which is replicated to the entire tunnel mesh.
[Diagram: Hosts 1-4 joined in a full tunnel mesh; the broadcast from VM A on Host 1 is flooded to every other host, reaching VMs B-I.]
With ML2 L2 Population MechanismDriver
The ARP request from “VM A” for “VM G” is intercepted and answered using a pre-populated neighbor entry. Traffic from “VM A” to “VM G” is encapsulated and sent to “Host 4” according to the bridge forwarding table entry.
[Diagram: Host 1 proxy-ARPs locally for VM G; a single tunnel carries the traffic from Host 1 straight to Host 4, with no flooding to Hosts 2 and 3.]
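
On the hosts this comes down to kernel entries the agent pre-populates; roughly like the following sketch for a VXLAN linuxbridge setup (device name, MAC, and IP addresses here are invented):

# Forwarding entry: VM G's MAC is reachable via the tunnel to Host 4
bridge fdb add 52:54:00:aa:bb:cc dev vxlan-1001 dst 192.0.2.14
# Neighbor entry: answer ARP for VM G locally instead of flooding
ip neigh replace 10.0.0.7 lladdr 52:54:00:aa:bb:cc dev vxlan-1001 nud permanent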
Modular Layer 2 Futures
ML2 Futures: Deprecation Items
• The future of the Open vSwitch and
Linuxbridge plugins
o These are planned for deprecation in Icehouse
o ML2 supports all their functionality
o ML2 works with the existing OVS and Linuxbridge agents
o No new features being added in Icehouse to OVS
and Linuxbridge plugins
• Migration Tool being developed
Plugin vs. ML2 MechanismDriver?
• Advantages of writing an ML2 Driver instead
of a new monolithic plugin
o Much less code to write (or clone) and maintain
o New neutron features supported as they are added
o Support for heterogeneous deployments

• Vendors integrating new plugins should consider an ML2 Driver instead
o Existing plugins may want to migrate to ML2 as well
ML2 With Current Agents
● Existing ML2 Plugin works with existing agents
● Separate agents for Linuxbridge, Open vSwitch, and Hyper-V
[Diagram: the Neutron Server (ML2 Plugin) reaches Hosts A-D over the API Network; Host A runs the Linuxbridge Agent, Host B the Hyper-V Agent, and Hosts C and D the Open vSwitch Agent.]
ML2 With Modular L2 Agent
● Future direction is to combine Open Source Agents
● Have a single agent which can support Linuxbridge and Open vSwitch
● Pluggable drivers for additional vSwitches, Infiniband, SR-IOV, ...
[Diagram: the Neutron Server (ML2 Plugin) reaches Hosts A-D over the API Network; each host runs the Modular Agent.]
ML2 Demo
What the Demo Will Show
● ML2 running with multiple MechanismDrivers
○ openvswitch
○ cisco_nexus
● Booting multiple VMs on multiple compute hosts
● Hosts are running Fedora
● Configuration of VLANs across both virtual and physical infrastructure
ML2 Demo Setup
[Diagram: two hosts attached to a Cisco Nexus Switch.
Host 1 runs nova api, nova compute, neutron server, the neutron ovs agent, neutron dhcp, and the neutron l3 agent; Host 2 runs nova compute and the neutron ovs agent. On each host, VM traffic flows through br-int and br-eth2 out eth2.
When VM1 boots on Host 1 and VM2 boots on Host 2, the VLAN is added on each VM’s VIF and also on the br-eth2 ports by the ML2 OVS MechanismDriver. The ML2 Cisco Nexus MechanismDriver trunks the VLAN on switch ports eth2/1 and eth2/2.
VM1 can ping VM2 ... we’ve successfully completed the standard network test.]
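
For reference, the demo boils down to roughly these commands (a sketch; image, flavor, and network names are illustrative):

# Create a tenant network (ML2 allocates a VLAN segment) and a subnet
neutron net-create demo-net
neutron subnet-create demo-net 10.0.0.0/24 --name demo-subnet
# Boot one VM per compute host on the network
nova boot --image fedora --flavor m1.small --nic net-id=<demo-net-id> vm1
nova boot --image fedora --flavor m1.small --nic net-id=<demo-net-id> vm2
# From vm1's console, ping vm2's fixed IP to verify connectivity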


Questions?
