
Cisco Application Policy Infrastructure Controller (APIC)

Application Centric Infrastructure Overview: Implement a Robust Transport Network for Dynamic Workloads

What You Will Learn
Application Centric Infrastructure (ACI) provides a robust transport network for today's dynamic workloads. ACI is built on a network fabric that combines time-tested protocols with new innovations to create a highly flexible, scalable, and resilient architecture of low-latency, high-bandwidth links. This fabric delivers a network that can support the most demanding and flexible data center environments.

Application Centric Infrastructure Fabric Overview

The ACI fabric is designed from the foundation to support emerging industry demands while maintaining a migration path for architectures already in place. The fabric is designed to support the industry move to management automation, programmatic policy, and dynamic workload-anywhere models. The ACI fabric accomplishes this with a combination of hardware, policy-based control systems, and software closely coupled to provide advantages not possible in other models.
The fabric consists of three major components: the Application Policy Infrastructure Controller, spine switches, and leaf switches.
These three components handle both the application of network policy and the delivery of packets. Figure 1 shows these three
components in the ACI fabric architecture.
Figure 1. ACI Fabric Architecture

In Figure 1, the fabric is designed in a leaf-and-spine architecture, with links connecting each leaf to each spine. This design enables linear scalability and robust multipathing within the fabric, optimized for the east-west traffic required by applications. No connections are created between leaf nodes or spine nodes because all nonlocal traffic flows from ingress leaf to egress leaf across a single spine switch. The only exceptions to this rule are certain failure scenarios.
In this architecture, the scalability of the fabric is limited only by the available ports on the spine; at least one port per leaf node is
required. Bandwidth scales linearly with the addition of spine switches. Also, each spine switch added creates another network path,
which is used to load-balance traffic on the fabric.

Fabric Management
The ACI fabric is designed from the foundation for programmability and simplified management. These capabilities are provided
by the APIC, which is a clustered network control system. The APIC itself exposes a northbound API through XML and
JavaScript Object Notation (JSON) and provides both a command-line interface (CLI) and GUI that use this API to manage the
fabric. The system also provides an open source southbound API, which allows third-party network service vendors to implement policy
control for supplied devices through the APIC.
The APIC is responsible for tasks from fabric activation and switch firmware management to network policy configuration and
instantiation. While the APIC acts as the centralized policy and network management engine for the fabric, it is completely removed
from the data path, including the forwarding topology. Therefore, the fabric can still forward traffic even when communication with the
APIC is lost. The APIC itself is delivered as an appliance, and it typically is run as three or more appliances for performance and
availability.
The design of the APIC is modeled on distributed computing to provide scalability and reliability that meets the needs of the data center
now and in the future. Rather than using an active-standby configuration, each node is always active, processing data and accepting
input. The fabric configuration data is sharded, or spread, across the appliances. Multiple copies are maintained for redundancy and
performance. Figure 2 shows this clustering and sharding behavior.
Figure 2. Application Policy Infrastructure Controller Clustering and Sharding
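The sharding behavior described above can be sketched in a few lines. This is an illustrative model only, not APIC code: the shard keys, appliance names, and placement function are invented for the example, which simply shows configuration data spread across a cluster with multiple copies of each shard.

```python
# Illustrative sketch (not APIC code): spreading configuration "shards"
# across a cluster of controller appliances, with multiple replicas of
# each shard for redundancy, as Figure 2 describes.
import hashlib

def shard_replicas(shard_key, cluster, copies=3):
    """Pick `copies` distinct appliances to hold a shard, by hashing the key."""
    start = int(hashlib.sha256(shard_key.encode()).hexdigest(), 16) % len(cluster)
    return [cluster[(start + i) % len(cluster)] for i in range(min(copies, len(cluster)))]

cluster = ["apic1", "apic2", "apic3"]
placement = {key: shard_replicas(key, cluster) for key in ["tenantA-policy", "fabric-topology"]}
for key, nodes in placement.items():
    print(key, "->", nodes)
```

Because every node is active, any appliance can serve a request for any shard it replicates; losing one appliance leaves the remaining copies intact.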

Applying Network Policy


The fabric is designed with application connectivity and policy at the core. This focus allows both traditional enterprise applications and
internally developed applications to run side by side on a network infrastructure designed to support them in a dynamic and scalable
way. The network configuration and logical topologies that traditionally have dictated application design are instead applied based on
application needs. This approach is accomplished through the ACI object model.
Within the APIC, software applications are defined logically using constructs that are application centric, rather than network centric.
For example, a group of physical and virtual web servers may be grouped in a single tier of a three-tier application. The communication
between these tiers and the policies that define that communication make up the complete application. Within the APIC, this complete
application definition is known as an Application Network Profile.
Application Network Profiles are defined based on the communication, security, and performance needs of the application. They are
then used by the APIC to push the logical topology and policy definitions down to stateless network hardware in the fabric. This
approach is the reverse of traditional architectures, in which VLANs, subnets, firewall rules, etc. dictate where and how an application
can run. Figure 3 shows this behavior in the ACI fabric.
Figure 3. Application Deployment in ACI Fabric
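The Application Network Profile idea can be illustrated as pure data: endpoint groups (tiers) plus the contracts that define allowed communication between them. The field names, tier names, and port numbers below are hypothetical, not the actual APIC object schema; the point is that connectivity follows the application definition rather than VLANs or subnets.

```python
# Hypothetical sketch of an Application Network Profile as pure data:
# endpoint groups (tiers) plus contracts defining allowed communication.
# Names and fields are illustrative, not the actual APIC schema.
app_profile = {
    "name": "three-tier-app",
    "endpoint_groups": ["web", "app", "db"],
    "contracts": [
        # consumer tier, provider tier, allowed service
        {"from": "web", "to": "app", "allow": "tcp/8080"},
        {"from": "app", "to": "db",  "allow": "tcp/1433"},
    ],
}

def is_allowed(profile, src, dst, service):
    """Communication is permitted only where an explicit contract exists."""
    return any(c["from"] == src and c["to"] == dst and c["allow"] == service
               for c in profile["contracts"])

print(is_allowed(app_profile, "web", "app", "tcp/8080"))  # True: contract exists
print(is_allowed(app_profile, "web", "db", "tcp/1433"))   # False: no direct contract
```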

Fabric Forwarding
The ACI fabric is designed for consistent low-latency forwarding across high-bandwidth links (40 Gbps, with 100-Gbps future
capability). Traffic with the source and destination on the same leaf is handled locally, and all other traffic travels from the ingress leaf
to the egress leaf through a single spine switch. Although this is a two-hop architecture from a physical perspective, it is a single Layer
3 hop because the fabric itself operates as a single Layer 3 switch. Figure 4 shows the basic forwarding within the fabric.
Figure 4. ACI Fabric Traffic Forwarding

Figure 4 shows two basic forwarding behaviors. In the first example, the traffic destination is on a different leaf than the source. In this
instance, a load-balancing algorithm chooses one of the spine switches to which to forward the packet. The spine then forwards the
packet to the destination egress leaf. Any spine can be chosen with consistent latency across the fabric as a whole, enabling extremely
efficient load balancing. The second example shows traffic in which the source and destination are on the same leaf. The traffic is
forwarded locally without the need to traverse the fabric.
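The two behaviors can be sketched as a simple function: same-leaf traffic stays local, and everything else is hashed onto one of the equally distant spines. This is an illustrative model assuming a generic flow hash, not the fabric's actual load-balancing algorithm.

```python
# Illustrative sketch of the forwarding choice Figure 4 shows: traffic
# between ports on the same leaf stays local; otherwise a flow hash picks
# one of the equally distant spine switches. Not actual fabric code.
import hashlib

def next_hop(src_leaf, dst_leaf, flow_id, spines):
    if src_leaf == dst_leaf:
        return "local"                 # same-leaf traffic never enters the fabric
    h = int(hashlib.md5(flow_id.encode()).hexdigest(), 16)
    return spines[h % len(spines)]     # any spine gives the same latency

spines = ["spine1", "spine2", "spine3", "spine4"]
print(next_hop("leaf1", "leaf1", "flowA", spines))  # local
print(next_hop("leaf1", "leaf2", "flowA", spines))  # one spine, stable per flow
```

Because every spine is equidistant, adding a spine adds both bandwidth and another equal-cost path, which is why load balancing scales linearly as the section above describes.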

Conclusion
The ACI fabric uses a unique coupling of hardware and software to provide a robust set of networking features that are exceptional in the industry. Through the use of hardware-aware overlays, policy-based connectivity, stateless network hardware, and an open ecosystem, ACI is built for the needs of both today's workloads and tomorrow's changing demands.

For More Information


Please visit http://www.cisco.com/go/aci.

What You Will Learn: Network Programmability with Cisco Application Centric Infrastructure

This document examines the programmability support on Cisco Application Centric Infrastructure (ACI). The Cisco ACI programmability model allows complete programmatic access to the application centric infrastructure. Cisco ACI provides read and write access, through standard Representational State Transfer (REST) APIs, to the underlying object model, which is a representation of every physical and logical attribute of the entire system. With this access, customers can integrate network deployment into management and monitoring tools and deploy new workloads programmatically.

Challenges with Current Approaches to Network Programmability


Most networks in use today were built on hardware with tightly coupled software intended to be managed and administered through the command-line interface (CLI). These systems worked well in a world of static network configurations, static workloads, and predictable, slower change rates for application scaling. As data center networks have been virtualized and begun moving to cloud and agile IT models, this model no longer works.
Therefore, vendors are working to layer programmability onto existing offerings and device operating systems. Although this approach increases capabilities, it is not an ideal method for incorporating programmability. This model creates management complexity by introducing an entirely new point of management, typically identified as a network controller, which tries to artificially map application and user policies to inflexible network constructs. Further, these network controllers and the models they expose are limited to network functions and cannot extend to support the rest of the infrastructure. True programmability needs to be incorporated at the foundation, not applied as an afterthought. The infrastructure components and the constructs they expose need to be designed with programmability at their foundation, using a model that developers can understand and use quickly.

Cisco ACI Programmability with Object-Oriented Data Model and REST APIs
Cisco has taken a foundational approach to building a programmable network infrastructure with the Cisco ACI solution. This infrastructure operates as a single system at the fabric level, controlled by the centralized Cisco Application Policy Infrastructure Controller (APIC). With this approach, the data center network as a whole is tied together cohesively and treated as an intelligent transport system for the applications that support the business. On the network devices that are part of this fabric, the core of the operating system has been written to support this system view and provide an architecture for programmability at the foundation.
Instead of opening up a subset of the network functionality through programmatic interfaces, as previous-generation Software-Defined Networking (SDN) solutions do, the entire infrastructure is opened up for programmatic access. This is achieved by providing access to the Cisco ACI object model: the model that represents the complete configuration and runtime state of every single software and hardware component in the entire infrastructure. Further, this object model is made available through standard REST interfaces, making it easy to access and manipulate the object model and, hence, the configuration and runtime state of the system.
At the top level, the Cisco ACI object model is based on promise theory, which provides a scalable control architecture, with autonomous objects responsible for implementing the desired state
changes provided by the controller cluster. This approach is more scalable than traditional top-down management systems, which require detailed knowledge of low-level configurations and the
current state. With promise theory, desired state changes are pushed down, and objects implement the changes, returning faults when required.
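The promise-theory pattern described here, desired state pushed down and faults returned, can be sketched in a few lines. The class, capability names, and fault format below are invented for illustration; the point is that the controller never needs low-level knowledge of each device.

```python
# Minimal sketch of the promise-theory pattern: the controller pushes
# desired state down; each autonomous object converges on it and reports
# a fault rather than requiring controller-side low-level knowledge.
# Illustrative only.
class ManagedObject:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities    # what this device can render
        self.state = {}
        self.faults = []

    def apply(self, desired):
        """Converge on desired state; record a fault for anything unsupported."""
        for key, value in desired.items():
            if key in self.capabilities:
                self.state[key] = value
            else:
                self.faults.append(f"{self.name}: cannot render '{key}'")

leaf = ManagedObject("leaf1", capabilities={"vlan", "acl"})
leaf.apply({"vlan": 100, "qos": "gold"})
print(leaf.state)   # {'vlan': 100}
print(leaf.faults)  # ["leaf1: cannot render 'qos'"]
```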
Beneath this high-level concept is the core of Cisco ACI programmability: the object model. The model can be divided into two major parts: logical and physical. Model-based frameworks provide
an elegant way to represent data. The Cisco ACI model provides comprehensive access to the underlying information model, providing policy abstraction, physical models, and debugging and
implementation data. Figure 1 depicts the Cisco ACI model framework. The model can be accessed over REST APIs, thus opening up the system for programmability.
Figure 1. Cisco ACI Object-Oriented Data Model and REST APIs

As shown in Figure 1, the logical model is the interface with the system. Administrators or upper-level cloud management systems interact with the logical model through the API, CLI, or GUI. Changes to the logical model are then pushed down to the physical model, which typically becomes the hardware configuration.
The logical model itself consists of the objects (configuration, policies, and runtime state) that can be manipulated and the attributes of those objects. In the Cisco ACI framework, this model is known as the management information tree (MIT). Each node in the MIT represents a managed object or group of objects. These objects are organized in a hierarchical way, creating logical object containers. Figure 2 depicts the logical hierarchy of the MIT object model.
Figure 2. Management Information Tree (MIT)

Objects in the MIT


Cisco ACI uses an information-model-based architecture in which the model describes all the information that can be controlled by a management process. Object instances are referred to as managed objects (MOs). Every managed object in the system can be identified by a unique distinguished name (DN). This approach allows the object to be referred to globally.
In addition to its distinguished name, each object can be referred to by its relative name (RN). The relative name identifies an object relative to its parent object. Any given object's distinguished name is derived from its own relative name appended to its parent object's distinguished name. Distinguished names are directly mapped to URLs. Either the relative name or the distinguished name can be used to access an object, depending on the current location in the MIT. The relationship among managed objects, relative names, and distinguished names is shown in Figure 3.
Figure 3. Managed Objects, Relative Names, and Distinguished Names

Figure 3 depicts the distinguished name, which uniquely represents any given managed object instance, and the relative name, which represents it locally underneath its parent managed object. All objects in the tree exist under the root object.
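The DN/RN relationship can be shown concretely: a DN is just the chain of RNs from the root down, so an object's DN is its parent's DN plus its own RN. The RNs below are hypothetical, not actual APIC naming.

```python
# Sketch of the DN/RN relationship: an object's distinguished name is its
# parent's distinguished name with its own relative name appended.
# The relative names below are hypothetical, not actual APIC naming.
def distinguished_name(rns):
    """Build a DN by joining relative names from the root down."""
    return "/".join(rns)

rns = ["topRoot", "chassis-1", "card-1", "port-2"]
parent_dn = distinguished_name(rns[:-1])
dn = distinguished_name(rns)
print(parent_dn)  # topRoot/chassis-1/card-1
print(dn)         # topRoot/chassis-1/card-1/port-2 = parent DN + this object's RN
```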

Because of the hierarchical nature of the tree and the attribute system used to identify object classes, the tree can be queried in several ways for managed object information. Queries can be performed on an object itself through its distinguished name, on a class of objects such as Switch Chassis, or at the tree level, discovering all members of an object. Figure 4 shows two tree-level queries.
Figure 4. Tree-Level Queries

Figure 4 shows two chassis being queried at the tree level. Both queries return the referenced object and its child objects. This approach is a useful tool for discovering the components of a
larger system.
The example in Figure 4 discovers the cards and ports of a given switch chassis. Figure 5 shows another type of query: the class-level query.
Figure 5. Class-Level Queries

As shown in Figure 5, class-level queries return all the objects of a given class. This approach is useful for discovering all the objects of a certain type available in the MIT. In this example, the
class used is Cards, which returns all the objects of type Cards.
The third query type is an object-level query. In an object-level query a distinguished name is used to return a specific object. Figure 6 depicts two object-level queries: one for Node 1 in Chassis
2, and one for Node 1 in Chassis 1 in Card 1 in Port 2.
Figure 6. Object-Level Queries

For all MIT queries, you can optionally return the entire subtree or a partial subtree. Additionally, the role-based access control (RBAC) mechanism in the system dictates which objects are
returned; only the objects that the user has rights to view will ever be returned.
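The three query types can be modeled against a toy in-memory MIT. The distinguished names and class names below are made up for the example; a real query would go through the REST API described in the next section.

```python
# Illustrative in-memory model of the three MIT query types discussed:
# object-level (by DN), class-level (all objects of a class), and
# tree-level (an object plus its subtree). Data is invented for the example.
mit = {
    "chassis-1":               "Chassis",
    "chassis-1/card-1":        "Card",
    "chassis-1/card-1/port-1": "Port",
    "chassis-2":               "Chassis",
    "chassis-2/card-1":        "Card",
}

def object_query(dn):
    return {dn: mit[dn]} if dn in mit else {}

def class_query(cls):
    return {dn: c for dn, c in mit.items() if c == cls}

def tree_query(dn):
    return {d: c for d, c in mit.items() if d == dn or d.startswith(dn + "/")}

print(object_query("chassis-1/card-1"))  # the single Card object
print(class_query("Card"))               # every Card in the MIT
print(tree_query("chassis-1"))           # chassis-1 and all its children
```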

Managed-Object Properties
Managed objects in Cisco ACI contain properties that define the managed object. Properties in a managed object are divided into chunks managed by given processes within the operating
system. Any given object may have several processes that access it. All these properties together are compiled at runtime and presented to the user as a single object. Figure 7 shows an
example of this relationship.
Figure 7. Managed-Object Properties

In Figure 7, the example object has three processes that write to property chunks in the object. The Data Management Engine (DME), which is the interface between the Cisco APIC (thus the
user) and the object, the Port Manager, which handles port configuration, and the Spanning Tree Protocol (STP) all interact with chunks of this object. The object itself is presented to the user
through the API as a single entity compiled at runtime.
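The runtime compilation Figure 7 describes can be sketched as a merge of per-process property chunks into one object. The process names follow the figure, but the property names and values are illustrative.

```python
# Sketch of the runtime compilation Figure 7 describes: several processes
# each own a chunk of a managed object's properties, and the API presents
# the merged result as one object. Property names/values are illustrative.
chunks = {
    "DME":         {"adminState": "up", "descr": "uplink to spine"},
    "PortManager": {"speed": "40G", "duplex": "full"},
    "STP":         {"stpState": "forwarding"},
}

def compile_object(chunks):
    """Merge every process's property chunk into the single object the API returns."""
    merged = {}
    for owner, props in chunks.items():
        merged.update(props)
    return merged

print(compile_object(chunks))
```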

Accessing the Object Data through REST Interfaces


REST is a software architecture style for distributed systems such as the World Wide Web. REST has emerged over the past few years as a predominant web service design model. REST has
increasingly displaced other design models such as Simple Object Access Protocol (SOAP) and Web Services Description Language (WSDL) due to its simpler style. The Cisco APIC supports
REST interfaces for programmatic access to the entire Cisco ACI solution.
The object-based information model of Cisco ACI makes it a very good fit for REST interfaces: URLs and URIs map directly to distinguished names identifying objects on the tree, and any data
on the MIT can be described as a self-contained structured text tree document encoded in XML or JavaScript Object Notation (JSON). The objects have parent-child relationships that are
identified using distinguished names and properties, which are read and modified by a set of create, read, update, and delete (CRUD) operations.
Objects can be accessed at their well-defined address, their REST URLs, using standard HTTP commands for retrieval and manipulation of Cisco APIC object data. The URL format used can be
represented as follows:
<system>/api/[mo|class]/[dn|class][:method].[xml|json]?{options}
The various building blocks of the preceding URL are as follows:
System: System identifier; an IP address or DNS-resolvable host name
mo | class: Indication of whether the query targets a managed object in the tree (mo) or a class of objects (class)
class: Managed-object class (as specified in the information model) of the objects queried; the class name is represented as <pkgName><ManagedObjectClassName>
dn: Distinguished name (unique hierarchical name of the object in the MIT) of the object queried
method: Optional indication of the method being invoked on the object; applies only to HTTP POST requests
xml | json: Encoding format
options: Query options, filters, and arguments
With the capability to address and access an individual object or a class of objects with the REST URL, you can achieve complete programmatic access to the entire object tree and, thereby, to the entire system.
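A small helper can assemble URLs following the format above. The host name, class name, distinguished name, and query option below are assumptions for illustration; an actual query would issue an HTTP GET against the resulting URL on a Cisco APIC.

```python
# Sketch that assembles REST URLs following the format described above.
# The host, class name, DN, and option values are hypothetical examples;
# a real query would be an HTTP GET against the resulting URL on an APIC.
def aci_url(system, kind, target, fmt="json", options=None):
    url = f"https://{system}/api/{kind}/{target}.{fmt}"
    if options:
        url += "?" + "&".join(f"{k}={v}" for k, v in sorted(options.items()))
    return url

# class-level query: every object of a (hypothetical) class
print(aci_url("apic.example.com", "class", "topSystem"))
# object-level query by distinguished name, asking for the whole subtree
print(aci_url("apic.example.com", "mo", "topology/pod-1/node-101",
              options={"query-target": "subtree"}))
```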

Software Development Kits for Programming Environments


The REST APIs for Cisco ACI allow easy integration into any programmatic environment, regardless of the language and development methodology used. To further accelerate development in commonly used programming environments, software development kits (SDKs) for Cisco ACI will be made available. The Cisco ACI-pysdk, a Python-based SDK, is one such kit. The Python libraries and APIs that are part of the SDK abstract the underlying REST API calls and provide easy and rapid integration into software suites that are Python based.

Conclusion
The Cisco ACI object-oriented data model is designed from the foundation for network programmability. At the device level, the operating system has been rewritten as a fully object-based
switch operating system for Cisco ACI. The components of Cisco ACI are managed by the Cisco APIC, which provides full-function REST APIs. On top of this API are both a CLI and a GUI for
day-to-day administration.
The object model enables fluid programmability and full access to the underlying components of the infrastructure using REST APIs. Objects are organized logically into a hierarchical model and
stored in the MIT. This approach provides a framework for network control and programmability with a degree of openness that is not found in other systems.

For More Information


http://www.cisco.com/go/aci.
Cisco Application Centric Infrastructure

OpFlex: An Open Source Approach

About OpFlex
OpFlex is an extensible policy protocol designed to exchange abstract policy between a network controller and a set of smart devices capable of rendering policy. OpFlex relies on a separate information model understood by agents in both the controller and the devices. This information model, which exists outside the OpFlex protocol itself, must be based on abstract policy, giving each device the freedom and flexibility to render policy within the semantic constraints of the abstraction. For this reason, OpFlex can support any device, including hypervisor switches, physical switches, and Layer 4 through 7 network services.

Overview of Open Source Efforts


Cisco is proposing OpFlex as an informational RFC to the IETF and plans to lead the standardization process through that forum. At the same time, Cisco is working with the open source
community to provide an open source implementation. An OpenDaylight (ODL) project is underway to define a uniform policy model that can extend across the data center, access layer, and
WAN, and Cisco is also working on an open source OpFlex agent for Open vSwitch (OVS). The goal is to offer three main components to the community:
An open source policy implementation
A controller-side OpFlex implementation in ODL
A switch-side OpFlex agent for Open vSwitch

Figure 1 provides an overview of OpFlex.


Figure 1. OpFlex Overview

OpenDaylight and OpFlex


The ODL community has created a new incubated project called the ODL Group Policy plug-in. The goal of this project is to provide a policy-based API that can serve, in practice, as a standard
information model in OpFlex implementations. This project includes contributions from Cisco, IBM, Midokura, and Plexxi, and the list of contributors is quickly expanding. Anyone is welcome to
join this community and participate in the development and definition of the policy model. Information can be found at http://wiki.opendaylight.org.
The ODL Group Policy API will be supported through several different southbound APIs, including OpFlex. OpFlex essentially serves as a native back end through which policy can be passed to
devices directly. The project will also allow policy to be rendered imperatively over existing southbound APIs such as OpenFlow without involving OpFlex.
Figure 2 presents a logical view of the ODL Group Policy plug-in.
Figure 2. Logical View of ODL Group Policy Plug-in

OpFlex on Open vSwitch


Cisco is also building a fully open source, Apache 2.0-licensed OpFlex agent that can run with OVS, rendering abstract policy through OVS native interfaces such as OpenFlow. The goal
here is to provide a reference example of an OpFlex agent that can render policy as defined in ODL directly into local switching behaviors. Although this agent will be designed to work with OVS,
it will be available and reusable on any platform, assuming that the appropriate mapping is created from abstract policy to device capabilities. Cisco will maintain this agent and help ensure that
it remains compatible with the Cisco Application Policy Infrastructure Controller (APIC) to offer vendors a starting point for Cisco APIC integration.

Conclusion: OpFlex Is Open


Cisco and its partners are strongly committed to creating an open protocol through both a standardization effort in the IETF and development of an abstract policy model and reference
implementation in the open source community. Any vendor, customer, or partner is invited and encouraged to participate as we develop these modules through OpenDaylight, OpFlex, and Open
vSwitch.

For More Information


http://www.cisco.com/go/aci

Is Cisco Application Centric Infrastructure an SDN Technology?


Executive Summary

Software-defined networking (SDN) has garnered much attention in the networking industry over the past few years due to its promise of a more agile and programmable network infrastructure.

The initial OpenFlow 1.1 specification was introduced in 2011, the moment many industry analysts cite as the start of the modern SDN movement. Yet production deployments of SDN using OpenFlow technology (especially in the data center), as well as software-based network overlays, are still in their infancy[1]. Many IT professionals view operations, scalability, and reliability as new challenges that SDN technologies must address.
In November 2013, Cisco announced the acquisition of Insieme Networks. In doing so, Cisco also announced its strategy to address the challenges that SDN is trying to solve, through the
introduction of Cisco Application Centric Infrastructure (ACI). Cisco ACI not only addresses the challenges of current networking technologies that OpenFlow and software-based overlay
networks are trying to address, but it also presents solutions to the new challenges that SDN technologies are creating.
This document analyzes the benefits and limitations of current networking technologies that are fueling the movement to adopt SDN. It also looks at how Cisco ACI can meet the new challenges
facing these SDN technologies. Finally, it addresses this fundamental question: Is Cisco ACI an SDN solution?

The Need for a New Network Architecture


The Open Networking Foundation (ONF) is a user-led organization dedicated to the promotion and adoption of SDN. The ONF has published a white paper titled Software-Defined Networking:
The New Norm for Networks[2]. This paper describes the traditional way of building hierarchical tree-structure, or tiered, networks, and explains why this approach is not suited to the dynamic
nature of modern computing and storage needs. It also presents some of the computing trends shaping the need for new network architecture, including:

Changing traffic patterns: The data center is shifting away from traditional client-server application architectures to models in which significantly more data is being transferred from
machine to machine. The result is a shift from north-south traffic patterns to more east-west traffic in the data center. Content from the enterprise also needs to be accessible at any time,
from anywhere. In addition, many corporate IT departments are showing great interest in moving to public, private, or hybrid cloud environments.
The consumerization of IT: Users are demanding more bring-your-own-device (BYOD) flexibility, so that personal laptops, tablets, and smartphones can be used to access corporate
information. A result of this trend is a need for greater emphasis on protection of corporate data with security policies and enforcement.
The rise of cloud services: Public cloud services available from companies such as Amazon, Microsoft, and Google have given corporate IT departments a glimpse of self-service IT
and demonstrate how agile applications and services can be. Organizations are now demanding the same service levels from their own IT departments. However, unlike public
cloud environments, private cloud environments need to meet strict security and compliance requirements, which cannot be sacrificed for increased agility.
Big data means more bandwidth: Enterprises are investing in big data applications to facilitate better business decision making. However, these applications require massive
parallel processing across hundreds or thousands of servers. The demand to handle huge data sets is placing greater stress and load on the network and driving the need for greater
capacity.

Limitations of Current Networking Technology


The ONF paper also discusses significant limitations of current networking technologies that must be overcome to meet modern IT requirements. These challenges are presented in the context of traditional requirements: provide stable, resilient, yet static, connectivity. But the computing trends mentioned earlier require networks to support rapid deployment of applications. They also require the network to scale to accommodate increased workloads with greater agility, while also keeping costs at a minimum.
The traditional approach has substantial limitations:

Complexity that leads to stasis: The abundance of networking protocols and features defined in isolation has greatly increased network complexity. The ONF paper states that each protocol solves a specific problem without the benefit of any fundamental abstractions. Additionally, old technologies were often recycled as quick fixes to address new business requirements. An example of this recycled approach is the loose use of VLANs in current networks: initially, the purpose of VLANs was to create smaller broadcast domains. Today, VLANs are being used as policy and security domains for isolation. This use has created complex dependencies that increase security risk and reduce agility, because a change in security policy requires a change in the broadcast and forwarding domain, while a change in VLANs may also impact security policy.
Inconsistent policies: Security and quality-of-service (QoS) policies in current networks need to be manually configured or scripted across hundreds or thousands of network devices.
This requirement makes policy changes extremely complicated for organizations to implement without significant investment in scripting language skills or tools that can automate

configuration changes. Manual configuration is prone to error and can lead to many hours of troubleshooting to discover which line of a security or QoS application control list (ACL) was
entered incorrectly for a given device.
Inability to scale: As application workloads change and demand for network bandwidth increases, the IT department either needs to be satisfied with an oversubscribed static network
or needs to grow with the demands of the organization. Unfortunately, the majority of traditional networks are statically provisioned in such a way that increasing the number of
endpoints, services, or bandwidth requires substantial planning and redesign of the network. Server virtualization and private cloud deployments are challenging IT networking
professionals to reevaluate their architecture. Some may choose to massively overprovision the network to accommodate the dynamic nature of virtual machines, which can be deployed
on demand and instantiated anywhere on the network, but most will need to evaluate new ways to design the network.
Vendor dependence: Although many early proponents of SDN have pointed to vendor lock-in and the high cost of networking equipment as the main reasons for moving to SDN, the
industry is much more aware of the average selling price (ASP) of 10- and 40-Gbps ports in the data center, and incumbent vendors are also less inclined to offload high profit margins to
the end customer. Hence, the discussion of vendor dependence and its challenges in the context of SDN focuses more on the capability of vertically integrated solutions from a given
vendor to deliver capabilities and services in rapid response to changing business needs or user demands. The ONF paper argues that product cycles can take many years to respond
to customer requirements, and that lack of standard, open interfaces limits the ability of network operators to tailor the network to their individual environments.

How SDN Can Help


Various surveys by networking publications[3] plus informational RFCs, such as RFC 3535, list the requirements that end users have associated with SDN. These include:

Capability to automate provisioning and management
Capability to implement network-side policies
Improved security
Increased visibility into applications that are using the network
Increased scalability
Support for creation and dynamic movement of virtual machines
Support for creation of a private or hybrid cloud
Need for networks to be configured as a whole
Need for text-based configuration for simplified revision control
Need for network management tools more advanced than Simple Network Management Protocol (SNMP) and a command-line interface (CLI)

In March 2013, Gartner released a new research note titled "Ending the Confusion Around Software Defined Networking (SDN): A Taxonomy"[4]. The article states: "SDN is a new approach to
designing, building and operating networks that supports business agility. SDN brings a similar degree of agility to networks that abstraction, virtualization and orchestration have brought to
server infrastructure." It goes on to describe three models for SDN deployment:

Switch based: This model is well suited for greenfield (new) deployments in which the cost of physical infrastructure and multivendor options are important. Its drawback is that it
does not use existing Layer 2 and 3 networking equipment.
Overlay: This model is well suited for deployments over existing IP networks or those in which the server virtualization team manages the SDN environment. Here, the SDN
endpoints reside in the hypervisor environment. The biggest drawbacks are that this model doesn't address the overhead required to manage the underlying infrastructure, debugging
problems in an overlay can be complicated, and the model does not support bare-metal hosts.
Hybrid: This model is a combination of the other two approaches, with nondisruptive migration that can evolve to a switch-based model through time.

Overview of Cisco Application Centric Infrastructure


Cisco ACI is a new data center architecture designed to address the requirements of today's traditional networks, as well as to meet emerging demands that new computing trends and business
factors are placing on the network.
Figure 1 provides an overview of Cisco ACI. A high-level summary of its main building blocks is presented here.

Figure 1.

Cisco ACI

Application-Centric Policy Model Using Group-Based Policy


As mentioned earlier, one of the biggest challenges for current networking technologies is the tight coupling of networking protocols and features, forwarding, and policy. As a result of this
coupling, a change in policy will likely adversely affect forwarding, and vice versa. Furthermore, because the network protocols and features are designed for their own specific use cases,
manipulation of these protocols and features requires a deep understanding of networking semantics.
To provide agility and simplicity in data center infrastructure, a new language describing the abstracted intent of connectivity is required so that the end user doesn't need significant networking
knowledge to describe the requirements for connectivity. Additionally, this intent should be decoupled from network forwarding semantics so that the end user can describe the policy in such a
way that a change in policy need not affect forwarding behavior, and vice versa.
Because this abstracted, decoupled policy model did not exist prior to Cisco ACI, Cisco created such a model. It is called group-based policy (GBP) and is a working project
in OpenStack and OpenDaylight.
OpenDaylight describes group-based policy as "an application-centric policy model that separates information about application connectivity requirements from information about the
underlying details of the network infrastructure."
This approach offers a number of advantages, including:

Easier, application-focused way of expressing policy: By creating policies that mirror application semantics, this framework provides a simpler, self-documenting mechanism for
capturing policy requirements without requiring detailed knowledge of networking.
Improved automation: Grouping constructs allow higher-level automation tools to easily manipulate groups of network endpoints simultaneously.
Consistency: By grouping endpoints and applying policy to groups, the framework offers a consistent and concise way to handle policy changes.
Extensible policy model: Because the policy model is abstract and not tied to specific network implementations, it can easily capture connectivity, security, Layer 4 through 7, QoS, etc.

Cisco ACI makes extensive use of group-based policy in its application-centric policy model, in which connectivity is defined by consolidating endpoints (physical or virtual) into endpoint groups
(EPGs). Connectivity is defined when the end user specifies a contractual relationship between one EPG and another. The end user does not need to understand the protocols or features that
are employed to create this connectivity. Figure 2 provides an overview of this model.
Figure 2.

Application-Centric Policy Model
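The whitelist nature of this model can be sketched in a few lines: endpoints belong to EPGs, and traffic passes only where a contract links a consumer EPG to a provider EPG. The structure below is a simplified illustration of the concept, not the actual Cisco APIC object schema; the EPG names, contract fields, and helper function are all hypothetical.

```python
# Illustrative sketch of group-based policy: endpoints are grouped into EPGs,
# and connectivity exists only where a contract links consumer to provider.
# Names and structure are simplified for illustration, not the APIC schema.

epgs = {
    "web": {"endpoints": ["vm-web-1", "vm-web-2"]},
    "db":  {"endpoints": ["vm-db-1"]},
}

# A contract: the "web" EPG may consume the "db" EPG's service on TCP/3306.
contracts = [
    {"consumer": "web", "provider": "db", "proto": "tcp", "port": 3306},
]

def is_permitted(src_epg, dst_epg, proto, port):
    """Traffic is dropped unless a contract explicitly allows it (whitelist model)."""
    return any(
        c["consumer"] == src_epg and c["provider"] == dst_epg
        and c["proto"] == proto and c["port"] == port
        for c in contracts
    )

print(is_permitted("web", "db", "tcp", 3306))  # True: a contract exists
print(is_permitted("db", "web", "tcp", 22))    # False: no contract, implicit deny
```

Note that neither the check nor the contract mentions IP addresses or VLANs; the policy is expressed purely in terms of application groups, which is what allows it to survive endpoint moves unchanged.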

Cisco Application Policy Infrastructure Controller


In general, people want policies that are consistent across the entire network. However, one of the main challenges in managing policies in existing networks is the number of devices to
which policies need to be applied coupled with the need to ensure consistency. The Cisco Application Policy Infrastructure Controller (APIC) addresses this issue.
Cisco APIC is a distributed system implemented as a cluster of controllers. It provides a single point of control, a central API, a central repository for global data, and a repository for group-based
policy data for Cisco ACI.
Cisco APIC is a unified point for policy-based configuration expressed through group-based policy (Figure 3). The primary function of Cisco APIC is to provide policy authority and policy
resolution mechanisms for the Cisco ACI fabric and devices attached to the fabric. Automation is provided as a direct result of policy resolution and renders its effects on the Cisco ACI fabric, so
that end users no longer have to touch each network element and manually make sure that all policies are configured appropriately. Note that Cisco APIC is not involved in forwarding
calculations or route provisioning, which provides additional scalability, stability, and performance.
Figure 3.

The Role of Cisco APIC in the ACI Fabric

Cisco APIC communicates with the Cisco ACI fabric to distribute policies to the points of attachment and provide several critical administrative functions to the fabric. Cisco APIC is not
directly involved in data-plane forwarding, so a complete failure or disconnection of all Cisco APIC elements in a cluster will not result in any loss of forwarding capabilities, increasing overall
system reliability.
In general, policies are distributed to nodes as needed upon endpoint attachment or by an administrative static binding, allowing greater scalability across the entire fabric.

Cisco APIC also provides full native support for multitenancy so that multiple interested groups (internal or external to the organization) can share the Cisco ACI fabric securely, yet still be
allowed access to shared resources if required. Cisco APIC also has full, detailed support for role-based access control (RBAC) down to each managed object in the system, so that
privileges (read, write, or both) can be granted per role across the entire fabric.
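The per-object RBAC idea can be illustrated with a small sketch: each role carries a set of privileges scoped to classes of managed objects, and every access is checked against the caller's role. The role names, object-class names, and privilege sets below are hypothetical, not the actual APIC privilege taxonomy.

```python
# Illustrative sketch of per-managed-object RBAC: a role grants read and/or
# write privileges on classes of objects, and access is checked per request.
# Role and class names here are hypothetical examples.

ROLES = {
    "tenant-admin": {"fvTenant": {"read", "write"}, "fvAEPg": {"read", "write"}},
    "ops-readonly": {"fvTenant": {"read"}, "fvAEPg": {"read"}},
}

def allowed(role, obj_class, action):
    """Return True if the role grants `action` on objects of `obj_class`."""
    return action in ROLES.get(role, {}).get(obj_class, set())

print(allowed("ops-readonly", "fvAEPg", "read"))   # True: read is granted
print(allowed("ops-readonly", "fvAEPg", "write"))  # False: no write privilege
```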

Cisco APIC also has completely open APIs so that users can use Representational State Transfer (REST)-based calls (through XML or JavaScript Object Notation [JSON]) to provision, manage,
monitor, or troubleshoot the system. Additionally, Cisco APIC includes a CLI and a GUI as central points of management for the entire Cisco ACI fabric.

Cisco ACI Fabric


As discussed previously, workloads continue to evolve, and traffic is becoming more east-west based. Networks need to respond faster to dynamic virtualized and cloud-based workloads and
accommodate traffic growth as data sets become larger.
Figure 4.

Cisco ACI Fabric

Scalability, extensibility, simplicity, flexibility, and efficiency are some of the main goals in the design of next-generation data center fabrics. When designing the Cisco ACI fabric, Cisco needed to
consider all the new challenges facing the data center, but it also needed to understand and cater to existing challenges. The Cisco ACI fabric (Figure 4) is designed to address both today's
requirements and tomorrow's, with the following main goals:

Scalable fabric: The Cisco ACI fabric is designed based on one of the most efficient and scalable network design models: a spine-and-leaf bipartite graph in which every leaf is
connected to every spine, and the converse. To reduce the likelihood of hotspots of activity forming in the fabric, all devices (regardless of their functions) connect at the leaf nodes of the
fabric. This approach allows the fabric to provide a simple way to scale the number of devices connected, by adding more leaf nodes. If the amount of cross-sectional bandwidth that is
servicing the fabric needs to be increased, the administrator simply has to add spine nodes. This flexibility allows the fabric to start as a small environment, but gradually grow to
a much larger environment if the need arises. The fabric is also built using standards-based IP routed interfaces, offering greater stability in larger scale-out deployments.
Extensibility: The ACI fabric is highly extensible. The fabric administrator can integrate virtual networking (through integration of Microsoft System Center Virtual Machine
Manager [SCVMM]) as well as Layer 4 through 7 services (firewalls, load balancers, etc.) today. This integration allows the end user to specify connectivity requirements using
group-based policy on Cisco APIC, and the configuration for virtual networks and for Layer 4 through 7 services will automatically be rendered on the respective end systems,
eliminating the need for the end user to coordinate connectivity and policies through those devices. Future software releases will also include WAN router integration.
Simplicity: Although numerous protocols and features exist in the networking domain, the role of the fabric is very simple: to provide any connectivity anywhere. Rather than supporting
numerous different protocols and features, the Cisco ACI fabric is designed with data center use cases in mind. The result is a simplified architecture without unnecessary complexity. A
single Interior Gateway Protocol (IGP) has been chosen as the underlying fabric node discovery protocol: Intermediate System to Intermediate System (IS-IS). IS-IS is a link-state
protocol that very efficiently detects link failures and recovers from such failures. Standards-based Virtual Extensible LAN (VXLAN) provides a simple overlay for tenant-facing traffic,
supporting full Layer 2 bridging and Layer 3 routing across the entire fabric.
Flexibility: The Cisco ACI fabric supports the native capability to allow users to attach any host anywhere across the entire fabric. By using the integrated penalty-free VXLAN overlay,
traffic can be flexibly bridged and routed across the entire fabric. Furthermore, the Cisco ACI fabric can provide normalization for multiple different encapsulation types arriving from hosts
or their respective hypervisors, including VLAN, VXLAN, and Network Virtualization using Generic Routing Encapsulation (NVGRE). This feature allows physical, virtual, and container-based hosts to all co-exist on the same shared infrastructure. In addition, next-generation data center fabrics need to be backward-compatible with sunset-type applications, which may
not be IP based or which may use network flooding semantics for discovery and communication. The Cisco ACI fabric can support both modern data center requirements and the
requirements of traditional bare-metal and mainframe-based applications.
Efficiency: An inherent benefit to the spine-and-leaf bipartite graph architecture of the Cisco ACI fabric is that every host is exactly two physical hops away from every other host in the
fabric. So for big data workloads that require a significant amount of east-west traffic between machines, the Cisco ACI fabric provides predictable low latency at scale. This approach
delivers efficient support for traditional data center applications as well. The Cisco ACI fabric can exceed other traditional spine-and-leaf fabrics in fabric bandwidth efficiency, because it
can take into account packet arrival time, end-to-end fabric congestion, and flowlet switching to make more intelligent load-balancing decisions. More information about these innovations
is documented in the SIGCOMM paper "CONGA: Distributed Congestion-Aware Load Balancing for Datacenters."
Investment Protection: Customers may also want applications on their current IP networks to participate in the Cisco ACI fabric policy. The Cisco ACI fabric allows for investment
protection, in which Cisco APIC manages policy for virtual or physical servers in the existing network. Virtual servers connect to an application-centric virtual switch (AVS) that is
enabled for Cisco ACI in the existing Cisco Nexus network. AVS acts as a virtual leaf for the Cisco ACI spine-and-leaf fabric, and as an edge switch that is Cisco ACI aware, it can
forward traffic according to Cisco ACI policy rules and apply Layer 4 through 7 services managed by Cisco ACI. For physical servers, a Cisco Nexus 9300 platform switch acts as an
access-layer switch to the existing overlay network. Compared to other software-defined networking (SDN) overlay solutions, this solution provides a common infrastructure for physical
and virtual workloads, along with a more advanced application-centric policy model. For more information, refer to the white paper "Transform Your Business and Protect Your Cisco
Nexus Investment While Adopting Cisco Application Centric Infrastructure."

Open APIs, Partner Ecosystem, and OpFlex

Cisco ACI supports an extensible partner ecosystem that includes Layer 4 through 7 services; hypervisors; and management, monitoring, and cloud orchestration platforms. All use Cisco ACI's
open APIs and development kits, device packages, and plug-ins, as well as a new policy protocol, OpFlex, which is used to exchange group-based policy information.

Open APIs: Cisco ACI supports API access through REST interfaces, GUIs, and the CLI as well as a number of software development kits (kits for Python and Ruby are available today).
Cisco APIC supports a comprehensive suite of REST APIs over HTTP/HTTPS with XML and JSON encoding bindings. The API provides both class-level and tree-oriented data access.
REST is a software architecture for distributed systems. It has emerged over the past few years as a leading web services design model and has increasingly displaced other design
models such as Simple Object Access Protocol (SOAP) and Web Services Description Language (WSDL) because of its simpler approach.
Partner ecosystem and OpFlex: OpFlex, the southbound API, is an open and extensible policy protocol used to transfer abstract policy in XML or JSON between a policy controller,
such as Cisco APIC, and any device, including hypervisor switches, physical switches, and Layer 4 through 7 network services. Cisco and its partners, including Intel, Microsoft, Red
Hat, Citrix, F5, Embrane, and Canonical, are working through the IETF and open source community to standardize OpFlex and provide a reference implementation.
OpFlex is a new mechanism for transferring abstract policy from a modern network controller to a set of smart devices capable of rendering policy. Whereas many existing protocols
such as the Open vSwitch Database (OVSDB) management protocol focus on imperative control with fixed schemas, OpFlex is designed to work as part of a declarative control system,
such as Cisco ACI, in which abstract policy can be shared on demand. One major benefit of this model is the capability to expose the complete feature set of an underlying device,
allowing differentiation of hardware and software objects such as Layer 4 through 7 devices.
In addition to its implementations in the open source community, OpFlex is one of the primary mechanisms through which other devices can exchange and enforce policy with Cisco
APIC. OpFlex defines that interaction. As a result, by integrating a number of devices from both Cisco and ecosystem partners using Cisco ACI, organizations can use it to gain
investment protection.
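The REST API described above can be sketched in a few lines of client code. The endpoint paths and payload shape below follow commonly documented APIC conventions (the aaaLogin authentication object, class-level queries, and distinguished-name queries), but the controller address, tenant name, and helper functions are illustrative assumptions, not a definitive client implementation.

```python
import json

# Sketch of how a client might address the Cisco APIC REST API.
# URLs and payload shapes follow commonly documented APIC conventions
# (aaaLogin, /api/class, /api/mo); treat the specifics as illustrative.

APIC = "https://apic.example.com"   # hypothetical controller address

def login_payload(user, password):
    """Authentication body a client would POST to /api/aaaLogin.json."""
    return json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": password}}})

def class_query_url(cls):
    """Class-level query: fetch every managed object of a given class."""
    return f"{APIC}/api/class/{cls}.json"

def mo_query_url(dn):
    """Tree-oriented query: fetch one managed object by distinguished name."""
    return f"{APIC}/api/mo/{dn}.json"

# A real deployment would POST login_payload(...) to f"{APIC}/api/aaaLogin.json",
# keep the returned token cookie, then GET the query URLs below.
print(class_query_url("fvTenant"))   # all tenant objects (class-level access)
print(mo_query_url("uni/tn-Sales"))  # one tenant subtree (tree-oriented access)
```

The two query helpers mirror the class-level and tree-oriented access styles mentioned earlier: the first enumerates every object of a class across the fabric, while the second walks the object tree from a specific distinguished name.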

How Cisco ACI Addresses Current Networking Limitations


Cisco ACI can address all the traditional networking limitations outlined in the ONF paper. Cisco ACI is built using a balanced approach that weighs the best software against the best hardware,
custom silicon against merchant silicon, centralized models against distributed models, and the need to address old problems against the need to meet new challenges. It tackles business
challenges rather than championing only one particular technology approach.
Cisco ACI addresses these specific limitations:
Complexity that leads to stasis: Cisco ACI removes complexity from the network. Cisco ACI sets out to decouple policy from forwarding by allowing network routing and switching to be
completely distributed across the entire fabric. Cisco ACI packet forwarding across the fabric uses a combination of merchant and custom silicon to deliver standards-based VXLAN
bridging and routing with no performance penalty or negative impact on the user or application.
In addition, Cisco ACI can apply policy without the need to derive this information from network information (such as IP addresses). It does this by populating each VXLAN frame with a
16-bit ID to uniquely identify the originating (source) group of the packet, as specified in the group-based policy VXLAN IETF draft[5]. This approach provides outstanding flexibility for the end
user, allowing users to modify network policies with little network knowledge and no negative impact on network forwarding.
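The mechanics of carrying that 16-bit source group in the encapsulation can be sketched as follows. The field layout follows the group-based policy VXLAN IETF draft (flags byte, policy-flags byte, 16-bit group policy ID, 24-bit VNI plus a reserved byte); the flag values and packing below are an illustration of that draft, not taken from a shipping implementation.

```python
import struct

# Sketch of the 8-byte VXLAN header extended with the 16-bit group policy ID,
# per the layout in the group-based policy VXLAN IETF draft. Illustrative only.

def vxlan_gbp_header(vni, group_id):
    flags = 0x88            # G bit (group policy ID present) + I bit (VNI valid)
    policy_flags = 0x00     # D (don't learn) / A (policy applied) bits, both clear
    # 1 byte flags, 1 byte policy flags, 2 bytes group policy ID,
    # then 3 bytes of VNI plus 1 reserved byte packed as one 32-bit field.
    return struct.pack(">BBHI", flags, policy_flags, group_id, vni << 8)

hdr = vxlan_gbp_header(vni=10001, group_id=0x4001)
print(len(hdr))  # 8

# The source leaf stamps the sender's group into group_id; the destination
# leaf reads it back to enforce policy without inspecting IP addresses.
src_group = struct.unpack(">BBHI", hdr)[2]
print(hex(src_group))  # 0x4001
```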

Cisco ACI further simplifies policies by introducing an abstraction model - group-based policy - so that end users can define connectivity using higher-level abstracted constructs instead
of concrete networking semantics. This model enables end users of Cisco ACI to define policy rules without the need for knowledge of networking, opening the way for application
administrators and developers to directly interact with Cisco ACI policies to express their intent without the need to involve IT network administrators.

Inconsistent policies: One of the biggest challenges in managing network policies across a large network is the requirement to touch a large number of devices and make sure that the
policy configuration remains consistent. Cisco ACI addresses this challenge by offloading this task to Cisco APIC, which is the central policy authority and the central point of
management for Cisco ACI and associated physical and virtual services. The end user simply needs to specify on Cisco APIC the desired intent of group-based policy, and Cisco APIC
distributes the policy to all the nodes in the Cisco ACI fabric (Figure 5).

Figure 5.

Using Cisco APIC to Implement Policy across the Fabric

Cisco APIC uses a variant of promise theory, with full formal separation between the abstract logical model and the concrete model, and with no configuration performed on concrete
entities. Concrete entities are configured implicitly as a side effect of the logical model implementation. This implementation of promise theory provides policy consistency throughout the
network at scale.

Inability to scale: Cisco ACI is designed to scale transparently throughout its deployment, supporting changes in connectivity, bandwidth, tenants, and policies. The spine-and-leaf
topology of the Cisco ACI fabric supports a scale-out design approach. If additional physical connectivity is required, leaf nodes can be added by connecting them to the spines. Similarly,
if additional bandwidth or redundancy is required, additional spine nodes can be introduced. This scale-out deployment model also allows end users to start small and later scale to
extremely large environments, thereby reducing the initial capital expenditure required to implement a scalable fabric. However, the addition of new devices does not mean an increased
number of management points. After registering the new devices on the Cisco ACI fabric through Cisco APIC, the end user can administer the entire fabric, including the new devices,
from the central Cisco APIC. Introduction of the new devices requires no intervention by the administrator.

Tenants and policies also use a scale-out approach. Policies are centrally stored on the fabric and are rendered to fabric nodes as required. The Cisco APIC policy repository itself is a
scale-out clustered database. It can increase from 3 to more than 31 nodes in a single cluster, depending on the scale of tenants and policies required. Even with additional cluster
nodes, all nodes are considered active, and policies can be managed on any cluster member.

Vendor dependence: A complete Cisco ACI deployment will likely include Layer 4 through 7 services, virtual networking, computing and storage resources, WAN routers, and
northbound orchestration services. A main strength of Cisco ACI is its openness, with its published APIs, Layer 4 through 7 device packages, and use of the open OpFlex protocol. With
these open APIs, plug-ins, and protocols, end users can incrementally add functions to their solutions without the need to wait for a single vendor to introduce new capabilities.

How Cisco ACI Meets the Requirements of SDN


As mentioned previously, the ONF has prescribed that a new network model is needed to address the emerging trends in computing. A proposed solution is SDN, with SDN defined as "an
approach to computer networking that allows network administrators to manage network services through abstraction of lower-level functionality. This is done by decoupling the system that
makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane)."
As explained in this document, the Cisco ACI solution meets all the requirements of this definition, but to satisfy the requirements of SDN, it must also be able to accommodate the emerging
trends in computing, including:

Changing traffic patterns: As explained previously, east-west traffic in the data center is increasing, and next-generation data center architectures need to be optimized for such
workloads. Cisco ACI uses a spine-and-leaf architecture that is well suited for east-west workloads, helping ensure consistent latency and performance from any source in the data
center to any destination.
Another requirement for the modern data center is that content be accessible anytime, anywhere. Cisco ACI meets this requirement with its penalty-free overlay. It allows applications
and services to be deployed anytime, anywhere across the entire fabric. It is also well suited as the platform of choice for any public, private, or hybrid cloud deployment. Because
connectivity policies are abstracted away from network forwarding, higher-layer cloud orchestration platforms (such as OpenStack, CloudStack, and Microsoft Azure Pack) can manage
services instead of the underlying infrastructure.

The consumerization of IT: Cisco ACI enables BYOD in the enterprise in a number of ways. First, security and compliance are inherently built in to the group-based policy model; the
application administrator must explicitly grant access to outside groups for those groups to consume applications or services. Second, an audit of security policies is a much simpler task
because policies are defined in an abstracted group-based policy representation rather than in concrete security access lists or firewall rules. Finally, group-based policies can be
migrated to other areas of the network, such as campus and branch offices, where devices can be allocated to their own endpoint groups based on flexible classifications such as device
type, access location, access method, time of day, and day of the week instead of network addresses.
The rise of cloud services: Cisco ACI enables enterprise IT departments to easily implement private clouds by abstracting applications, security policies, and IT services away from the
underlying infrastructure. Organizations can spend more time creating connectivity policies through group-based policy rather than managing individual network devices. Additionally,
many IT departments will prefer this self-service model and be more likely to adopt it because they can retain control over their sensitive content on their own premises, instead of giving
control to a public cloud service. However, if an organization later wants to shift certain workloads to the public cloud, Cisco ACI group-based policy enables the organization to move
network and security policies easily; a consistent group-based policy profile definition can exist in the on-premises private cloud as well as in the hosted cloud with very little modification.
More bandwidth for big data: Cisco ACI provides an optimal solution for hosting big data workloads. In addition to providing predictable low latency using 40-Gbps fabric links as
mentioned earlier, Cisco ACI provides a cost-effective, scale-out fabric, allowing the end user to incrementally increase the cross-sectional bandwidth of the fabric simply by introducing
an additional scale-out spine node. More computing nodes can also be incrementally added to a big data cluster to improve performance through the addition of leaf nodes. Also,
compared to an equivalently built-out network fabric using the same number of devices and interface speeds, Cisco ACI can use its fabric bandwidth more efficiently, through the use of
distributed congestion-aware load balancing (CONGA). This mechanism reduces flow completion time for network-intensive applications such as big data.
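The flowlet-switching idea behind CONGA can be sketched conceptually: a flow is split into "flowlets" at sufficiently long idle gaps, and each new flowlet may independently be steered to the least-congested uplink, because a gap longer than the path-delay skew makes re-routing safe from packet reordering. The threshold, congestion metric, and data shapes below are illustrative assumptions, not the ASIC implementation.

```python
# Conceptual sketch of flowlet-based load balancing as described for CONGA.
# A packet that arrives after an idle gap longer than FLOWLET_GAP starts a
# new flowlet and may be re-steered; packets within a flowlet stay pinned.

def assign_paths(packets, FLOWLET_GAP=0.5):
    """packets: list of (timestamp_ms, {uplink: congestion}) pairs.

    Returns the uplink chosen for each packet; lower congestion is better.
    """
    paths, last_ts, current = [], None, None
    for ts, congestion in packets:
        if last_ts is None or ts - last_ts > FLOWLET_GAP:
            # New flowlet: pick the least-congested uplink right now.
            current = min(congestion, key=congestion.get)
        paths.append(current)
        last_ts = ts
    return paths

# First three packets arrive back to back: one flowlet, pinned to spine1
# even after congestion shifts mid-flowlet (moving would risk reordering).
# The fourth packet arrives after a 1.8 ms idle gap, so the new flowlet
# is steered to the now-less-congested spine2.
trace = [
    (0.0, {"spine1": 3, "spine2": 7}),
    (0.1, {"spine1": 3, "spine2": 7}),
    (0.2, {"spine1": 9, "spine2": 7}),  # congestion shifts: stay pinned
    (2.0, {"spine1": 9, "spine2": 7}),  # idle gap > threshold: re-steer
]
print(assign_paths(trace))  # ['spine1', 'spine1', 'spine1', 'spine2']
```

Balancing at flowlet granularity rather than per flow is what lets the fabric spread a few heavy flows across uplinks without the reordering penalty of per-packet spraying.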

Cisco ACI also delivers other features that end users are seeking in SDN solutions, including:

Automated provisioning and management: Cisco ACI provides complete automated provisioning and management of the network infrastructure. Fabric administrators can
transparently add spine or leaf nodes to the Cisco ACI fabric without any configuration on the devices themselves. Provisioning of ecosystem partner solutions is also fully automated
through the group-based policy model on Cisco APIC, so the end user doesn't need to manually and individually manage those devices and systems. Day-two management of these
devices and systems is also enabled through Cisco APIC.

Capability to implement network-side policies: Cisco ACI manages the entire system as a single entity, and any associated network-side policies (whether associated with physical,
virtual, or Layer 4 through 7 services) can be defined, deployed, and managed through Cisco APIC.
Improved security: Security is implicit in the application-centric policy model. Connectivity between groups is disallowed until the policy administrator defines, through group-based
policy, which groups are allowed to talk, and which conduit each group can use to communicate. Cisco ACI is multitenant aware. Traffic, connectivity, and policies from different tenants
can share the same infrastructure without leakage of information across tenants. All manageable entities (called objects) in Cisco ACI have unique privileges associated with them, and
the administrator can assign highly specific security controls using RBAC definitions.
More visibility into applications that are using the network: The definition of an application is provided directly to Cisco ACI in the form of a group-based policy profile (called an
application network profile). From this profile, Cisco ACI can glean information about network dependencies and report on how well a given application is performing based on the
underlying objects that are being monitored by Cisco ACI.
Increased scalability: The dynamic nature of the Cisco ACI fabric, the use of a scale-out spine-and-leaf architecture and a clustered database, and hardware reachability for more than
one million endpoints make Cisco ACI the most scalable SDN solution on the market today. In addition, Cisco ACI can use endpoint location information to conservatively render
policies in hardware only to enforcement points to which endpoints are attached. This feature allows greater scalability without the need to overprovision policy resources.
Capability to create and move virtual machines dynamically: The Cisco ACI fabric employs a penalty-free overlay, allowing outstanding flexibility for virtual machine deployment and
unrestricted virtual machine mobility across the entire Cisco ACI fabric. Cisco ACI also includes innovations to make virtual machine mobility transparent.
Capability to create a private or hybrid cloud: Cisco ACI is an excellent solution for organizations wanting to deploy private, hybrid, and public clouds. Most Cisco ACI early adopters
were public cloud service providers. In addition to its scalability, security, and flexibility, Cisco ACI comes out of the box with an abstracted representation of network connectivity through
the group-based policy model. This feature is extremely attractive in cloud deployments because a main requirement for such deployments is the capability to abstract the complexity of
the network infrastructure so that it is not visible to the consumer of the cloud service.
Capability to configure the network as a whole: Cisco ACI inherently allows the entire fabric to be viewed as a single Layer 2 and 3 virtual forwarder. Additional private networks can
be created, but they always are configured as a single entity. This approach allows the end user to deploy new services (network services or applications) without the need to understand
how the underlying network is provisioned or connected.
Text-based configuration for simplified revision control: All configuration settings are represented as managed objects through Cisco APIC. These managed objects can be
accessed through APIs, which can be managed through structured REST commands in either XML or JSON format. Additionally, because these objects have a defined structure that is
abstracted from the underlying physical and virtual networks, the same profile definitions can be used to implement the same connectivity policies in a completely different Cisco ACI
fabric.
Advanced network management tools beyond SNMP and CLI: In addition to SNMP and the CLI, the Cisco ACI object model natively supports REST interfaces through XML and
JSON, GUIs, and a number of software development kits. In addition to general network device management, Cisco ACI includes many innovative management capabilities that are not
available in other SDN solutions. For example, Cisco ACI provides health scores, which capture dependencies across underlay, overlay, physical, virtual, and ecosystem devices, and
contextualizes them into detailed health scores for specific objects. These scores can also be rolled into higher-level scores at the endpoint group level, application network profile level,
or tenant level to provide an easy-to-see top-level view. The end user can then use this view to troubleshoot any anomalies throughout the Cisco ACI fabric. Atomic counters are another
Cisco ACI innovation.
These provide end-to-end counts of packets entering and leaving the fabric, and they can also be scoped to individual leaf nodes. Atomic counters allow end users to easily see areas of
the fabric with high, medium, and low use, through the creation of a traffic map, and to identify the location of any traffic drops in the Cisco ACI fabric - all from Cisco APIC.

Joe Skorupa, vice president and distinguished analyst at Gartner, defined the value of Cisco ACI succinctly when in March 2013 he described what he hoped SDN would deliver: "In a data
center context, SDN is a component of the policy driven data center. It provides the programmable connectivity required to link the network to other components within the data center, delivering
a more integrated, functional system. For example, a provisioning application could specify that an instance of the CRM application must have certain services delivered in a specific sequence
and would ensure that the traffic flows through the appropriate devices in the correct sequence."[6]

SDN Faces a New Set of Challenges


Although SDN, software-based virtual overlays, and OpenFlow present some interesting solutions for both traditional and emerging computing workloads, except in academic and research
institutions, few data centers have adopted software overlays and OpenFlow. This lack of adoption can be attributed to a new set of challenges, including the following:

Software-based virtual overlays are difficult to manage and troubleshoot: Joe Skorupa, vice president and distinguished analyst at Gartner, identified this limitation in a brief Q&A session in March 2013[6]: "The greatest limitations of this [software-based virtual overlay] approach are that it does not address the overhead of managing the underlying infrastructure, debugging problems in an overlay can be complex, and it does not support bare-metal hosts."

The fundamental problem with software-based virtual overlays is that they have little or no relationship with the underlying physical infrastructure (which, of course, is always required), so any drops in the underlay network cannot easily be traced back to the affected service or application, and vice versa. Furthermore, the software-based virtual overlay may be managed by a different team, with a different skill set, that isn't equipped to troubleshoot end-to-end network connectivity issues, leading to finger-pointing across IT operations departments, and possibly vendors. In this respect overlays can fare worse than traditional networks, which limit flexibility but at least offer predictability and deterministic points of failure for which a single IT operations group takes ownership.

OpenFlow protocols are too primitive and concrete for the data center: OpenFlow in a network switch applies a match function to the incoming packet (typically based on an existing network field attribute such as a source MAC address or destination IP address) and then takes some form of action, which may depend on the switch implementation but typically involves forwarding the packet out through a given physical or logical port.
For most data center use cases, such detailed control is not required, because bandwidth (and hence paths) in the data center is typically abundant. Therefore, detailed path decisions
based on network header parameters may not be needed and may incur unnecessary overhead for data center network administrators to manage.

In addition, the OpenFlow protocol assumes a particular hardware architecture using a series of generic table look-ups that generate actions to apply to the packet. This specific view of
the underlying hardware architecture limits scalability and portability. A higher level of abstraction allows different specific hardware architectures to provide specific benefits while still
meeting the requirements of the APIs.

Merchant silicon is not optimized for OpenFlow today: Most merchant silicon available in the market today has been optimized for general data center workloads. Typically, such
deployments mandate allocation of a large amount of memory to forwarding tables to perform longest-prefix match and adjacency, Address Resolution Protocol (ARP) and MAC and IP
address binding lookups; and a finite amount of memory to ACLs, typically using ternary content-addressable memory (TCAM) that is optimized for masking, packet matching, and action
sets.
OpenFlow tables use the latter form of memory in switches. Unfortunately, forwarding memory cannot easily be repurposed as ACL memory, so the total amount of memory available to install flow entries in today's merchant silicon is somewhat limited. As mentioned earlier, if any misses occur, packets may be unexpectedly dropped or forwarded to the SDN controller for further processing.

How Cisco ACI Addresses the New SDN Challenges


A next-generation SDN-based architecture must address all these challenges. Cisco ACI accomplishes this in the context of the modern data center.

Software-based virtual overlays are difficult to manage and troubleshoot: Cisco ACI does not rely solely on software-based virtual overlays for fabricwide connectivity. Although Cisco ACI does deploy an overlay, it is instantiated in hardware, and the management features of Cisco ACI have enough intelligence to provide full coordination and contextualization to show what services are affected by failures in the underlay or overlay network.
Note that Cisco ACI supports software-based overlays that either terminate on the Cisco ACI fabric or run over the top. These are completely optional modes of deployment, and the end user can decide how to deploy workloads. The point is that Cisco ACI does not rely on software-based overlays to achieve the flexibility and agility sought in SDN.

OpenFlow protocols are too primitive and concrete for the data center: In addition to providing automation and flexibility, Cisco ACI introduces a new model to describe connectivity in the data center through group-based policy. Because the policy intent is abstracted, the end user can define how certain things connect to other things in the data center. This abstracted view of policies is much easier for cloud orchestration tools, applications, and security administrators to consume than OpenFlow protocols are.
Merchant silicon is not optimized for OpenFlow today: Cisco ACI uses a combination of merchant silicon and custom silicon developed by Cisco to provide the right level of capabilities at scale, while still delivering the solution at an attractive price. The custom silicon embedded in the Cisco Nexus 9000 Series Switches includes memory table structures optimized so that certain functions can be offloaded to the on-board merchant silicon, with the custom silicon dedicated to functions such as policy enforcement, encapsulation normalization, and VXLAN routing.

Conclusion

The IT industry is going through a significant transformation, with BYOD, big data, cloud computing, IT as a service, and security now prominent concerns. At the same time, companies increasingly want to reduce overall IT spending (through reduction in both capital expenditures and operating expenses) and to provide much-improved levels of service to business units and functions by increasing overall IT agility. Many in the networking industry have cited SDN as the model to move the industry forward, but adoption of the prescribed technologies presents challenges.
Cisco ACI was developed not as a competitor to SDN, but as a catalyst to help promote the adoption of SDN throughout the IT industry: in essence, as an enabler of the SDN vision.
Although some in the industry claim that Cisco ACI is not SDN, assessing Cisco ACI in the context of the definitions presented in the ONF SDN white paper shows that it does indeed meet the
requirements of SDN. Cisco ACI also goes a step further, addressing areas that OpenFlow has not yet addressed.
Ultimately, the industry will be the final judge of whether Cisco ACI (and SDN) succeeds, but based on the current interest and adoption momentum, Cisco ACI is well positioned to become the
new norm for data center networks.

For More Information

Visit http://www.cisco.com/go/aci

[1] The State of SDN Adoption, at blogs.gartner.com. Retrieved December 1, 2014.
[2] Software-Defined Networking: The New Norm for Networks, at http://www.opennetworking.org. Retrieved December 1, 2014.
[3] 2013 SDN Survey: Growing Pains, at InformationWeek.com. Retrieved December 1, 2014.
[4] Software Defined Networking Creates a New Approach to Delivering Business Agility, at http://www.gartner.com. Retrieved December 1, 2014.
[5] VXLAN Group Policy Option, draft-smith-vxlan-group-policy-00, M. Smith and L. Kreeger.
[6] Software Defined Networking Creates a New Approach to Delivering Business Agility, at http://www.gartner.com. Retrieved December 1, 2014.

Cisco Application Policy Infrastructure Controller Data Center Policy Model


This paper examines the Cisco Application Centric Infrastructure (ACI) approach to modeling business applications onto the Cisco ACI network fabric and applying consistent, robust policies to those applications. The approach is a unique blend of mapping hardware and software capabilities to the deployment of applications, either graphically through the Cisco Application Policy Infrastructure Controller (APIC) GUI or programmatically through the Cisco APIC API.
Cisco ACI Policy Theory
The Cisco ACI fabric is designed as an application-centric intelligent network. The Cisco APIC policy model is defined from the top down as a policy enforcement engine focused on the
application itself and abstracting the networking functionality underneath. The policy model marries with the advanced hardware capabilities of the Cisco ACI fabric underneath the business
application-focused control system.
The Cisco APIC policy model is an object-oriented model based on promise theory. Promise theory is based on declarative, scalable control of intelligent objects, in contrast to legacy imperative models, which can be thought of as heavyweight, top-down management.
An imperative model is a "big brain" system, or top-down style of management. In these systems the central manager must be aware of both the configuration commands of underlying objects and the current state of those objects. Figure 1 depicts this model.
Figure 1. Imperative Model of Systems Management in Which Intelligent Controller Pushes Configuration to Underlying Components

Promise theory, in contrast, relies on the underlying objects to handle configuration state changes initiated by the control system as desired state changes. The objects are in turn also responsible for passing exceptions or faults back to the control system. This lightens the burden and complexity of the control system and allows for greater scale. These systems scale further by allowing the methods of underlying objects to request state changes from one another and from lower-level objects. Figure 2 depicts promise theory.
Figure 2. Promise Theory Approach to Large-Scale System Control
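The imperative/declarative contrast can be sketched in a few lines of Python. The `ManagedObject` class, its fault handling, and the "unsupported" exception path are illustrative assumptions, not APIC code:

```python
# Minimal sketch of the promise-theory idea: the controller declares desired
# state; each object converges on its own and reports faults back, rather
# than the controller pushing imperative commands and tracking device state.

class ManagedObject:
    def __init__(self, name):
        self.name = name
        self.state = {}      # the object's own view of its configuration
        self.faults = []     # exceptions flow back up to the control system

    def promise(self, desired):
        """Accept a desired-state change and attempt to converge to it."""
        for key, value in desired.items():
            if key == "unsupported":   # illustrative exception path
                self.faults.append(f"{self.name}: cannot apply {key}")
            else:
                self.state[key] = value
        return self.faults

leaf = ManagedObject("leaf-101")
faults = leaf.promise({"vlan": 10, "policy": "web-to-app"})
print(leaf.state, faults)
```

The design point is that the controller never tracks per-device command syntax or live state; it hands each object a desired state and only hears back about exceptions, which is what lets the model scale.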

Building on this promise theory-based model, Cisco APIC constructs an object model focused on the deployment of applications, with the applications themselves as the central focus. Traditionally, applications have been restricted by the capabilities of the network. Concepts such as addressing, VLANs, and security have been tied together, limiting the scale and mobility of the application itself. Because today's applications are being redesigned for mobility and web scale, this coupling is not conducive to rapid and consistent deployment.

The physical Cisco ACI fabric itself is built on a spine-leaf design; its topology is illustrated in Figure 3 as a bipartite graph, in which each leaf switch connects to each spine switch, and no direct connections are allowed between leaf switches or between spine switches. The leaves act as the connection point for all external devices and networks, and the spines act as the high-speed forwarding engine between leaves. The Cisco ACI fabric is managed, monitored, and administered by the Cisco APIC.
Figure 3. ACI Spine-Leaf Fabric Design

Cisco APIC Policy OBJECT Model


At the top level, the APIC policy model is built on a series of one or more tenants, which allow the network infrastructure administration and data flows to be segregated. Tenants can be used for customers, business units, or groups, depending on organizational needs. For instance, an enterprise might use one tenant for the entire organization, while a cloud provider might have each customer use one or more tenants to represent its organization.
Tenants further break down into private Layer 3 networks, each of which directly relates to a Virtual Routing and Forwarding (VRF) instance, or separate IP space. Each tenant may have one or more private Layer 3 networks, depending on the business needs of that tenant. Private Layer 3 networks provide a way to further separate the organizational and forwarding requirements below a given tenant. Because these contexts use separate forwarding instances, IP addressing can be duplicated across contexts for the purpose of multitenancy.
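The effect of separate forwarding instances can be sketched as a lookup keyed by context. The tenant, context, prefix, and leaf names below are made up for illustration:

```python
# Sketch of why separate contexts (VRFs) allow duplicate IP addressing:
# forwarding lookups are keyed by (context, prefix), so the same subnet can
# exist under different tenants without conflict.

routes = {
    ("tenant-A/ctx1", "10.0.0.0/24"): "leaf-101",
    ("tenant-B/ctx1", "10.0.0.0/24"): "leaf-203",   # same prefix, different VRF
}

def lookup(context, prefix):
    """Resolve a prefix within one forwarding instance only."""
    return routes[(context, prefix)]

print(lookup("tenant-A/ctx1", "10.0.0.0/24"))  # leaf-101
print(lookup("tenant-B/ctx1", "10.0.0.0/24"))  # leaf-203
```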
Below the context, the model provides a series of objects that define the application itself: endpoints, endpoint groups (EPGs), and the policies that define their relationships. It is important to note that policies in this case are more than just a set of access control lists (ACLs); they include a collection of inbound/outbound filters, traffic quality settings, marking rules, redirection rules, and Layer 4 through 7 service device graphs. This relationship is shown in Figure 4.
Figure 4. Cisco APIC Logical Object Model

Figure 4 depicts two contexts under a given tenant and the series of applications that make up each context. The EPGs shown are groups of endpoints that make up an application tier or other logical application grouping. For example, Application B, shown expanded on the right, could consist of a blue web tier, red application tier, and orange database tier. The combination of EPGs and the policies that define their interaction forms an application profile on the Cisco ACI fabric.

Endpoint Groups
Endpoint groups (EPGs) are collections of similar endpoints representing an application tier or set of services. They provide a logical grouping for objects that require similar policy. For example, an EPG could be the group of components that make up an application's web tier. Endpoints themselves are defined using NIC, vNIC, IP address, or DNS name, with extensibility for future methods of identifying application components.
EPGs are also used to represent other entities such as outside networks, network services, security devices, network storage, and so on. EPGs are collections of one or more endpoints
providing a similar function. They are a logical grouping with varying use options depending on the application deployment model in use. Figure 5 depicts the relationship between endpoints,
EPGs, and applications themselves.
Figure 5. Endpoint Group Relationships

EPGs are designed for flexibility, allowing their use to be customized to one or more deployment models a given customer might choose. The EPGs themselves are then used to define where
policy is applied. Within the Cisco ACI fabric, policy is applied between EPGs, therefore defining how EPGs communicate with one another. This is designed to be extensible in the future to
policy application within an EPG itself.
Some example uses of EPGs are:

- EPG defined by traditional network VLANs: all endpoints connecting to a given VLAN are placed in an EPG
- EPG defined by a VXLAN: same as the preceding, using VXLAN
- EPG mapped to a VMware port group
- EPG defined by IP address or subnet: for example, 172.168.10.10 or 172.168.10*
- EPG defined by DNS names or DNS ranges: for example, example.web.cisco.com or *.web.cisco.com
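The membership options above can be sketched as a simple first-match classifier. The rule format, EPG names, and matching order here are assumptions for illustration, not an APIC configuration syntax:

```python
# Illustrative EPG classifier covering VLAN, subnet, and DNS-pattern rules.
import fnmatch
import ipaddress

def classify(endpoint, rules):
    """Return the first EPG whose rule matches the endpoint."""
    for epg, (kind, value) in rules:
        if kind == "vlan" and endpoint.get("vlan") == value:
            return epg
        if kind == "subnet" and "ip" in endpoint and \
                ipaddress.ip_address(endpoint["ip"]) in ipaddress.ip_network(value):
            return epg
        if kind == "dns" and fnmatch.fnmatch(endpoint.get("dns", ""), value):
            return epg
    return None   # unclassified endpoint

rules = [
    ("web", ("dns", "*.web.cisco.com")),
    ("app", ("vlan", 20)),
    ("db", ("subnet", "172.168.10.0/24")),
]

print(classify({"dns": "a.web.cisco.com"}, rules))   # web
print(classify({"vlan": 20}, rules))                 # app
print(classify({"ip": "172.168.10.10"}, rules))      # db
```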

The use of EPGs is intentionally left both flexible and extensible. The model is intended to provide tools to build an application's network representation in a model that maps to the actual environment's deployment model. Additionally, the definition of endpoints is intended to be extensible to provide support for future product enhancements and industry requirements.
The implementation of EPGs within the fabric provides several valuable benefits. EPGs act as a single policy enforcement point for a group of contained objects, which simplifies the configuration of these policies and ensures that it is consistent. Additionally, policy is applied based not on subnet but on the EPG itself, so IP addressing changes to an endpoint do not necessarily change its policy, as is commonly the case in traditional networks (the exception is an endpoint defined by its IP address). Alternatively, moving an endpoint to another EPG applies the new policy at the leaf switch to which the endpoint is connected and defines new behavior for that endpoint based on the new EPG. Figure 6 depicts these benefits.
Figure 6. EPG Benefits within Fabric

The final benefit provided by EPGs is in the way policy is enforced for an EPG. The physical ternary content-addressable memory (TCAM) in which policy is stored for enforcement is an expensive component of switch hardware and therefore tends to lower policy scale or raise hardware costs. Within the Cisco ACI fabric, policy is applied based on the EPG rather than on the endpoint itself. Policy size can be expressed as n*m*f, where n is the number of sources, m is the number of destinations, and f is the number of policy filters. Within the Cisco ACI fabric, sources and destinations each become a single entry for a given EPG, which reduces the number of total entries required. This benefit is shown in Figure 7.
Figure 7. EPG Role in Policy Reduction
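A quick worked example makes the n*m*f reduction concrete; the endpoint and filter counts are illustrative:

```python
# Worked example of the n*m*f policy-size reduction. Without EPGs, every
# (source endpoint, destination endpoint, filter) combination needs a TCAM
# entry; with EPGs, each side collapses to a single entry per group.

n, m, f = 50, 40, 10        # source endpoints, destination endpoints, filters

per_endpoint_entries = n * m * f    # traditional per-endpoint ACL scale
per_epg_entries = 1 * 1 * f         # one sEPG entry, one dEPG entry

print(per_endpoint_entries)   # 20000
print(per_epg_entries)        # 10
```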

As discussed, policy within a Cisco ACI fabric is applied between two EPGs. These policies can be applied in either a unidirectional or bidirectional mode between any given pair of EPGs. The policies then define the communication allowed between EPGs. This is shown in Figure 8.
Figure 8. Unidirectional and Bidirectional Policy Enforcement

Cisco APIC Policy Enforcement


The relationship between EPGs and policies can be thought of as a matrix with one axis representing source EPGs (sEPGs) and the other representing destination EPGs (dEPGs). One or more
policies will be placed in the intersection between appropriate sEPGs and dEPGs. The matrix will end up sparsely populated in most cases because many EPGs will have no need to
communicate with one another. (See Figure 9)
Figure 9. Policy Enforcement Matrix
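The sparse matrix can be sketched as a mapping keyed by (sEPG, dEPG) pairs; the EPG names and filter strings are illustrative:

```python
# Sketch of the sEPG x dEPG policy matrix as a sparse mapping: only pairs
# that actually communicate carry policies, so most cells stay empty.

policy_matrix = {
    ("web", "app"): ["permit tcp/8080"],
    ("app", "db"):  ["permit tcp/1433"],
}

def policies(sepg, depg):
    """An empty list means no policy exists between the pair, so the
    traffic is not permitted (whitelist model)."""
    return policy_matrix.get((sepg, depg), [])

print(policies("web", "app"))   # ['permit tcp/8080']
print(policies("web", "db"))    # [] - no relationship defined
```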

Policies themselves break down into a series of filters for quality of service, access control, and so on. Filters are specific rules that make up the policy between two EPGs, composed of inbound and outbound rules: permit, deny, redirect, log, copy (separate from SPAN), and mark functions. Policies allow for wildcard functionality within the definition. The enforcement of policy typically takes a most-specific-match-first approach. The table in Figure 10 shows the specific enforcement order.
Figure 10. Wildcard Enforcement Rules

Enforcement of policy within the fabric is always guaranteed; however, policy can be applied in one of two places. Policy can be enforced opportunistically at the ingress leaf; otherwise, it is enforced on the egress leaf. Whether policy can be enforced at ingress is determined by whether the destination EPG is known. The source EPG is always known, and policy rules pertaining to that source, as both an sEPG and a dEPG, are always pushed to the appropriate leaf switch when an endpoint attaches. After policy is pushed to a leaf, it is stored and enforced in hardware. Because the Cisco APIC is aware of all EPGs and the endpoints assigned to them, the leaf to which an endpoint attaches always has all the policies required and never needs to punt traffic to a controller, as might be the case in other systems. (See Figure 11)
Figure 11. Applying Policy to Leaf Nodes

If the destination EPG is not known, policy cannot be enforced at ingress. Instead, the source EPG is tagged, and the policy-applied bits are not marked. Both of these fields exist in the reserved bits of the VXLAN header. The packet is then forwarded to the forwarding proxy, typically resident in the spine. The spine is aware of all destinations in the fabric; if the destination is unknown to the spine as well, the packet is dropped. If the destination is known, the packet is forwarded to the destination leaf. The spine never enforces policy; this is handled by the egress leaf.
When a packet is received by the egress leaf, the sEPG and the policy-applied bits are read (these were tagged at ingress). If the policy-applied bits are marked as applied, the packet is forwarded without additional processing. If instead the policy-applied bits show that policy has not been applied, the sEPG marked in the packet is matched with the dEPG (always known on the egress leaf), and the appropriate policy is then applied. This is shown in Figure 12.
Figure 12. Enforcing Policy on Fabric

Opportunistic policy application allows for efficient handling of policy within the fabric, as further represented in Figure 13.
Figure 13. Opportunistic Ingress Enforcement of Policy
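The ingress/egress decision flow described above can be sketched as follows. The packet fields and function names are simplifications of the VXLAN-header mechanics (the sEPG tag and policy-applied bits), not the actual implementation:

```python
# Sketch of opportunistic policy enforcement: enforce at ingress when the
# destination EPG is known there; otherwise tag the packet and defer to the
# egress leaf, which always knows the dEPG.

def enforce(packet):
    packet.setdefault("enforced_at", []).append("leaf")

def ingress_leaf(packet, known_depgs):
    packet["sepg"] = packet["src_group"]       # source EPG is always known
    if packet["dst_group"] in known_depgs:
        enforce(packet)                        # enforce at ingress
        packet["policy_applied"] = True
    else:
        packet["policy_applied"] = False       # defer to egress leaf
    return packet

def egress_leaf(packet):
    if not packet["policy_applied"]:
        enforce(packet)                        # dEPG is known here
        packet["policy_applied"] = True
    return packet                              # applied bit set: no rework

pkt = {"src_group": "web", "dst_group": "db"}
pkt = ingress_leaf(pkt, known_depgs=set())     # dEPG unknown at ingress
pkt = egress_leaf(pkt)
print(pkt["policy_applied"], pkt["enforced_at"])   # True ['leaf']
```

Either way, policy is enforced exactly once, and never punted to the controller.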

Multicast Policy Enforcement


The nature of multicast makes the requirements for policy enforcement slightly different. Although the source EPG is easily determined at ingress because it is never a multicast address, the
destination is an abstract entity; the multicast group may consist of endpoints from multiple EPGs. In multicast cases the Cisco ACI fabric uses a multicast group for policy enforcement. The
multicast groups are defined by specifying a multicast address range or ranges. Policy is then configured between the sEPG and the multicast group. (See Figure 14)
Figure 14. Multicast Group (Specialized Multicast EPG)

The multicast group (the EPG corresponding to the multicast stream) is always the destination and never used as a source EPG. Traffic sent to a multicast group comes either from the multicast source or from a receiver joining the stream through an IGMP join. Because multicast streams are nonhierarchical and the stream itself is already in the forwarding table (through the IGMP join), multicast policy is always enforced at ingress. This prevents the need for multicast policy to be written to egress leaves. (See Figure 15)
Figure 15. Multicast Policy Enforcement

Application Network Profiles


As stated earlier, an application network profile (ANP) within the fabric is a collection of EPGs, their connections, and the policies that define those connections. ANPs become the logical representation of an application and its interdependencies on the Cisco ACI fabric.
ANPs are designed to be modeled in a logical fashion that matches the way applications are designed and deployed. The configuration and enforcement of the policies and connectivity are then handled by the system itself, using the Cisco APIC, rather than by an administrator. Figure 16 shows an example ANP.
Figure 16. Application Network Profile

Creating ANPs requires three general steps:

- Creation of EPGs, as discussed earlier
- Creation of policies that define connectivity, including:
  - Permit
  - Deny
  - Log
  - Mark
  - Redirect
  - Copy
  - Service graphs
- Creation of connection points between EPGs utilizing policy constructs known as contracts

Contracts
Contracts define inbound and outbound permits, denies, QoS, redirects, and service graphs. Contracts allow for both simple and complex definitions of how a given EPG communicates with other EPGs, depending on the requirements of a given environment. This relationship is shown in Figure 17.
Figure 17. Contracts with Application Network Profiles (ANPs)

In Figure 17 we see the relationship between the three tiers of a web application defined by EPG connectivity and the contracts that define their communication. The sum of these parts becomes
an ANP. Contracts also provide reusability and policy consistency for services that typically communicate with multiple EPGs. Figure 18 uses the concept of network file system (NFS) and
management resources to show this.

Figure 18. Complete Application Profile

Figure 18 shows the basic three-tier web application used previously with some common additional connectivity that would be required in the real world. In this diagram we see shared network services, NFS and management, which would be used by all three tiers as well as by other EPGs within the fabric. In these cases the contract provides a reusable policy defining how the NFS and MGMT EPGs produce functions or services that can be consumed by other EPGs.
Within the Cisco ACI fabric, the what and where of policy application have been intentionally separated. This allows policy to be created independently of how it is applied and reused where required. The actual policy configured in the fabric is determined by the policy itself, defined as a contract (the what), and the intersection of EPGs and other contracts with those policies (the where).
In more complex application deployment environments, contracts can be further broken down using subjects, which can be thought of as applications or subapplications. To better understand
this concept, think of a web server. Although it might be classified as web, it might be producing HTTP, HTTPS, FTP, and so on, and each of these subapplications might require different policies.
Within the Cisco APIC model, these separate functions or services are defined using subjects, and subjects are combined within contracts to represent the set of rules that define how an EPG
communicates with other EPGs. (See Figure 19)
Figure 19. Subjects within a Contract

Subjects describe the functions that an application exposes to other processes on the network. This can be thought of as producing a set of functions: that is, the web server produces HTTP, HTTPS, and FTP. Other EPGs then consume one or more of these functions; which EPGs consume which services is defined by creating relationships between EPGs and the contracts that contain the subjects defining applications or subapplications. Full policy is defined by administrators specifying which groups of EPGs can consume what another provides. This model provides functionality for hierarchical EPGs, or more simply, EPGs that are groups of applications and subapplications. (See Figure 20)
Figure 20. Subjects within a Contract
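The produce/consume relationship between subjects and EPGs can be sketched like this; the contract structure, subject names, and rule strings are assumptions for illustration:

```python
# Sketch of subjects within a contract: a provider EPG exposes functions
# (subjects), and each consumer EPG consumes only the subset it needs.

contract = {
    "web-services": {              # contract provided by the web EPG
        "http":  ["permit tcp/80"],
        "https": ["permit tcp/443"],
        "ftp":   ["permit tcp/21"],
    }
}

def consumed_rules(contract_name, subjects):
    """Rules a consumer EPG receives for the subjects it subscribes to."""
    all_subjects = contract[contract_name]
    return [rule for s in subjects for rule in all_subjects[s]]

# An 'app' EPG consumes only the HTTP and HTTPS subjects, not FTP
print(consumed_rules("web-services", ["http", "https"]))
```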

Additionally, this model provides the capability to define a disallow list on a per-EPG basis. These disallows, known as taboos, override the contract itself, ensuring that certain communication can be denied on a per-EPG basis. This capability provides a blacklist model within the Cisco ACI fabric, as shown in Figure 21.

Figure 21 shows that a contract can be defined allowing all traffic from all EPGs. This allowance is then refined by creating a taboo list of specific ports or ranges that are undesirable. This model provides a transitional method for customers desiring to migrate over time from a blacklist model, which is typically in use today, to the more desirable whitelist model. In a blacklist model, all communication is open unless explicitly denied, whereas a whitelist model requires communication to be explicitly defined before being permitted. It is important to remember that disallow lists are optional, and in a full whitelist model they will rarely be needed.
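Taboo behavior can be sketched as a deny list that takes precedence over the contract's permits; the port numbers and names here are illustrative:

```python
# Sketch of a taboo: a broad permit contract refined by a per-EPG deny
# list that always wins over the contract.

def allowed(port, contract_permits, taboos):
    if port in taboos:              # taboo overrides the contract
        return False
    return port in contract_permits

permit_all = set(range(1, 65536))   # contract allowing all ports
taboo = {23, 445}                   # e.g. deny telnet and SMB for this EPG

print(allowed(80, permit_all, taboo))   # True
print(allowed(23, permit_all, taboo))   # False
```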
Contracts provide a grouping for the descriptions and associated policies that define application services. They can be contained within a given scope: tenant, context, or EPG as a local contract. An EPG is also capable of subscribing to multiple contracts, in which case it receives the superset of the defined policies.
Although contracts can be used to define complex real-world application relationships, they can also be used very simply for traditional application deployment models. For instance, if a single VLAN/VXLAN is used to define each separate service, and those VLANs are tied to port groups within VMware, a simple contract model can be defined without unneeded complexity. (See Figure 22)
(See Figure 22)
Figure 22. Using Contracts with VMware Port Groups

However, in more advanced application deployment models such as PaaS, SOA 2.0, and Web 2.0 models, where more application granularity is required, complex contract relationships can be used. These relationships can define detailed relationships between components within a single EPG and with multiple other EPGs. This is shown in Figure 23.

Figure 23. Modeling Complex Applications Using Contracts

Figure 23 shows that multiple application tiers may exist within a single EPG, and relationships are defined between those tiers as well as tiers residing in external EPGs. This allows complex
relationships to be defined where certain constructs are consumed by other constructs that might reside within various EPGs. Functionality is provided by the Cisco ACI policy model to define
relationships based on these components, which can be thought of as services, functions, applications, or subapplications residing in the same container.

Figure 23 also depicts the ability to provide intra-EPG policy: policy applied within a given EPG. This functionality will be supported in future releases without requiring changes to the model deployed today. As shown in the diagram, an EPG can consume its own resources as defined by the contract. In the diagram, both NFS and database exist within an EPG that has requirements to consume those relationships. The policy is depicted by the arrows looping back to the application policy construct.
Although contracts provide the means to support more complex application models, they do not dictate additional complexity. As stated, for simple application relationships, simple contracts can be used. For complex application relationships, the contract provides a means for building those relationships and reusing them where required.
Contracts break down into subcomponents:

- Subjects: groups of filters that apply to a specific application or service
- Filters: used to classify traffic
- Actions: such as permit, deny, mark, and so on, performed on matches to those filters
- Labels: used optionally to group objects such as subjects and EPGs for the purpose of further defining policy enforcement

In a simple environment, the relationship between two EPGs would look similar to that in Figure 24. Here web and app EPGs are considered a single application construct and defined by a given set of filters. This will be a very common deployment scenario. Even in complex environments, this model will be preferred for many applications.
Figure 24. Simple Policy Contract Relationships

Many environments will require more complex relationships; some examples are:

- Environments using complex middleware systems
- Environments in which one set of servers provides functionality to multiple applications or groups (for example, a database farm providing data for several applications)
- PaaS, SOA, and Web 2.0 environments
- Environments where multiple services run within a single OS

In these environments the Cisco ACI fabric provides a more robust set of optional features to model actual application deployments in a logical fashion. In both cases the Cisco APIC and fabric
software are responsible for flattening the policy down and applying it for hardware enforcement. This relationship between the logical model, which is used to configure application relationships,
and the concrete model used to implement them on the fabric simplifies design, deployment, and change within the fabric.
An example would be an SQL database farm providing database services to multiple development teams within an organization: for instance, a red team, blue team, and green team each using separate database constructs supplied by the same farm. In this instance, a separate policy might be required for each team's access to the database farm. Figure 25 depicts this relationship.
Figure 25. Single Database Farm Serving Three Separate Groups Requiring Separate Policy Controls

The simple models discussed previously do not adequately cover this more complex relationship between EPGs. In these instances, we need the ability to separate policy for the three database instances within the SQL-DB EPG; these can be thought of as subapplications and are referred to within ACI as subjects.
The Cisco ACI fabric provides multiple ways to model this application behavior, depending on user preference and application complexity. The first way to model this behavior is to use three contracts, one for each team. Remember that an EPG can inherit more than one contract and receives the superset of the rules defined there. In Figure 26, each app team's EPG connects to the SQL-DB EPG using its own specific contract.
Figure 26. Utilizing Three Contracts to Define Separate Consumer Relationships

As shown, the SQL-DB EPG inherits the superset of policies from three separate contracts. Each application team's EPG then connects to the appropriate contract. The contract itself designates the policy, while the relationship defined by the arrows denotes where policy will be applied, or who is providing and consuming which service. In this example the Red-App EPG will consume SQL-DB services with the QoS, ACL, marking, redirect, and other behavior defined within the Red-Team contract. The same is true for the blue and green teams.
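The superset behavior described above can be illustrated with a short Python sketch. This is a hypothetical model for illustration only, not APIC code; the contract contents are invented:

```python
# Hypothetical sketch: an EPG that provides several contracts enforces
# the union (superset) of the rules defined across those contracts.
red_team = {("tcp", 1433, "allow"), ("tcp", 1433, "qos:gold")}
blue_team = {("tcp", 1433, "allow"), ("tcp", 1433, "qos:silver")}
green_team = {("tcp", 1433, "allow"), ("tcp", 1433, "log")}

def provided_rules(*contracts):
    """Rules enforced at the providing EPG: the superset of all contracts."""
    rules = set()
    for contract in contracts:
        rules |= contract
    return rules

sql_db_rules = provided_rules(red_team, blue_team, green_team)
```

Each consuming EPG still sees only its own contract; the providing SQL-DB EPG enforces all four distinct rules.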
In many instances, groups of contracts will be applied together frequently: for example, if multiple database farms are created that all require access by the three teams in our example, or if development, test, and production farms are used. In these cases, a bundle can be used to logically group the contracts. Bundles are optional; a bundle can be thought of as a container for one or more contracts for ease of use. The use of bundles is depicted in Figure 27.
Figure 27. Using Bundles to Group Contracts

In Figure 27 it is important to note the attachment points of the arrows showing the relationships. Remember that policy is determined by what and where within the fabric. In this example we want the SQL-DB EPG to provide all contracts within the bundle, so we attach the bundle itself to the EPG. For each of the three application teams, we want only the access defined by its specific contract, so we attach each team to consume the corresponding contract within the bundle.
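The asymmetry between providing a bundle and consuming a single contract within it can be sketched as follows. This is an illustrative model under the assumptions above, not an APIC implementation; the contract names are invented:

```python
# Hypothetical sketch of bundle attachment semantics: the provider attaches
# to the whole bundle, while each consumer attaches to one contract inside it.
bundle = {
    "Red-Team": {"allow tcp/1433 from Red-App"},
    "Blue-Team": {"allow tcp/1433 from Blue-App"},
    "Green-Team": {"allow tcp/1433 from Green-App"},
}

def provider_policy(bundle):
    """Providing the bundle means providing every contract it contains."""
    rules = set()
    for contract_rules in bundle.values():
        rules |= contract_rules
    return rules

def consumer_policy(bundle, contract_name):
    """A consumer attached to one contract sees only that contract's rules."""
    return set(bundle[contract_name])
```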
This same relationship can optionally be modeled in another way using labels. Labels provide another grouping function for use within application policy definition. In most environments labels will not be required, but they are available for deployments with advanced application models and for teams familiar with the concept.
Using labels, a single contract can be used to represent multiple services or components of applications provided by a given EPG. In this case the labels represent the DB EPG providing
database services to three separate teams. By labeling the subjects and the EPGs using them, separate policy can be applied within a given contract even if the traffic types or other classifiers
are identical. Figure 28 shows this relationship.
Figure 28. Using Labels to Group Objects within the Policy Model

In Figure 28 the SQL-DB EPG provides services using a single contract called SQL-DB, which defines the database services it provides to three different teams. Each of the three team EPGs that will consume these services is then attached to the same contract. By using labels on the subjects and on the EPGs themselves, specific separate rules are defined for each team. The rules within the contract that match the label are the only ones applied for each EPG. This holds true even if the classification within the construct is the same: for example, the same Layer 4 ports.
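The label-matching behavior can be sketched in Python. This is a hypothetical illustration of the idea, not APIC code; the rule structure and label names are invented:

```python
# Hypothetical sketch: within one contract, subjects carry labels; an EPG
# receives only the rules whose label matches its own, even when the
# traffic classifiers (here, the same TCP port) are identical.
contract_rules = [
    {"label": "red",   "classifier": ("tcp", 1433), "action": "allow+qos:gold"},
    {"label": "blue",  "classifier": ("tcp", 1433), "action": "allow+qos:silver"},
    {"label": "green", "classifier": ("tcp", 1433), "action": "allow+log"},
]

def rules_for_epg(contract_rules, epg_label):
    """Only the subjects whose label matches the EPG's label apply."""
    return [r for r in contract_rules if r["label"] == epg_label]
```

Promoting an EPG from one environment to another then amounts to changing its label, as the text goes on to describe.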
Labels provide a powerful classification tool that allows objects to be grouped for the purpose of policy enforcement. This also allows applications to be moved quickly through various development lifecycles. For example, if the red-label service Red-App represented a development environment that needed to be promoted to test, represented by the blue label, the only required change would be the label assigned to that EPG.

Summary
The Cisco ACI policy model is designed top down using a promise theory model to control a scalable architecture of defined network and service objects. This model provides robust, repeatable controls and multitenancy, and it requires minimal detailed knowledge by the control system, known as the Cisco APIC. The model is designed to scale beyond current needs to the needs of private clouds, public clouds, and software-defined data centers.
The policy enforcement model within the fabric is built from the ground up on an application-centric object model. This provides a logical model for laying out applications, which is then applied to the fabric by the Cisco APIC. This helps bridge the gap in communication between application requirements and the network constructs that enforce them. The Cisco APIC model is designed for rapid provisioning of applications on the network, tied to robust policy enforcement, while maintaining a workload-anywhere approach.

Additional References

For additional references: http://www.cisco.com/go/aci.

OpFlex: Framework for a Broad Partner Ecosystem


Current Data Center Challenges
Current infrastructure management tools generally embed the provisioning logic in scripts and workflows. Today's software-defined networking (SDN) approaches generally employ centralized control planes, which can create challenges, limiting an organization's capability to operate at scale, troubleshoot failures in the infrastructure, and support interoperability and innovation.

Introduction of OpFlex
Cisco Application Centric Infrastructure (ACI) uses a fundamentally different approach, implementing a declarative control model that allows each device to receive a high-level, abstract
policy that can be rendered and enforced locally. In addition, Cisco works with its partners to build a powerful, comprehensive set of APIs to allow any device to connect to the Cisco Application
Policy Infrastructure Controller (APIC) and the Cisco ACI solution.
OpFlex, the southbound API, is an open and extensible policy protocol used to transfer abstract policy in XML or JavaScript Object Notation (JSON) between a policy controller, such as the Cisco APIC, and any device, including hypervisor switches, physical switches, and Layer 4 through 7 network services. Cisco and its partners, including Intel, Microsoft, Red Hat, Citrix, F5, Embrane, and Canonical, are working through the IETF and open source community to standardize OpFlex and provide a reference implementation.
OpFlex is a new mechanism for transferring abstract policy from a modern network controller to a set of smart devices capable of rendering policy. Although many existing protocols, such as the Open vSwitch Database (OVSDB) management protocol, focus on imperative control with fixed schemas, OpFlex is designed to work as part of a declarative control system such as Cisco ACI, in which abstract policy can be shared on demand.
In addition to its implementations in the open source community, OpFlex is one of the primary mechanisms through which other devices can exchange and enforce policy with the Cisco APIC. OpFlex defines that interaction. As a result, OpFlex can be used to integrate a number of devices from both Cisco and ecosystem partners with the Cisco ACI fabric, providing investment protection.
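The declarative idea behind OpFlex can be illustrated with a brief sketch: the controller hands a device an abstract policy (here serialized as JSON), and the device renders it into its own local enforcement terms. The field names and rendering format are invented for illustration and are not the OpFlex wire format:

```python
import json

# Hypothetical sketch of declarative policy transfer: the controller sends
# an abstract policy document, and the device interprets it locally.
abstract_policy = json.dumps({
    "consumer": "Web", "provider": "App",
    "filter": {"proto": "tcp", "port": 80}, "action": "permit",
})

def render_locally(policy_json):
    """Each device renders the abstract policy into its own local rule form."""
    policy = json.loads(policy_json)
    f = policy["filter"]
    return f"{policy['action']} {f['proto']}/{f['port']} {policy['consumer']}->{policy['provider']}"
```

Because the policy is abstract, two different devices could render the same document into entirely different local configurations, which is the contrast with fixed-schema imperative protocols drawn above.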

Value to the Ecosystem


OpFlex has been widely accepted by the Cisco partner ecosystem because it offers a powerful set of advantages. It allows vendors to continue to innovate and expose new features in their
platforms. These new features can be exposed through an abstract policy model that each device autonomously interprets. This approach allows one device to take advantage of a new feature
without requiring others to do so, too. Additionally, OpFlex offers interoperability and ease of integration with declarative control systems through support for abstract policy.

Main OpFlex Use Cases


OpFlex can be broadly applied across a range of devices, but several use cases stand out as points of customer interest.

Core Routing and Data Center Interconnect

Cisco OpFlex provides investment protection by extending policy management support to core routing and data center interconnect (DCI) with Cisco Nexus 7000 Series Switches and Cisco ASR 9000 Series Aggregation Services Routers (Figure 1). The Cisco APIC is the central point of data center policy management in this architecture, while WAN configuration is done through a separate WAN controller or directly by the user. The goal with OpFlex in this scenario is to automate fabric-facing configuration and exchange per-tenant information.
Figure 1. OpFlex Extended to Core Routing and Data Center Interconnect

Hypervisor Partners
OpFlex's declarative model allows virtual switches in popular hypervisors to enforce network policy. In this role, these switches function as extended policy leaves, or virtual leaves (vLeaves). Supported devices include the Cisco Nexus 1000V Switch for VMware vSphere, Microsoft Hyper-V, Red Hat KVM, Canonical KVM and Xen, and Citrix XenServer (Figure 2).
Figure 2. OpFlex Extended to Virtual Computing and Hypervisor Switching

Layer 4 through 7 Services


OpFlex provides an alternative mechanism for deep integration between Layer 4 through 7 devices and the Cisco APIC to activate service chain behaviors, allowing the Cisco APIC to manage the full cycle of Layer 4 through 7 service deployment (Figure 3).
Figure 3. OpFlex Extended to Layer 4 Through 7 Services

The OpFlex Layer 4 through 7 ecosystem allows customers to use their existing service nodes and their current modes of operation. OpFlex allows the customer to deploy automated security and configuration, as well as advanced performance monitoring, for Layer 4 through 7 services.

Conclusion
OpFlex provides an extensible way to implement scalable infrastructure while providing investment protection for Cisco data center core and WAN platforms, extending network policy to hypervisor switches, and integrating with Layer 4 through 7 services. OpFlex's openness provides the framework for a broad and open ecosystem that allows policy exchange between the Cisco APIC and any device.

For More Information


http://www.cisco.com/go/aci.

The Cisco Application Policy Infrastructure Controller

Introduction: What Is the Cisco Application Policy Infrastructure Controller?

The Cisco Application Policy Infrastructure Controller (APIC) is a distributed system implemented as a cluster of controllers. The Cisco APIC provides a single point of control, a central API, a
central repository of global data, and a repository of policy data for the Cisco Application Centric Infrastructure (ACI). Cisco ACI is conceptualized as a distributed overlay system with external
endpoint connections controlled and grouped through policies. Physically, Cisco ACI is a high-speed, multipath leaf and spine (bipartite graph) fabric (Figure 1).
The Cisco APIC is a unified point of policy-driven configuration. The primary function of the Cisco APIC is to provide policy authority and policy resolution mechanisms for the Cisco ACI and
devices attached to Cisco ACI. Automation is provided as a direct result of policy resolution and of rendering its effects onto the Cisco ACI fabric.
The Cisco APIC communicates in the infrastructure VLAN (in band) with the Cisco ACI spine and leaf nodes to distribute policies to the points of attachment (Cisco leaf) and to provide a number of key administrative functions to the Cisco ACI. The Cisco APIC is not directly involved in data plane forwarding, so a complete failure or disconnection of all Cisco APIC elements in a cluster will not result in any loss of existing data center functionality.
In general, policies are distributed to nodes as needed upon endpoint attachment or by an administrative static binding. You can, however, specify resolution immediacy, which regulates when policies are delivered to Cisco nodes. Prefetch, or early resolution, is one of the modes. The most scalable mode is the just-in-time mode, in which policies are delivered to nodes only upon detection of an attachment. Attachment detection is based on analysis of various triggers available to the Cisco APIC.
A central Cisco APIC concept is to express application networking needs as an extension of application-level metadata through a set of policies and requirements that are automatically applied
to the network infrastructure. The Cisco APIC policy model allows specification of network policy in an application- and workload-centric way. It describes sets of endpoints with identical network
and semantic behaviors as endpoint groups. Policies are specified per interaction among such endpoint groups.

Main features of the Cisco APIC include:
Application-centric network policies
Data-model-based declarative provisioning
Application, topology monitoring, and troubleshooting
Third-party integration (Layer 4 through 7 services, storage, computing, WAN)
Image management (spine and leaf)
Cisco ACI inventory and configuration
Implementation on a distributed framework across a cluster of appliances

Figure 1. Cisco APIC Policy Model

Scalable and Flexible


A single Cisco APIC cluster supports over 1 million Cisco ACI endpoints, more than 200,000 ports, and more than 64,000 tenants, and it provides centralized access to Cisco ACI information through a number of interfaces, including an object-oriented RESTful API with XML and JSON bindings, a modernized user-extensible command-line interface (CLI), and a GUI. All methods are based on a model of equal access to internal information. Furthermore, Cisco APIC clusters are fully extensible to computing and storage management.
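As a brief sketch of how a client might address that RESTful API: the endpoint paths below follow Cisco's documented APIC URL conventions (an aaaLogin authentication call and class-level object queries), but the controller address is hypothetical and the exact paths should be verified against the APIC REST API documentation for your release. Nothing is transmitted here; the example only constructs the request:

```python
import json

# Illustrative sketch of APIC REST request construction (no network I/O).
APIC = "https://apic.example.com"  # hypothetical controller address

def login_request(user, password):
    """Build (url, body) for the authentication call; nothing is sent here."""
    body = json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": password}}})
    return f"{APIC}/api/aaaLogin.json", body

def class_query(class_name):
    """Class-level query URL: for example, all tenants via the fvTenant class."""
    return f"{APIC}/api/class/{class_name}.json"
```

The same queries can be expressed with XML bindings by substituting the .xml suffix, per the equal-access model described above.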

Cisco APIC Is Not Another NMS


The Cisco APIC is a network policy control system. However, it is not a network element management system and should not be mistaken for a manager of managers. Instead, it is designed to extend the manageability innovations of the Cisco ACI Fabric OS platform by augmenting it with a policy-based configuration model and providing end-to-end Cisco ACI global visibility (Figure 2). Cisco has cultivated a broad ecosystem of partners to work with the Cisco APIC and provide important functions, including:
Fault and event management
System and configuration tools
Performance management
Automation tools
Orchestration frameworks
Statistical collection and analysis
Hypervisor, storage, and computing management
Layer 4 through 7 services integration
Cloud management
IP address management (IPAM)

Figure 2. Policy-Based Fabric

Virtual Cisco ACI Context: Securing Tenants


A tenant is a logical container or a folder for application policies. It can represent an actual tenant, an organization, or a domain or can just be used for the convenience of organizing information.
A normal tenant represents a unit of isolation from a policy perspective, but it does not represent a private network. A special tenant named common has sharable policies that can be used by
all tenants. A context is a representation of a private Layer 3 namespace or Layer 3 network. It is a unit of isolation in our Cisco ACI framework. A tenant can rely on several contexts. Contexts
can be declared within a tenant (contained by the tenant) or can be in the common tenant. This approach enables us to provide both multiple private Layer 3 networks per tenant and shared
Layer 3 networks used by multiple tenants. This way, we do not dictate a specific rigidly constrained tenancy model. The endpoint policy specifies a common Cisco ACI behavior for all endpoints
defined within a given virtual Cisco ACI context.

Endpoints and Policy Control


The Cisco ACI is conceptualized as a distributed overlay system with external endpoint connections controlled and grouped through policies (Figure 3). The central concept here is to group
endpoints (EPs) with identical semantics into endpoint groups (EPGs) and then write policies that regulate how such groups can interact with each other. These policies provide rules for
connectivity, visibility (access control), and isolation of the endpoints. The Cisco APIC's primary responsibility is distributing, tracking, and updating such policies to corresponding Cisco ACI nodes as client endpoint connectivity to the Cisco ACI is established, changed, or removed.
Endpoint policy control consists of two logically coupled elements:
Policy repository: This is a collection of policies and rules applied to existing or hypothetical (either deleted or not yet created) endpoints.
Endpoint registry: This is a registry of endpoints currently known to the Cisco ACI. External client endpoints are any external computing, network, or storage element that is not a component of the Cisco ACI. Client endpoints are directly connected to the Cisco leaf, or indirectly through a fabric extender (FEX), intermediate switches (such as blade switches in blade systems), or a virtual switch. For example, a computing endpoint can be a single virtual machine's VM network interface card (vmNIC) or virtual NIC (vNIC), a physical server connection through a physical NIC, or a virtualized vNIC (SR-IOV, Palo, etc.).
Figure 3. Endpoint Identification

Endpoints and their Cisco ACI attachment location may or may not be known when the EPG is initially defined on the Cisco APIC. Therefore, endpoints either can be prespecified into
corresponding EPGs (statically at any time) or can be added dynamically as they are attached to the Cisco ACI. Endpoints are tracked by a special endpoint registry mechanism of the policy
repository. This tracking serves two purposes: It gives the Cisco APIC visibility into the attached endpoints and dictates policy consumption and distribution on the Cisco leaf switches.
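The two purposes of endpoint tracking can be illustrated with a small sketch. This is a hypothetical model of the registry idea, not APIC code; endpoint, EPG, and leaf names are invented:

```python
# Hypothetical sketch of the endpoint registry: endpoints are tracked as
# they attach, and each leaf needs only the policies for the EPGs of
# endpoints currently attached to it (policy consumption and distribution).
registry = {}  # endpoint id -> {"epg": ..., "leaf": ...}

def attach(ep, epg, leaf):
    registry[ep] = {"epg": epg, "leaf": leaf}

def detach(ep):
    registry.pop(ep, None)

def epgs_on_leaf(leaf):
    """EPGs whose policy the given leaf switch must hold."""
    return {info["epg"] for info in registry.values() if info["leaf"] == leaf}

attach("vnic-1", "web-servers", "leaf101")
attach("vnic-2", "db-servers", "leaf101")
attach("vnic-3", "web-servers", "leaf102")
```

When the last endpoint of an EPG detaches from a leaf, that leaf no longer needs the EPG's policy, which is what makes just-in-time policy distribution scalable.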

Endpoint Groups: Building Blocks of Policy and Automation


An endpoint group (EPG) is a collection of endpoints with identical Cisco ACI behaviors and requirements. The practical implication of the EPG in relation to the VM management layer is that it
can be thought of as a vSphere port group, or as a network as defined in the OpenStack Neutron API. Groups have an application-tier-like semantic at the application metadata level, where a
web server connects to the network as a member of the EPG web-servers and all rules pertaining to web servers in a given application are applied. However, in more traditional environments,
it is possible to think of an EPG as a collection of physical ports, VLANs, a subnet, or some other unit of isolation or networked workload containment. This group behavior represents the most
basic building block of network configuration automation on the Cisco APIC.

Endpoint Group Contracts


The Cisco APIC policy model defines EPG contracts between EPGs that control the various parameters between application tiers such as connectivity, visibility, service insertion, packet quality
of service (QoS), etc. A contract allows a user to specify rules and policies for groups of physical or virtual endpoints without understanding any specific identifiers or even who is providing the
service or who is consuming it, regardless of the physical location of the devices (Figure 4). This abstraction of specificity makes the Cisco policy model truly object oriented and highly flexible.
Each contract consists of a filtering construct, which is a list of one or more classifiers (IP address, TCP/UDP ports, etc.), and an action construct that dictates how the matching traffic is handled
(allow, apply QoS, log traffic, etc.).
Figure 4. Endpoint Group Contracts

Another implication of the contract filters is their effect on the distribution of policy and endpoint visibility information. For a given endpoint attaching to a given leaf, only the information about
related endpoints is communicated, along with corresponding policies.
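A contract's two constructs described above, filtering and action, can be sketched briefly. The classifier and action values are invented for illustration:

```python
# Hypothetical sketch of a contract: a filtering construct (a list of
# classifiers) plus an action construct applied to matching traffic.
contract = {
    "filters": [("tcp", 80), ("tcp", 443)],  # (protocol, destination port)
    "actions": ["allow", "apply-qos:gold", "log"],
}

def evaluate(contract, proto, port):
    """Return the actions for matching traffic, or a default deny."""
    if (proto, port) in contract["filters"]:
        return contract["actions"]
    return ["deny"]
```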

A Model-Based Controller Implemented with Promise Theory


In model-driven architectures, the software maintains a complete representation of the administrative and operational state of the system (the model).
Subject-matter experts (SMEs) do not configure the physical system resources directly but rather define logical configuration policy abstractions of policy state (hardware independent) that
control different aspects of the system behavior.
Figure 5. Promise Theory Model

The Cisco APIC uses a variant of promise theory with full formal separation of the logical and concrete models, in which no configuration is carried out against concrete entities (Figure 5).
Concrete entities are configured implicitly as a side effect of the logical model implementation. Concrete entities can be physical, but they don't have to be (VMs and VLANs, for example).
Logical configurations are rendered into concrete configurations by applying the policies in relation to the available physical resources, taking into account their state and capabilities.
Concrete configurations are deployed to the managed endpoints in an asynchronous, nonblocking fashion.
This involves profiles and policies in which logical entities are expressed as policy requirements. The management model applies uniformly to the Cisco ACI, services, and system behaviors.
The enforcement of policies is achieved through a hive of policy elements (PEs), a concept inspired by sensor networks. PEs enforce the desired state expressed in declared policies on each node of the Cisco ACI, in accordance with promise theory. This is distinctly different from traditional top-down management, in which every resource is configured directly.
Given an excitation trigger, it is the responsibility of a PE, directly or indirectly through a proxy, to trigger policy resolution. Once policy is resolved, its artifacts are rendered to the concrete model,
and the backend (Cisco ACI Fabric OS processes) reacts to such changes.

Why Promise Theory?


Promise theory has many advantages over traditional elemental management (Figure 6):
Provides declarative automation.
State convergence: The PE continuously conducts checks to help ensure that the configuration complies with the desired state of the infrastructure.
Tight policy loop: The PE corrects drift of the operational state from the desired state.
Provides continual real-time auditing of the operational state as compared to the defined state.
Removes the side effects of change.
Model remains intact at scale.
Scales linearly with the object-driven model.
Objects are responsible for the requested configuration.
No assumptions are made concerning the current object state. Relies on trust relationships and end-device ownership of configuration change.

Figure 6. Promise Theory Compared to Top-Down Management

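The state-convergence and tight-policy-loop behaviors can be sketched as a simple reconciliation function. This is a generic illustration of the convergence idea, not PE code; the state keys are invented:

```python
# Hypothetical sketch of the tight policy loop: a policy element compares
# operational state with desired state and emits corrective operations.
desired = {"vlan10": "up", "vlan20": "up", "acl-web": "applied"}

def corrections(desired, operational):
    """Operations needed to converge operational state onto desired state."""
    ops = []
    for key, want in desired.items():
        if operational.get(key) != want:
            ops.append(("set", key, want))
    for key in operational:
        if key not in desired:
            ops.append(("remove", key))
    return ops
```

Running this comparison continuously is what corrects drift: any divergence between the two states produces a corrective operation, and matching state produces none.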
Cisco ACI Operating System (Cisco ACI Fabric OS)


Cisco has taken the traditional Cisco Nexus OS (Cisco NX-OS Software) developed for the data center and pared it down to the essential features required for a modern data center deploying
Management Engine (DME) in Cisco ACI Fabric OS provides the framework that serves read and write requests from a shared lockless datastore. The datastore is object oriented, with each
object stored as chunks of data.
A chunk is owned by one Cisco ACI Fabric OS process, and only the owner of this process can write to the data chunk. However, any Cisco ACI Fabric OS process can read any of the data simultaneously through the CLI, Simple Network Management Protocol (SNMP), or an API call. A local policy element (PE) enables the Cisco APIC to implement the policy model directly in Cisco ACI Fabric OS (Figure 7).
Figure 7. Cisco ACI Fabric OS
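The single-writer, many-reader ownership rule described above can be sketched as follows. This is an illustrative model of the rule only, not the DME implementation; process names are invented:

```python
# Hypothetical sketch of DME chunk ownership: each data chunk is writable
# only by its owning process but readable by any process.
class Chunk:
    def __init__(self, owner, data):
        self.owner = owner
        self.data = data

    def read(self, process):
        # Any process may read the chunk simultaneously.
        return self.data

    def write(self, process, data):
        # Only the owning process may write to the chunk.
        if process != self.owner:
            raise PermissionError(f"{process} does not own this chunk")
        self.data = data

chunk = Chunk(owner="policy-mgr", data={"epg": "web"})
```

Because writes are serialized through a single owner per chunk, readers need no locks, which is the sense in which the datastore is lockless.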

Architecture: Components and Functions of the Cisco APIC


The Cisco APIC consists of a set of basic control functions, including:
Policy Manager (policy repository)
Topology Manager
Observer
Boot Director
Appliance Director (cluster controller)
VMM Manager
Event Manager
Appliance Element

Figure 8 shows the Cisco APIC components.


Figure 8. Cisco APIC Component Architecture

Policy Manager
The Policy Manager is a distributed policy repository responsible for the definition and deployment of the policy-based configuration of the Cisco ACI. This is a collection of policies and rules
applied to existing or hypothetical (not yet created) endpoints. The endpoint registry is a subset of the Policy Manager that tracks endpoints connecting to the Cisco ACI and their assignment to
endpoint groups as defined by the policy repository.

Topology Manager
The Topology Manager maintains up-to-date Cisco ACI topology and inventory information. Topology information is reported to the Cisco APIC by the leaf and spine switches. The physical
topology is based on the information discovered by the Link Layer Discovery Protocol (LLDP) and the routing topology of the fabric as reported by protocols (modified intermediate system to
intermediate system [IS-IS]) running within the fabric infrastructure space.
A global view of time-accurate topology information is available in the Topology Manager, including:
Physical topology (Layer 1; physical links and nodes)
Logical path topology (reflection of Layer 2 + Layer 3)

Topology information, along with associated aggregated operational state, is asynchronously updated in the Topology Manager upon detection of topology changes, and is available for queries
through the Cisco APIC API, CLI, and UI.

A subfunction of Topology Manager performs inventory management for the Cisco APIC and maintains a complete inventory of the entire Cisco ACI. The Cisco APIC inventory management
subfunction provides full identification, including model and serial number, as well as user-defined asset tags (for ease of correlation with asset and inventory management systems) for all ports,
line cards, switches, chassis, etc.
Inventory is automatically pushed by the DME-based policy element/agent embedded in the switches as soon as new inventory items are discovered or removed or transition in state in the local
repository of the Cisco ACI node.

Observer
The Observer is the monitoring subsystem of the Cisco APIC, and it serves as a data repository of the Cisco ACI's operational state, health, and performance, including:
Hardware and software state and health of Cisco ACI components
Operational state of protocols
Performance data (statistics)
Outstanding and past fault and alarm data
Record of events

Monitoring data is available for queries through the Cisco APIC API, CLI, and UI.

Boot Director
The Boot Director controls the booting and firmware updates of the Cisco spine and leaf and the Cisco APIC controller elements. It also functions as the address allocation authority for the
infrastructure network, which allows the Cisco APIC and the spine and leaf nodes to communicate. The following process describes bringing up the Cisco APIC and cluster discovery.
Each Cisco APIC in the Cisco ACI uses an internal private IP address to communicate with the Cisco ACI nodes and other Cisco APICs in the cluster. Cisco APICs discover the IP addresses of other Cisco APICs in the cluster using an LLDP-based discovery process.
Cisco APICs maintain an appliance vector (AV), which provides a mapping from a Cisco APIC ID to a Cisco APIC IP address and a universally unique identifier (UUID) of the Cisco APIC. Initially, each Cisco APIC starts with an AV filled with its local IP address, and all other Cisco APIC slots are marked unknown.
Upon switch reboot, the PE on the leaf gets its AV from the Cisco APIC. The switch then advertises this AV to all of its neighbors and reports any discrepancies between its local AV and its neighbors' AVs to all the Cisco APICs in its local AV.
Using this process, Cisco APICs learn about the other Cisco APICs in the Cisco ACI through switches. After validating these newly discovered Cisco APICs in the cluster, the Cisco APICs update their local AV and program the switches with the new AV. Switches then start advertising this new AV. This process continues until all the switches have the identical AV and all Cisco APICs know the IP addresses of all the other Cisco APICs.
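The convergence of appliance vectors can be sketched as a merge of partially filled vectors. This is a simplified illustration of the exchange described above (no switches, validation, or UUIDs); the addresses are invented:

```python
# Hypothetical sketch of appliance-vector (AV) convergence: each APIC
# starts knowing only its own address; exchanging and merging AVs fills
# in unknown slots until every AV is identical.
UNKNOWN = None

def merge(av_a, av_b):
    """Fill unknown slots in one AV from a peer's advertised AV."""
    return [a if a is not UNKNOWN else b for a, b in zip(av_a, av_b)]

# Three APIC slots; each controller initially knows only itself.
av1 = ["10.0.0.1", UNKNOWN, UNKNOWN]
av2 = [UNKNOWN, "10.0.0.2", UNKNOWN]
av3 = [UNKNOWN, UNKNOWN, "10.0.0.3"]

# Repeated pairwise exchange (advertised through the switches) converges.
av1 = merge(merge(av1, av2), av3)
av2 = merge(av2, av1)
av3 = merge(av3, av1)
```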

Appliance Director
The Appliance Director is responsible for the formation and control of the Cisco APIC appliance cluster. A minimum of three controllers are initially installed for control of the scale-out Cisco ACI (Figure 9). The ultimate size of the Cisco APIC cluster is directly proportional to the Cisco ACI size and is driven by the transaction-rate requirements. Any controller in the cluster can service any user for any operation, and a controller can be seamlessly added to or removed from the Cisco APIC cluster. It is important to understand that, unlike an OpenFlow controller, none of the Cisco APIC controllers is ever in the data path.
Figure 9. Appliance Director

VMM Manager
The VMM Manager acts as an agent between the policy repository and a hypervisor and is responsible for interacting with hypervisor management systems such as VMware's vCenter and cloud software platforms such as OpenStack and CloudStack. The VMM Manager inventories all of the hypervisor elements (pNICs, vNICs, VM names, etc.) and pushes policy into the hypervisor, creating port groups and so on. It also listens for hypervisor events such as VM mobility.

Event Manager
The Event Manager is a repository for all the events and faults initiated from the Cisco APIC or the fabric nodes.

Appliance Element
The Appliance Element is a monitor for the local appliance. It manages the inventory and state of the local Cisco APIC appliance.

Architecture: Data Management with Sharding


The Cisco APIC cluster uses a technology from large databases called sharding. To understand the sharding concept, consider the concept of database partitioning. Sharding is a result of the
evolution of what is called horizontal partitioning of a database. In this partitioning, the rows of the database are held separately instead of being normalized and split vertically into columns.
Sharding goes further than horizontal partitioning, also partitioning the database across multiple instances of the schema. In addition to increasing redundancy, sharding increases performance
because the search load for a large partitioned table can be split across multiple database servers, not just multiple indexes on the same logical server. With sharding, large partitionable tables
are split across the servers, and smaller tables are replicated as complete units. After a table is sharded, each shard can reside in a completely separated logical and physical server, data
center, physical location, etc. There is no ongoing need to retain shared access between the shards to the other unpartitioned tables located in other shards.
Sharding makes replication across multiple servers easy, unlike horizontal partitioning. It is a useful concept for distributed applications, which would otherwise need much more interdatabase server communication because the information would not be colocated on a single logical and physical server. Sharding, for example, reduces the number of data center interconnect links needed for database querying. Sharding requires a notification and replication mechanism between schema instances, to help ensure that the unpartitioned tables remain as synchronized as the applications require. In situations in which distributed computing is used to separate loads between multiple servers, a shard approach offers a strong advantage.

Effect of Replication on Reliability


Figure 10 shows the proportion of data that is lost when the nth appliance is lost out of a total of 5 appliances, for a variable replication factor K. When K = 1, no replication occurs, and each shard has one copy; when K = 5, full replication occurs, and all appliances contain a copy. Here n indicates the number of Cisco APIC appliances lost: when n = 1, one appliance has been lost; when n = 5, the last appliance has been disconnected.
Consider the example of K = 1: just one copy is made. Therefore, for every appliance lost, the same amount of data is lost, from n = 1 to n = 5. As the replication factor K increases, no data loss occurs unless at least K appliances are lost, and the data loss is gradual and starts at a smaller value. For example, with a replication factor of 3 (K = 3), no data is lost until the third appliance (n = 3) is lost, at which point only 10 percent of the data is lost. Cisco APIC uses a minimum of three appliances for this reason.
Figure 10. Effect of Replication on Reliability
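The data-loss behavior in Figure 10 can be reproduced with a short combinatorial sketch. This is an illustrative model only: it assumes each shard's K replicas are placed on K distinct appliances chosen uniformly at random, which is a simplification of the actual APIC layout.

```python
from math import comb

def fraction_lost(n_lost: int, k: int, total: int = 5) -> float:
    """Fraction of shards lost when n_lost of `total` appliances fail,
    assuming each shard's k replicas sit on a distinct subset of k
    appliances. A shard is lost only if all k replica holders fail."""
    if n_lost < k:
        return 0.0
    return comb(n_lost, k) / comb(total, k)

# K = 1: every appliance lost costs an equal 20 percent slice of the data.
# K = 3: nothing is lost until the third failure, and then only 10 percent.
```

Under these assumptions, `fraction_lost(3, 3)` evaluates to 0.1, matching the 10 percent figure quoted above for K = 3, n = 3.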

Effect of Sharding on Reliability


In Figure 11, L represents the number of appliances, starting with a minimum of three. With a replication factor of K = 3, no data is lost unless three appliances fail at the same time. With exactly three appliances, losing the third one means a complete loss of the data. Increasing the number of appliances rapidly improves resilience: with four appliances, as shown in the figure, losing a third appliance means a loss of 25 percent of the data, while with 12 appliances it means only about a 0.5 percent data loss. With sharding, increasing the number of appliances very quickly reduces the likelihood of data loss; full replication is not needed to achieve a very high rate of data protection.
Figure 11. Effect of Sharding on Reliability
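The figure's numbers follow from the same simplified model: with K = 3 replicas per shard spread uniformly over distinct appliances, only shards whose entire replica set falls inside the three failed appliances are lost. A hedged sketch (hypothetical uniform placement, not APIC's actual layout):

```python
from math import comb

def loss_at_third_failure(cluster_size: int, k: int = 3) -> float:
    """Fraction of shards lost when exactly k appliances out of
    cluster_size fail: only shards whose full k-replica set matches
    the failed set are lost, one failed set out of C(cluster_size, k)."""
    return 1 / comb(cluster_size, k)

# cluster_size = 4  -> 25 percent of shards lost at the third failure
# cluster_size = 12 -> 1/220, roughly 0.5 percent
```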

Sharding Technology
The sharding technology provides scalability and reliability to the data sets generated and processed by the Distributed Policy Repository, the endpoint registry, the Observer, and the Topology
Manager (Figure 12). The data for these Cisco APIC functions is partitioned into logically bounded subsets called shards (analogous to database shards). A shard is a unit of data management,
and all of the above data sets are placed into shards:
- Each shard has three replicas.
- Shards are evenly distributed.
- They enable horizontal (scale-out) scaling.
- They simplify the scope of replications.
Figure 12. Sharding

One or more shards are located on each Cisco APIC appliance and processed by a controller instance located on that appliance. The shard data assignments are based on a predetermined
hash function, and a static shard layout determines the assignment of shards to appliances.
Each replica in the shard has a use preference, and writes occur on the replica that is elected leader. Other replicas are followers and do not allow writes. In the case of a split-brain condition,
automatic reconciliation is performed based on timestamps. Each Cisco APIC has all Cisco APIC functions; however, processing is evenly distributed throughout the Cisco APIC cluster.
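The hash-based assignment and leader/follower roles described above can be sketched as follows. All names, the hash choice, and the layout rule here are hypothetical illustrations of the general technique, not the actual Cisco APIC algorithm.

```python
import zlib

N_SHARDS = 32          # hypothetical shard count
REPLICAS = 3           # each shard keeps three replicas
appliances = ["apic1", "apic2", "apic3", "apic4"]

def shard_of(dn: str) -> int:
    """Deterministic hash maps a managed object's distinguished
    name to a shard (illustrative stand-in for APIC's hash)."""
    return zlib.crc32(dn.encode()) % N_SHARDS

def replica_holders(shard: int) -> list:
    """Static layout: three consecutive appliances hold the shard;
    the first is the write leader, the rest are read-only followers."""
    return [appliances[(shard + i) % len(appliances)] for i in range(REPLICAS)]

holders = replica_holders(shard_of("uni/tn-Cisco/ap-web"))
leader, followers = holders[0], holders[1:]   # writes go only to the leader
```

Because the hash and layout are deterministic, every controller instance computes the same shard-to-appliance mapping without coordination.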

User Interface: Graphical User Interface (GUI)


The GUI is an HTML5-based web UI that works with most modern web browsers. The GUI provides seamless access to both the Cisco APIC and the individual nodes.

User Interface: Command-Line Interface (CLI)


Full stylistic and, where applicable, semantic compatibility with the Cisco NX-OS CLI is provided. The CLI for the entire Cisco ACI is accessed through the Cisco APIC and supports a transactional mode. Specific Cisco ACI nodes can also be accessed with a read-only CLI for troubleshooting. An integrated Python-based scripting interface allows user-defined commands to attach to the command tree as if they were native platform-supported commands. Additionally, the Cisco APIC provides a library for custom scripts.

User Interface: RESTful API


The Cisco APIC supports a comprehensive RESTful API over HTTP(S) with XML and JSON encoding bindings. Both class-level and tree-oriented data access are provided by the API.

Representational state transfer (REST) is a style of software architecture for distributed systems such as the World Wide Web. REST has emerged over the past few years as a predominant
web services design model. REST has increasingly displaced other design models such as SOAP and Web Services Description Language (WSDL) due to its simpler style.
The uniform interface that any REST interface must provide is considered fundamental to the design of any REST service, and thus the interface has these guiding principles:
- Identification of resources: Individual resources are identified in requests, for example, using URIs in web-based REST systems. The resources themselves are conceptually separate from the representations that are returned to the client.
- Manipulation of resources through these representations: When a client holds a representation of a resource, including any metadata attached, it has enough information to modify or delete the resource on the server, provided it has permission to do so.
- Self-descriptive messages: Each message includes enough information to describe how to process the message. Responses also explicitly indicate their cacheability.

An important concept in REST is the existence of resources (sources of specific information), each of which is referenced with a global identifier (such as a URI in HTTP). In order to manipulate these resources, components of the network (user agents and origin servers) communicate through a standardized interface (such as HTTP) and exchange representations of these resources (the actual documents conveying the information).

Any number of connectors (clients, servers, caches, tunnels, etc.) can mediate the request, but each does so without seeing past its own request (referred to as layering, another constraint of
REST and a common principle in many other parts of information and networking architecture). Thus, an application can interact with a resource by knowing two things: the identifier of the
resource and the action required - it does not need to know whether there are caches, proxies, gateways, firewalls, tunnels, or anything else between it and the server actually holding the
information. The application does, however, need to understand the format of the information (representation) returned, which is typically an HTML, XML, or JSON document of some kind,
although it may be an image, plain text, or any other content.

System Access: Authentication, Authorization, and RBAC


The Cisco APIC supports both local and external authentication and authorization (TACACS+, RADIUS, Lightweight Directory Access Protocol [LDAP]) as well as role-based access control (RBAC) to control read and write access for all managed objects and to enforce Cisco ACI administrative and per-tenant administrative separation (Figure 13). The Cisco APIC also supports domain-based access control, which enforces where (under which subtrees) a user has access permissions.
Figure 13. Authentication, Authorization, and RBAC

API Requests and URL/URI


The Cisco ACI Fabric OS Data Management Engine (DME) hierarchical object model approach is a very good fit for a RESTful interface, as URLs and URIs map directly into distinguished
names identifying managed objects (MO) on the tree, and any data on the distributed Management Information Tree (dMIT) can be described as a self-contained structured text document
encoded in XML or JSON (Figure 14). This structure is similar to that of Common Management Information Protocol (CMIP) and other X.500 variants. The DME was designed to allow the
control of managed resources by presenting their manageable characteristics as object properties. The objects have parent-child relationships that are identified using distinguished names and
properties, which are read and modified by a set of create, read, update, and delete (CRUD) operations. The object model features a full unified description of entities and no artificial separation
of configuration, state, or runtime data (Figure 15).
Figure 14. Unified Data Model

Accessing the dMIT:


- Queries return a set of objects or subtrees.
- There is an option to return an entire or partial subtree for each object in resolution scope.
- RBAC privileges define what types of objects can be accessed.
- Domain identifies what subtrees are accessed.
Figure 15. Organization of Managed Objects

The REST API uses standard HTTP commands for retrieval and manipulation of Cisco APIC data. The URL format used in the API is represented as follows:
<system>/api/[mo|class]/[dn|class][:method].[xml|json] {options}
- system: System identifier; an IP address or DNS-resolvable host name.
- mo | class: Indicates whether this is a managed object (MIT) or class-level query.
- class: Managed object class (as specified in the information model) of the objects queried. The class name is represented as <pkgName><ManagedObjectClassName>.
- dn: Distinguished name (unique hierarchical name of the object on the MIT) of the object queried.
- method: Optional indication of the method being invoked on the object; applies only to HTTP POST requests.
- xml | json: Encoding format.
- options: Query options, filters, and arguments.

For example, ifc-1.foo.com:7580/api/node-20/mo/sys/ch/lcslot-1/lc.xml globally identifies linecard 1 of system 20 (Figure 16).


Figure 16. Class-Level Queries
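A small helper can assemble URLs in the documented format. The host name, the HTTPS scheme, and the sample class name below are illustrative assumptions, not values from the source.

```python
def apic_url(system: str, kind: str, target: str,
             fmt: str = "xml", method: str = None, options: str = None) -> str:
    """Assemble a Cisco APIC REST URL following the documented pattern
    <system>/api/[mo|class]/[dn|class][:method].[xml|json]?{options}.
    Host names and DNs passed in are the caller's responsibility."""
    assert kind in ("mo", "class") and fmt in ("xml", "json")
    url = "https://{}/api/{}/{}".format(system, kind, target)
    if method:
        url += ":" + method          # methods apply only to POST requests
    url += "." + fmt
    if options:
        url += "?" + options
    return url

# Class-level query for all objects of a class, JSON-encoded:
apic_url("apic.example.com", "class", "fabricNode", fmt="json")
# -> 'https://apic.example.com/api/class/fabricNode.json'
```

An object-level query differs only in using `mo` and a distinguished name, for example `apic_url("apic.example.com", "mo", "sys/ch/lcslot-1/lc")`.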

The API also supports using a specific class-level URL format, providing access to all of a certain class of objects from the dMIT (Figure 17).
Figure 17. Object-Level Queries

The API further supports the use of a tree- or subtree-level URL format, providing access to a specific tree or subtree of objects from the dMIT (Figure 18).
Figure 18. Tree-Level Queries

Conclusion
The Cisco Application Policy Infrastructure Controller (APIC) is a modern, highly scalable distributed control system that manages the Cisco ACI switch elements and provides policy-driven
network provisioning that is implicitly automated. Additionally, the Cisco APIC provides the technology to implement a new paradigm of application policy and a robust automation platform for the
network and all the attached elements. The Cisco APIC is designed to do all of this while remaining out of the data path, thus allowing extremely high performance of the Cisco ACI.

For More Information


http://www.cisco.com/go/aci

Application-centric infrastructure
Trevor Eddolls | Nov 2, 2014

I wrote a little while ago about Software Defined just-about-anything (SDx), and Software Defined Networking (SDN) seems to be the route that most vendors are taking. But one vendor has a different approach. I thought it might be worth a look at Cisco's hardware-based alternative, which they call Application-Centric Infrastructure (ACI).

In a nutshell, what ACI does is integrate data centre management and cloud computing. In many ways, you can see it as a natural response from a company that is clearly losing market share as the potential customers for its hardware move everything into the cloud, effectively outsourcing their networking to the likes of Amazon and Google.
A traditional network connects computing devices together so they can share data. That network can comprise gateways, routers, and switches: the hardware to manage the movement of data and application software. With application-centric infrastructure, the focus is on the application and what's needed for it to work optimally, or, more importantly, what's needed to optimize the user's experience of the application. The sorts of metrics used are uptime and response times. For this to work in a hybrid network (i.e. both local and cloud-based), you need the ability to see your application across the whole network. You need to see all the components and how the data is flowing between them.
In application-centric networking, the problem to be solved is not specific to a router or switch; it is tied to the application and the IT systems that support it as a whole. This implies a level of integration with surrounding infrastructure, beginning with the application. Data about the application needs to be collected and correlated. Performance issues aren't fixed until applications are running optimally across the network infrastructure.
Software-defined networking works differently: it decouples control from the data-forwarding tasks of the network devices. It makes networking theoretically fairly simple because it makes the network itself programmable and manages to separate the application completely from the hardware bits and pieces that it's running on. And for a network hardware vendor like Cisco, that means users can install any old hardware from any old vendor, so long as it runs a bit of controlling software.
Cisco criticizes SDN, saying that it doesn't give you dynamic centralized policy management. They claim SDN is flow-based and focuses on individual components rather than providing a single configurable system.
Does ACI have legs? Is it a runner? Will anyone take it up? Or will it be one more acronym that we'll vaguely remember in five years' time? I don't have the answers to any of those questions, but I can see why their approach could be successful. The criticism of IT has always been that the IT guys are totally focused on their toys: they care more about the speed and capacity of their favourite components than they do about the company they work for. With this approach, it's clearly business-focused. If your business needs a particular application to run within a specific short time, then this is the approach for you, which makes it an approach that could keep your company in business.
And for companies that might be thinking of migrating to a cloud-based environment, but wouldn't consider a big bang approach, where everything changes at once, it provides an easy architecture to make this happen. With ACI, it's perfectly possible to migrate one application at a time to the cloud. It also makes it easier for companies that are accepting BYOD as a strategy to use ACI to ensure that every user, no matter what their device of choice is, gets a good response time when using the applications they need for work.
Plus, there are benefits for organizations that have offices spread across the country, or a large number of branch offices that are currently using a WAN (wide-area network) and are experiencing latency issues. ACI looks like it will provide a way to improve the end-user experience, particularly if they are using voice or video over the network.
I'm not saying that you should rush to the phone and get a Cisco salesman to call. What I am saying is that Cisco has come up with a way of looking at networking that will have a lot of appeal to the non-technical managers and thought leaders in a company, while, at the same time, providing a technical solution to networking issues.

Cisco Insieme Launches New Application Centric SDN Vision


Cisco takes the wraps off new vision for Application Centric network, announces new Dynamic Fabric Automation and the Nexus 7700
By Sean Michael Kerner | Posted Jun 26, 2013

There has been a lot of speculation in recent months about a bold new Cisco effort to redefine its place in the emerging world of Software Defined Networking (SDN). Today that speculation bore fruit when Cisco detailed its vision for Insieme Networks.

Soni Jiandani, senior vice president for Insieme Networks, explained to Enterprise Networking Planet that Insieme is a subsidiary company 85 percent owned by Cisco.
"Insieme is primarily an organization that is focusing on working on the next-generation data center solutions, around an application-centric infrastructure that will complement Cisco's existing portfolio," Jiandani said.
The goal at Insieme is to simplify networking from end to end by taking an application-centric approach that doesn't rely simply on building a
network of disparate boxes.


As to why Cisco is doing this innovation through Insieme and not directly with its own research and development, entirely under the Cisco brand name, Jiandani said that there are a few key reasons. Among them is the fact that Cisco needs and wants to drive innovation through every means possible.
"It's very common in Cisco's practice to look at innovation through a variety of methods, including both internal development and models like
Insieme," Jiandani said.
From a financial model perspective, Jiandani explained that as a majority subsidiary owned by Cisco, Insieme aims to work in a synergistic way with Cisco to build out the portfolio and solution base.
"We will leverage, where applicable, Cisco technology so we are not reinventing the wheel," Jiandani said. "From a go-to-market perspective, manufacturing, customer support, and all the elements that are already built out within Cisco will be brought to bear."
Application-Centric Networking

As to what, specifically, Insieme is building, the company currently talks mostly about its vision of where its future products will fit in. Jiandani explained that the Insieme platform should be considered a set of capabilities at the system level. It's not just software, as it will also involve innovation in hardware and silicon ASICs.
"It will allow our customers to build-out a penalty-free overlay," Jiandani said.
The Insieme platform vision is to deliver a virtualization model that can run across the network, tying the application and network infrastructure
layers together though a common policy MANAGEMENT framework.
The Insieme Platform will also leverage and benefit from existing Cisco SDN efforts, including Cisco ONE and the open source OpenDaylight
project.
Jiandani said that further details on the Insieme platform will be revealed later this year.
"I would expect that in the second half of the calendar year, we will be rolling out more details, including the product details," Jiandani said. "You
should expect that there will be some beta customers participating with us in the launch, talking about how a migration to this model has enabled
their success."
Dynamic Fabric Automation
While Insieme represents the forward-looking vision for Cisco, there are also some tangible innovations that Cisco wants its customers to have
more rapidly. Among those innovations is the new Dynamic Fabric Automation (DFA) model and a new Nexus 7700 switch.

Shashi Kiran, senior director of data center, cloud, and open networking, explained to Enterprise Networking Planet that what is unique about DFA is that it helps to solve the challenges of complex network provisioning.
"What DFA does is, it optimizes the network for greater efficiency, and it brings in management simplicity," Kiran said.

Nexus 7700
Cisco is also expanding its data center switching portfolio with the Nexus 7700 series. The Nexus 7700 is the latest evolution of the Nexus 7000,
which Cisco first announced in 2008. The first generation of Nexus 7000 boxes supported up to 512 10 GbE connections and 15 Tbps of
switching capacity. The new top-end Nexus 7718 is an 18 slot chassis and scales up to 83 Tbps of switching capacity. In terms of port density,
the system can support up to 192 ports of 100 GbE and 384 ports of 40 GbE.
Sean Michael Kerner is a senior editor at Enterprise Networking Planet and InternetNews.com. Follow him on Twitter @TechJournalist.