
Cisco Intersight: A Handbook for Intelligent Cloud Operations


Introduction

Intersight Foundations
    Introduction
    Intersight architecture
    Licensing
    Wrapping up

Security
    Introduction
    Connectivity
    Claiming
    Role-Based Access Control (RBAC)
    Audit logs
    Data security
    Security advantages
    Wrapping up

Infrastructure Operations
    Introduction
    Device health and monitoring
    Intelligence feeds
    Integrated support
    Infrastructure configuration
    ITSM integration
    UCS Director integration
    Wrapping up

Server Operations
    Introduction
    Supported systems
    Actions
    Server deployment
    Domain management
    Firmware updates
    Wrapping up

Network Operations
    Introduction
    Policy-driven network infrastructure
    Wrapping up

Storage Operations
    Introduction
    HyperFlex
    Deploying HyperFlex Clusters
    Managing HX Clusters
    Traditional storage operations
    Wrapping up

Virtualization Operations
    Introduction
    Claiming a vCenter target
    Contextual operations
    Virtualization orchestration
    Wrapping up

Kubernetes
    Introduction
    Intersight Kubernetes Service
    Benefits
    Creating clusters with IKS
    Intersight Workload Engine
    Wrapping up

Workload Optimization
    Introduction
    Users and roles
    Targets and configuration
    The Supply Chain
    Actions
    Groups and policies
    Planning and placement
    Public cloud
    Wrapping up

Orchestration
    Introduction
    Automation and orchestration
    Intersight orchestration
    Wrapping up

Programmability
    Introduction
    Client SDKs
    Authentication and authorization
    Crawl, walk, run
    Advanced usage
    Next steps: use cases

Infrastructure as Code
    Introduction
    What is Infrastructure as Code?
    HashiCorp Terraform
    Cisco and Infrastructure as Code
    Wrapping up

Acknowledgments

Introduction


Cisco Intersight is a cloud operations platform that delivers intelligent
visualization, optimization, and orchestration for applications and
infrastructure across multicloud environments. Intersight offers a new
paradigm that allows traditional infrastructures to be operated and
maintained with the agility of cloud-native infrastructure. Conversely,
Intersight provides cloud native environments with many of the proven
stability and governance principles inherent to traditional infrastructure.

Getting to this point has been a fascinating journey, especially for a
longstanding technology company such as Cisco. The initial development of
Intersight and its continued evolution has been a path filled with both
exciting innovations and its fair share of challenges to overcome, requiring
cultural shifts, partnerships, and an extremely dedicated engineering team.

Managing and operating IT infrastructure is one of the ongoing struggles that
many organizations have dealt with for years. Over time, there has been little
consolidation in the data center space — not only has the number of bare
metal devices expanded rapidly, but the management challenge has been
exacerbated by the exponential increase in storage, network, and compute
virtualization. Technologies have been evolving that allow for increased
agility and faster response times, but little has been done to decrease
complexity. With the rapid adoption of these new technologies,
organizational silos have often increased and there has been no quick
solution to ease the management burden. Paradoxically, both IT vendors and
third-party companies have created more and more tools and consoles to
“make life easier.” All this has achieved is to add tooling sprawl on top of the
already overwhelming technology sprawl. Operations teams have been
stretched thin and often have had to divide-and-conquer to develop the
specialized skill sets to get their work done.

To compensate, IT groups have been broken down into application, security,
performance, cloud, network, virtualization, storage, automation, edge,
backup, Linux, Windows, and other sub-teams. Each of these teams
typically adopts a suite of tools for their piece of the operations pie. Most
tools do not span vendors and certainly do not account for both virtual and
physical servers. Out of this IT malaise, two different paths have emerged in
data center operations.


The first path was focused on creating a comprehensive view of
environments, and some adventurous operations teams even endeavored to
“roll their own” sets of tools and dashboards. These environments consisted
of a conglomeration of open source tools and custom scripts to scrape log
files and poll the environment for issues. If correlation happened at all, it was
a manual effort, often resulting in many failed attempts to complete root
cause analysis. Also, most of these homegrown management systems relied
heavily on one or two key individuals in the organization. If those people
were to leave the company, the management system usually came
crumbling down shortly after their departure.

For the second path, rather than creating their own management tools,
organizations began to adopt commercial software packages that were
designed to consolidate the vendor- or domain-specific tools. Legacy tools
such as Microsoft SMS, Tivoli Management Framework, and HP OpenView
were expensive, cumbersome, and rarely fully implemented. These and
other similar tools created what was often referred to as the “wedding cake”
approach to systems management, with tool upon tool in a layered
manager-of-managers approach.

This complexity led many organizations to quickly begin adopting the public
cloud after Amazon launched the Elastic Compute Cloud (EC2) service in
2006. Over time, AWS, Google, Azure, and other public cloud providers
have built services that are vital for businesses. However, the public cloud
has not solved the operational issue; it has relocated the problem and in
many ways added complexity.

Over the past several years, the industry has seen the rise of the AI Ops
buzzword. The concept suggests that IT organizations should use assistive
technologies such as machine learning and artificial intelligence to offload
burdensome manual tasks, become more efficient, and improve uptime.
Tools using these technologies have emerged in the industry, promising
improved operations capabilities across private, public, and even hybrid
cloud environments.

Cisco identified a major gap between concept and reality concerning true
multicloud infrastructure operations, which include not just traditional
hardware infrastructures such as servers, storage, and networking but also
software resources such as hypervisors, virtual workloads, container
services, and public cloud services. Cisco introduced Intersight to address
this gap as a true cloud operations platform across these myriad
infrastructure components, applying appropriate AI Ops techniques to make
systems not only easier to manage but more efficient and performant.

As a result, Intersight allows operations teams to:

• Monitor their entire environment, from infrastructure to applications,
and gain visibility into their complex interdependencies
• Connect and correlate multiple threads of telemetry from each
component to optimize workload resources and assure performance
while lowering costs
• Establish consistent environments by orchestrating technical and
non-technical policies across each component

This book provides an in-depth overview of the entire Intersight platform,
including key concepts, architectural principles, and best practices that will
assist organizations as they transition to a cloud operations model. Its
authors are subject matter experts with decades of experience drawn from
many corners of the IT landscape. The book supports both an end-to-end
reading approach as well as dipping into chapters of individual interest as
needed.

Intersight Foundations


Introduction

Intersight was fundamentally designed to be consumed as a cloud service,
either directly from the Intersight Software-as-a-Service (SaaS) cloud or an
instance within an organization's private cloud. While it can be a challenge
for many organizations to move from the traditional, customer-hosted
management application to a cloud-based service, the benefits of doing so
are undeniable.


Intersight architecture

Unlimited scalability
Every management tool requires some sort of database mechanism to
maintain a device inventory. For traditional management applications, this is
sometimes embedded with the application itself or is a link to an external
database. The external databases are sometimes small, dedicated instances;
in other cases, the application can utilize an existing, large production
database instance. The small, dedicated instance is fast and easy to set up
but can introduce security vulnerabilities and lead to a sprawl of databases
dispersed on many different systems for every different management
application. Alternatively, using a larger production database is more reliable
with increased security, but it can be time-consuming to set up because of
organizational bureaucracy and requires maintenance, upkeep, and backup
over time.

If this situation is not dire enough, no matter which of the database
approaches a traditional application uses, they all have scalability limitations.
For most applications, the end user may never reach these limits, but the
vendor still must test and verify the limits for each release, leading to a slow
and cumbersome test and release cycle.

Fortunately, the public cloud has swooped in and come to the rescue.
Cloud-based applications such as Intersight are not only able to operate
without scalability limitations, but also allow applications to use different
databases catering to specific capabilities. Intersight uses several different
cloud-based databases for different purposes (time series, document
oriented, object relational, graph, and real-time analytics), each of which is
deployed as a separate, highly-scalable service. While the end user never
sees this, the effect is that it allows organizations to scale their Intersight
instances infinitely. It does not matter whether an organization has 10, 100,
10,000, or more devices; all those devices can be operated from a single
Intersight instance.


Service architecture
The time, effort, and expertise of IT organizations are ultimately measured by
their impact on the business they support. Unfortunately, the extensive
number of tools and applications required to run most traditional
infrastructures requires an inordinate amount of care-and-feeding. Highly
trained individuals spend too much time updating, patching, and maintaining
not only the vast number of management tools but also the virtual machines,
hypervisors, operating systems, and other infrastructure required to host
these platforms. It has even become common practice in many
enterprises to purchase multiple hyperconverged systems just to support
infrastructure operations themselves. It seems like madness, but entire
infrastructures within infrastructures have been created just to keep the
lights on in the data center. It is no wonder that organizations are running to
the public cloud.

It is probably apparent by now, but one of the main design goals when
building Intersight was to ease the burden and complexity of creating
infrastructure just to operate the other infrastructure. By shifting this
infrastructure to a cloud service, the necessity to maintain these systems
was also removed. It is a double bonus for operations teams — there is no
hardware or software required, and someone else (Cisco) performs all the
care and feeding. There is nothing to install, nothing to patch, and there are
not even any version numbers. It simply works.

Invariably the question arises: what if an organization cannot use a public
cloud SaaS service? The answer is that organizations are already using
public cloud services. Whether it is the 82% of financial services or 79% of
healthcare organizations running Office 365 (http://cs.co/9001Hbu5l), or
SalesForce, or ServiceNow, or one of the many cloud storage solutions,
public cloud usage is pervasive. Analysts estimate that 90-94% of
enterprises use the cloud (http://cs.co/9006Hbujr,
http://cs.co/9006HbuZO) and 85% of enterprises store sensitive data in the
cloud (http://cs.co/9003HbuZL). The usage of cloud services is not a trend;
it is the current reality, and organizations should not be trying to push back
from this reality, but instead should be adopting best practices to
operationalize cloud services.


Some organizations are limited in the cloud services they can consume due
to data sovereignty requirements or other regulations; some, mostly
governmental agencies, are even mandated to maintain air-gapped
infrastructure. For these types of organizations, Intersight is still a viable
option. Later chapters discuss how to run an Intersight instance from either a
private cloud or a completely air-gapped infrastructure.

As a cloud-consumed service, Intersight can provide seamless feature
updates. New feature updates and capabilities are added weekly, and unlike
traditional operational tools, they do not require any reconfiguration,
patching, or software updating. Intersight users simply log in to the interface
and consume or use the new capabilities. In the case of organizations
choosing to host Intersight in their private cloud, updates are pushed
similarly and will be discussed in more detail later. A complete and current
listing of the updates to the Intersight platform is maintained at
http://intersight.com/help/whats_new.

As with any cloud platform, organizations need to be able to view the status
of the services along with information related to any potential service
disruptions. For Intersight, this information is provided at:
http://status.intersight.com.


Figure 1: Service status information

From here, organizations can observe the status of individual Intersight
services or capabilities, review information related to past incidents, and
subscribe to notifications via email, text message, RSS feed, or through a
triggered webhook.
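
As a simple illustration of the webhook option, the following is a minimal
sketch of a receiver that such notifications could be pointed at. It assumes
the Flask library; the endpoint path and payload handling are illustrative
rather than a documented schema:

    # A minimal sketch of a status-notification webhook receiver.
    # Assumes Flask (pip install flask); the route and payload fields
    # are illustrative, not a documented Intersight schema.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/intersight-status", methods=["POST"])
    def status_event():
        event = request.get_json(force=True)   # parse the posted JSON body
        # Forward the notification to chat, email, or an ITSM system here.
        print("Status update received:", event)
        return "", 204

    if __name__ == "__main__":
        app.run(port=8080)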

Operational intelligence
The volume of blogs, tweets, articles, and general pontification about the
transformation to a cloud-first world seems omnipresent, overwhelming, and
just flat out exhausting. Whether it is a discussion of DevOps (or the death of
DevOps), a comparison of Waterfall versus CI/CD models, or continual
predictions about Kubernetes taking over the world, these discussions have
dominated IT forums for years. To be fair, all these topics are extremely
important conversations, and several will be discussed later in this book.
However, other simultaneous transformations are happening that are not
getting nearly as much “podium time” but could be affecting the daily lives of
IT operators as much as, or in some cases more than, these trends. One of the
most impactful, but under-discussed, trends is the transformation of the
relationship between traditional vendors and their customers.

As an example of this, Intersight has transformed the relationship between
Cisco (often perceived as an “old school,” conservative networking giant)
and its customers into a modern, collaborative partnership. In the past,
getting in touch with someone from Cisco could be a daunting challenge. If
an organization were large enough, there was an assigned sales team to
communicate through, but that required knowing who they were, and
personnel changed frequently. For everyone else, the support organization
known as the Technical Assistance Center (TAC) was about the only means
of communication. Neither of these channels enabled any form of
communication with the actual experts, the people “behind-the-curtain”
who were designing and building the products and tools: engineering and
product management. Occasionally, at an industry conference, it might be
possible to hear one of these individuals speak, and a few lucky customers
might be able to connect with them in the hallway of a conference center.

However, Intersight has turned this model on its head. From any browser or
smartphone, users can now communicate directly with the experts, and
usually have an interactive dialogue in a short amount of time. A ubiquitous
“send us feedback” button allows users to submit requests for things they
would like to see changed or added to the platform, or users can describe
issues they are seeing directly to the engineers working on the platform.
Each of these submissions is immediately sent to and reviewed by both
engineering and product management. In previous years, roadmaps and
feature additions were set many months or years in advance, but with the
continuous integration model used by the Intersight team, real-time
feedback from the users can be integrated into the tool within hours or days.


The individuals logging into Intersight are not perceived as “users” or
“customers,” but rather an extension of a large team all working in unison to
build the world’s best cloud operations platform through open, bi-directional
communication. This direct access to the architects and engineers is not
limited to the largest organizations or those with the most expensive service
contracts. It is available to anyone and everyone using the platform; whether
they are a one-person non-profit or a worldwide corporation with hundreds of
thousands of devices, the access is the same.

Beyond the ability to communicate with the development teams, Cisco is
also opening other channels of communication through Intersight. In the
past, looking up contract or warranty information on hardware infrastructure
was tedious and often required communicating with a third-party partner;
with Intersight, this information is readily available. The types of
service contracts, along with useful information such as the start and end
dates, are available for individual servers, switches, hyperconverged
systems, and other infrastructure. Also, warnings and alerts associated with
systems that have contracts that are expiring soon are easily viewed.
Intersight takes the guesswork out of knowing which pieces of critical
hardware infrastructure are adequately covered from a contract and support
perspective.

Intersight opens the door to a whole new suite of intelligence capabilities.
Owing to its cloud-native design and API-first model, Intersight can interact
with many other services and databases to provide an unprecedented set of
capabilities and insights such as:

• Custom security advisories
• Automated log collection and analysis
• Proactive parts replacement
• Hardware compatibility guidance

Each of these specific capabilities is discussed in more detail in later
chapters, but it is important to understand that each of these insights is
custom-generated for each system in Intersight. By utilizing both big data
and machine learning techniques, Intersight shows event correlation,
anomaly detection, and causality determination in context. As an example,
memory DIMM failures in servers are a common issue that can be
catastrophic to virtual machines, applications, and business processes that
rely on those servers. Intersight can monitor the unique telemetry from
individual servers and will proactively open a support case when certain
error thresholds are crossed for memory DIMMs. Once the support case is
opened, logs are automatically collected from that system, automatically
analyzed by backend analytics, and replacement parts are dispatched before
an actual failure has occurred.

Consumption models
The Intersight cloud operations platform is a service that can be consumed
in multiple ways depending on a given organization’s preferences and
policies for security, data sovereignty, and convenience. While the primary
means of consumption is the traditional SaaS model in which most
configuration data and functionality are stored, maintained, and accessed
through the SaaS interface, Intersight also offers two appliance-based
options for organizations with specific needs not met with the full SaaS
model.

The following sections will expand on these various consumption models.

Software-as-a-Service (SaaS)
As noted in the Introduction, Intersight was conceived and designed as a
SaaS service from the ground up (see figure below). Locating its data lake
and core services such as role-based access control (RBAC), policy
management, license management, monitoring, analytics, etc. in the cloud
enables a broad degree of agility, flexibility, and security, making SaaS the
preferred model for the vast majority of Intersight customers. Intersight SaaS
customers are relieved of most of the management burdens that accompany
traditional software deployments on-premises, such as infrastructure
allocation and support, software monitoring and backup, upgrade and
change management, and security hardening. SaaS customers receive new
features, bug fixes, and security updates immediately as these emerge from
the agile CI/CD development pipeline. Essentially, unless an organization has
specific reasons why it cannot consume Intersight via SaaS, SaaS is the
recommended model.

Figure 2: Intersight SaaS connectivity

On-premises appliance-based options


Since Intersight is built on a modern, lightweight micro-services foundation,
most of its core services can be packaged together into a virtual machine
which can then be deployed on-premises. This Intersight Virtual Appliance
consumption model comes in two flavors: the Connected Virtual Appliance
(CVA) and the Private Virtual Appliance (PVA). In both cases, the exact
features supported are license-dependent and will differ from those
available via SaaS. While the virtual appliance model intends to replicate the
SaaS experience as fully as possible, the appliance model will invariably lag
behind the SaaS model, dependent upon specific feature requirements and
appliance update frequencies. For a full, up-to-date list of all features
supported in each mode, please see
https://intersight.com/help/getting_started#supported_features_matrix.

Connected Virtual Appliance


Under this model, most of Intersight’s functionality is replicated within the
VM appliance, and the appliance itself has limited communication with the
intersight.com cloud (see image below). This approach serves two
purposes. First, it centralizes all communication between intersight.com and
the on-premises data center through a single point: the Connected Virtual
Appliance (CVA) itself, rather than having individual targets communicating
with the Intersight cloud service directly. This approach may be necessary
when certain managed targets do not have direct access to intersight.com
(because of network segmentation and/or security policies) but can reach
the appliance internally. Second, it allows organizations to exert greater
control over what system details leave their premises (often desirable in
certain countries and regulated markets) while also minimizing the network
traffic between the appliance and intersight.com (which may be desirable for
remote sites with intermittent or limited internet connectivity).


Figure 3: Intersight CVA connectivity

In the CVA model, all Intersight configuration data remains on-premises by
default, stored locally on the appliance. See the table below for details on
the minimum data collected by Intersight via the CVA.


Table 1: Virtual Appliance — minimum data collected by intersight.com

From Intersight CVA:
• The appliance ID (serial number)
• The IP address of the appliance
• The hostname of the appliance
• The Device Connector version and public key on the appliance

Appliance software auto-upgrade:
• The version of software components or the services running on the appliance

Appliance health:
• CPU usage
• Memory usage
• Disk usage
• Service statistics

Licensing:
• Server count

Information about the endpoint target:
• Serial number and PID (to support Connected TAC)
• UCS Domain ID
• Platform Type

Note, intersight.com may collect additional data from the CVA under two
circumstances:

• The administrator opts in by enabling the Data Collection option in the
appliance’s settings (from the appliance UI, navigate to the Settings
“cog” icon → Settings → General → Appliance).


• The administrator chooses to update a target’s Device Connector
from the cloud, via the CVA. During the upgrade from the cloud,
some device data (server inventory) from the appliance leaves the
premises:
• The endpoint device type — e.g., Cisco UCS Fabric
Interconnect, Integrated Management Controller, Cisco
HyperFlex System, Cisco UCS Director.
• The firmware version(s) of the endpoint
• The serial number(s) of the endpoint device
• The IP address of the endpoint device
• The hostname of the endpoint device
• The endpoint Device Connector version and the public key

Regardless of the above, periodic communication between the CVA and the
Intersight cloud is required to receive new alerts, update the Hardware
Compatibility List (HCL), connect to Cisco TAC, and enable the CVA to auto-
update. If the connection to the Intersight cloud is interrupted and
connectivity is not restored within 90 days, the device claim capability will be
lost. Intersight Appliance features that require connectivity to the Intersight
cloud, including Connected TAC, Firmware Upgrade, HyperFlex Cluster
Deployment, and User Feedback, may also be impacted until connectivity is
restored.

Private Virtual Appliance


For organizations with strict requirements for an air-gapped approach,
Intersight offers the Private Virtual Appliance (PVA) model. The PVA is a
wholly self-contained, offline instantiation of Intersight in a single virtual
machine that does not communicate with intersight.com (see image below).
Administrators separately download the appliance and the required licensing
from Cisco Software Central or intersight.com and deploy it manually in the
disconnected facility. All updates to the appliance are initiated manually by
the administrator in a similar offline fashion. While this model ensures
complete air gap compliance and full data sovereignty for the organization,
the management and upkeep burden is commensurately increased for the
administrator. That said, every effort is made to provide PVA Intersight users
with an experience that is as close to the SaaS model as possible.

Figure 4: Intersight PVA connectivity

Target connectivity
Intersight ingests data from and performs actions against hardware and
software components, referred to as targets, sometimes referred to as
devices. Hardware targets include infrastructures such as servers, Fabric
Interconnects (FIs), HyperFlex clusters, network switches, APIC clusters, or
other hardware endpoints. Software targets may be hypervisors, Kubernetes
clusters, cloud services, an Intersight Appliance, or other software
endpoints. This book will generally refer to these as targets.


To be managed by Intersight, each target must have both of the following:

• A mechanism to securely establish and maintain a logical connection
to the Intersight service
• A northbound API for the target to receive commands from Intersight
to carry out against itself

In the case of public cloud services such as Amazon Web Services,
Microsoft Azure, and Cisco’s AppDynamics, both of the above criteria are
easily achieved. These cloud services have well-established public APIs and
straightforward, secure mechanisms for invoking them directly from the
Intersight Cloud. For all other targets, however, the means of establishing a
secure, durable communication path poses a significant challenge. These
targets invariably reside behind an opaque, non-permeable firewall and
security infrastructure designed to prevent exactly the kind of external
access required to invoke their powerful APIs. A new connection vehicle is
needed.

Device Connector
For targets whose design Cisco has direct control over (such as UCS
servers, Fabric Interconnects, and HyperFlex clusters), the problem has
been solved with a lightweight, robust software module called the Device
Connector. The Device Connector code is embedded in supported Cisco
hardware at the factory and has one simple job: establish and secure a
durable websocket connection to intersight.com (see below).


Figure 5: Intersight connectivity to Cisco hardware via Device Connector

When a hardware device with an embedded Device Connector is powered
on and connected to the network, the Device Connector will attempt to
reach out to the intersight.com cloud and make itself available to be claimed
by a unique Intersight account (for details on the claiming process, see the
section below). Once claimed, the Device Connector will maintain the secure
connection and attempt to re-establish it whenever the connection is
severed by a network outage, device reboot, etc.

Device Connector maintenance is a hands-off operation and it is self-
maintained as part of the Intersight service. When updates to the Device
Connector are available, they are published through Intersight and the
Device Connector is automatically upgraded.

Assist Service
While the hardware-embedded Device Connector is an excellent solution for
Cisco devices, the problem of connecting to non-Cisco hardware and
software on-premises remains. The solution is to take the same Device
Connector logic and, rather than try to coax third parties into embedding it in
their targets, make it available through an on-premises virtual appliance as
its own function: the Assist Service, sometimes referred to as Intersight
Assist. The Assist Service is available via the CVA, PVA, or an assist-only
appliance for SaaS users.

This Assist Service can both maintain the necessary durable, secure link to
Intersight and proxy the target-specific API calls necessary for Intersight
management. The Intersight Appliance will receive the necessary functions
from Intersight to add and manage any supported targets on-premises (see
below). These functions are delivered as microservices to the appliance and
are capable of making API calls to the desired targets.

Figure 6: Intersight SaaS with the Assist Service


Figure 7: Intersight Appliance with the Assist Service

In this manner, all the benefits of the hardware-embedded Device Connector
carry over to managing non-Cisco targets, and Intersight Administrators are
provided a single, consistent mechanism for claiming and managing targets
regardless of where they reside. Additionally, once an Appliance is claimed,
there is nothing for an administrator to manage — Intersight will automatically
update the Appliance and any needed target functions. As support for new
targets is released in Intersight, those capabilities are immediately made
available to connected Appliances. Furthermore, multiple Intersight
Appliances can be claimed under the same account, enabling administrators
to deploy multiple appliances in separate locations as they see fit. This
flexibility can help avoid potentially significant issues with providing network
connectivity between separate sites and can also help with scalability by
sharing the target load across multiple Appliances.


Adding targets to Intersight


To enable Intersight management capabilities, targets must be claimed
through a process that securely assigns ownership of the target to an
Intersight Account. To get started, an administrator can create a Cisco
Intersight account by browsing to http://intersight.com and selecting the
Create an account link.

Figure 8: Create an Intersight account

Most organizations only need a single Intersight account, with the
segmentation of hardware and software being performed by Intersight’s
robust role-based access control (RBAC) security model, as discussed in
the Security chapter of this book.


Target claim process options


Broadly speaking, there are two methods for claiming a target into an
Intersight account, dependent upon the type of target being claimed. The
first target category is defined by the presence of an embedded Device
Connector at the target. These targets are most often Cisco-branded
components. The second target category is defined by the lack of an
embedded Device Connector at the target. These targets are most often, but
not exclusively, non-Cisco-branded components; for instance, VMware
vCenter, non-Cisco storage controllers, cloud-native targets, and many
others.

As a prerequisite to the Device Connector-based target claim process, it is
important to outline an additional step the Device Connector will perform
automatically, known as registration.

Intersight target registration


The Device Connector running on any target will automatically attempt to
register itself with the Intersight cloud service, running at intersight.com. If
the registration connection is successful, Intersight will obtain an inventory of
the target including vendor, description/model, and serial number/identifier.
This registration does not assign ownership, but it does provide an
opportunity for Intersight to verify that the Device Connector running on the
target is up to date. If not, the Device Connector is updated. After the Device
Connector is updated (or if it is already current), the Device Connector will
wait indefinitely for the claiming process to be initiated.


Figure 9: Intersight target registration process

At times, this registration process may fail and require some form of
intervention from the target administrator or network administrator. When
this happens, the Device Connector will retry periodically until it can connect
to intersight.com.

The most common reason for a failed Device Connector registration is due
to a local network configuration issue. Many organizations maintain a
network infrastructure that prevents devices from directly connecting to
external sites. A firewall may be blocking the Device Connector, or an
HTTP/S proxy (explicit or transparent proxy) may be required to proxy the
connection to the Internet. All Device Connectors must properly resolve
svc.intersight.com and allow outbound initiated HTTPS connections on port
443. When a network proxy server is required, the target administrator will
be required to provide the necessary proxy information in the Device
Connector settings. Additional details and information are located at:
https://www.intersight.com/help/getting_started#network_connectivity_requirements.
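
A quick way to validate these requirements from inside the data center
network is sketched below. It is a minimal check, assuming a host on the
same network segment as the target and no explicit proxy; it verifies only
that svc.intersight.com resolves and that an outbound TLS connection on
port 443 succeeds:

    # A minimal connectivity check for Device Connector requirements:
    # DNS resolution of svc.intersight.com and an outbound TLS
    # connection on port 443 (does not account for explicit proxies).
    import socket
    import ssl

    host = "svc.intersight.com"
    addr = socket.gethostbyname(host)         # DNS must resolve
    context = ssl.create_default_context()    # validates the certificate chain
    with socket.create_connection((addr, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(host, addr, tls.version(), "certificate OK")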


Successful registration will trigger Intersight to obtain an inventory of the
target being registered.

Intersight claim process using embedded Device Connector (SaaS Model)
Once a successful registration is completed, the target may be claimed. The
claiming process binds a registered device to a specific Intersight Account.
To claim a target, an administrator with access to the target must log in to
the target and retrieve its Device ID and Claim Code. The Device ID and
Claim Code can be used to claim the target through the Intersight portal
detailed in the steps below:

1 Log in as an administrator to the management interface on the target

2 Navigate to the Device Connector

3 Copy Device ID and Claim Code

4 Log in to intersight.com

5 Navigate to Admin → Targets

6 Choose Claim a New Target

7 Select the type of target being claimed

8 Select Start

9 Enter the Device ID/Claim Code (copied from the target Device
Connector in step 3)

10 Select Claim

Completion of the target claim process registers the target with the
Intersight service and assigns ownership of the target to the Intersight
account that was connected to in step 4 above.
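
Because everything in Intersight is exposed through its API (see Interacting
with Intersight later in this chapter), the same claim can also be performed
programmatically. The following is a minimal sketch, assuming the Intersight
Python SDK and a v3 API key; the key ID, key file path, Device ID, and Claim
Code values are placeholders, with the latter two copied from the target’s
Device Connector as in step 3:

    # A minimal sketch of claiming a target via the Intersight API,
    # assuming the Intersight Python SDK (pip install intersight) and
    # a v3 API key; key ID and key file path are placeholders.
    import intersight
    import intersight.signing
    from intersight.api import asset_api
    from intersight.model.asset_device_claim import AssetDeviceClaim

    config = intersight.Configuration(
        host="https://intersight.com",
        signing_info=intersight.signing.HttpSigningConfiguration(
            key_id="<api-key-id>",
            private_key_path="./secret_key.pem",
            signing_scheme=intersight.signing.SCHEME_HS2019,
            signed_headers=[
                intersight.signing.HEADER_REQUEST_TARGET,
                intersight.signing.HEADER_DATE,
                intersight.signing.HEADER_HOST,
                intersight.signing.HEADER_DIGEST,
            ],
        ),
    )
    client = intersight.ApiClient(config)

    # SerialNumber carries the Device ID; SecurityToken carries the Claim Code.
    claim = AssetDeviceClaim(
        serial_number="<device-id>",
        security_token="<claim-code>",
    )
    asset_api.AssetApi(client).create_asset_device_claim(asset_device_claim=claim)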


The Device Connector interface on a successfully claimed device is shown
below:

Figure 10: Device Connector view of successful target claim

Intersight claim process using embedded Device Connector (Virtual Appliance model)
The target claim process for the Virtual Appliance model is similar to the
SaaS process (above), but it starts from Intersight and takes advantage of
the Device Connector API on the target to retrieve the Device ID and Claim
Code. See the steps detailed below:

1 Log in to the on-prem Intersight portal as an Account Administrator

2 Navigate to Intersight Dashboard → Devices → Claim a New Device


3 Two options:

- Claim a Single Device

1 Choose IP/Name

2 Pick device type from Device Type drop-down menu

3 Enter the IP address or hostname of the target

4 Enter an administrator user ID for the target
management interface

5 Enter the password for the administrator user ID in the
previous step

6 Select Claim

- “From File”

1 Construct a .csv file with one line per target
containing: target type, hostname/IP address or IP
range, username, password

2 Browse and select .csv file

3 Select Claim

Completion of the claim process registers the target with the Intersight
service (running on the appliance) and assigns ownership of the target to the
Intersight account registered to the appliance.
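
For the From File option above, a minimal .csv might look like the following;
the target type strings, addresses, and credentials here are illustrative
placeholders, and the exact type names accepted are listed in the appliance
documentation:

    IMC,192.168.10.21,admin,<password>
    UCSFI,ucs-fi-a.example.com,admin,<password>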

Claiming targets without an embedded Device Connector


Intersight can also manage targets that do not ship with their own embedded
Device Connector, such as on-premises storage and hypervisors from third
parties, and public cloud SaaS-based targets. Claiming these targets is a
straightforward process of providing to Intersight the basic credentials to
communicate with them via their APIs, as described below.

On-premises targets
As described above, access to on-premises targets that lack their own
Device Connector occurs through the Assist Service within the Intersight
Appliance.
The exact steps for claiming such targets will vary and are well documented
in the Intersight Help section, but the general process is to use the wizard
via the Admin → Targets → Claim a New Target button. After selecting the
desired target (for example, vCenter), the wizard will request some basic
information such as the target’s IP/hostname, and a username/password
combination for Intersight to use to communicate with the target, as well as
the desired Intersight Appliance through which the claiming process and
subsequent target management should occur.

Public cloud SaaS targets


Public cloud SaaS-based targets are perhaps the most straightforward of all
targets to add since they do not depend on a Device Connector or even the
Assist Service. Essentially the intersight.com cloud can speak directly to the
public cloud services via their public APIs. To add such a target, again
navigate to Admin → Targets → Claim a New Target button and launch the
Add Target wizard. Choose the desired service, such as Amazon Web
Services, and supply the required credentials (e.g., Access Key, Secret Key,
etc.). Once claimed, no further manual intervention is required.

Interacting with Intersight


A foundational principle of Intersight is that the platform should be fully API-
driven. Therefore, anything that can be configured on the platform — any
interaction with Intersight at all — is accomplished through a call to the
Intersight API. The REST-based API is built on the modern, flexible OpenAPI
standard and is automatically updated as new features are released through
the CI/CD pipeline. While the extensive power of the Intersight API will be
covered at length in a later chapter dedicated to Programmability, a few
common use cases for the API are worth noting here:

• Retrieving consolidated reports
• Integrating inventory, alarm, and audit data with existing ITSM
systems (ServiceNow, Splunk, et al.)
• Tagging assets in Intersight in bulk
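
As a small taste of what the Programmability chapter covers at length, the
following sketch uses the Intersight Python SDK to retrieve the current
critical alarms. It assumes an authenticated ApiClient named client, built
exactly as in the claiming sketch earlier in this chapter, and uses the API’s
OData-style $filter parameter:

    # A minimal sketch: list critical alarms via the Intersight API.
    # 'client' is an authenticated intersight.ApiClient, constructed
    # as in the claiming sketch earlier in this chapter.
    from intersight.api import cond_api

    alarms = cond_api.CondApi(client).get_cond_alarm_list(
        filter="Severity eq Critical",   # OData-style $filter expression
        top=20,                          # limit the number of results
    )
    for alarm in alarms.results:
        print(alarm.severity, alarm.description)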

The API-driven nature of Intersight opens additional access possibilities,
including alternative user interfaces such as the Intersight Mobile App. This
smartphone app (available for both iOS and Android platforms) leverages the
Intersight API to provide a subset of WebGUI functionality in an easy-to-use,
access-anywhere format (see below).

Figure 11: OpenAPI driven


Figure 12: Intersight mobile app

The mobile app authenticates users and retrieves
content from the Intersight cloud platform using
the same mechanisms as the WebGUI since both
are interacting with the same underlying Intersight
API. Users of the mobile app can quickly and
easily view critical information about the current
health and inventory of their environment,
including current alarms (see left)


Figure 13: Intersight mobile app dashboard

and can drill down to see specific advisories,
even displaying affected systems (see left).


Figure 14: Security advisory alert and affected systems via a mobile app

This allows a mobile user to quickly assess the
breadth of impact of a security advisory (see left).
Intersight Advisory functionality is described in
the Infrastructure Operations chapter.


Finally, the same Intersight API is available for Cisco’s partners to leverage
within their solutions and integrations. As an example, the popular SaaS-
based ITSM platform ServiceNow integrates with the Intersight API via the
Intersight ITSM plugin for ServiceNow (available through the ServiceNow
store). The plugin provides Inventory Synchronization and Incident
Management:

• The inventory from Intersight is imported into ServiceNow and can be
tracked there alongside other (non-Intersight) inventories.
• As part of the Incident Management module, the Intersight ITSM
plugin raises an Incident in ServiceNow for Intersight alarms of
configurable severity. Administrators can leverage the advanced
features of the plugin to raise Incidents with support for complex
queries.

Given Intersight’s API-first approach, the possibilities for tapping into the
power of the platform are vast. Additional examples can be found in the
Programmability chapter.


Licensing

Intersight uses a subscription-based license model with multiple feature
tiers. UCS servers, HyperFlex systems, and UCS Director automatically
include a subscription to the base tier of Intersight, at no additional cost. As
more Intersight features are desired, organizations have the option to license
additional features and targets at higher tiers.


Wrapping up

Intersight was conceived with many foundational principles that are
pervasive throughout the service. Programmability, operational intelligence,
a choice of consumption models, and heterogeneous target management
provide extensive flexibility for organizations to meet their daily IT
challenges. Perhaps most fundamental to the Intersight platform, however,
is security, which will be addressed in depth in the next chapter.

Security


Introduction

In a world where cyberattacks are increasing in frequency and
sophistication, security is top of mind for any IT organization. The increasing
complexity of applications and their supporting infrastructures, whether on-
premises or in the public cloud, only compounds the concerns surrounding
IT security. Organizations must ensure that security is not an afterthought or
bolt-on to any given process or service but rather baked into their
foundations from the beginning. As such, security is paramount in the
design, development, integration, and testing of the Intersight platform.
Intersight users can rest assured knowing that Intersight complies with the
Cisco Secure Development Lifecycle guidelines (http://cs.co/9009HkkPd)
and Cisco’s strict security and data handling standards
(http://cs.co/9002HkkPq) to keep Intersight environments, users, and data
safe. Furthermore, Intersight’s centralized model, secure connectivity,
granular access control, and integrated access to current security updates
and advisories provide unique security advantages over what traditional
operations practices can achieve.

This chapter discusses the importance of security in every human and
device interaction with Intersight and explains how Intersight ensures secure
practices at every level of the platform and the targets it manages.


Connectivity

Each of the nearly one million devices connected to the Intersight cloud uses
Transport Layer Security (TLS) with restricted ciphers on port 443 to
communicate with the Intersight service, using the Advanced Encryption
Standard (AES) with a 256-bit randomly generated key. This scale is a
testament to the architecture and security of the platform.

Except for public cloud SaaS targets (which Intersight speaks to directly),
communication between a claimed target and Intersight is always initiated by
the Device Connector (whether that Device Connector is embedded in the
target itself or is external to the target — i.e. the Intersight Appliance; see the
Foundations chapter for more information on the Device Connector). With
the use of these secure communication standards, firewalls need only allow
outbound HTTPS port 443 connections with no special provisions for
Intersight. This eases the configuration burden on the network team and
avoids exposing data centers to additional security risks.

The Device Connector uses a cryptographic key to uniquely identify itself
and a private/public key pair to initiate the connection to the Intersight URL
via HTTPS. During the creation of the SSL connection, the Intersight
certificate is validated by the Device Connector against a list of trusted
Certificate Authorities (CA). Lastly, the HTTPS connection is converted to a
websocket, allowing low-latency, bi-directional secure communication
between the target and Intersight. These properties of the websocket make
it well-suited for latency-sensitive operations such as tunneled vKVM.

The use of cryptographic tokens by the target ensures that only legitimate
devices can attempt to authenticate to Intersight, thereby protecting against
Trojan horse attacks. Additionally, Intersight’s use of a signed certificate
allows the Device Connector to validate the identity of Intersight, thereby
protecting against man-in-the-middle attacks. If the Device Connector
receives an unsigned or invalid certificate from Intersight, it will abort the
connection attempt.


User connectivity is handled with the same level of caution and security as
Intersight targets. Users can utilize a Cisco login ID with support for
multifactor authentication (MFA) when using the SaaS platform. Both SaaS
and on-premises implementations of Intersight allow SSO through SAML 2.0
and integration with the most popular identity providers (IdPs), allowing
organizations to use their preferred authentication services.


Claiming

To bring a new target into Intersight for visibility and management, it must
first be claimed. Target claiming options are described in detail in the
Foundations chapter, but for the specific case of Device Connector-
embedded hardware targets, the process begins with the Device Connector
establishing the secure connection to Intersight. However, there are some
restrictions that administrators can place on that connection:

• The Device Connector can be turned off completely, preventing the
device from communicating with the Intersight cloud in any way.
• The Device Connector can be configured for read-only access from
Intersight, allowing the device to be monitored but not controlled
from Intersight.
• Tunneled vKVM traffic can be disabled. Tunneled vKVM allows
remote console access to the device from properly authenticated
users of Intersight using the Device Connector websocket data path
for the vKVM data. This feature also requires an appropriate license.

• The Device Connector can be modified to allow configuration only
from Intersight and disable local configuration for standalone servers.


These restrictions are illustrated in the screenshot below:

Figure 1: Cisco IMC Device Connector configuration

Administrators need physical or network connectivity to the device being
claimed as well as proper login credentials for that device. Due to the control
available through the Device Connector, access to the target should be
limited, using local security measures, to administrators only.

Intersight uses two-factor authentication for device claiming. Administrators
need the Device ID as well as the rolling Claim Code issued by Intersight that
refreshes every ten minutes. This process prevents malicious entities from
claiming a device that does not belong to them, even if they can guess the
Device ID. The screenshot below shows the Device ID and Claim Code. The
Claim Code has a blue expiration bar beneath it indicating how much time is
left before the Claim Code will be invalidated and a new one will be
displayed.


Figure 2: The claim process for a Cisco UCS rack server

It is possible to unclaim a target from Intersight through the Device
Connector, although it is recommended to unclaim devices from within
Intersight when possible.


Role-Based Access Control (RBAC)

RBAC provides a way to define a user’s capabilities and scope of access
within the Intersight platform. Once a user is authenticated, they are
restricted both by the organizations and roles assigned to their user
accounts.

• Organizations are a logical grouping of Intersight objects, such as
servers, clusters, policies, etc.
• Privileges are a user’s permissions and/or restrictions against an
object
• Roles are a grouping of privileges applied to one or more
organizations

The image below shows the creation of a role that will have multiple
privileges for the “Infra” organization, but only Read-Only privileges for the
“HX-Cloud-Native-Kit” organization.


Figure 3: Creating a granular role within Intersight

Roles can then be assigned to either individual users or groups as shown
below. Each user or group can be assigned multiple roles by clicking the
plus sign (+) when creating or editing a user or group. The figure below
shows two roles assigned to the user being created (on the left) but only one
role for the group being created (on the right). Group membership is
maintained within the Identity Provider so that Intersight is never out of sync
with a company’s organizational changes.


Figure 4: Creating an Intersight user and a group

Finally, each Intersight target can be assigned to one or more organizations
within the platform. Since roles assign privileges to organizations, roles
effectively assign privileges to collections of targets. Account administrators
can create new organizations within their account as needed. Creating a
new organization involves only naming it and selecting which targets belong
in that organization as shown below. Administrators can use the search
function to locate targets in Intersight accounts that have many targets.
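
Like everything else in the platform, organizations can also be created
through the Intersight API. The following is a minimal sketch, assuming the
Intersight Python SDK and an authenticated ApiClient built as shown in the
Foundations chapter; the name and description are placeholders, and
associating targets with the organization would be a separate step:

    # A minimal sketch: create an organization via the Intersight API.
    # 'client' is an authenticated intersight.ApiClient; the name and
    # description values below are placeholders.
    from intersight.api import organization_api
    from intersight.model.organization_organization import OrganizationOrganization

    org = OrganizationOrganization(
        name="Infra",
        description="Core infrastructure targets",
    )
    organization_api.OrganizationApi(client).create_organization_organization(
        organization_organization=org,
    )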


Figure 5: Creating a new organization


Audit logs

Part of any good security practice is the collection and examination of audit
logs. Intersight does this yeoman’s work for IT departments. Intersight saves
information related to events (login, logout, created, modified, and deleted)
that occur on every managed object within Intersight, including user
accounts, pools, profiles, servers, clusters, and many more.

The Intersight interface can be used to browse the audit logs and search by
Event, User Email, Client Address, or Session ID as shown in the screenshot
below. A session ID is assigned to each user login session so that
administrators can locate every action performed by a user in a given
session. Audit logs can also be searched programmatically using the
Intersight API. Refer to the Programmability chapter in this book for more
information.
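
The following is a minimal sketch of such a programmatic search, assuming
the Intersight Python SDK and an authenticated ApiClient as in the
Foundations chapter; the email value in the filter is a placeholder:

    # A minimal sketch: search audit records via the Intersight API.
    # 'client' is an authenticated intersight.ApiClient; the user email
    # below is a placeholder.
    from intersight.api import aaa_api

    records = aaa_api.AaaApi(client).get_aaa_audit_record_list(
        filter="Email eq 'admin@example.com'",  # OData-style $filter
        orderby="CreateTime desc",              # newest entries first
        top=50,
    )
    for rec in records.results:
        print(rec.create_time, rec.event, rec.email)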


Figure 6: A view of Intersight audit logs

Audit logs are stored by Intersight using the same data protection standards
described later in this chapter. Audit log entries cannot be deleted, even by
an Intersight account administrator, which ensures a reliable record of
Intersight events.


Data security

Data collected
The Device Connector transmits the following data to Intersight:

• Inventory and configuration data for servers, hyperconverged nodes, fabric interconnects, and the inventory and configuration data of their subcomponents
• Server operational data such as performance, environmental data, and faults
• Technical support files created when requested by Cisco TAC

Customers using the Connected Virtual Appliance can control whether any
of this data is transmitted to the Intersight cloud (otherwise the data is kept
locally).

The Device Connector does not collect or transmit sensitive data such as
passwords.

Data protection
Customer data is segregated by the Intersight account within the Intersight
cloud. Data segregation and per-customer encryption keys ensure that each
Intersight account can only access the data for that Intersight account. At
rest, data is encrypted with block storage or volume encryption. In transit, data is encrypted with TLS 1.2 or higher, using restricted ciphers over HTTPS on the standard port 443.

Third parties are never given access to the data stored in the Intersight
cloud.


Certifications
The ISO/IEC 27001 standard specifies an Information Security Management
System (ISMS) that identifies stakeholders, controls, and methods for
continuous improvement for information security. Intersight has achieved ISO 27001:2013 certification, which governs how Intersight manages the implementation, monitoring, maintenance, and continuous improvement of its security practices. Certification is performed by an independent third-
party accredited certification body with additional annual audits to ensure
continued compliance.

As a cloud operations platform by design, customer traffic (including cardholder data) never flows through Intersight. As a result, the Payment
Card Industry Data Security Standard (PCI DSS) is out of scope and not
applicable.

Similarly, individually identifiable health information (IIHI) never flows through the Intersight platform, making the Health Insurance Portability and Accountability Act (HIPAA) out of scope and not applicable to Intersight.


Security advantages

Intersight development follows Continuous Integration/Continuous Delivery (CI/CD) practices. New features are deployed to the platform weekly, but
critical defect fixes and security fixes can be deployed continuously. When
security risks are identified, they are addressed in Intersight without
requiring that IT departments download new software or establish a
maintenance window to patch anything. The combination of Intersight being
a SaaS platform with CI/CD development means that IT departments have
one less thing to worry about.

The Device Connector facilitates communication with the Intersight cloud by establishing and maintaining the websocket connection. It has an important
role, so it must always remain updated with the latest security patches.
Intersight automatically upgrades the Device Connector, keeping the
customer’s environment secure without the hassle of manual upgrades.

The Cisco Product Security Incident Response Team (PSIRT) is tasked with
detecting and publishing any potential security vulnerabilities that affect
Cisco products. Intersight curates those reports and pushes the relevant
ones as security advisories (this feature requires a license). Organizations
can read the publicly available reports, but Intersight provides alerts for only
the advisories that affect an organization (and details for remediation) based
on the devices in their account and the software and firmware versions
installed on those devices. Many organizations do not have the time or
personnel to correlate these reports manually. Intersight gives them only the
information they need, when they need it.

Cisco publishes a hardware compatibility list (HCL) that details combinations of software, firmware, and hardware that have been validated by Cisco.
Most organizations have processes in place to ensure that servers are
deployed with a configuration listed in the HCL but do not have processes in
place to periodically verify that server configurations remain in compliance
with the HCL. This can expose those organizations to performance and security vulnerabilities if their systems are running configurations that have not been validated for compatibility. Intersight can alert organizations to devices that do not match the Cisco HCL (this feature requires a license) and provide steps for remediation.

Additionally, Intersight can facilitate the generation of compliance reports. With an organization’s entire infrastructure available from a single portal,
reports can be generated showing everything from support contract status
to which vendors supplied the disks in every server.


Wrapping up

Many organizations are understandably concerned about potential security risks when consuming services from the public cloud, especially when such
a service has the power to directly manage their on-premises infrastructure.
Intersight was conceived from the start not just as a secure cloud SaaS
platform, but as a cloud platform that enables its customers to achieve a
greater degree of overall security than they might otherwise be able to attain
on their own. By leveraging its CI/CD pipeline to provide up-to-the-minute
security alerts and patches; by centralizing infrastructure operations,
regardless of physical location, thereby avoiding policy sprawl and
configuration drift throughout the enterprise; by integrating directly with
Cisco support and compatibility databases and systems to speed problem
resolution; and by leveraging corporate access control and user
management systems, Cisco Intersight helps organizations move to a more
secure operations model.

Infrastructure Operations

Introduction

IT Operations teams charged with the “care and feeding” of infrastructure face numerous daily maintenance and support challenges. Across their
entire infrastructure landscape — in however many data centers, edge
locations, and clouds it may reside — they must find a way to monitor for
hardware faults, maintain hardware compatibility with an ever-changing list
of constraints, stay current on critical security advisories and vendor field
notices, collect logs for troubleshooting, track support contract status, and
interact with vendor support. These tasks are difficult on an individual scale
and can easily consume a team’s limited human resources even in the best-
run organizations.

Intersight was designed from the ground up to address these sorts of challenges. Intersight aims to relieve administrators of the often tedious
“custodial” work to keep their systems healthy, current, and
compatible. Intersight targets benefit from integrated intelligence-based
recommendations, advisories, hardware compatibility assessments, and an
enhanced support experience with Cisco TAC, as well as direct integration
into popular ITSM systems, allowing organizations to efficiently identify and
resolve issues and smoothly integrate Intersight into their operational
processes.


Device health and monitoring

Infrastructure status awareness is foundational to successful management. Intersight tracks the health of the devices (targets) it manages, monitoring
for issues and raising relevant items as Intersight alarms. An Intersight alarm
can originate from a UCS hardware fault, a HyperFlex alarm, or the crossing
of a target threshold that is deserving of an administrator’s attention (for
example, managed storage exceeding a capacity runway threshold). An
alarm includes information about the operational state of the affected object
at the time the issue was raised.

Intersight alarm mapping


Intersight alarms do not necessarily correspond directly to faults in Cisco
UCS servers nor alarms in HyperFlex devices. For example, not every UCS
fault will generate an Intersight alarm. For UCS faults that trigger Intersight
alarms, the initial alarm will display when an event is received from the
target, while subsequent alarm updates from the target will occur if/when
the fault condition at the target changes, triggering a new event. Also, fault
information at the target is updated as part of the daily target inventory (this
update interval is not configurable).

The following table maps Intersight alarm severity to UCS faults and
HyperFlex alarms.


Table 1: Intersight alarm severity mapping to UCS and HyperFlex

Intersight alarm severity   UCS faults                             HyperFlex alarms
Critical                    Critical and major faults              Red
Warning                     Minor and warning faults               Yellow
Informational               Informational faults                   Alarm not raised
Cleared                     The alarm is deleted at the endpoint   Green

It is important to note that when a UCS Manager fault is in a “flapping state,” the fault is ignored by Intersight. Once the flapping state at UCS Manager is corrected, Intersight will process the fault as an alarm.

Detail regarding all UCS and HyperFlex faults can be found at:

• UCS: http://cs.co/UCSFaultCatalog
• HyperFlex: http://cs.co/HyperFlexAlarmCatalog

Alarm types
Intersight offers multiple alarm types:

Critical alarms — indicate that a service-affecting issue requiring immediate attention is present at the target. These can include major events such as
power supply failure, loss of connectivity to targets, and certificate
expiration.

Warning alarms — indicate target issues that have the potential to affect
service. These may include sensor readings beyond an acceptable
threshold, notification of the power state of a spare device, etc. Warning
alarms may have no significant or immediate impact on the target, but action
should be taken to diagnose and correct the problem to ensure the issue
does not become more severe.


Informational alarms — provide details that are important but not directly impactful to operations.

Cleared alarms — have been acted upon already (or resolved on their own)
and are listed for historical reference.

Table 2: Intersight alarm severity descriptions

Alarm states
Intersight alarms exist in an Active state by default but can be changed to an
Acknowledged state by a user (see below under Viewing and reacting to
alarms). Acknowledging an alarm does not change the status of the issue at
the target or the severity status of the alarm; the condition will still be
present. Instead, acknowledgment of an alarm only mutes the alarm within
Intersight.


Viewing and reacting to alarms


Access to alarm information is provided through three icons in the top menu
bar (see figure below).

Figure 1: Intersight alarm indicators

By clicking on the bell (All active alarms), the red (Critical active alarms), or
yellow (Warning active alarms) icons, a summary of the corresponding active
alarms will be generated in a slide-out “drawer” on the right-hand side of
the Intersight portal interface. This summary will display the alarm severity,
alarm code, and date/time the alarm was triggered. Users can acknowledge
an individual alarm directly from this drawer by hovering over the alarm and
selecting the “mute” icon (a bell with a line through it — see figure below).


Figure 2: Alarm slide-out “drawer”

Additional alarm details, such as the alarm fault code, source name/type,
component from which the issue originated, and a description, are all
displayable by selecting the specific alarm, which brings the user to the
alarms inventory screen (pre-filtered in the Search field to the desired
alarm). The alarms inventory can also be reached by clicking on “View All” at
the bottom of the alarm drawer, which will pre-filter the inventory
corresponding to All, Critical, or Warning alarms based on the status
selected in the drawer (see below).


Figure 3: All active Intersight alarms

From this alarm inventory screen, users can select all Active or
Acknowledged alarms via the tabs at the top and can multi-select alarms to
change their states en masse.

Once an alarm is acknowledged, its state changes from Active to Acknowledged, but the severity of the alarm is maintained. Additionally, the
alarm will only appear in the Acknowledged Alarms inventory list (see below)
and the counters displayed in the top menu bar will be decremented
accordingly.


Figure 4: Acknowledged Intersight alarms

Finally, Intersight can forward alarm notifications to a user or group of users via email. The Intersight SaaS implementation forwards notifications using an embedded Intersight email server, while the appliance implementation requires a reference to the organization’s SMTP server. The Intersight API
simplifies the retrieval of alarm information; a detailed example of this is
provided as Use Case 1 in the Programmability chapter.
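Though the detailed treatment is deferred to that chapter, a brief hedged sketch of the idea follows, using the Intersight Python SDK. Here intersight_client() is a hypothetical helper that wraps the API-key signing setup shown in the audit-log sketch in the Security chapter, and the field names follow the cond.Alarm object model.

# A sketch: list active (unacknowledged) Critical alarms via cond/Alarms.
# intersight_client() is a hypothetical helper returning a signed ApiClient.
from intersight.api.cond_api import CondApi

api = CondApi(intersight_client())
alarms = api.get_cond_alarm_list(
    filter="Severity eq 'Critical' and Acknowledge eq 'None'",
    orderby="CreateTime desc",
)
for alarm in alarms.results:
    print(alarm.create_time, alarm.code, alarm.description)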


Intelligence feeds

One of the key value-adds of Intersight is the ability to marry insight (or
intelligence) with guidance, and even action. As the industry has evolved,
awareness has as well. Server teams are inundated with information describing issues that may or may not affect them. If they are affected, then the next question is: where are they affected? The questioning continues with: what can they do to remediate? And it eventually ends with: are we still affected?

Cisco Intersight combines insights with recommendations and actions by linking to Cisco intelligence feeds, such as HCL (Hardware Compatibility List)
information, Security Advisories, Field Notices, and support information.
Intersight compares this detailed Cisco intelligence to the actual Intersight-
managed devices within an organization. If the intelligence indicates a
device is affected or known to be within the scope of the intelligence, a
remediation recommendation or descriptive notification is generated,
addressing all the above questions.

Hardware Compatibility List


Hardware Compatibility List (HCL) compliance is essential to mitigate the
impact of service issues caused by running unsupported combinations of
server hardware and software on Cisco UCS and HyperFlex systems.
Intersight flags hardware compatibility issues for a given server after
checking a combination of components against the Cisco HCL database. In
particular, Intersight assesses compatibility against the following:

• Operating System (OS) version
• Server hardware model
• Processor type
• Firmware version
• Adapter model(s)
• Adapter driver version(s)

Enablement of the HCL function requires a software component, or Cisco tool, to be present within the OS of the Intersight-managed device. This OS-based tool is responsible for collecting OS and driver version information for the server and providing it to Intersight.

HCL validation process


Once Intersight has access to the above list of components, it will compare
them against valid combinations in the HCL database. Based on the results
of this comparison, Intersight will report one of the following HCL statuses.

• Validated — The server model, firmware, operating system, adapters, and drivers have been validated and the configuration is found in the HCL compliance database.
• Not Listed — The combination of server model, firmware, operating system, adapters, and drivers is not a tested and validated configuration, and/or it is not found in the HCL compliance database.
• Incomplete — Missing or incomplete data for the compliance validation. This status can be caused by an unsupported server or firmware, or by the server running an operating system without the necessary Cisco tools.
• Not Evaluated — This status is displayed for server models that do not have support for the HCL feature.

Intersight also offers a historical record of HCL validation data for long-term
status reporting. This capability is supported for target servers running
firmware release 4.0(1) and later.
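This data is also reachable programmatically. As a hedged sketch, the snippet below lists servers whose current HCL evaluation is anything other than Validated, using the Intersight Python SDK; intersight_client() is a hypothetical helper wrapping the API-key setup shown in the Security chapter's audit-log sketch, and the field names follow the cond.HclStatus object model.

# A sketch: flag servers with an HCL status other than Validated.
# intersight_client() is a hypothetical helper returning a signed ApiClient.
from intersight.api.cond_api import CondApi

api = CondApi(intersight_client())
statuses = api.get_cond_hcl_status_list(filter="Status ne 'Validated'")
for status in statuses.results:
    # Status corresponds to Validated, Not Listed, Incomplete, or Not
    # Evaluated (exact enum spellings should be checked in the API reference).
    print(status.status, status.reason)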


Viewing HCL status information


The HCL Status information for a server or list of servers can be reported
through the Servers table view. See the figure below for a list of servers
showing the HCL status.

Figure 5: Servers table view showing HCL Status grid


To view the HCL status for a specific server, navigate to the server detail
view at Operate → Servers → servername. From the server detail view, select
the HCL tab at the top of the page to display the detailed HCL information
for the specific server. See the figure below for an example showing where
to find the HCL tab.

Figure 6: Launch HCL detail information from the server detail view


When selecting the HCL tab from a specific server’s detail page, the HCL
Status detail will be displayed as shown in the figure below. In the example
figure below, an Adapter Compliance issue is found in the 3rd phase of the
HCL Validation status check. Expanding the Adapter Compliance section
provides the adapter detail and the Software Status showing Incompatible
Firmware and Driver.

Figure 7: HCL Validation for device indicating “Not Listed” HCL Status

To resolve the incompatible adapter driver issue for the server shown in the
figure above, the administrator for this device has the option of following the
Intersight-recommended path to remediation. In this case, the HCL status of
Not Listed can be resolved by installing the correct adapter drivers and
upgrading the adapter firmware.


Locating and downloading recommended drivers


This section will focus on a different server with non-compliant drivers.
Traditionally, locating the necessary drivers has been a tedious task.
Fortunately, Intersight streamlines this process by directing the server
administrator to the exact download location of the drivers at cisco.com. To
begin this process, the administrator selects Get Recommended Drivers, as
shown in the figure below:

Figure 8: Get Recommended Drivers for HCL remediation


The selection of Get Recommended Drivers displays a pop-up outlining the current and recommended Cisco adapter drivers for the running OS Vendor
and OS Version. For example, the image below (Figure 9) shows the current
and recommended drivers for each Cisco hardware device.

Figure 9: Get Recommended Drivers pop-up

From the Get Recommended Drivers pop-up, the administrator can choose Download Driver ISO by following the link under Recommended Drivers (see above), which redirects the administrator to the exact location at cisco.com to download the ISO containing the HCL-recommended drivers.


Configuring OS Tools for HCL


As noted at the beginning of this section, an OS-level tool is required for
each device wishing to take advantage of the HCL capability. Specifically,
one of the following Cisco Tools must be running on the Intersight-managed
device to successfully gather OS and driver information:

• Cisco UCS Tools — A host utility vSphere Installation Bundle (VIB) for gathering OS version and driver information. UCS Tools is available for download on https://software.cisco.com and is also packaged in the Cisco ESXi OEM images available from VMware. Cisco custom images do not require an additional tool or script to obtain the required OS and driver information.
• OS Discovery Tool — An OS-level tool for non-Cisco-customized ESXi installations, supported Windows, or supported Linux platforms. For the full list of supported operating systems visit: https://intersight.com/help/resources/os_discovery_tool

Required credentials and licensing


Any user with access to a server through its organization can view the server
HCL information. While HCL information is broadly available for all user
privileges, this Intersight HCL capability does require a target to be licensed
at the Intersight Essentials tier.

Advisories
Intersight Advisories warn operators that a specific condition is affecting at
least one Intersight-managed device within that organization. Advisories are
intended to alert an operator to exactly which devices are affected by the
advisory, provide detailed information about the advisory, and then guide
operators towards remediation. This comprehensive list of affected devices
is sometimes referred to as the “blast radius” of impacted systems.


Figure 10: Intersight proactive advisories

Advisory Types
At the time of publication, Intersight currently supports two types of
advisories: Field Notices and Security Advisories.

Field Notices are notifications of significant, non-security related issues affecting a Cisco product, such as an End of Support announcement.

Security Advisories are notifications of a published security vulnerability affecting a Cisco product. These security advisories are published by
Cisco’s Product Security Incident Response Team, or PSIRT.


Navigating Intersight Advisories


Active Intersight advisory totals are displayed in the top menu bar of the
Intersight portal when a published advisory affects an Intersight-managed
target, as indicated by the number next to the megaphone icon. The
numerical indicator is specific to a user’s Intersight organization. See the
figure below showing four active advisories affecting this organization.

Figure 11: Four advisories affecting Intersight-managed devices in this organization


When selecting the advisory indicator (megaphone) in the top menu bar, a
summary list of the advisories is displayed in a slide-out drawer. See the
example below showing one Field Notice and three Security Advisories.

Figure 12: Advisories summary list

Selecting a given Field Notice will display the details of that Field Notice (see
the figure below).


Figure 13: Field Notice information

As partially shown in the image above, a Field Notice will contain:

• The ID of the Field Notice
• The date that the Field Notice was published
• Date of the last update to the Field Notice
• Brief description of the Field Notice
• An easy-to-read summary describing the Field Notice
• URL pointing to the published Field Notice and all associated details
• Comprehensive list of targets affected by the Field Notice, within a given organization
• Known workarounds or solutions for resolving the Field Notice


Selecting one of the Security Advisories from the Advisories summary drawer mentioned earlier will display the specific Security Advisory
information (see below).

Figure 14: Security Advisory information

As partially shown in the image above, a Security Advisory will contain:

• The severity of the Security Advisory, as defined by the PSIRT team
• CVE (Common Vulnerabilities and Exposures) identifiers assigned to the Security Advisory
• The date that the Security Advisory was published
• Date of the last update to the Security Advisory
• Brief description of the Security Advisory
• An easy-to-read summary describing the Security Advisory
• URL pointing to the published Security Advisory and all associated details


• A comprehensive list of targets affected by the Security Advisory, within a given organization
• Description of any known workarounds or solutions to the Security Advisory
• Fixed software release information, or which software releases are no longer affected by the Security Advisory

Intersight advisories are only reported for affected targets. If an advisory is acknowledged, via the Acknowledge action (see figure above), the advisory
will no longer be accounted for in the numeric representation of the active
advisories displayed in the top menu bar megaphone. Acknowledged
advisories can be displayed through the View All selection in the
Advisories drawer (located at the very bottom).

Advisory caveats, credentials, and licensing

• Intersight will only display the advisories in the organization of the logged-in user
• Viewing and acknowledging an Intersight advisory is a license-dependent feature
• The following roles are required to view and/or acknowledge an advisory:
  • View and Acknowledge: Account Administrator
  • View-only: Read-Only, Server Administrator, and HyperFlex Cluster Administrator
• After a new target is claimed, it may take a few hours for any advisories to be correlated to the new target
• Advisories that affect specific targets are created or deleted every 6 hours


Integrated support

Targets that are managed by Intersight have access to Cisco support, which
is known as the Technical Assistance Center (TAC). In the event of a system
issue, this allows for seamless troubleshooting and often a hands-free
resolution. These integrated support capabilities range from simple tasks
such as viewing the contract and warranty status of a system to advanced,
analytics-based operations such as proactively replacing parts based on
automated pre-failure notifications.

Open support case


Often, one of the most frustrating tasks in information technology is
contacting support when an issue or outage occurs. Intersight streamlines
this process by allowing a case to be painlessly opened for the specific
target from the Intersight interface. However, something does not have to
break before users can use this intuitive function. From Intersight, support
cases with Cisco TAC can be opened for a variety of reasons including:

• A request for help in diagnosing an issue
• A request for parts replacement (also known as an RMA)
• To simply ask a question about the system or operations
• To search for previously opened cases

Intersight is directly linked to the Cisco Support Case Manager (SCM) tool
so when a case is opened from Intersight, SCM is automatically launched
and pre-populated with the device name and serial number. If the user has
not already verified support coverage for the device, SCM will automatically
perform a check for the current contract and/or warranty coverage of the
system(s). Specific customer entitlement information such as contract
number does not need to be manually entered or maintained in Intersight.


Because of its cloud-based architecture, Intersight is linked via APIs to the global entitlement databases and can automatically compare system serial numbers to the entitlement systems.

Requirements
The integration with Cisco TAC, along with Proactive Support capabilities, is
available to all Intersight users. No additional licensing or fees are required
to utilize these capabilities, and the features are available across any type of
target in Intersight.

FastStart Log Collection


When a support case is opened with Cisco TAC, no matter if the case was
initiated from Intersight or through other means such as phone or website,
support files and logs can be automatically retrieved by the support engineer
assigned to the case. Once devices are claimed in an organization’s Intersight account, the automatic support file generation capability is enabled.

Anytime a case is opened with Cisco TAC, it is run through an automated, machine-learning algorithm that compares the system’s serial number to
devices that have been claimed in Intersight. For Intersight-claimed devices,
log files will be automatically generated and attached as external notes in
the SCM tool where they can be viewed by the assigned support engineer
as well as the end user. Administrators can configure notifications in SCM to
be sent via email to additional end users any time support files are added to
a case.

For security reasons, the only person able to generate support files directly
from Intersight is the Cisco support engineer who is assigned to the case.
That individual is only able to generate log files for the duration of their
activity on the case. When the case is closed or a different engineer is
assigned, the original engineer will no longer be able to retrieve support
files.


The individual support files are generated locally at each target, are
encrypted when transferred through Intersight to the support organization,
and encrypted at rest. After the files have been collected, Cisco will retain
them for 90 days after the case has been closed.

Contract and warranty status


Intersight can assist users in monitoring the current contract and/or warranty
status of individual infrastructure components. In addition to being able to
view the contract status, Intersight can also generate reports and
notifications when the contract or warranty term is nearing expiration.

An overview of the service contract status for targets in Intersight can be viewed by utilizing either the Server Contract Status or Fabric Interconnect
Contract Status dashboard widget on the Intersight dashboard. This display
can be configured for one or five years. These widgets are available to users
with Account Administrator, Server Administrator, HyperFlex Cluster
Administrator, and Read-Only roles. The service contract status is updated
on a biweekly basis.


Proactive Support

Figure 15: Proactive Support process

Proactive RMA process


Leveraging telemetry from connected products, Cisco can deliver a near-
effortless customer experience when products experience certain failures.
Intersight will monitor systems in real-time for issues and notify if a match is
found with any of the known issues that are part of the cloud-based
backend intelligence system. For these types of issues, such as DIMM
failures, Intersight automatically detects the issue, auto-creates a support
case, and auto-initiates a Return Material Authorization (RMA) to a customer,
thereby creating a 100% proactive experience for Intersight users that have
devices covered under a valid support contract.

As part of this process, Cisco TAC will automatically gather additional information about the affected target. For example, with a memory DIMM failure Cisco will collect:

• Fault Details (Fault Time/Device Type/Etc.)
• Chassis or Cisco Integrated Management Controller tech support file
• UCS Manager tech support file (blade-based systems only)

Cases are typically opened and the RMA is created within one hour after the
fault occurs. This includes all the time needed to generate the appropriate
diagnostic data.

When an issue that is part of Proactive Support is detected:

• An email from cisco-proactive-rma@cisco.com will be sent
  • It is highly recommended that organizations whitelist this address
• The case will be created with either:
  • The last entitled user that logged into Intersight, or
  • The email address configured for Proactive Support
• All users in an organization’s Intersight instance who are entitled under the support contract are copied on the email
• Any entitled user can take ownership of the RMA and fill out the required details

Proactive Support is enabled by default for all targets in an organization’s Intersight account and must be specifically disabled if the organization
wishes to opt-out. Coverage can be disabled or re-enabled for an entire
Intersight account; individual targets do not have to be configured.


Proactive Support configuration


Configuration options for Proactive Support are set using tags within
Intersight and, as of today, these tags cannot be set through the GUI
interface. However, these tags can easily be configured using the API REST
Client that is available through the Intersight online documentation.
Accessing and using the Intersight API REST Client is covered at length in
the Programmability chapter of this book and includes a use case detailing
how to explicitly configure an auto RMA email address for a given account
as well as how to opt out should an organization choose to do so.


Infrastructure configuration

Software repository
Intersight provides a software repository for equipping Intersight-driven actions with the software components required for updating firmware, installing operating systems, updating drivers, and other tasks.

The Software Repository can be pre-populated by selecting the Admin → Software Repository menu option. Alternatively, when an action requiring
software, such as installing an OS or updating firmware, is run, the
administrator has the option of adding the files to the repository if the
necessary software is not previously available in the repository.

All references to a repository must be configured with the target’s network location in mind. The network location of the repository source should
always be considered relative to the target, making software source
placement critical. Proper placement of the repository source can
dramatically improve the delivery time of the software to the target. For
instance, if an Intersight action is running against a target and it is
referencing a repository source that is residing in an extremely remote
location (relative to the target), the action will take more time than if the
software source was on a local network (relative to the target).

There are three types of Intersight software repository mount options or sources:

• NFS — Network File System
• CIFS — Common Internet File System
• HTTP/S — Web or Secure-web

Configuration of each source will require a fully qualified path to the file location. Additionally, other details such as mount options, username, and a password may be required, depending on the type of source location being defined.

See the figure below showing the wizard-driven configuration of a software repository source.

Figure 16: Software Repository source definition options for HTTP/S


Intersight requests
The progress and completion status of every user-initiated action can be
viewed within Intersight in the Requests pane. This view can be accessed
from the top title bar in the Intersight UI. The screenshot below shows that
actions can be filtered by active (in progress) actions and completed
actions. Most actions that are in progress can be expanded to display a
detailed progress bar. This screenshot shows an action in progress that is
paused waiting for an acknowledgment from an administrator.

Figure 17: A summary of active actions


The Requests icon can take three shapes:

• A spinning blue circle indicates that an action is in progress which does not currently require administrator interaction
• A circle with an arrow (as shown in the above screenshot) indicates that an action is pending administrator interaction
• A checkmark indicates that there are no actions in progress. Administrators can still click on that icon to view completed actions

Tagging
As a cloud operations platform, Intersight manages, maintains, and
inventories a vast array of objects (e.g. devices, profiles, policies, pools,
etc.) in an organization. The raw number of objects being selected or
displayed can be significant and potentially unwieldy, particularly for large
organizations. For example, selecting Operate → Servers will display every
claimed server in an organization, which can run into the many thousands.
Fortunately, Intersight has a mechanism to mark the objects with tags,
allowing the user to filter selection criteria down to a more manageable
scope.

Tags can be managed in three different ways:

• By assigning tags to an object through the object’s creation wizard
• From the tagged object’s Details screen, by selecting the Set link under the Tags section
• Via the Intersight API (see the sketch below)

All tags will be validated as they are entered in the tag management
interface.
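As a sketch of the API route, the snippet below sets the tag list on a rack server by PATCHing its Tags property with the Intersight Python SDK. The server name is a placeholder and intersight_client() is the hypothetical signing helper described with the audit-log sketch in the Security chapter; note that a PATCH of Tags replaces the entire list, so any existing tags must be re-sent along with new ones.

# A sketch: set tags on a rack server (the Tags list is replaced wholesale).
# intersight_client() is a hypothetical helper returning a signed ApiClient.
from intersight.api.compute_api import ComputeApi
from intersight.model.compute_rack_unit import ComputeRackUnit
from intersight.model.mo_tag import MoTag

api = ComputeApi(intersight_client())
server = api.get_compute_rack_unit_list(filter="Name eq 'example-rack-1'").results[0]
api.patch_compute_rack_unit(
    moid=server.moid,
    compute_rack_unit=ComputeRackUnit(
        tags=[MoTag(key="Location", value="StPaul"),
              MoTag(key="Dept", value="IT")]),
)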


To apply search criteria using tags, simply add a given tag to the
Search field. The result will only return objects common to that tag (or tags if
more than one are specified). See below for an example search using the
Organizations: Infra tag.

Figure 18: Using tags in a search field

The format of a tag is always key: value (note the necessary single space
after the colon) and the combination of keys and values must be unique
across all the tags in use.

Tagging rules for any single object:

• Only one instance of a key may be set
• More than one instance of any value may be set


For example, these are all valid tags for a single object:

• Domain: Infrastructure
• Location: StPaul
• Role: Infrastructure
• Dept: IT

Note that in the above list, all the keys are unique and one of the values,
Infrastructure, is used more than once.

Exporting of data
Intersight is a phenomenal resource for gathering and viewing a complete inventory of your infrastructure; however, at times it is necessary to have the detailed inventory information in a different tool such as Microsoft Excel. Fortunately, it is extremely easy to export this information in CSV format.

Any of the table views in Intersight such as Servers, Fabric Interconnects, Virtualization, Targets, or Policies can be exported from the Intersight user
interface. Simply navigate to one of the table views and select the
Export option located near the top of the table.


Figure 19: Example of Policy Export

Once the Export option is selected, a CSV file containing all the information
in the Intersight table will be downloaded through the browser.

To obtain more detailed or customized sets of information, it is highly recommended to use the Intersight REST API to retrieve the desired inventory information. Information on how to use the API for this type of reporting, including examples, is covered in the Programmability chapter.
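As a brief illustration of that approach, the sketch below pulls a server inventory through the compute/PhysicalSummaries collection and writes a few example columns to a CSV file; intersight_client() is again the hypothetical signing helper from the Security chapter's audit-log sketch, and the selected fields are illustrative only.

# A sketch: export a custom server inventory to CSV via the API.
# intersight_client() is a hypothetical helper returning a signed ApiClient.
import csv
from intersight.api.compute_api import ComputeApi

api = ComputeApi(intersight_client())
servers = api.get_compute_physical_summary_list(
    select="Name,Model,Serial,Firmware,ManagementMode")

with open("server_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Name", "Model", "Serial", "Firmware", "ManagementMode"])
    for s in servers.results:
        writer.writerow([s.name, s.model, s.serial, s.firmware, s.management_mode])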


ITSM integration

For organizations that are leveraging ServiceNow for their ITSM requirements, a robust plugin for Intersight is available in the ServiceNow
store. This plugin allows organizations to use a single platform to not only
view inventory and faults but to also create tickets based on alarms
originating from Intersight. This plugin functions with either the SaaS
implementation of Intersight or the Connected Virtual Appliance and
supports alarms raised from a variety of Intersight targets including servers,
HyperFlex clusters, and Fabric Interconnects.

Figure 20: Incidents from Intersight alarms in ServiceNow


Some of the key features of the plugin are:

• Automatic collection and updates of targets, which synchronizes the inventory and alarms between Intersight and ServiceNow
• ServiceNow dashboard widgets to display incident analysis reports
• Incident creation and ticket monitoring for servers, HyperFlex clusters, and Fabric Interconnects

By utilizing the plugin, users can receive automatic inventory collection and updates from Intersight to ServiceNow's Configuration Management Database (CMDB). Once the inventory from Intersight is imported, users can then track
the hardware infrastructure along with existing components in ServiceNow.
This facilitates incident management including the ability to send emails or
raise other notifications from ServiceNow as alarms are automatically
imported from Intersight.


Figure 21: Dashboard widgets for Intersight in ServiceNow

Currently, the plugin does not support the service graph connector, and it
does not pull in contract information. The ITSM plugin can be downloaded
directly from ServiceNow. The installation guide and additional
documentation are located at: http://cs.co/9001HbU6d


UCS Director integration

Cisco UCS Director (UCSD) is a software solution that uses automation to provision and manage infrastructure. The solution allows for complex
procedures and best practices to be converted into repeatable, automated
workflows. It features multi-vendor task libraries with over 2,500 out-of-the-
box workflow tasks for end-to-end converged stack automation of both
bare metal and virtualized environments.

While Cisco UCS Director remains a strategic platform for private cloud IaaS, Cisco is creating a path forward for a transition to Intersight in the future. This evolution will take time, but Cisco has started the process by introducing a UCS Director connector to Intersight.

Organizations using on-premises enterprise software such as UCS Director traditionally go through a delayed patch and upgrade process. As a result,
they do not realize the benefits of new capabilities for 3-6 months, or even
longer, after a product release. However, starting with UCS Director 6.7, the
benefits of SaaS and CI/CD can be achieved by claiming on-premises UCS
Director instances in Intersight. Once claimed, the traditional on-premises
software is transformed into a secure hybrid SaaS setup that delivers
ongoing new capabilities and benefits such as:

• Automatic downloads of software enhancements, upgrades, bug fixes, and updates for:
  • UCS Director Base Platform Pack
  • System Update Manager
  • Infrastructure-specific Connector Packs such as EMC storage, F5 load balancers, RedHat KVM, and many more

• Enhanced problem resolution with Cisco Support through Intersight


• Proactive notifications and streamlined, "one-click" diagnostics collection

Claiming UCS Director Instances


UCS Director 6.6 and later instances ship with their own Device Connector
that is embedded in the management controller (Management VM for Cisco
UCS Director). The Device Connector allows Intersight to connect to and
claim UCS Director instances to manage and monitor them through Cisco
Intersight, similar to Cisco Unified Computing System servers and Cisco
HyperFlex hyperconverged infrastructure.

To claim a UCSD instance with Intersight:

• In UCS Director, if required, configure the Device Connector proxy settings
• In Intersight, claim the instance by entering the device serial number and the security code from the UCSD Device Connector

Using UCS Director with Intersight


After the Device Connector is configured and the target is claimed, a new
dashboard and additional table views are available in Intersight along with
the ability to cross-launch the UCS Director interface from Intersight. UCS
Director-specific dashboard widgets can be added to provide useful
summary information for:

• Instance summary
• Service status summary
• Last backup status
• Trends for last 10 backups


Figure 22: UCS Director dashboard widgets in Intersight

From the Admin → UCS Director menu on the left-hand side of the Intersight
user interface, two new tabs are available with specific information on the
UCS Director integration. The Instances tab displays a table view of the UCS
Director instances that have been claimed in Intersight. The Backups tab
displays the list of backup images that have been created for the UCS
Director instances and allows for report generation.


Figure 23: View detailed UCS Director information in Intersight


In the Instances table, specific actions can be initiated for a UCS Director
instance such as:

• Creating a backup
• Restoring from a backup
• Performing an update
• Cross-launching the UCS Director interface

Figure 24: View instance and launch UCS Director from Intersight

By selecting the hyperlink in the Name column, a detailed set of information on that instance can be viewed. This includes the ability to see the
infrastructure components such as UCS Manager, Storage Arrays, and
vCenter instances that are managed by that UCS Director instance. Also, the
UCS Director Connector Packs and their version numbers can be viewed.
The Connector Packs can be updated from the Actions menu on this screen.

This view also makes it possible to see all the workflows that are executing or have been executed on the UCS Director instance, including any rollbacks that have occurred. The workflow statuses are displayed as Service Requests. The workflows shown here are unique and different from any workflows that
have been created in Intersight through the orchestration capabilities. It is
possible for an Intersight workflow to call a UCSD workflow if desired, which
can allow an organization to gradually migrate to Intersight as the primary
orchestrator. However, the UCS Director and Intersight workflows are not
compatible, and cannot be directly imported from UCS Director into
Intersight.

Figure 25: View and update Connector Packs from Intersight

To view and create new backups of a UCS Director instance in Intersight, select Admin → UCS Director → Backups. From this tab, a CSV report of the
backups can be exported.


Figure 26: Backup files for UCS Director in Intersight

The UCS Director integration with Intersight can also assist with
troubleshooting and issue resolution. With permission from the organization,
Cisco TAC can pull log files directly from the UCS Director system which
eliminates multiple back-and-forth emails and calls. Not only is the support
process simplified, but a faster resolution is seen in most cases.


Wrapping up

Operations delivered as-a-service for individual infrastructure components is not an entirely new idea, but Intersight has furthered this concept by coupling it with unique capabilities such as intelligence recommendations, proactive RMA, and automatic HCL validation to provide a complete solution to visualize, optimize, and operate hardware and software infrastructure no matter where it is located. Intersight simplifies operations, not just for individual servers, storage devices, or network switches, but for literally the entire supporting infrastructure.

Server Operations


Introduction

Historically, when new tools and platforms are introduced, they force a shift
to a new operational model, which can bring complexity and lead to
disruptions in business continuity. Intersight has taken a different approach
to this dilemma. No matter if a UCS server is deployed in standalone mode,
Fabric Interconnect-attached mode (UCS Manager), or in the new Intersight
Managed Mode (IMM), all the day-to-day operations of the server can be
performed from Intersight. It is often referred to as the front door for UCS
server operations, providing a single entry point for control and configuration
for all UCS server types and generations.

A Device Connector-maintained connection (discussed in the Foundations and Security chapters) provides a mechanism for secure communications between a target and Intersight. This connection enables Intersight to carry out server operations upon the selected target. Common operations range
from gathering simple inventory information to configuring very specific BIOS
settings on a target server(s). This ability to perform such a wide range of
high and low-level tasks, all within Intersight, reduces the daily
administrative burden for a server administrator.


Supported systems

Intersight is the operations platform for both the current generation Cisco
UCS and HyperFlex servers as well as for generations to come. As
discussed in the Foundations chapter, the target server requires a
connection mechanism and an API. The requirement for these components
drives the minimum baseline server model and embedded
software/firmware supportable by Intersight.

A complete list of minimum firmware and infrastructure requirements is maintained at https://www.intersight.com/help/supported_systems


Actions

Server actions are initiated through Intersight to perform a specific function (or set of functions) against a server. As discussed in the Foundations
chapter, the Device Connector provides a secure connection channel
allowing Intersight to initiate API requests against the server. The single and
multiple server actions described below are all enabled by this technical
innovation.

Single server actions


The most common way to initiate server actions is to navigate to the
General tab of a specific server’s details page, then choose to initiate the
action against the server from the Actions dropdown.


Figure 1: Server detail view

In the General tab of the server’s details page, basic inventory and
health information are displayed on the left-hand side and bottom of the
page, and events affecting the server are shown on the right-hand side. The
events are further categorized as Alarms, Requests, and Advisories. See the
Infrastructure Operations chapter for more detail on each of these.

Different actions are available depending on the type of server. The actions shown below are for two different types of servers. Those on the left
are for a UCS standalone rack server and those on the right are for a UCS
blade server.


Figure 2: Single server actions for standalone (left) server and an FI-connected blade (right)

Whenever an action is initiated, the Requests indicator will increment by one, indicating the action is executing. The Requests indicator was discussed in
the Infrastructure Operations chapter.

Not all actions shown above are possible against all servers. As noted
previously, this will depend on the type of server, but also on the user’s
permissions and the license tier of that server. For example, for a given
server, the Server Administrator role will be able to perform all operations
allowed by the server license level including power management, firmware
upgrade, and operating system installation. Server Administrators can also cross-launch element managers (such as UCS Manager or Cisco Integrated Management Controller). The Read-Only role will not be able to perform


Intersight-driven actions from this drop-down but will be able to cross-launch element managers.

The actions shown in the drop-down menu above are all actions that will be
performed within the context of the current server.

Bulk server actions


Bulk server actions are used to perform a specific function against more than one server simultaneously. The simplest way to initiate bulk server
actions for a selection of servers is to navigate to the servers table page by
selecting Operate → Servers. From this page, all server targets are listed.
The user can choose multiple servers by selecting the checkbox next to
each desired server as shown in the image below:

Figure 3: Server summary page showing selected servers


After target servers have been selected, the bulk Action can be triggered by
clicking the bulk server action icon, as shown by the ellipsis (...) in the
screenshot below:

Figure 4: Bulk server actions for standalone servers and FI-connected blades

The actions in the single server actions and bulk server actions lists have
some overlap. Additionally, not every action is available for every target type.
For example, a firmware upgrade cannot be applied to a server that is a
node in a HyperFlex Cluster. Instead, HyperFlex clusters are upgraded as a
cluster unit from either the Operate → HyperFlex Clusters table view or the
Operate → HyperFlex Clusters→ cluster-name Action drop-down.


Server action details


Power Cycle
The Power Cycle action mimics the behavior of physically depressing a
server’s power button and holding it in a depressed state for several
seconds. The server will power off and remain powered off for several
seconds before powering back on. This is an immediate action and has no
regard for the state of the server or any operating system running on the
server.

Hard Reset
The Hard Reset is complementary to the power-related Power Cycle action
but invokes a more graceful behavior. It mimics the behavior of physically
depressing a device’s power button momentarily. It provides a reset signal
to the server to attempt a graceful reset.
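Both of these power actions can also be driven programmatically. The hedged sketch below PATCHes a server's compute.ServerSetting object, whose AdminPowerState property controls the desired power action; intersight_client() is the hypothetical signing helper used in this book's earlier sketches, the server Moid is a placeholder, and the exact AdminPowerState values (for example PowerCycle and HardReset) should be confirmed in the API reference.

# A sketch: request a power cycle by PATCHing compute.ServerSetting.
# intersight_client() is a hypothetical helper returning a signed ApiClient.
from intersight.api.compute_api import ComputeApi
from intersight.model.compute_server_setting import ComputeServerSetting

api = ComputeApi(intersight_client())
setting = api.get_compute_server_setting_list(
    filter="Server.Moid eq '<server-moid>'").results[0]
api.patch_compute_server_setting(
    moid=setting.moid,
    compute_server_setting=ComputeServerSetting(admin_power_state="PowerCycle"))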

Install Operating System


As the name implies, the Install Operating System action installs an
operating system on the target server. Unlike traditional tools from OS
vendors or utility software providers, which lack an underlying knowledge of
the specific hardware target, it streamlines the integration of
vendor-specific drivers and tools into the OS installation process.

The Install OS action prompts the administrator to provide the following:

• Target server
• Operating system image
• Configuration process and corresponding input file
• Server Configuration Utility (SCU) image


It is highly recommended to add the OS image, configuration file, and the
SCU to the Software Repository before initiating the Install OS action. The
software repository capabilities are covered in the Infrastructure Operations
chapter.

Installing an operating system through the Install OS action offers three
different configuration methods for the OS (a sketch of the corresponding
API request follows the list):

• Cisco — This method takes advantage of the best practices that
Cisco recommends for the configuration of the server, as captured in
Cisco-validated templates. The configuration file is automatically
chosen based on the OS image version selected.
• Custom — This method allows the administrator to provide a
configuration file containing an installation script or configuration
template/response file.
• Embedded — This option is for situations when the OS image itself
includes a configuration file.
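
For automation, the same choice surfaces in the API: an OS installation can
be expressed as a single request object that references the server, the OS
image, the SCU image, and an answer source. The payload below is a rough
sketch only; the os/Installs endpoint and field names such as OsduImage and
Answers.Source follow the public API model but should be treated as
illustrative assumptions, and authentication is again elided.

```python
import requests

BASE = "https://intersight.com/api/v1"
session = requests.Session()  # assumed to be pre-authenticated

# Illustrative payload: one request object ties together the four
# inputs the wizard prompts for (server, OS image, answers, SCU).
install_request = {
    "Name": "esxi-install-demo",
    "InstallMethod": "vMedia",
    "Server": {"ObjectType": "compute.RackUnit",
               "Moid": "<server-moid>"},
    "Image": {"ObjectType": "softwarerepository.OperatingSystemFile",
              "Moid": "<os-image-moid>"},
    "OsduImage": {"ObjectType":
                  "firmware.ServerConfigurationUtilityDistributable",
                  "Moid": "<scu-image-moid>"},
    # 'Template' corresponds to the Cisco method described above;
    # 'File' and 'Embedded' map to the other two options.
    "Answers": {"Source": "Template"},
}

resp = session.post(f"{BASE}/os/Installs", json=install_request)
resp.raise_for_status()
print("Install request accepted:", resp.json()["Moid"])
```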

vKVM vs. Tunneled vKVM


Operating a large-scale server environment requires the ability to remotely
administer systems as if the operator were standing in front of the server.
These remote presence capabilities are typically provided by an embedded
management controller within the server itself. The most common, and
perhaps the most important, remote presence capability is the virtual use of
the keyboard, video, and mouse functions, called vKVM (virtual
Keyboard-Video-Mouse).

Intersight provides two distinctly different options for vKVM capabilities:
vKVM and Tunneled vKVM. Both options provide the same functional benefit,
but Tunneled vKVM takes full advantage of the Intersight architecture to
allow the administrator to connect to the vKVM from anywhere.


• Tunneled vKVM is disabled by default; the administrator must opt in
to enable this feature at the Device Connector.
• Access to Tunneled vKVM functionality depends on the license tier of
the target device.
• Tunneled vKVM requires authentication using an IMC (Integrated
Management Controller) local user account. Other identity providers
cannot be used with Tunneled vKVM.

Most vKVM solutions require that the administrator has access to the
network on which the server management interface resides. This is typically
achieved by using VPN software managed by an organization and placed on
the operator’s client device (see diagram below, left).

A VPN connection can introduce significant security challenges. For
instance, VPN access may open access to the entire network of servers,
workstations, and in some cases, Operational Technology (OT) devices. VPN
access can also open the door for impersonation, since a VPN connection
can be attempted by anyone on the internet. As such, the VPN solution must
take advantage of authentication technologies such as SSO (single sign-on)
and MFA (multi-factor authentication) using a trusted IdP (Identity Provider)
to prevent identity spoofing. When using a VPN connection to access the
remote console of a server, it is especially important to ensure that the VPN
has been properly configured to limit the network segments that can be
accessed and that the connectivity solution has been properly secured and
hardened.

These challenges can be avoided with the Tunneled vKVM feature of
Intersight. Intersight's architecture can ensure proper connectivity,
authentication, and network isolation from the client to the target console.
Since Intersight already maintains a secure connection to its targets and its
users, it can take advantage of these connections to provide a secure
end-to-end vKVM session for the Intersight administrator.


Figure 5: Typical remote console vs Intersight tunnel remote console

The right side of the figure above shows how an Intersight-authenticated
user can establish a remote Tunneled vKVM session through the existing
Device Connector-maintained secure connection to Intersight. The KVM
traffic is automatically tunneled from the remote user through Intersight to
the target device, without needing to rely on a VPN.


Server deployment

Intersight disaggregates the configuration of a server from the physical
server itself. The configuration is based on a set of rules, referred to as
policies, that define the server, plus unique configuration items, such as
management IP addresses, that are drawn from pools of resources. For a
specific server, a set of policies and pools are combined to form a Server
Profile, which is then applied to a physical server from Intersight.

Policies
Configurations for servers are created and maintained as policies in
Intersight and include items such as BIOS settings, disk group creation,
Simple Mail Transfer Protocol (SMTP) settings, and Intelligent Platform
Management Interface (IPMI) settings.

Once a policy is created, it can be assigned to any number of servers (using
profiles, discussed in the next section) to standardize and ensure compliance
across any server in any location. The use of policies provides agility and
minimizes the potential for mistakes when configuring many servers with
similar settings. Policies promote consistency by allowing an administrator to
make all necessary configuration changes in one place. For example, if the
NTP server for a location changes, the administrator simply updates the NTP
policy used by all servers at that location rather than updating each server
individually.

Intersight has unique policy types for servers, blade chassis, UCS Domains,
and HyperFlex clusters. In some cases, policies can be shared between
different target types. For example, an NTP policy can be used by both
servers and UCS Domains.


Pools
In Intersight, pools are used for consumable resources or identifiers such as
management addresses, WWNs, and MAC addresses that are unique for
each server. Pools are preconfigured ranges of these addresses which are
consumed when attached to a Server Profile. The image below shows a
small MAC address pool.

Figure 6: A view of a simple MAC address pool
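
Pools can be created programmatically as well. The sketch below defines a
small MAC pool like the one in the figure; the macpool/Pools endpoint and
the MacBlocks, From, and Size field names are based on the public API
model but are shown here as a hedged illustration, with request signing
omitted as before.

```python
import requests

BASE = "https://intersight.com/api/v1"
session = requests.Session()  # assumed to be pre-authenticated

# A pool is a named range of identifiers owned by an organization;
# servers consume addresses from it when a profile is deployed.
mac_pool = {
    "Name": "demo-mac-pool",
    "Organization": {"ObjectType": "organization.Organization",
                     "Moid": "<org-moid>"},
    "MacBlocks": [
        # 256 addresses starting at the canonical Cisco UCS prefix.
        {"From": "00:25:B5:00:00:00", "Size": 256},
    ],
}

resp = session.post(f"{BASE}/macpool/Pools", json=mac_pool)
resp.raise_for_status()
```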

Profiles
Intersight Server Profiles are composed of a group of policies and are used
to define the desired configuration and map that configuration to a target
server. The Assign Server Profile action triggers the automated execution of
configuring the server to match the settings defined within the Server Profile.

NOTE: A Server Profile referred to in this section is an Intersight construct,
not to be confused with a Service Profile used within UCS Manager and
UCS Central.


Much like policies, there are unique profile types for servers, chassis, UCS
Domains, and HyperFlex Clusters. In this chapter, Server Profiles for both
rack and blade servers will be covered. The other profile types will be
covered in other chapters.

Profile states
Because profiles do not have to be assigned to servers, nor are servers
always guaranteed to match the configuration state of the assigned profile,
profiles can be in one of several different states at a given point in time.

Table 1: Server Profile state definitions

• Not Deployed — The profile has been assigned to a server but not
deployed.
• Not Assigned — The profile has not been assigned to a server.
• OK — The profile has been deployed successfully to the server, and
the server configuration matches the policies defined in the profile.
• In Progress — The profile is in the process of being deployed to the
server.
• Not Deployed Changes — The current profile and its referenced
policies are different from the last deployed policy configuration.
• Failed — Server Profile validation, configuration, or deployment has
failed.
• Out of Sync — The policy configuration at the server is not in sync
with the last deployed policy configuration in the profile. If the server
settings are altered manually after a profile is deployed, Intersight
automatically detects the configuration changes and shows them on
the Server Profile as Out of Sync.

The most interesting of these Server Profile states arise when a Server
Profile is assigned to a server and the server's configuration differs from the
profile, a situation known as configuration drift.


Configuration drift is indicated when a profile state is marked as either of the
following:

Out of Sync — The actual configuration of the server has changed at the
server and it no longer matches the configuration that was assigned to the
server by the Intersight Server Profile. The administrator can select the
Server Profile in Intersight and select Deploy to overwrite configuration
changes made locally at the server.

Not Deployed Changes — The configuration of the Server Profile has
changed in Intersight and no longer matches the configuration that is
currently running on the server. This can occur when a profile or any of the
policies mapped to that profile are changed by the administrator. The
administrator can select the Server Profile in Intersight and select Deploy to
push these configuration changes to the server.
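
Because drift is surfaced as a profile state, it can also be monitored
programmatically. A minimal sketch, assuming the server/Profiles endpoint
and its ConfigContext.ConfigState property (whose exact string values may
differ slightly from the UI labels in Table 1), with authentication omitted as
in the earlier examples:

```python
import requests

BASE = "https://intersight.com/api/v1"
session = requests.Session()  # assumed to be pre-authenticated

# Fetch only the fields needed for a simple drift report.
resp = session.get(
    f"{BASE}/server/Profiles",
    params={"$select": "Name,ConfigContext"},
)
resp.raise_for_status()

for profile in resp.json()["Results"]:
    # ConfigState is assumed to carry values corresponding to
    # Table 1, e.g. 'Out-of-sync' or 'Not-deployed-changes'.
    state = profile.get("ConfigContext", {}).get("ConfigState", "")
    if state and state.lower() != "ok":
        print(f"{profile['Name']}: {state}")
```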

Standalone servers
Server Profiles for UCS rack servers can either be created natively in
Intersight or imported from the IMC of existing UCS rack servers that have
been configured locally.

When a profile is imported from an existing server into Intersight, a Server
Profile along with associated policies are created. This provides a simple
method to create a golden configuration. Once the golden configuration has
been imported, it can then be cloned and applied to other UCS rack servers.
Additionally, those imported policies can be used for other profiles in the
organization.

Most organizations will create profiles natively in Intersight. As discussed
earlier in this chapter, Server Profiles consist of policies. To create a new
profile, administrators must create the necessary policies in advance or be
prepared to create them inline while creating a new Server Profile.

Any Server Profile can be cloned, regardless of how it was initially created.
Administrators can create Server Profiles and leave them unassigned. This
allows administrators to perform all the configurations for new servers

before even physically installing them. The profiles can be deployed to the
servers later.

FI-attached servers
Servers deployed within an Intersight-managed compute domain operate in
Intersight Managed Mode, or IMM (discussed at length in the next section),
and are not deployable as standalone systems, so the profile and policy
import capabilities described previously do not apply.

The creation of an FI-attached Server Profile has the same prerequisites as
the creation of a standalone Server Profile discussed previously. Similarly,
these profiles can also be cloned and can reference reusable policies for
configuration.


Domain management

The UCS architecture for blade servers (and FI-attached rack-mount
servers) uses a pair of Fabric Interconnect switches as the network
infrastructure for the servers. The combination of a pair of these Fabric
Interconnect switches and the servers attached to them is referred to as a
UCS Domain. Each of these domains contains its own configuration and
policy manager, known as UCS Manager, and is capped at 20 chassis or
approximately 160 servers. With Intersight, however, an updated
architecture is introduced that both consolidates configuration and policy
management and allows for an unlimited number of servers. This section will
discuss managing and operating both types of UCS architecture with
Intersight.

Traditional UCS Domains


Intersight can be used to perform day-to-day operations on traditional UCS
Manager (UCSM) Domain infrastructure, such as powering servers on and
off, accessing the vKVM, and updating firmware. This provides a consistent
operational model for standalone rack servers, traditional UCS Domains, and
the newer architecture discussed below.

Intersight coordinates server operations with UCSM using the embedded
Device Connector previously discussed in the Foundations and Security
chapters. Beginning with the 3.2 software release, a Device Connector was
integrated into UCS Manager. As with other Cisco targets, administrators
simply claim the traditional UCS Domain in Intersight using the Device ID and
Claim Code obtained from the UCSM Device Connector.

Upon successfully claiming a traditional UCS Domain, Intersight conducts an
inventory collection of the entire domain infrastructure. All components
(servers, chassis, FIs, etc.) within the claimed domain are added to
Intersight's inventory of managed devices. This allows Intersight to perform
operational tasks within the domain by working in harmony with UCSM.

Examples of Intersight's operational capabilities within a traditional domain
include:

• Device inventory
• Graphical server views
• Health and monitoring
• Search and tagging
• Launch vKVM or Tunneled vKVM
• Upgrade firmware
• Advisories
• HCL compliance
• Open TAC case
• Proactive RMA
• License operations
• Launch UCS Manager

When the need arises to change or update traditional UCS servers'
configurations, Intersight provides a seamless and intuitive way to access
the necessary configuration tools. Intersight cannot directly configure or
change policies on traditional UCSM Domain infrastructure because UCSM
acts as its own domain manager. Instead, Intersight provides an option to
context launch either UCS Manager for blade and Fabric
Interconnect-attached servers or the Integrated Management Controller
(IMC) for standalone UCS servers.

Pass-through authentication is used, with authorization via the operator's
current Intersight credentials and role-based access controls, when these
tools are context launched from Intersight. Once the tools are context
launched, the ability to configure the Device Connector is disabled. This
ensures that a security loophole is not introduced and requires anyone
enabling Intersight connectivity or changing Device Connector settings to
have direct, local access to the device being managed.

Intersight Managed Mode


Intersight Managed Mode (IMM) enables Cisco's future direction in
computing platform architecture, often referred to as Cisco's modernized
compute architecture. This modernized architecture has many similarities to
traditional UCS Domains, including Fabric Interconnects (FIs), server chassis,
and servers. The two architectures are physically cabled the same way: a
pair of FIs form the top-of-rack (ToR) switching fabric for the chassis and
servers, ultimately coming together to form a UCS Domain. While the
terminology and physical attributes of the two architectures are identical,
that is where the similarities end. The underlying software architectures and
the governing control planes are fundamentally different.

IMM is enabled during the initial setup of a pair of FIs and includes not only
the switching devices but also all of the attached server infrastructure. The
configuration of all this infrastructure is defined by policies managed directly
by Intersight. The following sections provide more details on this modern
operational mode as well as how organizations can benefit from Intersight's
operational capabilities for both traditional UCSM and IMM domains.

Benefits of Intersight Managed Mode


While Intersight can perform day-to-day operations for traditional UCS
Domains, the policy (or configuration) management is still handled at the
individual domain level. In contrast, the policy for devices connected to a
domain in Intersight Managed Mode is managed by Intersight. This allows
limitless scale for pools, policies, and profiles across an organization's
infrastructure within a single operational view.

For organizations with multiple IMM compute domains, policies and pools
become easier to consume and can be shared globally. The risk of
identifiers, such as MAC addresses or fibre channel WWNs, overlapping
across domains is removed, and existing policies can be reused in new
domains located anywhere in the world.

An NTP policy, for example, can be shared among thousands of standalone
rack servers, but that same policy can also be used by UCS Domains in
Intersight Managed Mode. The figure below shows an NTP policy used by
both UCS Server and UCS Domain Profile types.

Figure 7: Using policies across different device types in IMM

Intersight’s ability to manage both standalone and domain-attached servers


with a single policy model provides true administrative flexibility,
configuration consistency, and simplifies the operational burden no matter if
the computer infrastructure is a handful of servers or a large, diverse,
distributed server infrastructure.

Getting started with IMM


There are minimum hardware and software requirements for getting started
with IMM. The latest details for supported systems are always available at
https://intersight.com/help/supported_systems#supported_hardware_systems_and_software_versions.


The first step in getting started with IMM-compatible systems is to place the
Fabric Interconnects into Intersight Managed Mode. When connected to the
FI serial port, a new FI will present the administrator with three questions. A
previously configured FI must have its configuration erased first. Beginning
with the primary FI, the questions and appropriate answers to enable
Intersight Managed Mode are shown in the image below:

Figure 8: Initial configuration of the primary Fabric Interconnect


The FI will then guide the administrator through a basic networking setup for
the primary FI. Configuring the secondary FI is even easier as it can retrieve
most of its settings from the primary. When connected to the secondary FI
serial port, the administrator will answer the questions shown in the figure
below and then provide an IP address for the secondary FI.

Figure 9: Initial configuration of the secondary Fabric Interconnect

Once the initial configuration has been applied to the Fabric Interconnects,
they can be claimed into Intersight. This experience is nearly identical to the
process of claiming a standalone rackmount server in Intersight. For the
Fabric Interconnects, however, the administrator must connect via a web
browser to the IP address of either FI to access the Device Console. The
Device Console is a simple UI that presents a Device Connector screen for
claiming the FI pair into Intersight.


Fabric Interconnects are Intersight targets that use the Device Connector
discussed extensively in the Foundations and Security chapters of this book.
The image shown below will look familiar to anyone who has claimed other
Cisco targets into Intersight. In IMM, the Device Connector in the Fabric
Interconnects is responsible for maintaining connectivity to Intersight for all
devices attached to the Fabric Interconnects such as chassis, IO Modules,
and blade servers.

Figure 10: A Fabric Interconnect pair already claimed into Intersight


Using the Device ID and Claim Code from the Device Connector, the
administrator can initiate the claim process from Intersight by browsing to
Admin → Targets and selecting Claim a New Target. For IMM, the target type
is Cisco UCS Domain (Intersight Managed) as shown in the figure below:

Figure 11: Claiming an IMM domain target

Once the UCS Domain has been claimed, it may take several minutes or
longer (depending on the number of devices in the new domain) for
Intersight to discover all the chassis, blades, and other devices connected.

Pools, policies, and profiles for the devices in the newly claimed domain can
be created at any time, even before the domain is claimed.
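
Discovery results can be verified through the API as well. A minimal sketch
that lists the blades inventoried behind the newly claimed Fabric
Interconnect pair, assuming the compute/Blades endpoint and the standard
$select and $top query parameters, with authentication again omitted:

```python
import requests

BASE = "https://intersight.com/api/v1"
session = requests.Session()  # assumed to be pre-authenticated

# List discovered blades; $select keeps the response small.
resp = session.get(
    f"{BASE}/compute/Blades",
    params={"$select": "Name,Model,Serial", "$top": 100},
)
resp.raise_for_status()

for blade in resp.json()["Results"]:
    print(blade.get("Name"), blade.get("Model"), blade.get("Serial"))
```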


Device Console
As previously mentioned, the Device Console is a UI that is accessed by
browsing to the IP address of the primary or secondary Fabric Interconnect
operating in Intersight Managed Mode. The Device Console provides some
basic system health information, but it has two primary purposes:

• The Device Connector is accessed through the Device Console and is
critical in the Intersight claiming process.
• The Device Console provides local KVM access to any attached
servers for authenticated users. Administrators can access KVM
through Intersight as well, so the Device Console can function as a
secondary access method in the unlikely event of Intersight becoming
inaccessible from the data center. KVM access is shown in the figure
below.

Ideally, administrators should not need to access the Device Console after
the initial device claiming is complete.

Figure 12: Access to server KVM through the Device Console


Chassis Profiles
Each chassis in an Intersight Managed Mode domain is configured by a
Chassis Profile.

Chassis Profiles are created much like other profiles by selecting
Configure → Profiles and browsing to the Chassis Profiles tab. From there,
an administrator can clone an existing Chassis Profile or create a new one.

Chassis Profiles consist of policies that can be defined in advance or created
inline while creating a new profile.

• The IMC Access policy selects the IP Pool and VLAN to be used for
assigning IP addresses to the IMC of each installed blade. The IP Pool
can be previously defined or created inline.
• The SNMP policy defines the SNMP settings and trap destinations to
be used for alerts.

Note that while both policies are technically optional, a blade cannot be
managed without an IP address for its IMC (integrated management
controller). For this reason, the IMC Access policy with a valid IP Pool is
effectively required.

Hybrid domain operations


For organizations with upcoming greenfield server deployments, it is
recommended to deploy in Intersight Managed Mode. This enables
Intersight as the configuration and policy manager for all the modernized
infrastructure, providing unparalleled operational and scale benefits.

Organizations with existing traditional UCS Domains typically have adapted
their operational practices for efficient management of these platforms.
These organizations have built procedures and invested in tools centered
around the most efficient ways to meet their business objectives. Intersight
adds enhanced operational capabilities for these brownfield environments
without altering these existing practices. The benefits can be realized
starting with the base tier of Intersight at no additional cost. If more
advanced capabilities are desired, such as HCL or Advisories, a higher tier
license can be applied. Each server can be licensed at a different tier
depending on its specific requirements.

A design principle of Intersight is to enable a no-impact transition for
adoption. This refers to both the impact of Intersight on the infrastructure as
well as the impact on the operational processes maintaining the
infrastructure. Cisco does not want to force an organization to adopt
Intersight for configuration and policy-related tasks. This philosophy allows
Intersight to embrace these traditional UCS Domains for operational
functions. From the organization's perspective, this provides the best of
both worlds: they can maximize the benefits of Intersight while not disrupting
their current operational processes or incurring unnecessary risk to their
business continuity.


Firmware updates

A common management challenge for on-premises servers is the
necessary, but often painful, process of maintaining and updating firmware
across a large number of server types. Most longtime server admins have
horror stories to tell about the hours spent trying to track down firmware
versions for all the servers in their organization and of updates going awry.
Fortunately, Intersight brings long-overdue relief for this issue.

By operating as a cloud-based service, either from Cisco's SaaS platform or
within an organization's private cloud, Intersight has complete visibility into
all the server resources in an organization. Two key Intersight features,
hardware compatibility lists (HCL) and exported reports, which were
covered in the previous Infrastructure Operations chapter, use this global
inventory to present a consolidated and complete picture of the state of
server firmware at any point in time. From the Intersight interface, a list of all
the server firmware can be generated by selecting Operate → Servers →
Export, while the HCL integration continuously validates firmware
compliance for all servers.
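
The same firmware report can be produced programmatically. A short
sketch, assuming the compute/PhysicalSummaries endpoint (which
summarizes both rack and blade servers) and its Firmware property, with
API-key signing omitted:

```python
import csv
import requests

BASE = "https://intersight.com/api/v1"
session = requests.Session()  # assumed to be pre-authenticated

resp = session.get(
    f"{BASE}/compute/PhysicalSummaries",
    params={"$select": "Name,Model,Serial,Firmware", "$top": 1000},
)
resp.raise_for_status()

# Write a firmware inventory similar to the UI's Export action.
with open("firmware_report.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["Name", "Model", "Serial", "Firmware"])
    for server in resp.json()["Results"]:
        writer.writerow([server.get("Name"), server.get("Model"),
                         server.get("Serial"), server.get("Firmware")])
```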

When the need arises to update the firmware on servers, Intersight also
simplifies that process. Depending on the server type, blade server, or rack
server, Intersight has an optimized firmware update option. For the sake of
simplicity, this section will use “blade server” to refer to any servers that are
connected to the Fabric Interconnects. (In the UCS architecture, all blade
servers are connected to the Fabric Interconnects, but it is also possible to
connect rack servers to the Fabric Interconnects.)


Standalone server updates

Updates for rack servers can be selected from either the Cisco repository or
a local repository that an organization creates. When the update is initiated
from Intersight, all of the firmware components in the server, including the
BIOS, Cisco IMC, PCI adapters, RAID controllers, and other firmware, will be
updated to compatible versions using a behind-the-scenes tool known as
the Cisco Host Upgrade Utility (HUU). Intersight retrieves the firmware from
either cisco.com or a local repository and initiates the HUU in a
non-interactive mode to complete the update. By default, all server
components will be upgraded along with drives; however, an Advanced
Mode is available that allows individual hard drives to be excluded from the
automated process.

To download the firmware directly from Cisco, the Utility Storage option in
the server must be used. As shown in the figure below, the administrator
must select Cisco Repository rather than Software Repository to see a
complete listing of firmware available from cisco.com. Once the correct
version is selected and the update process is initiated, the firmware
download to the Utility Storage begins immediately. The installation will start
on the first boot after the download has been completed.


Figure 13: Rack server firmware update using Utility Storage

To use a local repository for updates, an organization needs to set up a
network share (NFS, CIFS, or HTTP/S) that contains previously downloaded
firmware images. Updates using the local software repository begin
immediately.

UCSM server updates


Updates for UCS blade servers are downloaded directly from intersight.com.
Much like the rack servers, all the server components (BIOS, Cisco IMC, PCI
adapters, RAID controllers, etc.) will be upgraded along with storage drives.
Advanced Mode can be used to exclude the upgrade of storage drives or
controllers.


Blade servers in Intersight may be upgraded if they have a Server Profile
from UCS Manager attached. However, if the servers have a Global Server
Profile from UCS Central, they cannot currently be updated from Intersight.

Once the update process is initiated from Intersight, the selected firmware
image is downloaded to the Fabric Interconnect that the server is attached
to, and a check is run locally to determine whether the server will require a
reboot.

UCS Domain updates


The Fabric Interconnect (FI) switches are one of the most powerful aspects
of the UCS blade architecture but can be challenging to update due to their
importance in the infrastructure. Since all data and management traffic for
many servers can depend on a pair of switches, firmware upgrades must be
handled with diligence to ensure there is no service disruption. Intersight
brings an added layer of intelligence to this type of process, making the
outcome much more reliable than a manual upgrade.

When the update process for FIs is initiated from Intersight, the selected
firmware bundle is automatically downloaded from intersight.com, which
eliminates the need for an operator to log in to cisco.com, manually select
one or more firmware bundles, download the bundles to a local machine,
and then individually upload those bundles to the appropriate FIs. This
streamlined, intelligent process not only provides a quicker, more efficient
upgrade approach for a single UCS blade server domain but also ensures
that the process is consistently executed across multiple blade server
domains.

As an example of this consistency, by default the upgrade from Intersight
initiates traffic evacuation. This ensures traffic fails over to the primary FI
before the upgrade begins on the secondary FI, by disabling the host
interface ports on the IO Module connected to the latter.


Figure 14: Intersight ensures traffic is evacuated before the upgrade


Wrapping up

With simplified infrastructure management as the first driver for the creation
of Intersight, one can expect the already robust server capabilities to
continue to evolve. Intersight Managed Mode is a perfect example of this,
pushing further toward Cisco's vision of a cloud operations platform. Later
chapters will explore additional details related to server operations along
with the evolution of the software that works in lockstep with the
infrastructure to ensure a simplified operations experience.



Network Operations


Introduction

Network operators of a multicloud environment face an ongoing challenge to
deliver infrastructure rapidly while also meeting the competing demands for
mobility and zero downtime. Having more visibility and control through a
single cloud operations platform such as Intersight helps ease this
administrative burden.

This chapter will discuss how network-related configuration can be defined
via reusable policy and applied to network infrastructure for servers. This is
analogous to the policies described in the Server Operations and Storage
Operations chapters of this book.

In addition to the network infrastructure for servers, Intersight also offers
monitoring and inventory capabilities for traditional switching infrastructure,
allowing operators to see more and do more under a single operational view.


Policy-driven network
infrastructure

Intersight provides functionality for configuring server network infrastructure
in a similar model to how servers themselves are configured. The same
reusable constructs discussed in Server Operations (such as pools, policies,
and domains) can be applied to a compute domain's switching fabric.

Domain Policies and Domain Profiles are the building blocks within Intersight
for defining the configuration needs of a compute domain's network. These
enable administrators to take advantage of pre-built policies created and
vetted by network personnel, ensuring compliance while streamlining the
entire deployment process.

Domain Policies
As discussed in the Intersight Managed Mode section of the previous
chapter, Intersight can configure both the servers and the Fabric
Interconnect switches that serve as the network infrastructure for the
servers.

UCS Domains are a logical representation of the compute fabric and contain
many different types of policies used to represent different reusable portions
of configuration. Examples of such policies include BIOS settings, boot
order, RAID configuration, firmware, QoS, VLAN trunking, MTU, etc.

In previously deployed or brownfield UCS environments, all these
configuration policies for a UCS Domain were housed in a tool known as
UCS Manager (UCSM), which runs locally on the Fabric Interconnect
switches. For these brownfield environments, the domain policies remain in
UCSM and are not migrated or transferred to Intersight. Under this
traditional management model, Intersight does not replace UCS Manager but
instead enhances it by consolidating day-to-day operations such as
firmware updates and remote console access.

For greenfield UCS deployments, Intersight supports a more modern
approach to managing a compute domain by hosting all the policies, and the
tooling required to configure the infrastructure to match those policies,
directly within Intersight. This approach was introduced in the previous
chapter as Intersight Managed Mode (IMM) and explored in detail there. IMM
brings new types of Intersight policies to support the network infrastructure
for the servers, such as the Fabric Interconnects.

The screenshot below shows an example of a Port Policy that is used to
configure Fabric Interconnect port roles within an Intersight Managed Mode
compute domain.

Figure 1: Port Policy showing port roles configuration

Intersight avoids re-inventing the wheel by allowing policies to be defined
once, applied across multiple resource types, and consumed many times by
the desired resources. Instead of clicking on individual ports in a UI to
configure them, administrators can reuse a previously configured Port
Policy. Instead of looking up and entering the IP addresses of NTP servers
each time a new domain is created, administrators can reuse the same NTP
policy for every UCS Domain Profile at a given site. This promotes
consistency, prevents duplicate work, and eliminates unnecessary
operational overhead.


Some examples of policies that are usable across both a Server and a
Domain Profile include the following (a sketch of creating one such policy
follows the list):

• Ethernet Network Control
• Ethernet Network Group
• Network Connectivity
• NTP
• SNMP
• Syslog
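
As a concrete example of one such shared policy, the sketch below creates
an NTP policy that any number of Server or Domain Profiles could then
reference. It assumes the ntp/Policies endpoint with Name, Enabled, and
NtpServers fields, and omits the required request signing as in the earlier
examples.

```python
import requests

BASE = "https://intersight.com/api/v1"
session = requests.Session()  # assumed to be pre-authenticated

ntp_policy = {
    "Name": "site1-ntp",
    "Enabled": True,
    # Defined once, referenced by any number of profiles.
    "NtpServers": ["10.0.0.10", "10.0.0.11"],
    "Organization": {"ObjectType": "organization.Organization",
                     "Moid": "<org-moid>"},
}

resp = session.post(f"{BASE}/ntp/Policies", json=ntp_policy)
resp.raise_for_status()
```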

Domain Profiles
Since Intersight Managed Mode (IMM) supports both server and domain
infrastructure (amongst others), multiple types of profiles are supported. A
profile specific to a UCS Domain infrastructure is referred to as a Domain
Profile and defines the network configuration policies for a given domain.
This profile consists of one or more Domain Policies that define the desired
domain network configuration settings.

Some common examples of policies specific to Domain Profiles would be:

• Port configurations:
  • Unified port selection (FCoE, Ethernet)
  • Port role (traditional, port-channel, vPC)
• VLAN/VSAN configuration
• NTP service
• Network connectivity (DNS, IPv6)
• QoS

The use of Domain Profiles offers an easy and consistent method to simplify
the network configuration for servers across domains. Since most
organizations define standards for configuring the networking for servers
(specifically, uplink ports, downlink ports, VLAN settings, and more), the use
of Domain Policies and Profiles can significantly streamline this effort.
Policies such as the Network Connectivity Policy, Ethernet Network Group
Policy, and Port Policy are examples of commonly used Domain Policies for
defining the domain network configuration.

Domain Profiles are created much like other profiles by selecting
Configure → Profiles and browsing to the UCS Domain Profiles tab. From
there, an administrator can clone an existing Domain Profile or create a new
one.

The UCS Domain Profile consists of policies that can either be defined in
advance or created inline while building a new profile. The policies that
make up a UCS Domain Profile are defined as follows:

• The VLAN policy defines the name and VLAN ID of each VLAN that
will exist within the domain. Each VLAN can be assigned a different
Multicast Policy.
• The VSAN policy defines the name, VSAN ID, and FCoE VLAN ID of
each VSAN that will exist within the domain.
• The Port policy defines the role (Ethernet uplink, FC uplink, server
port, or unconfigured) and type (Ethernet or FC) of each FI port.
Fabric Interconnects A and B can have different Port policies.
• The NTP policy defines the IP address of one or more NTP servers.
• The Network Connectivity policy defines the IP address of DNS
servers.
• The System QoS policy defines the properties of different QoS
classes.

A single Domain Profile may be used to configure all the uplink and downlink
ports, provision all required VLANs and VSANs, and configure other
networking settings such as NTP, DNS domain, QoS, and multicast using
reusable policies.


The figure below shows one of the three tabs of a Domain Profile applied to
a Fabric Interconnect pair. Three previously defined policies are shown,
which ensure consistent network connectivity. This figure also expands the
details of one of those policies (the NTP policy).

Figure 2: Summary of policies used for a Domain Profile

To apply the configuration for the domain, simply deploy the Domain Profile
to the desired Fabric Interconnects for the domain. Administrators looking to
standardize the network infrastructure for the servers will realize huge time
savings by employing thoughtfully designed profiles.

Nexus Dashboard
The Cisco Nexus Dashboard brings together real-time insights and
automation services to operate multicloud data center networks. It
integrates with the Intersight cloud operations platform for a consistent and
holistic view of all infrastructure components, including network, storage,
compute, virtualization, containers, and Kubernetes.


Nexus Dashboard works together with Intersight and provides a unified view
into proactive operations with continuous assurance and actionable insights
across data center fabrics for seamless management.

Figure 3: Intersight and Nexus Dashboard provides full infrastructure control

The integration of Nexus Dashboard with Intersight is formally referred to as
Cisco Intersight Nexus Dashboard (ND) Base. This capability in Intersight
enables organizations to view all their data center networking inventory. It
provides immediate access to a high-level view of the data center platforms,
such as Cisco APIC or Cisco DCNM, in their network.

The network controllers, APIC and DCNM, are connected to Intersight
through a Device Connector that is embedded in the management controller
of each system. Upon startup, the Device Connector in APIC and DCNM will
attempt to connect to Intersight automatically. However, it may be necessary
to manually configure the Device Connector with proxy and DNS settings.

Once the Device Connector can successfully communicate with Intersight,
the network controller (APIC or DCNM) can be claimed just like any other
target device from Intersight. (Refer back to the Foundations chapter for a
refresher on this process.)

Within Intersight, Nexus Dashboard displays a view of all the data center
networking inventory. The summary view on the dashboard displays
graphical information about the platforms in the network and includes
information for device type, status, and firmware versions.


Figure 4: The summary display of networking infrastructure

On the General tab, the Details table displays information about the platform
such as name, status, device type, device IP, firmware version, nodes, and
organization. On the Inventory tab, a summary view and detailed information
of the controllers, spine switches, leaf switches, licenses, and features
associated with the platform in the network is displayed.

The search functionality can be used to find specific attributes such as
name, status, type, device IP, and organization, and to export the search
results to a CSV file.


Figure 5: Network device inventory in Intersight


Wrapping up

For operators to manage and maintain their networks effectively, it is
important to have a deep understanding of a network's constituents and
their properties: what the network is doing, how it is being used, how it is
responding to the demands on it, and, most importantly, how it will respond
to new loads arising from new business processes. With the unprecedented
increases in the scale and complexity of networks, it is challenging to keep
up with the demands of operating a large network without the aid of
intelligent and adaptive tools.

The combination of Intersight and Nexus Dashboard provides a
comprehensive technology solution for network operators to manage and
operate their networks.



Storage Operations


Introduction

Storage may not always be the first thought that comes to mind when
hearing the term Cloud Operations, but storage is a critical component of the
overall infrastructure. Application user experience can be drastically
influenced by the availability and performance of the storage systems
needed to persist and retrieve data. A common challenge that storage
administrators face is how to manage a very heterogeneous set of
infrastructures. Storage is often provisioned on a need or project basis and
can therefore take on many different forms, such as hyperconverged,
traditional block and file, or object-based.

Intersight helps minimize this operational burden by wrapping a single
intelligent operations model around both Cisco hyperconverged storage and
non-Cisco traditional storage. This chapter will focus on how Cisco delivers
an industry-leading, performant, scalable hyperconverged infrastructure
(HCI) offering, and how Intersight can be the centralized, SaaS-based
operations platform for both HCI and traditional storage.

Before getting into the specifics of Cisco's HCI offering (HyperFlex), it is
important to review how HCI differs from traditional converged infrastructure
and when a systems architect would likely prefer one over the other.


Hyperconvergence
Hyperconverged infrastructure has been a mainstream technology since at
least 2017 and changes the traditional server/storage infrastructure stack
considerably. In HCI, the traditional storage array is eliminated and the
storage itself is collapsed into the server tier as direct-attached storage. An
intelligent storage software layer then clusters those server nodes into a
pool of shared storage and compute, hosting both the application workloads
(typically virtual machines) and the storage they require.

Traditional storage (converged infrastructure)

While HCI has gained significant momentum for most mainstream
applications, there remain workloads and use cases that are not a great fit
for HCI. These workloads typically place extremely high demands on
performance or capacity. Examples include high-frequency trading,
high-performance computing, extremely large databases, large-scale file
deployments, true archiving, and cold storage. For these use cases, a more
purpose-built traditional storage solution often makes more sense, thus
leading to the operational complexity mentioned in the Introduction.


HyperFlex

HyperFlex (HX) is Cisco’s internally developed hyperconverged storage


solution that is integrated into the Intersight operational model. One of the
core strengths and differentiators of HyperFlex is that it is built on top of the
UCS architecture, combining software-defined storage, software-defined
compute, and software-defined networking, all from a single vendor with
years of best practices built into all three layers. Instead of reading through
hundreds of pages of best practice guides or validated designs, HyperFlex is
deployed from Cisco-provided templates with all the configuration best
practices (such as BIOS settings, jumbo frames, network segmentation,
QoS, etc.) baked in, without burdening the operator with a lot of complexity.

Figure 1: Cisco HyperFlex system w/ UCS compute-only nodes

This template-driven approach, delivering pre-integrated, consistently
configured hyperconverged storage, compute, and network, with validated
best practices built in and single-vendor support through Cisco, is all
operationally driven from Intersight. More details on the operational benefits
of HX within Intersight can be found in the following sections of this chapter.


Solution architecture

Figure 2: Anatomy of a typical HX node

The diagram above shows the anatomy of a typical HyperFlex converged
node. The physical configuration of the node can vary based on the desired
use case (all-flash, all-NVMe, or edge optimized). The controller virtual
machine (VM) houses the HX Data Platform (HXDP), which implements the
scale-out and distributed file system. The controller accesses all the node's
disk storage through hypervisor bypass mechanisms for higher performance.
Additionally, it uses the node's memory and SSD drives or NVMe storage as
part of a distributed caching layer, and it uses the node's HDD, SSD, or
NVMe storage as a distributed capacity layer.


The data platform spans three or more Cisco HX nodes to create a highly
available cluster as shown in the figure below:

Figure 3: HyperFlex Cluster architecture

The data platform controller interfaces with the hypervisor in two ways:

• IOVisor — The data platform controller intercepts all I/O requests and
routes them to the nodes responsible for storing or retrieving the
blocks. The IOVisor presents a file system or device interface to the
hypervisor and makes the existence of the hyperconvergence layer
transparent.
• Advanced Feature Integration (VAAI) — A module uses the
hypervisor APIs to support advanced storage system operations such
as snapshots and cloning. These are accessed through the
hypervisor so that the hyperconvergence layer appears just as
enterprise shared storage does.

The data platform implements a distributed, log-structured file system that
always uses a solid-state caching layer in SSD or NVMe storage to
accelerate write responses, a file system caching layer in SSD or NVMe
storage to accelerate read requests in hybrid configurations, and a capacity
layer implemented with SSD, HDD, or NVMe storage.


Incoming data is distributed across all nodes in the cluster to optimize
performance using the caching layer, as shown in the diagram below:

Figure 4: HX data distribution

Data distribution
Effective data distribution is achieved by mapping incoming data to stripe
units that are stored evenly across all nodes, with the number of data
replicas determined by the user-defined policies. When an application writes
data, the data is sent to the appropriate node based on the stripe unit, which
includes the relevant block of information.

This data distribution approach, in combination with the capability to have
multiple streams writing at the same time, prevents both network and
storage hotspots, delivers the same I/O performance regardless of virtual
machine location, and gives more flexibility in workload placement. Other
architectures use a locality approach that does not make full use of available
networking and I/O resources.

When migrating a virtual machine to a new location, the HX Data Platform
does not require data to be moved. This approach significantly reduces the
impact and cost of moving virtual machines among systems.


Data optimization
HyperFlex makes use of several different techniques to achieve inline data
optimization that improves the overall performance and efficiency of the
platform. Fine-grained inline deduplication and variable-block inline
compression are always on for objects in the cache (SSD or NVMe and
memory) and capacity (SSD, NVMe, or HDD) layers. Unlike other solutions,
which require turning off these features to maintain performance, the
deduplication and compression capabilities in the HX Data Platform are
designed to sustain and enhance performance and significantly reduce
physical storage capacity requirements.

Figure 5: HyperFlex data optimization

Independent scaling
The most common node type within a HyperFlex Cluster is referred to as a
converged node since it contributes storage, compute, and memory to the
cluster. This converged node architecture was described in the earlier
Solution architecture section. If an organization needs more compute power
or GPU acceleration independent of storage capacity, additional compute-
only or GPU-only nodes can be seamlessly added, providing the desired
additional resource capacity without the overhead of unneeded storage.


High availability and reliability

High availability and reliability start with the enterprise-class file system
provided by HXDP. For data integrity and reliability, the platform uses block
checksums to protect against media errors, a flash-friendly layout to
maximize flash life, and zero-overhead, instantaneous snapshots for data
protection. For high availability, HXDP provides a fully striped architecture
that results in faster rebuilds, fine-grained data resync and rebalance, and
non-disruptive full-stack rolling upgrades.

Replication
Through the local HX Connect interface or Intersight, replication policies can
be created that specify the recovery point objective (RPO). Virtual machines
are added to protection groups that inherit the policies defined by the user.
Native replication can be used for planned data movement (for example,
migrating applications between locations) or unplanned events such as data
center failures.

Unlike enterprise shared storage systems, which replicate entire volumes,
HXDP replicates data on a per-virtual-machine basis. This allows replication
to be configured on a fine-grained basis for mission-critical workloads
without any unnecessary overhead.

The data platform coordinates the movement of data, and all nodes
participate in the data movement using a many-to-many connectivity model.
This model distributes the workload across all nodes, avoiding hot spots and
minimizing performance impacts (http://cs.co/9004Hjr7A).

Stretched cluster
HX offers the option of stretched clusters in which two identical
configurations in two locations can act as a single cluster. With synchronous
writes between sites, a complete data center failure can occur while still
maintaining full availability to applications with zero data loss. The recovery
time objective (RTO) is only the time that it takes to recognize the failure and
put a failover into effect.


Logical availability zones

The striping of data across nodes is modified if logical availability zones are
enabled. This feature automatically partitions the cluster into a set of
availability zones based on the number of nodes in the cluster and the
replication factor for the data. Each availability zone has at most one copy of
each block, so multiple component or node failures within a single zone only
make that zone unavailable. The cluster can continue to operate as long as a
surviving zone has a copy of the data (http://cs.co/9004Hjr7A).


Simplified HX operations with Intersight

A HyperFlex Cluster includes integrated management software known as
HyperFlex Connect, which provides a remarkably simple, yet feature-rich,
administrative console, as illustrated in the figure below:

Figure 6: HX Connect management interface

HyperFlex Connect works well for a single, non-edge HX Cluster but was not
built to support multiple, distributed HX Clusters. Intersight, however, was
built for exactly this purpose and offers some unique benefits related to HX
that will be covered in this section.


HyperFlex at the edge (HX Edge)

Rich digital experiences need always-on, low-latency, highly performant
computing that is close to end users. Retail, finance, education, healthcare,
transportation, and manufacturing organizations, and remote and branch
offices in general, are all pushing computing to the network edge.

HyperFlex Edge is deployed as a pre-integrated cluster with a unified pool of
resources that can be quickly provisioned, adapted, scaled, and managed to
efficiently power remote office and branch office (ROBO) locations.
Physically, the system is delivered as a cluster of 2, 3, or 4 hybrid or all-flash
nodes that are integrated using an existing Gigabit Ethernet network.

Two node edge (Cloud Witness)

Cisco HyperFlex systems are built on a quorum mechanism that can help
guarantee consistency whenever that quorum is available. A quorum in
Cisco HyperFlex systems is traditionally based on a node majority, in which
each node in the system casts a single vote, with a simple majority required
for the cluster to remain operational. This mechanism works well for
three-node and larger clusters, which can tolerate one or more node losses
and still obtain a majority consensus and continue operations.

Fault tolerance and file system consistency become more challenging when
only two nodes are deployed at a remote office/branch office (ROBO)
location. In this scenario, if one of the two nodes fails, a quorum can no
longer be established using a node majority algorithm alone. In the unlikely
event that the communication pathway between the two nodes is disrupted,
a split-brain condition may occur if both nodes continue to process data
without obtaining a quorum. The opposite outcome, the loss of availability, is
also possible. Both scenarios must be avoided to help prevent the
introduction of data inconsistency while also maintaining high availability.
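
The node-majority rule itself is simple arithmetic; the short sketch below
(illustrative only, not HXDP code) shows why a two-node cluster cannot
tolerate a failure without an external vote:

def has_quorum(total_votes, votes_present):
    # Node-majority quorum: strictly more than half the votes must be present.
    return votes_present > total_votes // 2

print(has_quorum(3, 2))  # True:  a 3-node cluster survives one node loss
print(has_quorum(2, 1))  # False: a 2-node cluster loses quorum on any failure
# A third vote from a witness or arbiter turns the two-node case back
# into a 2-of-3 majority, restoring fault tolerance.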

For these reasons, hyperconverged two-node architectures require an
additional component, sometimes referred to as a witness or arbiter, that
can vote if a failure occurs within the cluster. This traditional and
burdensome deployment architecture requires the additional witness to be
provisioned on existing infrastructure and connected to the remote cluster
over the customer’s network. Typically, these external witnesses are
packaged as either virtual machines or standalone software that is installable
within a guest operating system.

To remove this unwanted capital expenditure and ongoing operational
burden, Cisco has developed a next-generation invisible witness
architecture for HX Edge powered by Intersight. This approach eliminates
the need for witness virtual machines, ongoing patching and maintenance,
and additional infrastructure at a third site as shown in the figure below:

Figure 7: Two Node HX Edge with Invisible Cloud Witness

The Invisible Cloud Witness does not require any configuration input.
Instead, all components are configured transparently during the HX Edge
cluster installation. Before the installation process begins, the physical
servers must be securely claimed in Intersight. From that point forward, the
Invisible Cloud Witness is automatically deployed and configured.


HX policies
A policy is a reusable group of settings that can be applied to a server,
domain, chassis, or cluster. Policies ensure consistency within the data
center. For example, a single DNS, NTP, and Timezone policy can be
created and applied to every HyperFlex Cluster within a single data center.
This limits the number of times the same data needs to be entered to ensure
consistency.

From the Configure → Policies section of Intersight (see figure below),
policies can be created for different categories of infrastructure. HyperFlex
Cluster is one of those categories. The use of policies will be covered in the
section on HyperFlex installation.


Figure 8: Creating a HyperFlex policy

HyperFlex Cluster Profiles


Profiles are a management construct consisting of a group of policies.
While policies are reusable, profiles are not. A profile can be cloned to save
configuration time, but a profile can only be assigned to one physical
resource at a time.

The HyperFlex Cluster Profile contains all the best practices typically found
within a Cisco Validated Design (CVD) but is completely orchestrated by
Intersight. All that is required are a few unique identifiers (e.g., IP address,
MAC address) entered in a wizard-driven web form.


The process of creating a HyperFlex profile will be covered in the HyperFlex
installation section of this chapter. The figure below provides an example of
managing multiple HyperFlex Cluster Profiles under a single view:

Figure 9: Multiple HX profile management within Intersight

Post-deployment configuration
Most post-deployment tasks, such as cluster expansion, full-stack cluster
upgrades (server firmware, hypervisor and drivers, and HX Data Platform),
replication configuration, and backup and recovery, can all be centrally
managed with Intersight. Since everything within Intersight is policy-driven, it
is simple to reuse policies or clone profiles to keep configuration consistent
across any number of HX Clusters.

At the time of writing, a few operations still require going directly into
HyperFlex Connect, such as creating datastores and expanding the cluster.
Intersight simplifies this by allowing administrators to search through all HX
Clusters under its management and quickly cross-launch HyperFlex
Connect, leveraging the always-on, durable websocket provided by the
Device Connector embedded within the HXDP. This means it is not
necessary to manually log in to that HyperFlex Connect instance or have
local reachability to the target HX Cluster.

Additional details for post-deployment configuration within Intersight can be
found in the Managing HX Clusters section of this chapter.

Visibility
Ongoing performance and capacity monitoring is essential for any viable
storage solution. Intersight has built-in monitoring widgets so end users can
easily view current storage capacity, forecast capacity runways, and monitor
performance and overall cluster health as shown in the screenshot below:


Figure 10: HyperFlex visibility with Intersight

Monitoring and visibility of HX Clusters with Intersight are covered in more
detail in the Managing HX Clusters section of this chapter.


Deploying HyperFlex Clusters

Hyperconverged systems involve multiple components: converged nodes,
compute-only nodes, network switches, disks, firmware components, and a
hypervisor, to name a few. Having clear instructions and a reliable process
for installation is critical to a successful deployment. While individual
HyperFlex Clusters can be installed locally, Intersight provides a cloud-
based, wizard-driven approach with preinstallation validation for any
deployment.

Intersight uses HyperFlex Cluster Profiles to ensure servers are configured
properly and consistently. The Intersight installer supports both the standard
(UCS Domain-based) and edge deployments of HyperFlex. Complete
instructions are available through Intersight
(https://intersight.com/help/resources), but this section of the book serves
as an overview of the process.


Figure 11: An overview of the HyperFlex installation process through Intersight

Pre-installation
A full pre-installation checklist is available at cisco.com. The checklist items
fall into the following categories:

• Validate network connectivity within the data center and to Intersight
• Reserve IP addresses and DNS names for HX components
• Get usernames and passwords for items such as vCenter and Fabric
Interconnects (if doing a standard HX install)

Note that multiple HX Clusters can be connected to a single pair of Fabric
Interconnects; installing a second HX Cluster does not change the
configuration of the first. The HX nodes should have a hypervisor installed
before beginning the installation process (HX nodes are factory shipped with
a hypervisor already installed); the storage controller VM will be created by
Intersight during the install process.


Installation
To begin the installation, the administrator logs into Intersight and claims all
the targets (servers and Fabric Interconnects if applicable) to be used for
this HX installation. The installation involves stepping through a wizard to
create and deploy an HX Cluster Profile. This wizard is started by clicking the
Create HyperFlex Cluster Profile button at the top right of the profile
configuration screen (on the HyperFlex Cluster Profiles tab) as shown below:

Figure 12: Starting the HyperFlex Cluster Profile wizard

The wizard can create profiles for a standard (domain-based) or edge
configuration.


Figure 13: Selecting the type of HX installation


Step 1 — General
The first step of the process is to specify the Name and select the
Organization for the new HX Cluster as well as the Data Platform Version and
Server Firmware Version. As the versions of the server firmware and the
HyperFlex Data Platform are interdependent, the installer will not allow the
selection of incompatible versions.

The first step also includes a link to the detailed installation instructions.

Figure 14: Step 1 of the HX profile installation


Step 2 — Nodes assignment


The second step of the installation allows the administrator to select the
servers to be used for this installation (or specify that they will be selected
later). It is recommended to select the servers at this time if available.
Intersight only shows servers that are unassigned and compatible with HX,
making it easy for an administrator to avoid mistakes.

Figure 15: Step 2 of the HX profile installation

Step 3 — Cluster configuration


The third step is the most involved step of the installation process. Each
section on this screen represents a policy that will be created when the
administrator completes the questionnaire in each section. The administrator
can also select an existing policy instead of manually entering the data in
each section. For example, the same DNS, NTP, and Timezone policy could
be used for multiple HX Clusters in the same data center, saving the
administrator configuration time and reducing the potential for errors.

Intersight performs data validation of the parameters in each section and
shows a green checkmark for each one that validates correctly, as shown
below. The validation ensures that each field is in the proper format; for
example, a field that requests an IP address must contain a value formatted
like an IP address. Flagging errors immediately as they occur saves the
administrator time.

Figure 16: Step 3 of the HX profile installation
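
As a rough illustration of this kind of format-only checking (not Intersight's
actual validation code), the Python standard library can confirm that a value
at least parses as an IPv4 address before any deployment is attempted:

import ipaddress

def looks_like_ipv4(value):
    # Format-only check: does the string parse as an IPv4 address?
    try:
        ipaddress.IPv4Address(value)
        return True
    except ipaddress.AddressValueError:
        return False

print(looks_like_ipv4("10.1.1.20"))   # True  -> field gets a green checkmark
print(looks_like_ipv4("10.1.1.500"))  # False -> field is flagged immediately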


Step 4 — Nodes configuration


The fourth step shows a summary of the management and controller VM IP
addresses (see figure below). The administrator can choose the replication
factor (the number of copies maintained for each data block) as well as the
hostname and IP for each node (by expanding each node at the bottom of
the screen). Again, Intersight performs basic data validation on each field in
this step as well.

Figure 17: Step 4 of the HX profile installation

Step 5 — Summary
The fifth step is a summary and final validation step. The following figure
shows that configuration errors are flagged by Intersight and the installation
process cannot proceed until these errors are addressed. These are simple
but thorough validations that ensure that VLANs are not duplicated, DNS
names are in the right format, MAC addresses are in the right range, and
more.

Once these errors are cleared, the administrator can click either Validate or
Validate & Deploy to move to the next step.

Figure 18: Step 5 of the HX profile installation showing errors

Step 6 — Results
The sixth and final step validates the physical environment. The previous
steps perform basic yet thorough data validation; this step verifies the
physical infrastructure itself: that the hypervisors are
reachable, all DNS names resolve properly, VLANs exist on the Fabric
Interconnect, and more.

The following screenshot shows that the Intersight validation of the
HyperFlex Cluster Profile settings and the environment is quite thorough.
Issues are flagged, making it easy for an administrator to locate and correct
errors. For example, this screenshot indicates that the vCenter server is
either unreachable or the credentials are invalid.

Figure 19: Step 6 showing errors to validate the physical environment


Once all issues are addressed and there are no errors on the validation step,
the administrator can deploy the profile to the HyperFlex hardware. Intersight
will update server firmware if necessary, configure Fabric Interconnect
VLANs if applicable, configure hypervisor virtual switches, install storage
controller VMs, and more.

Post-installation
When the HyperFlex Cluster Profile has been successfully deployed, the
administrator should verify that the cluster appears within the chosen
hypervisor manager. Any additional recommended post-installation steps
are documented here:
https://intersight.com/help/resources#cisco_hyperflex_cluster_deployment

The administrator can now use HyperFlex Connect to create datastores for
use within the hypervisor. HyperFlex can present different-sized datastores
to the hypervisor. To create datastores, an administrator can cross-launch
into HyperFlex Connect from within Intersight. From the Intersight screen for
Operating HyperFlex Clusters (Operate → HyperFlex Clusters), administrators
can click the ellipsis in the row for the chosen cluster and select Launch
HyperFlex Connect as shown below:


Figure 20: Launching HyperFlex Connect from within Intersight


Managing HX Clusters

A deployed HX Cluster that has been claimed in Intersight is available for
numerous management functions. Intersight provides the
centralized, cloud-based platform from which HX administrators can monitor
performance and health status, run health checks, manage capacity,
configure backups, launch upgrades, and even provide the remote cloud
witness for HX Edge environments.

Actions
From the Operate → HyperFlex Clusters main inventory screen, various
cluster-level actions are available by clicking on the cluster’s ellipsis on the
right (see below).


Figure 21: Cluster-level actions shown

From the pop-up menu, administrators can launch the HyperFlex Connect
management utility or launch a cluster upgrade (both of which are described
earlier in this chapter) as well as run a Health Check or configure a backup of
the cluster as described below.

Run Health Check


For any HyperFlex Cluster claimed and connected to Intersight,
administrators can run pre-defined health checks to assure the proper
functioning of the clusters. It is good practice to run a health check and
remediate any resulting issues before any significant maintenance or
upgrade. From the main inventory screen seen above, one or more clusters
can be selected. After clicking the Health Check option from the ellipsis
menu, administrators will be prompted with a wizard to step through the
health check process (see below).


Figure 22: HX Health Check wizard

Administrators can select all health check types by default or customize the
process to focus on specific checks (see below).


Figure 23: Health Check customization

Results of the health check are available from the Health Check tab of
the cluster details screen (Operate → HyperFlex Clusters then select the
desired cluster).

Backup and restore


VM snapshot backups are a critical component of any virtualized
environment but present extra challenges for remote sites, which often have
little supporting IT infrastructure available on site. For HyperFlex Edge
clusters, administrators can use Intersight to configure, back up, and restore
VM snapshots leveraging a centralized HX backup target cluster. This allows
administrators to configure backups across multiple HX Edge clusters, even
those at different remote sites, in a consistent and centralized fashion. The
VM snapshots are retained both locally on the Edge clusters themselves and
on the centralized backup target cluster.


To configure backups for an HX Edge system, navigate to the Operate →
HyperFlex Clusters summary screen and select the Configure Backup option
as described in the Actions section above.

Monitoring
As shown previously in Figure 21, the main HX Cluster inventory screen
provides a graphical view of the health, capacity, and utilization status
summary of all clusters at the top, as well as a status breakdown by cluster
beneath it. By clicking on a specific cluster, administrators can expose
greater cluster-specific detail including version, individual node status, and
open alarms as shown below:

Figure 24: Cluster details


Within the cluster details page, additional tabs are available for: Profile,
Capacity Runway, Performance, and Health Check. The Profile tab
displays detailed information about current cluster policies, backups, etc.
(see below).

Figure 25: Cluster profile information


The Performance tab (see below) highlights the historical IOPS, throughput,
and latency metrics for the cluster at user-configurable intervals (last day,
week, month, etc.).

Figure 26: Cluster performance statistics


Under the Health Check tab (see below), after an HX Cluster Health Check
has been run from the Actions menu as described above, the output of the
Health Check can be viewed to remediate any failures.

Figure 27: Health Check results


Also of note in the cluster details page is the Capacity Runway tab (see
below). This feature uses historical data from the previous six months to
calculate daily consumption trends in the cluster and predict when utilization
will exceed 75% of capacity. Intersight will also raise various alarms in
advance of a predicted strain on capacity.

Figure 28: Capacity Runway data
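
The underlying idea, extrapolating the recent consumption trend to predict
when a capacity threshold will be crossed, can be illustrated with a naive
linear fit in Python. The sketch below is purely conceptual and is not
Intersight's actual forecasting model:

def days_until_threshold(daily_used_tb, capacity_tb, threshold=0.75):
    # Fit a simple least-squares line to daily usage samples and
    # extrapolate the days until usage exceeds threshold * capacity.
    n = len(daily_used_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_used_tb) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, daily_used_tb))
    slope /= sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # usage is flat or shrinking: no predicted runway end
    remaining = threshold * capacity_tb - daily_used_tb[-1]
    return max(0.0, remaining / slope)

# Six daily samples trending upward on a 100 TB cluster:
print(days_until_threshold([60, 61, 62.5, 63, 64, 65], capacity_tb=100))
# Roughly 10 days of runway remain before the 75% threshold is crossed.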


Full-stack HyperFlex upgrade


Intersight can perform a full-stack upgrade of a HyperFlex Edge cluster. At
the time of publication of this book, FI-attached clusters must be upgraded
through HyperFlex Connect.

To initiate the upgrade of a HyperFlex Edge cluster, the administrator must
click the ellipsis in the row representing that cluster (in the same manner
that HyperFlex Connect can be cross-launched) and select Upgrade as
shown below:

Figure 29: Initiating an upgrade of a HyperFlex Edge cluster


In the first step, the administrator selects the desired Upgrade Bundle. The
administrator does not have to download that software bundle; it is provided
by Intersight. As shown below, the full stack upgrade can be configured to
upgrade the hypervisor as well. It is recommended to perform the hypervisor
upgrade during this process if possible.

Figure 30: Step one of the full stack upgrade


Intersight will perform a series of validations before the upgrade can
proceed.

Figure 31: Pre-upgrade validations


The next screen simply shows the results of the validations. The image
below shows that all validations passed and the cluster meets upgrade
requirements. These validations help minimize the chances of encountering
an error during the upgrade process.

Figure 32: Results of compatibility checks


The last screen prompts for confirmation of the start of the upgrade. Virtual
machines will have to be evacuated from each host as each host is
upgraded. Depending on the hypervisor configuration, virtual machine
evacuation may occur automatically. See the image below for more details
related to virtual machine evacuation.

Figure 33: Confirm the start of the upgrade process


The HyperFlex Edge upgrade will run in the background. The status of the
upgrade can be monitored by selecting the Requests icon (the spinning blue
icon at the top of the Intersight title bar) and then selecting the upgrade
process. The following image shows the status of a HyperFlex Edge
upgrade in progress.

Figure 34: HyperFlex Edge cluster upgrade progress


Traditional storage operations

In addition to the powerful management capabilities that Intersight provides
for Cisco HyperFlex hyperconverged systems, Intersight can also provide
enhancements for traditional storage arrays from vendors such as Pure
Storage, Hitachi Data Systems, and NetApp. These traditional
storage targets are claimed using a similar process to other on-premises
targets via the Admin → Targets page in Intersight (see below). They are
claimed via the Assist function of the Intersight Appliance and managed
using the integrated storage management capabilities available from the
Operate → Storage page in Intersight.


Figure 35: Claiming a Pure Storage FlashArray target

The ability to claim external storage targets and the capabilities available
to those targets are subject to Intersight account licensing.


Once claimed, the Intersight Appliance’s Assist function can communicate
directly with these storage targets via their management APIs (e.g., Purity,
Data ONTAP). Claiming traditional storage targets enables Intersight to
collect their inventory details, which can be displayed via widgets on the
Monitor screen (see below) or through the standard inventory screens under
Operate → Storage.

Figure 36: Intersight dashboard widgets with storage inventory data
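
This inventory is also exposed through the Intersight API. The sketch below
is illustrative only: the resource path for Pure Storage arrays is an
assumption, and the API-key request signing that Intersight requires is
omitted (it is covered in the Programmability chapter):

import requests

session = requests.Session()  # request signing must be added; see above
resp = session.get(
    "https://intersight.com/api/v1/storage/PureArrays",  # assumed path
    params={"$select": "Name,Model,Version"},
)
for array in resp.json().get("Results", []):
    print(array["Name"], array.get("Model"), array.get("Version"))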


In either case, by clicking on the name of the storage array, detailed
information can be displayed about the array’s capacity, software version,
and its logical volumes, hosts, controllers, and disks (see below).

Figure 37: Storage array general information


Figure 38: Disk inventory information

While this level of visibility and inventory information is convenient to have
available in the same pane as other Intersight infrastructure items, Intersight
also enables traditional storage targets to be incorporated into workflows for
use cases such as provisioning new storage volumes and datastores,
shrinking or deleting storage volumes and reclaiming capacity, and
provisioning new storage hosts and host groups.

For a full understanding of Intersight’s orchestrator, including its ability to
incorporate tasks involving traditional storage arrays, please see the chapter
on Orchestration.


Wrapping up

Whether managing Cisco HyperFlex hyperconverged storage or traditional
converged infrastructure from external storage targets, Intersight provides a
consistent operational experience. Detailed inventory collection, dashboard
monitoring widgets, and utilization metrics help operators seamlessly
manage a diverse storage landscape.

HyperFlex supports Intersight’s native configuration constructs such as
pools, policies, and profiles, allowing organizations to easily deploy
consistent HX Clusters based on validated design principles. Configuration
for external storage targets can also be realized using Intersight’s
orchestration capabilities outlined in the Orchestration chapter.

Virtualization Operations


Introduction

A cloud operations platform should be capable of operating infrastructure
across multiple cloud targets, including the on-premises private cloud.
Inventory collection, visibility, and orchestration of resources are examples
of Intersight’s support for such environments, referred to as Intersight
Virtualization Services (IVS). IVS is made possible by the Device Connector
and Intersight Assist Service detailed in the Foundations chapter of this
book. To recap, the Assist Service is embedded in the Intersight Appliance
and allows Intersight to communicate with on-premises resources, such as
VMware vCenter, within a customer’s private network. This communication
is proxied and secured using the same durable websocket provided by the
Device Connector described throughout this book.


Figure 1: Intersight Assist Device Connector proxy

Once a hypervisor manager such as vCenter is claimed into Intersight as a
target via the Assist Service, all of its resources fall within the purview of
Intersight cloud operations. This chapter focuses on how to claim a
vCenter target within a customer environment, along with the monitoring and
configuration capabilities that are subsequently instantiated within Intersight.


Claiming a vCenter target

The claiming process starts like any other target claim, by navigating to
Admin → Targets and selecting Claim a New Target as shown below:

Figure 2: Claiming a new target


The target type, in this case, would be VMware vCenter under the
Hypervisor category.

Figure 3: Setting the target type


Because an Assist appliance is needed to communicate with a vCenter
target, the next step of the wizard prompts operators to select the
appropriate Intersight Assist appliance from the dropdown menu. The
appliance must have IP reachability to vCenter.

Figure 4: Completing the target claim


Contextual operations

Once the vCenter target is claimed, a new Virtualization option will appear in
the Operate side navigation menu. If Workload Optimizer has been
appropriately licensed, the vCenter target’s resources will also be
automatically stitched into the Supply Chain (see the Workload Optimizer
chapter for more details). Navigating to Operate → Virtualization will show
the inventory visibility that has now been pulled into Intersight as shown in
the screenshot below:

Figure 5: Viewing vCenter inventory in Intersight

The image above lists all the available vCenter Datacenters and associated
resources. Searching through this inventory follows the same look and feel
as other portions of the Intersight user interface. Selecting a particular
Datacenter in the table above will shift the user to a more context-specific
view of that Datacenter as shown in the screenshot below:

Figure 6: Detailed view of a vCenter Datacenter

Cisco Intersight: A Handbook for Intelligent Cloud Operations


Virtualization Operations 219

Any navigation from this point forward into the child resources will be tied to
the context of the selected Datacenter. The Clusters, Hosts, Virtual
Machines, and Datastores tabs will each show users a view of their
respective inventory including useful capacity and utilization metrics as
shown in the screenshots below:

Figure 7: Hosts inventory


Figure 8: Virtual machines inventory


One of the pull-through benefits of having these resources available in
the Intersight inventory is the ability to apply Intersight tags. Tags allow users
to apply their own preferred scheme to any resource tracked within the
Intersight inventory. Once tagged, these resources become much easier to
find within the UI and API alike. Tags can be applied to target resources in
bulk as shown in the screenshot below:

Figure 9: Tagging virtual machine resources


Once tagged, a user can search through the resource inventory using the
newly assigned tags.

Figure 10: Searching virtual machine inventory by tag
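
Because tags are first-class objects in the Intersight data model, the same
search can also be performed programmatically. The sketch below is
illustrative only: the tag values are examples, and the API-key request
signing that Intersight requires is omitted (see the Programmability chapter
for the full authentication setup):

import requests

session = requests.Session()  # request signing must be added; see above
params = {
    # OData-style filter: match resources carrying the tag env=prod
    "$filter": "Tags/any(t: t/Key eq 'env' and t/Value eq 'prod')",
    "$select": "Name,PowerState",
}
resp = session.get(
    "https://intersight.com/api/v1/virtualization/VirtualMachines",
    params=params,
)
for vm in resp.json().get("Results", []):
    print(vm["Name"], vm.get("PowerState"))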


Intersight adds additional intelligence to the various vCenter resource
views by linking in other resource relationships. One example of this can be
observed when selecting one of the entries from the Hosts tab, which
brings up a view similar to the one shown below:

Figure 11: Detailed view of a vCenter host


In the Details section of the view above, the physical server has now been
associated and linked with this ESXi host. Operators troubleshooting a
potential problem on this host can now quickly click on the hyperlinked
server and pivot into the details of that server as shown below:

Figure 12: Detailed view of a physical server


Virtualization orchestration

Intersight’s orchestration capabilities are covered in detail within the
Orchestration chapter of this book. It is worth reiterating in this section that
virtualization operations extend beyond just inventory collection and
visibility: users can stitch together a sophisticated set of tasks within
reusable and shareable workflows. Below is an image of some of the
virtualization-specific tasks that are available in the Workflow Designer, the
canvas for creating orchestration workflows.

Figure 13: Orchestration of virtualization tasks within Intersight

This type of orchestration will have some overlap with other virtualization-
related tools, especially the VMware suite of products. Many
organizations are familiar with, and may have operationalized, the vRealize
Suite or some of its sub-components, such as vRealize Automation. For
those organizations, the virtualization and orchestration capabilities in
Intersight should not be viewed as an alternative or wholesale replacement
for the VMware tools. Instead, Intersight functions as a hypervisor-
agnostic orchestrator. While there are some overlapping capabilities, the two
platforms have vastly different design goals.


Wrapping up

The integrations discussed in this chapter allow for a complete view of both
the virtualization and underlying hardware infrastructure from within
Intersight, making setup, configuration, operations, and troubleshooting
more efficient. Administrators no longer need to bounce between various
tools to understand the capacities, interactions, and configurations of the
physical server, storage, and virtualization layers. Intersight consolidates all
this information into a single source of truth.

In addition, the virtualization-specific orchestration capabilities of Intersight
range from mundane tasks, such as simply powering on a virtual machine, to
creating complex workflows that ensure consistent, repeatable operations
“from the ASIC to the App.” An entire stack can be completely automated,
from low-level hardware and firmware configuration all the way up to the
application, including the setup and configuration of the physical servers,
networks, storage, operating system, hypervisor, Kubernetes, and containers
that the application may require.

The toolset integration within Intersight is flexible enough to give
administrators options for management, automation, and orchestration,
whether contained entirely within Intersight or with the use of Intersight’s
open API capabilities to simplify integration with other tools such as vRealize
Automation, Terraform, Ansible, or SaltStack.

The virtualization capabilities will be extended not only to additional
hypervisors with the launch of the Intersight Workload Engine, discussed in
the Kubernetes chapter, but also to virtualization infrastructure in the public
cloud. This allows a cloud-agnostic workflow to be used to deploy virtual
machines on a variety of hypervisors running on-premises or on public
cloud services.

Kubernetes

Introduction

With ever-changing business needs and pressure to increase innovation
while reducing costs, there is a constant stream of new technology
innovations positioning themselves to solve these challenges. Whether
they come from traditional IT companies, innovative startups, or the open
source community, it is important to evaluate why a particular solution, tool,
or technology is being incorporated and to identify which problems it will
solve in a particular organization. Most recently, the industry has focused its
attention on Kubernetes (K8s).

Kubernetes has quickly become the de facto standard among container
orchestration platforms due to its ability to orchestrate the deployment,
scheduling, health monitoring, updating, and scaling of containerized
applications, all driven by a very extensible API framework. In a
well-running Kubernetes environment, a node can fail and applications will
be automatically spread across the remaining nodes, bringing enhanced
reliability. In addition, autoscaling and rolling updates can deliver near-
effortless agility. Deploying a K8s cluster offers many flexible options, such
as on-premises atop virtualized or physical hardware or through various
cloud provider services such as Amazon EKS or Google GKE, giving
developers a consistent infrastructure experience to support their
applications, regardless of location.

It can be easy to read articles, join webinars, and investigate each of these
technology trends in a vacuum, but with Kubernetes there is a wealth of
complexity below the waterline. Beyond just the basic technology, the
items to be considered range from organizational culture, structure, and
staff skill sets to existing operationalized tooling and much more. Venturing
into the Kubernetes “water” can be daunting not only because it brings new
abstractions such as pods, nodes, and deployments, but because it is also
inherently tied to containerization, microservices, and modern application
development practices such as continuous integration and continuous
delivery (CI/CD), software pipelines, GitOps, and Infrastructure as Code.

To ease the adoption and lower the learning curve, Cisco has brought
Kubernetes services into Intersight. To address the fundamental challenges
with Kubernetes adoption and management, Intersight provides a turn-key
SaaS solution for deploying and operating consistent, production-grade
Kubernetes anywhere, all backed by the Cisco Technical Assistance Center
(TAC). This eliminates the effort of professionally installing, configuring,
optimizing, and maintaining multiple Kubernetes deployments, which speeds
up adoption, ensures enterprise standards, and delivers value directly to the
business.

With Intersight, the right platform for each application can be chosen without
having to worry about the operational model changing or teams needing to
be cross-trained on new tools. Additionally, monitoring, alerting, and even
intelligence-driven optimization are normalized across both traditional and
modern application platforms, enabling the various IT silos in an organization
to communicate effectively with a shared set of metrics, data, and tooling.


Intersight Kubernetes Service

While Kubernetes usage often emerges in an organization organically,
perhaps through a development team or special project, the burden of
deploying and operating production-grade clusters at scale oftentimes falls
on IT Operations teams.

In these situations, IT administrators are responsible not only for the
Kubernetes layer, but also for the end-to-end virtual and/or physical
infrastructure on which the clusters run. These teams are responsible for
transforming the initial “snowflake” Kubernetes instances into a solid,
common platform that allows for easy deployment, observation, and
management across the entire infrastructure stack, extending from the
servers and storage through the networking and down to the clusters
themselves.

The operational needs of Kubernetes environments extend beyond initial
deployment and configuration into Day 2 support, including data protection,
automation, orchestration, and optimization across the infrastructure and
application stacks. Moreover, IT Operations teams must also enable agility
for the business. The application developers, who are essentially their
customers, need to be able to easily access and automate Kubernetes
clusters with their CI/CD pipelines through robust, mature APIs that abstract
the complexity of the underlying infrastructure.

It is a tall order for any IT Operations team, and one made more difficult as
demand continues to increase for Kubernetes clusters anywhere — at the
edge, on-prem, or public cloud.

Intersight Kubernetes Service (IKS) was created to address these
challenges.

IKS enables IT Operations teams to automate the installation, deployment,
and management of the myriad components necessary for a production
Kubernetes cluster (regardless of where the cluster resides) and is fully
backed by Cisco TAC.

Figure 1: Expanding Intersight with Kubernetes

The core of IKS is its curated Kubernetes distribution complete with the
integration of networking, storage provider interfaces, load balancers, and
DevOps tooling. The IKS distribution includes native integrations that help
expand its capabilities, such as:

• Cisco HyperFlex (HX) integration for enterprise-class storage
capabilities (e.g., persistent volume claims and cloud-like object
storage)
• Cisco Application Centric Infrastructure (Cisco ACI) integration for
networking

IKS offers a choice of deployment options: on-premises atop VMware ESXi
hypervisor nodes, on-premises leveraging the new Intersight Workload
Engine to deploy on either virtualized or bare metal servers, or directly to
public cloud services such as Amazon EKS or Google GKE. All this is
available through a centralized management and policy platform: Intersight.

Figure 2: IKS architecture

Consider an example from the retail sector: a new container-based
application needs to be deployed, requiring a small Kubernetes cluster in
each of hundreds of retail stores, along with a few core clusters in privately
owned or co-located data centers. Without Intersight, just preparing the
physical infrastructure at those sites would likely involve a nightmare
scenario of endless server firmware upgrades, storage configurations, and
OS and hypervisor installations.


Beyond that, without IKS, installing and managing a Kubernetes cluster at
each site in any sort of consistent fashion would border on the impossible.
However, with the power of both Intersight and the Intersight Kubernetes
Service, these challenges can be overcome in an automated,
straightforward, and maintainable manner, alleviating the stresses that many
overloaded operations teams are experiencing.


Benefits

With IKS providing a solution for managing production-grade Kubernetes
anywhere, organizations can reap multiple benefits:

• Simplify standardization and governance by consuming resource
pools and reusable policies through Kubernetes Cluster Profiles
• Easily deploy and manage consistent clusters regardless of physical
location: at the edge, in a data center, or in the public cloud
• Reduce risk and software licensing costs by deploying Kubernetes
clusters to natively integrated, bare metal IWE server and storage
infrastructure, all backed by Cisco TAC
• Automate and simplify Kubernetes with curated additions and
optimizations such as service mesh, networking, monitoring, logging,
and persistent object storage
• Reduce risk and cost with a security-hardened, highly available
platform


Creating clusters with IKS

As with other Intersight services, Intersight Kubernetes Service follows a
similar methodology of defining resources as policies and pools and
consuming these resources through the assignment of a profile.

To deploy a Kubernetes cluster on-premises, the administrator first needs to
make sure the Intersight Appliance is present in the data center (the
Appliance is outlined in the Foundations chapter). Once the Appliance is
claimed in Intersight, policies and Kubernetes Cluster Profiles need to be
created. As with other Intersight profiles, consuming resources as pools and
policies provides consistency and validation of the configuration. IKS will
validate each step of the cluster creation process and deploy the cluster, as
depicted in the figure below.

Figure 3: IKS cluster creation workflow

Intersight resources form the logical building blocks of an IKS Cluster and
associated Kubernetes Cluster Profile. The following resource types may be
used and re-used to define these clusters:

• IP Pools
• Networking policies
• Node Pools
• Cluster add-ons


Building an IKS Cluster Profile involves consuming resources of the above
types. Some of these resources are specific to IKS. In particular, IKS requires
an administrator to define the:

• Cluster configuration
  • IP Pools
  • Load balancer configuration
  • SSH users and corresponding public key
  • Network CIDR blocks for service IP and pod IP address
    consumption
• Node Pools configuration
  • Kubernetes version
  • Master Node Pool, for each pool:
    - Master Node infrastructure provider
    - Master Node VM configuration (CPU, disk size, memory)
    - Master Node count and IP Pool
  • Worker Node Pools, for each pool:
    - Count
    - IP Pool
• Container image management
  • Named trusted root certificate authorities and unsigned registries
  • Container runtime Docker HTTP proxy configuration


The definitions noted above may be defined and consumed as policies
and pools or configured directly within the IKS Cluster Profile.
Intersight provides a simple step-based form to collect each of the
data points listed above and ultimately deploy a unique cluster instance
as shown below:

Figure 4: Cluster deployment in progress


Upon successful cluster deployment through Intersight, the
Operate → Kubernetes tab will contain a reference to the deployed
cluster. Each IKS cluster may also be managed through traditional
Kubernetes tools, such as the kubectl command-line tool. To access
and manage a specific IKS cluster, simply choose Download
Kubeconfig for the desired cluster as shown in the image below:

Figure 5: Deployed IKS clusters showing Download Kubeconfig
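
Once the kubeconfig file is downloaded, the cluster behaves like any
conformant Kubernetes endpoint. As a brief sketch using the official
Kubernetes Python client (the file name below is illustrative):

from kubernetes import client, config

# Load the kubeconfig downloaded from Intersight (path is illustrative).
config.load_kube_config(config_file="./iks-cluster-1-kubeconfig.yaml")

# Equivalent to: kubectl get nodes
v1 = client.CoreV1Api()
for node in v1.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)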


Intersight Workload Engine

Intersight Workload Engine (IWE) delivers an IKS-integrated,
container-as-a-service hardware and software platform that
simplifies provisioning and ongoing operations for Kubernetes from
the data center to the edge. IWE combines the benefits of Cisco’s
hyperconverged infrastructure offering, atop a custom hypervisor,
with the flexibility of IKS, delivering a robust platform for hosting
Kubernetes clusters deployed on either bare metal or virtualized
nodes.

IWE integrates these functions:

• Software-defined storage, compute, and networking, supplied
by the Cisco HyperFlex Data Platform
• SaaS-delivered hardware and software infrastructure
management from Cisco Intersight
• IKS-based Kubernetes clusters, deployed and managed
through Intersight

As powerful as each of these capabilities is individually, together
they create a complete platform for cloud-native application
development and delivery.

The full IWE stack is curated by Cisco, hardened, and productized for
production-ready enterprise container deployment, as a service. IWE
ensures multicloud development consistency with 100% upstream
Kubernetes compliance and up-to-date versions while providing
integrated lifecycle management with guided installation and upgrade
support, greatly simplifying the administration experience for
operations teams.


Wrapping up

Kubernetes is a quickly evolving technology, with major features
being released at a pace that many organizations may struggle to
keep up with. Intersight Kubernetes Service simplifies this experience,
so IT organizations can focus more on their products and services and
less on the maintenance of the systems on which they run. As the
Intersight Kubernetes Service evolves, expect to see new capabilities
incorporated into the distribution, deeper integration with cloud-native
infrastructure, and easier ways to manage and maintain the clusters
under its purview.

Workload Optimization

Introduction

Prime directive
IT Operations teams have essentially one fundamental goal, a prime
directive against which their success is constantly measured: to deliver
performant applications at the lowest possible cost while maintaining
compliance with IT policies.

This goal is thwarted by the almost intractable complexity of modern
application architectures (whether virtualized, containerized, monolithic or
microservices-based, on-premises or public cloud, or a combination of
them all) as well as by the sheer scale of the workloads under management
and the constraints imposed by licensing and placement rules. Having a
handle on which components of which applications depend on which pieces
of the infrastructure is challenging enough; knowing where a given workload
is supposed to, or allowed to, run is more difficult still; knowing what
specific decisions to make at any given time across the multicloud
environment to achieve the prime directive is a Herculean task, beyond
human scale.

As a result, this prime directive is oftentimes met with a brick wall.

Workload Optimizer aims to solve this challenge through Application
Resource Management (ARM): assuring that applications get the resources
they need, when they need them, to perform well while simultaneously
minimizing cost (public cloud) and/or optimizing resource utilization (on-
premises), all the while complying with workload policies.


Traditional shortfalls
The traditional method of IT resource management has fallen short in the
modern data center. This process-based approach typically involves:

1 Setting static thresholds for various infrastructure metrics, such as
CPU or memory utilization

2 Generating an alert when these thresholds are crossed

3 Relying on a human being viewing the alert to:

a Determine whether the alert is anything to worry about (what
percentage of alerts on any given day are simply discarded in
most IT shops? 70%? 80%? 90%?)

b If the alert is worrisome, determine what action to take to push
the metric back below the static threshold

4 Executing the above action, then wash, rinse, repeat

This approach has significant, fundamental flaws.

First, most such metrics are merely proxies for workload performance; they
don’t measure the health of the workload itself. High CPU utilization on a
server may be a positive sign that the infrastructure is well-utilized and does
not necessarily mean that an application is performing poorly. Even if the
thresholds aren’t static but are centered on an observed baseline, there’s no
telling whether deviating from the baseline is good or bad, or simply a
deviation from normal.

Second, most thresholds are set low enough to provide human beings time
to react to an alert (after having frequently ignored the first or second
notification), meaning expensive resources are not used efficiently.

Third, and maybe most importantly, this approach relies on human beings
to decide what to do with any given alert. An IT admin must somehow divine
from all current alerts, not just which are actionable, but which specific


actions to take! These actions are invariably intertwined with and will affect
other application components and pieces of infrastructure in ways that are
difficult to predict.

• A high CPU alert on a given host might be addressed by moving a VM
to another host, but which VM?
• Which other hosts?
• Does that other host have enough memory and network capacity for
the intended move?
• Will moving that VM create more problems than it solves?
• Multiply that analysis by every potential metric and every application
workload in the environment and the problem becomes exponentially
more difficult.

Lastly, the standard operating procedure is usually simply to clear the alert,
but as stated above, any given alert is not a true indicator of application
performance. As every IT admin has seen time and again, healthy apps can
generate red alerts and "all green" infrastructures can still have poorly
performing workloads. A different paradigm is needed.

Paradigm shift
Ultimately, Workload Optimizer (a separately licensed feature set within the
Intersight platform) provides the needed new paradigm.

Workload Optimizer is an analytical decision engine that generates
actions, optionally automatable in most cases, that drive the IT environment
toward a Desired State where workload performance is assured and the cost
is minimized. It uses economic principles, the fundamental laws of supply
and demand, in a market-based abstraction to allow infrastructure entities
(e.g., hosts, VMs, containers, storage arrays) to shop for commodities such
as CPU, memory, storage, or network.


Actions are the result of this market analysis — for example, a physical host
that is maxed out on memory (high demand) will be selling its memory at a
high price to discourage new tenants, whereas a storage array with excess
capacity will be selling its space at a low price to encourage new workloads.
While all this buying and selling takes place behind the scenes within the
algorithmic model and does not correspond directly to any real-world dollar-
values, the economic principles are derived from the behaviors of real-world
markets. These market cycles occur constantly, in real time, to assure that
actions are always driving the environment toward the Desired
State. In this paradigm, workload performance and resource optimization are
not an either/or proposition; in fact, they must be considered together to
make the best decisions possible.
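
A toy version of such a pricing function (purely conceptual, not Workload
Optimizer's actual model) makes the behavior easy to see: as utilization of a
commodity approaches capacity, its price rises steeply and repels new
buyers, while lightly used resources stay cheap and attract them:

def commodity_price(utilization):
    # Toy supply/demand curve: price grows roughly as 1/(1 - u)^2, so a
    # nearly saturated resource becomes prohibitively expensive.
    u = min(max(utilization, 0.0), 0.99)  # clamp to avoid division by zero
    return 1.0 / (1.0 - u) ** 2

print(round(commodity_price(0.20), 2))  # 1.56  -> cheap, attracts workloads
print(round(commodity_price(0.90), 2))  # 100.0 -> expensive, repels them

In the full model, every entity prices each of its commodities this way, and
buyers (VMs, containers, and so on) continuously shop for the cheapest
combination of resources that satisfies their demand.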

Workload Optimizer can be configured to either Recommend or
Automate infrastructure actions such as placement, resizing, or scale
up/down for either on-premises or cloud resources.


Users and roles

Workload Optimizer leverages Intersight’s core user, privilege, and role-
based access control functionality as described in the Foundations chapter.
Intersight Administrators can assign various pre-defined privileges specific
to Workload Optimizer (see the table below for names and details) to a given
Intersight role to allow for a division of privileges as needed within an
organization. By default, an Intersight Administrator has full Administrator
privileges in Workload Optimizer, and an Intersight Read-Only user is
granted Observer privileges. Other Workload Optimizer privileges (e.g., WO
Advisor, WO Automator) must be explicitly assigned to a role via
Settings → Roles.


Table 1: Privileges and permissions

Workload Optimizer Observer: A Workload Optimizer Observer can view the
state of the environment and recommended actions. They cannot run plans
or execute any recommended actions.

Workload Optimizer Advisor: A Workload Optimizer Advisor can view all
Workload Optimizer charts and data and run plans. They cannot reserve
workloads or execute any recommended actions.

Workload Optimizer Automator: A Workload Optimizer Automator can
execute recommended actions and deploy workloads. They cannot perform
administrative tasks.

Workload Optimizer Deployer: A Workload Optimizer Deployer can view all
Workload Optimizer charts and data, deploy workloads, and create policies
and templates. They cannot run plans or execute any recommended actions.

Workload Optimizer Administrator: A Workload Optimizer Administrator can
access all Workload Optimizer features and perform administrative tasks to
configure Workload Optimizer.


Targets and configuration

For Workload Optimizer to generate actions, it needs information to
analyze. It accesses the information it needs via API calls to targets, as
configured under the Admin tab described in the Foundations chapter. This
information (metadata, telemetry, and metrics) gathered from infrastructure
targets must be both current and actionable.

The number of possible data points available for analysis is effectively
infinite, so Workload Optimizer will only gather data that has the potential to
lead to or impact a decision. This distinction is important, as it can help
explain why a given target is or is not available or supported: in theory,
anything with an API could be integrated as a target, but the key question
is “what decision would Workload Optimizer make differently if it had
this information?”

With that said, one of the great advantages of this approach, and the
economic abstraction that underpins the decision engine, is that it scales.
Human beings are easily overwhelmed by data, and more data usually just
means more noise that confuses the situation. In the case of Workload
Optimizer’s intelligent decision engine, the more data it has from a myriad of
heterogeneous sources, the smarter it gets. More data in this case means
more signal and better decisions.


Figure 1: Communication to public cloud services and on-premises resources

Workload Optimizer accesses its targets in two basic ways (see above):

1 Making direct API calls from the Intersight cloud to other cloud
services and platforms such as Amazon Web Services, Azure, and
AppDynamics SaaS — i.e., cloud to cloud, directly; and

2 Via the Assist function of the Intersight Appliance, which enables
Workload Optimizer to communicate with on-premises infrastructure
that would otherwise be inaccessible behind an organization’s
firewall.


It is therefore possible to consume Workload Optimizer purely as SaaS — no
Intersight Appliance is needed if all the targeted workloads and
infrastructure reside in the public cloud. Also, UCSM-managed devices
(e.g., Fabric Interconnects and HX Clusters) with native Device Connectors
directly claimed in Intersight do not need to leverage the Intersight
Appliance.

While all communication to targets occurs via API calls, without any
traditional agents required on the target side, Kubernetes clusters do require
a unique setup step: deploying the Kubernetes Collector on a node in the
target cluster. The Collector runs with a service account that has a cluster-
admin role and contains its own Device Connector, essentially proxying
communications and commands between Intersight and the native cluster
kubelet or node agent. In that respect, the Collector serves a role analogous
to the one the Intersight Appliance plays in enabling secure Intersight
communications with third-party hardware on-premises.

One of the richest sources of workload telemetry for Workload Optimizer
comes from Application Performance Management (APM) tools such as
Cisco’s AppDynamics. As noted in the introduction above, the core focus of
Workload Optimizer is Application Resource Management. However, for an
application to truly perform well, it needs more than just the right physical
resources at the right time; it also needs to be written and architected well.

AppDynamics delivers on that promise by providing developers and
application IT teams with a detailed logical dependency map of the
application and its underlying services, fine-grained insight into individual
lines of problematic code and poorly performing database queries and their
impact on actual business transactions, and guidance for troubleshooting
poor end-user experiences. The combination of Application Performance
Management (assuring good code and architecture) with Workload
Optimizer’s Application Resource Management (the right resources at the
right time for the lowest cost) completes the picture.


Figure 2: AppDynamics integration into Workload Optimizer


The Supply Chain

Workload Optimizer uses the information it gathers from targets to stitch
together a logical dependency mapping of all entities in the customer
environment, from the logical application at the top to the physical hosts,
storage, network, and facilities (or equivalent public cloud services) below.
This mapping is called the Supply Chain and is the primary method of
navigation in Workload Optimizer (see below).

Figure 3: The Supply Chain


The Supply Chain shows each known entity as a colored ring. The color of
the ring depicts the current state of the entity with respect to pending
actions — red if there are critical pending performance or compliance
actions, orange for prevention actions, yellow for efficiency-related actions,
and green if no actions are pending. The center of each ring displays the
known quantity of the given entity, and the connecting arrows depict
consumers’ dependencies on their providers.

The Supply Chain is clickable — clicking on a given entity raises a context-
specific window with detailed information about that entity type. For
example, in the screenshot below, the Container entity in the Supply Chain
has been selected, which shows many container-relevant widgets, as well as
a series of tabs for policies, actions, etc.


Figure 4: Access additional details by clicking on Supply Chain entities

Furthermore, the Supply Chain is dynamic, meaning that if a particular entity,
such as a specific business application, is selected (i.e., AD-Financial-Lite-
ACI in the screenshot example below), then the Supply Chain will
automatically reconfigure itself to depict just that single business application
and only its related dependencies (including updating the color of the
various rings and their respective entity counts). This is known as narrowing
the scope of the Supply Chain view and is extremely helpful to home in on a
specific area of concern or work. Clicking on the Home link at the top-left
within Workload Optimizer, or the Optimize → Overview tab on the main
Intersight menu bar on the left, will always return to the full scope of the
Supply Chain.


Figure 5: Scoped Supply Chain view of a single application


Actions

The essence of Workload Optimizer is action. Many infrastructure tools
promise visibility, and some even provide insights on top of that visibility,
but Workload Optimizer is designed to go further and act, in real time, to
continuously drive the environment toward the Desired State. All actions
follow an opt-in model — Workload Optimizer will never take an action unless
given explicit permission to do so, either via direct user input or through a
custom policy. The list of current actions can be viewed in various ways: via
the Pending Actions dashboard widget in the Supply Chain view, via the
Actions tab after clicking on a component in the Supply Chain, or in various
scoped views and custom dashboards.


Figure 6: Execute actions

Bear in mind that the list of supported actions and their ability to be
executed via Workload Optimizer varies widely by target type and is updated
frequently. A current, detailed list of actions and their execution support
status via Workload Optimizer can be found in the Target Configuration
Guide.


When organizations first start using Workload Optimizer, the number of
pending actions can be significant, especially in a large, active, and/or
poorly optimized environment. New organizations will generally take a
conservative approach initially, executing actions manually and verifying as
they go that the actions are improving the environment and moving them
closer to the Desired State. Ultimately though, the power of Workload
Optimizer is best realized through a judicious implementation of groups and
policies to simplify the operating environment and to automate actions
where possible.


Groups and policies

Groups
Workload Optimizer provides the capability to create logical groups of
resources (VMs, hosts, datastores, disk arrays, etc.) for ease of
management, visibility, and automation. Groups can be either static (a fixed
list of a named set of resources) or dynamic. Dynamic Groups self-update
their membership based on specific filter criteria — a query, effectively — to
significantly simplify management. For example, one can create a dynamic
group of VMs that belong to a specific application’s test environment, and
could further restrict that group membership to just those running Microsoft
Windows (see below — note the use of “.*” as a catchall wildcard in the filter
criteria).
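
To make the filter idea concrete, the short Python sketch below mimics how
a dynamic group behaves as a live regular-expression query. The VM records
and field names are purely illustrative, not Workload Optimizer’s internal
model.

import re

vms = [
    {"name": "finance-test-01", "os": "Microsoft Windows Server 2019"},
    {"name": "finance-test-02", "os": "Ubuntu Linux 20.04"},
    {"name": "hr-prod-01", "os": "Microsoft Windows Server 2016"},
]

# Membership is a query, not a fixed list: the name must match the app's
# test environment and the OS must match "Microsoft Windows.*" (".*" being
# the catchall wildcard).
name_filter = re.compile(r"finance-test-.*")
os_filter = re.compile(r"Microsoft Windows.*")

members = [vm["name"] for vm in vms
           if name_filter.match(vm["name"]) and os_filter.match(vm["os"])]
print(members)  # ['finance-test-01']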


Figure 7: Creation of dynamic groups based on filter criteria

Generally, Dynamic Groups are preferred due to their self-updating nature.
As new resources are provisioned or decommissioned, or as their status
changes, their Dynamic Group membership adjusts accordingly, without any
user input. This benefit is difficult to overstate, especially in larger
environments. Use Dynamic Groups whenever possible and Static Groups
only when strictly necessary.

Groups can be used in numerous ways. From the Search screen, a given
group can be selected to automatically scope the Supply Chain view to just
that group. This is a handy way to zoom in on a specific subset of the
infrastructure in a visibility or troubleshooting scenario or to customize a
given widget in a dashboard. Groups can also be used to easily narrow the
scope of a given plan or placement scenario (see next section).


Policies
One of the most critical benefits of groups arises when they are combined
with policies. In Workload Optimizer, all actions are governed by one or
more policies, including default global policies, user-defined custom
policies, and imported placement policies. Policies provide extremely fine-
grained control over the actions and automation behavior of Workload
Optimizer.

Policies fall under two main categories: Placement and Automation. In both
cases, Groups (Static or Dynamic) are used to narrow the scope of the
policy.

Placement policies
Placement policies govern which consumers (VMs, containers, storage
volumes, datastores) can reside on which providers (VMs, physical hosts,
volumes, disk arrays).

Affinity/Anti-affinity
The most common use for placement policies is to create affinity or anti-
affinity rules to meet a business need. For example, consider two Dynamic
Groups: one of VMs and another of physical hosts, both owned by the Test
and Dev team. To ensure that Test and Dev VMs always run on Test and Dev
hosts, an administrator can create a placement policy that enables such a
constraint in Workload Optimizer (see below).


Figure 8: Creating a new placement policy

That constraint — that policy — will restrict the underlying economic decision
engine that generates actions. The buying decisions that the VMs within the
Test and Dev group make when shopping for resources will be restricted to
just the Test and Dev hosts, even if there might be other hosts that could
otherwise serve those VMs. One might similarly constrain certain workloads
with a specific license requirement to only run on (or never run on) a given
host group that is (or isn’t) licensed for that purpose.

Merge
Another placement policy type that can be especially useful is the Merge
policy. Merge policies logically combine two or more groups of resources
such that the economic engine treats them as a single, fungible asset when
making decisions.


The most common example of a Merge policy is to combine two or more VM
clusters such that VMs can be moved between them. Traditionally, VM
clusters are siloed islands of compute resources that can’t be shared.
Sometimes this is done for specific business reasons, such as separating
accounting for different data center tenants — in other words, sometimes the
silos are built intentionally. But many times they are unintentional: the
fragmentation and subsequent underutilization of resources is merely a by-
product of the artificial boundary imposed by the infrastructure and
hypervisor. In these scenarios, administrators can create a Merge policy that
logically joins multiple clusters’ compute and storage resources, enabling
Workload Optimizer to consider the best location for any given workload
without being constrained by cluster boundaries. Ultimately this leads to
optimal utilization of all resources in a continued push toward the Desired
State.

Automation policies
The second category of policy in Workload Optimizer is the Automation
policy. Automation policies govern how and when Workload Optimizer
generates and executes actions. Like Placement policies, Automation
policies are restricted to a specific scope of resources based on Groups;
unlike Placement policies, however, they can also be restricted to run at
specific times via schedules. Global Default policies govern any resources
that aren’t otherwise governed by another policy. As such, use extra caution
when modifying a Global Default policy, as any changes can be far-reaching.

Automation policies provide great control — either broad or extremely fine-
grained — over the behavior of the decision engine and how actions are
executed. For example, it’s common for organizations to enable non-
disruptive VM resize-up actions for CPU and memory (for hypervisors that
support such actions), but some organizations wish to further restrict these
actions to specific groups of VMs (e.g., Test and Development only, not
Production), to occur during certain pre-approved change windows, or to
control the increment of growth. Most critically, automation policies enable
supported actions to be executed automatically by Intersight, avoiding the
need for human intervention.


Best practices
When implementing placement and automation policies, a crawl-walk-run
approach is advisable.

Crawl
Crawling involves creating the necessary groups for a given policy, creating
the policy scoped to those groups, and setting the policy’s action to
manual so that actions are generated but not automatically executed.

This method provides administrators with the ability to double-check the
group membership and manually validate that the actions are being
generated as expected for only the desired groups of resources. Any
needed adjustments can be made before manually executing the actions and
validating that they do indeed move the environment closer to the Desired
State.

Walk
Walking involves changing an Automation Policy’s execution mechanism to
automatic for relatively low-risk actions. The most common of these are VM
and datastore placements, non-disruptive up-sizing of datastores and
volumes, and non-disruptive VM resize-up actions for CPU and memory.

Modern hypervisors and storage arrays can handle these actions with little to
no impact on the running workloads, and these generally provide the
greatest bang for the buck for most environments. More conservative
organizations may want to begin automating with a lower priority subset of
their resources (such as test and development systems), as defined by
Groups. Combining the above with a Merge Policy to join multiple clusters
provides that much more opportunity for optimization in a reasonably safe
manner.


Run
Lastly, running typically involves more complex policy interactions, such as
schedule implementations, before- and after-action orchestration steps, and
rolling out automation across the organization, including production
environments.

That said, it is critical at this stage to have well-defined groups to restrict
unwanted actions. Many off-the-shelf applications such as SAP have
extremely specific resource requirements that must be met to receive full
vendor support. In those cases, organizations will typically create a group
specific to that application and add a policy for that group which disables all
action generation, effectively telling Workload Optimizer to ignore the
application. This can also be done for bespoke applications for which the
development teams have similarly stringent resource requirements.


Planning and placement

Plan
Since the entire foundation of Workload Optimizer’s decision engine is its
market abstraction governed by the economic laws of supply and demand, it
is a straightforward exercise to ask what-if questions of the engine in the
Plan module.

The Plan function enables users to virtually change either the supply side
(add/remove providers such as hosts or storage arrays) or the demand side
(add/remove consumers such as VMs or containers) or both, and then
simulate the effect of the proposed change(s) on the live environment. Under
the hood, this is a simple task for Workload Optimizer since it merely needs
to run an extra market cycle with the new (simulated) input parameters. Just
as in the live environment, the results of a Plan (see below) are actions.

Plans answer questions such as:

• If four hosts were decommissioned, what actions would need to be
taken to handle the workloads that they are currently running?

• Does capacity exist elsewhere to handle the load, and if so, where
should workloads be moved?

• If there is not enough spare capacity, how much more, and of what
type, will need to be bought/provisioned?


Figure 9: Results of an added workload plan

This capability takes the concept of traditional capacity planning, which can
be a relatively crude exercise in projecting historical trend lines into the
future, to a new level: Workload Optimizer plans tell the administrator
exactly what actions will need to be taken in response to a given set of
changes to maintain the Desired State. One of the most frequently used Plan
types is the Migrate to Cloud simulation, which will be addressed in greater
detail in the next section covering the public cloud.

Placement
The Placement module in Workload Optimizer is a variation on the Plan
theme but with real-world consequences. Placement reservations (see
below) allow an administrator, who knows that new workloads are coming
into the environment soon, to alter the demand side of all future market
cycles, taking the yet-to-be-deployed workloads into account.


Figure 10: Creating a placement reservation

Such reservations force the economic engine to behave as if those
workloads already existed, meaning it will generate real actions accordingly.
Reservations may therefore result in real changes to the environment (if
automation policies are active) and/or real recommended pending actions
such as VM movements and server or storage provisioning to accommodate
the proposed new workloads.

Placement reservations are a great way to both plan for new workloads and
to ensure that resources are available when those workloads deploy. A
handy feature of any Placement reservation is the ability to delay making the
reservation active until a point in the future, including the option of an end
date for the reservation. This delays the effect of the reservation until a time
closer to the actual deployment of the new workloads.


Public cloud

In an on-premises data center, infrastructure is generally finite in scale and
fixed in cost. By the time a new physical host hits the floor, the capital has
been spent and has taken its hit on the business’s bottom line. Thus, the
Desired State in an on-premises environment is to assure workload
performance and maximize utilization of the sunk cost of capital
infrastructure. In the public cloud, however, infrastructure is effectively
infinite. Resources are paid for as they are consumed, usually from an
operating expenditure budget rather than a capital budget.

Since the underlying market abstraction in Workload Optimizer is extremely
flexible, it easily adjusts to this nuance. In the public cloud, the Desired
State is to assure workload performance and minimize spend. This is a
subtle but key distinction, as minimizing spend in the public cloud does not
always mean placing a workload on the cloud VM instance that perfectly
matches its requirements for CPU, memory, storage, etc.; instead, it means
placing that workload on the cloud VM template that results in the lowest
possible cost while still ensuring performance.

The public cloud’s vast array of instance sizes and types offers endless
choices for cloud administrators, all with slightly different resource profiles
and costs. There are hundreds of different instance options in AWS and
Azure, with new options and pricing emerging almost daily. To further
complicate matters, administrators have the option of consuming instances
in an on-demand fashion — i.e., pay as you use — or via Reserved Instances
(RIs), which are paid for in advance for a specified term, usually a year or
more. RIs can be incredibly attractive as they are typically heavily discounted
compared to their on-demand counterparts, but they are not without their
pitfalls.

The fundamental challenge of consuming RIs is that a public cloud customer
will pay for the RI whether they use it or not. In this respect, RIs are more
like the sunk cost of a physical server on-premises than the ongoing cost of
an on-demand cloud instance. One can think of on-demand instances as
being well-suited for temporary or highly variable workloads, analogous to a
city dweller taking a taxi: usually cost-effective for short trips. RIs are akin to
leasing a car: often the right economic choice for longer-term, more
predictable usage patterns (say, commuting an hour to work each day).
Again, as the consumption model changes, the flexibility of the underlying
economic abstraction of Workload Optimizer is up to the challenge.
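
The arithmetic behind any single such decision is simple enough; a toy
break-even sketch in Python follows, using hypothetical prices rather than
real AWS or Azure rates. The difficulty Workload Optimizer addresses is
performing this analysis continuously, across thousands of workloads and
hundreds of instance types.

HOURS_PER_YEAR = 8760

on_demand_rate = 0.20    # $/hour, hypothetical
ri_annual_cost = 900.00  # $/year, paid up front, hypothetical

# Utilization above which the RI is cheaper than paying on demand
break_even_hours = ri_annual_cost / on_demand_rate
break_even_util = break_even_hours / HOURS_PER_YEAR
print(f"RI pays off above {break_even_util:.0%} utilization "
      f"({break_even_hours:.0f} hours/year)")
# -> RI pays off above 51% utilization (4500 hours/year)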

When faced with myriad instance options, cloud administrators are generally
forced down one of two paths: 1) only purchasing RIs for workloads that are
deemed static and consuming on-demand instances for everything else
(hoping, of course, that static workloads really do remain that way); or 2)
picking a handful of RI instance types — e.g., small, medium, and large — and
shoehorning all workloads into the closest fit. Both methods leave a lot to be
desired. In the first case, it’s not at all uncommon for a static workload’s
demand to change over a year as new end users are added or new
functionality comes online. In these cases, the workload will need to be
relocated to a new instance type, and the administrator will have an empty
hole to fill in the form of the old, already-paid-for RI (see examples below).


Figure 11: Fluctuating demand creates complexity with RI consumption

What should be done with that hole? What’s the best workload to move into
it? And if that workload is coming from its own RI, the problem simply
cascades downstream. Furthermore, even if the static workloads really
remain static over long stretches, the conservative administrator wary of
overspending may be leaving money on the table by consuming on-demand
instances where RIs would be more cost-effective.

In the second scenario, limiting the RI choices almost by definition means
mismatching workloads to instance types, negatively affecting workload
performance, cost savings, or both. In either case, human beings, even with
complicated spreadsheets and scripts, will invariably get the answer wrong
because the scale of the problem is too large and everything keeps
changing, all the time; the analysis done last week is likely to be invalid this
week.


Thankfully, Workload Optimizer understands both on-demand instances and
RIs in detail through its direct API target integrations. Workload Optimizer
constantly receives real-time data on consumption, pricing, and instance
options from the cloud providers, combining that data with knowledge of
applicable customer-specific pricing and enterprise agreements to
determine the best actions available at any given point in time (see below).

Figure 12: A pending action to purchase additional RI capacity in Azure

Not only does Workload Optimizer understand current and historical
workload requirements and an organization’s current RI inventory, but it can
also intelligently recommend the optimal consumption of existing RI
inventory and recommend additional RI purchases to minimize future
spending. Continuing with the car analogy above, in addition to knowing
whether it’s better to pay for a taxi or lease a car in any given circumstance,
Workload Optimizer can even suggest a car lease (RI purchase) that can be
used as a vehicle for ride-sharing (i.e., fluidly moving on-demand workloads
in and out of a given RI to achieve the lowest possible cost while still
assuring performance).

Finally, because Workload Optimizer understands both the on-premises and
public cloud environments, it can bridge the gap between them. As noted in
the previous section, the process of moving VM workloads to the public
cloud can be simulated with a Plan, which allows the selection of specific
VMs or VM groups to generate the optimal purchase actions required to run
them (see below).

Figure 13: Results of a cloud migration plan


The plan results offer two options: Lift & Shift and Optimized, depicted in the
blue and green columns, respectively. Lift & Shift shows the recommended
instances to buy, and their costs, assuming no changes to the size of the
existing VMs. Optimized allows for VM right-sizing in the process of moving
to the cloud, which often results in a lower overall cost if current VMs are
oversized relative to their workload needs. Software licensing (e.g., bring-
your-own vs. buy from the cloud) and RI profile customizations are also
available to further fine-tune the plan results.

Workload Optimizer’s unique ability to apply the same market abstraction
and analysis to both on-premises and public cloud workloads, in real time,
enables it to add value far beyond any cloud-specific or hypervisor-specific,
point-in-time tools that may be available. Besides being multi-vendor,
multicloud, and real-time by design, Workload Optimizer does not force
administrators to choose between performance assurance and
cost/resource optimization. In the modern Application Resource
Management paradigm of Workload Optimizer, performance assurance and
cost/resource optimization are blended aspects of the Desired State.


Wrapping up

The flexibility and extensibility of the Intersight platform enable the rapid
development of new features and capabilities. Additional hypervisor, APM,
storage, and orchestrator targets are on the way, as are additional reporting
and application-specific support. Customers will find that the return on
investment for Workload Optimizer is rapid, as the cost savings it uncovers
in the process of assuring performance and optimizing resources quickly
exceed the license costs.

Orchestration


Introduction

One of Intersight’s founding goals is to help simplify the lives of IT
professionals. Common IT tasks that are straightforward on a small scale can
quickly prove overwhelming at larger scales to IT Operations teams
struggling to deliver their services ever faster and cheaper. Orchestration
helps IT teams manage complex tasks and workflows at a scale where a
manual approach is not feasible.

While many orchestration platforms promise to address this challenge
through the synchrony of automated actions, Intersight, from its position as
an intelligence-driven, cloud-based operations platform, is unique. It can
visualize the entire infrastructure both on-premises and in the cloud,
understand interactions and dependencies in the infrastructure, and finally
orchestrate the infrastructure from the configuration of individual elements to
the management of entire systems, applications, and services.

This chapter will address the orchestration capabilities available in Intersight
today, with consideration for the possibilities in the future.


Automation and orchestration

The terms automation and orchestration are frequently conflated in the IT
context, but it is important to distinguish between them. These words are
often used interchangeably when trying to describe a way to make activities
repeatable, but for this book, the following definitions apply:

• Automation — the process of executing a single task without any
human intervention. A simple example would be provisioning a VM in
a private data center. IT administrators can automate a wide variety
of tasks across networking, storage, hypervisors, application
deployments, ticketing, and more. These tasks can extend from the
private cloud to the public cloud.

• Orchestration — the next stage of automation: a collection of
automation tasks in an optimized workflow to manage complex
distributed systems and services. A common example would be
creating a new VMFS datastore, which can involve multiple steps
such as creating a new volume with a given name and size, attaching
the volume to a host, verifying that the volume is connected to the
host, and finally creating the new VMFS datastore itself.

IT organizations are challenged to take business practices and decisions and
make them consistent and repeatable. Doing so successfully enables IT to
accelerate the delivery of infrastructure, networking, compute, storage, and
application services in an agile manner that returns value to the business.

As IT teams are tasked with managing large-scale infrastructures,
orchestration becomes essential as a service delivery mechanism, but it's
not without its pitfalls. As organizations mature through the automation and
orchestration learning curve, they see the power of speeding up repetitive
tasks, freeing them to focus on other valuable projects. However, even small
mistakes in automation tasks or workflow logic are amplified when executed
at scale in a live environment. Having capable tools, such as Intersight Cloud
Orchestrator, enables organizations to build, test, manage, and maintain
workflows to minimize potential risk.


Intersight orchestration

Intersight allows for the creation and execution of complex workflows for
infrastructure orchestration. These workflows consist of multiple automated
tasks where each task can pass parameters to the next so the flow can
proceed without human intervention. To build workflows, a drag-and-drop
Workflow Designer is included natively in Intersight. This allows the entire
orchestration process to be created and viewed graphically.

Figure 1: Logical example of infrastructure orchestration with Intersight


A few examples of use cases for orchestration with Intersight include:

• Complete provisioning of a Pure Storage FlashArray

• Deploying multiple virtual machines

• Deploying and configuring an entire Kubernetes environment

Orchestration tasks
A task is the basic building block of the workflow and can perform a simple
operation such as creating a new storage volume. The operation can be
executed within the Intersight cloud or, with the help of the Device
Connector at the endpoints, can carry out operations with Intersight targets.

Each task is configured by providing the necessary inputs, and after
successful execution, the output of the task may be passed on to another
task as a required input. The figure below shows an example list of tasks
available in Intersight Cloud Orchestrator.


Figure 2: List of tasks available to the Workflow Designer


Generic tasks
Intersight provides a few generic tasks that can support complex workflows.
One such example is the Conditional Task, available under Configure →
Orchestration → Create New Workflow, which provides the capability to
perform programmatic decisional expressions. The conditional expression
can be a simple expression or combined into a compound expression for
evaluation. Conditional expressions support the following operators:

• Comparison operators: ==, !=, >, <, >=, <=

• Arithmetic operators: +, -, *, /, %, **

• Logical operators: &&, ||, !

• Ternary operator: condition ? val1 : val2

The expression is evaluated at runtime and, depending on the result, the
respective path is chosen. If none of the conditions match, the default path
is chosen.
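
As a rough illustration, the Python sketch below mirrors the semantics of a
compound expression combining comparison, logical, and ternary operators;
it is not the Conditional Task’s own expression syntax, and the values are
hypothetical.

cpu_util = 85
env = "prod"

# Compound expression: (cpu_util >= 80) && (env == "prod")
needs_action = (cpu_util >= 80) and (env == "prod")

# Ternary form: condition ? val1 : val2
next_path = "Scale-Up" if needs_action else "Default"
print(next_path)  # -> Scale-Up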

Another commonly used generic task is the Invoke Web API Request task.
This task extends the automation capabilities of the workflow engine beyond
what is natively available in the task library by performing generic API calls in
a workflow. It supports both HTTP and HTTPS protocols and the commonly
used methods (GET, POST, PUT, PATCH, DELETE, etc.). This task can be
used against Intersight itself or against custom targets, which can be
configured under Admin → Targets → Claim a New Target → Custom.
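
Conceptually, the task performs the equivalent of the following Python
sketch. The endpoint and payload are hypothetical, and the real task is
configured declaratively in the Workflow Designer rather than in code.

import requests

# A generic REST call whose response can be mapped to downstream task inputs
resp = requests.post(
    "https://itsm.example.com/api/tickets",  # hypothetical custom target
    json={"summary": "Datastore expanded", "severity": "info"},
    timeout=30,
)
resp.raise_for_status()
ticket_id = resp.json().get("id")  # output available to the next task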

Inputs and outputs


When orchestrating the automation of a set of tasks, input and output data
are almost always required. Inputs allow users to inject various types of data
such as strings, integers, Booleans, or more advanced object structures
using JSON. Intersight workflows provide a simple, flexible framework for
defining custom workflow inputs using data types, as described in the next
section. Validation rules, as well as conditional visibility rules (only making
inputs visible under specific conditions), can be configured as part of these
input definitions.

Inputs can be supplied by a user or can be passed in from task outputs. An
example of the former is the desired storage volume size when creating a
new storage volume. An example of the latter would be the ID of the newly
created storage volume being input to the task that assigns that volume to a
host.
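
The Python sketch below illustrates the data-flow idea only, with
hypothetical helper functions standing in for tasks: the first task’s output
becomes the second task’s required input.

def create_volume(name: str, size_gb: int) -> str:
    """Task 1: create a volume; its output is the new volume's ID."""
    return "vol-0042"  # stand-in for the storage array's response

def attach_volume(volume_id: str, host: str) -> None:
    """Task 2: consumes Task 1's output as a required input."""
    print(f"attaching {volume_id} to {host}")

volume_id = create_volume("datastore01-vol", 512)  # user-supplied inputs
attach_volume(volume_id, "esxi-host-01")           # task-supplied input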

Data types
Intersight orchestration supports the use of data types for workflows and
tasks. Data types provide a powerful framework for administrators to define
what data points should be passed in for a given workflow execution.
Variable type, name, label, validation rules, and conditional dependencies
can all be defined within a custom data type.


Intersight provides a range of system-defined data types that can be used in
workflows. Data types that are currently available or configured can be
viewed at Orchestration → Data Types. Some examples include
WindowsVirtualMachineGuestCustomizationType and HypervisorHostType,
as shown below:

Figure 3: Data types available in Intersight


Data types can be simple, containing only a singular value, or composite,
containing multiple properties. The following screenshot shows the definition
of a simple input and its constraints. Figure 5 shows a screenshot of
configuring a composite data type input. Multiple properties can be added
by clicking the plus (+), and each property can have a different data type and
different constraints. An example use case for the composite data type
could be defining the name, size, and ID of a storage volume as a single
input.

Figure 4: Configuring a simple data type input


Figure 5: Configuring a composite data type input


Workflows
Depending on the use case, a workflow can be composed of a single
automated task, multiple tasks, or even other embedded workflows. The
orchestrator includes both simple and complex sample workflows to help
flatten the learning curve.

Choosing Configure → Orchestration from the left navigation pane will
display all available workflows in a tabular format. Information such as the
workflow name, description, and statistics about its execution history is
available for easy viewing.

Figure 6: Sample workflows out of the box


The Workflow Designer can be launched by clicking Create New Workflow or
by selecting an existing workflow from the table. Only an Account
Administrator or a user with the Workflow Designer privilege can use the
Workflow Designer.

Figure 7: Workflow Designer

The Workflow Designer main interface has a list of Tasks and Workflows on
the left-hand side that can be dragged and dropped into the central pane,
which is sometimes referred to as the canvas. Existing tasks or workflows
(as a sub-workflow) can be used in the workflow design. Tasks are easily
chained together by dragging the success criteria (green lines) and failure
criteria (red lines).


The Properties option allows the designer to see and edit detailed
information about the overall workflow and each of its tasks. The Workflow
Designer also provides a JSON View, a read-only view of the workflow that is
useful for understanding the API calls for each step. The JSON code can
also be copied for internal documentation or for use in other tools that may
have been previously operationalized in an organization.

Figure 8: Workflow JSON view


Within the Workflow Designer, the Mapping option displays information on
workflow inputs and outputs as well as the relationship between task inputs
and outputs. This view is informational only; configuring task inputs and
outputs is discussed later in this chapter.

Figure 9: Workflow mapping

Workflow version control


Intersight allows administrators to create and manage versions for
workflows. The default version for a workflow can be set to a specific
version and does not have to be the latest version. This allows teams to
continuously develop workflows, adding new steps and testing the results
before introducing the updated workflow to all users.

Read-Only users can view versions of a workflow, but they cannot create,
edit, execute, or delete versions. Users with Storage Administrator and
Virtualization Administrator privileges can only view and execute specific
versions of a workflow.

Entitled users can select Manage Versions from the ellipsis menu for a single
workflow in the list of workflows as shown in the screenshot below:

Figure 10: Managing versions menu option


This action will display that workflow’s versions and allow an administrator to
select the default version, as shown in Figure 11. When selected, the
versions, execution status, description, and validity will be displayed. To
make additional changes to the workflow without affecting existing versions,
Intersight provides the ability to create a new version, as shown in the figure
below:

Figure 11: Managing and creating versions

Workflow execution
Workflow execution requires that the executing user has the necessary
privileges for all of the tasks within the workflow. For example, to
successfully execute a workflow that includes both storage and virtualization
tasks, both Storage Administrator and Virtualization Administrator privileges
are required. In the absence of either of these privileges, the workflow
cannot be executed.


History and cloning execution


When examining the history of previous workflow executions, the Workflow
Designer provides options for cloning or rolling back a previous execution.
The Execution history, as shown in the figure below, displays a view of all
previous execution attempts for the selected workflow. In addition, the
success of each previous execution is displayed — green for successful, red
for unsuccessful.

Figure 12: Workflow execution history

When re-executing a workflow within the Workflow Designer, it is a best
practice to first clone one of its previous executions. Cloning an execution
re-initiates the selected historical execution, prompting for input. Unlike a
traditional workflow execution, cloning will pre-populate the input fields with
previously used input values. The pre-population of input values is
particularly useful for workflow developers as they iteratively work through
the development lifecycle, frequently troubleshooting and re-executing their
code. For example, see below for a view of previous workflow input prompts
from a cloned workflow:

Figure 13: Cloned workflow input prompts

Rollback execution
Certain orchestration tasks support the ability to roll back within a previously
executed workflow. During a rollback, resources are reverted to the pre-
workflow-execution state. The ability to roll back is task-dependent, so it will
only be available for those tasks that support it. When a rollback action is
selected, operators can choose specific tasks or all eligible tasks within the
workflow.

Use cases
As organizations mature their capacity for orchestration of their workloads
and infrastructure in a multicloud world, they will gain agility and speed.
Numerous use cases can benefit from orchestration, but to maximize the
opportunity for success, IT administrators should start by identifying use
cases that are straightforward and quantifiable. These simple use cases can
then be expanded to handle more complex scenarios.

In organizations where different teams manage their own sets of
infrastructure, such as servers, virtualization, storage, networks, and cloud,
cross-domain orchestration is essential. Creating a framework where each
group can build its own automation to be consumed by other groups is the
best way to succeed with orchestration. Many infrastructure teams are
adopting practices that previously had been reserved for software
development, such as version control, tagging, continuous delivery, and
automated testing.

Application stack orchestration

Developers frequently need the ability to quickly deploy and redeploy a full
stack of infrastructure for testing and development. Consider a basic
scenario where the requirement is to deploy a LAMP stack application in a
private data center with micro-segmentation between the tiers. In this
instance, it would be particularly beneficial to couple Infrastructure as Code
(IaC) capabilities with a more traditional orchestration engine. More detail is
covered in the Infrastructure as Code chapter later in this book, but for this
example, understand that Intersight enhances the capabilities of IaC to
automate the provisioning and management of the entire technology stack.


With the Orchestrator in Intersight, a workflow can be created to do the
following:

• Query IWO (Intersight Workload Optimizer) to determine the VM
placement strategy

• Invoke a Terraform plan to provision the VMs

• Use Ansible playbooks to deploy the application stack

• Use the ACI Terraform provider to introduce micro-segmentation

Storage orchestration
One of the more common VM administration tasks involves growing a VM
datastore that has reached capacity. This task is made more complex when
the underlying storage volume in which the datastore resides is itself at
capacity, creating a cross-domain problem involving both VM and storage
array expertise. Furthermore, in a heterogeneous storage vendor
environment, the storage domain expertise must be specific to the given
storage vendor’s software management system. This seemingly simple task
can involve numerous parties to execute, and this problem is exacerbated by
the frequency of the task and the breadth of the storage vendor
heterogeneity in the environment.


As noted in the Storage Operations chapter, Intersight can manage storage
and orchestrate storage functions on traditional storage arrays such as Pure
Storage, Hitachi Data Systems, and NetApp by communicating with them via
their APIs. This means that the above task can be solved once, in a generic
fashion with a single orchestration workflow, and executed repeatedly,
regardless of which storage vendor is supplying any given volume. To see
how this works, click on Configure → Orchestration to view a list of available
workflows (see below).

Figure 14: Orchestrator workflows


From there, click on Update VMFS Datastore from the list to launch the
Workflow Designer. From the main Workflow Designer screen, a graphical
representation of this predefined workflow is available, and details of its
properties can be displayed via the Properties button. In this example, the
Inputs tab under Properties shows the various input parameters required for
the workflow, such as the specific hypervisor manager, cluster, datastore,
and storage device (see below).

Figure 15: Workflow Designer


To see a mapping of how each task’s output relates to the next task’s input,
select the Mapping button. To manually execute this workflow, select the
Execute button, which will prompt for the required input parameters for the
workflow (see below).

Figure 16: Manual workflow execution input parameters

After making the necessary selections in the wizard and executing the
workflow, the execution history can be viewed in the main Workflow
Designer screen via the History button.


Wrapping up

The allure of a fully orchestrated environment is strong, but getting there
has historically involved adopting complicated applications, linking together
siloed domain-specific tools, and significant staff re-training before it can be
fully operationalized. Intersight dramatically lowers the bar for entry into
orchestration for both hardware and software infrastructure. By providing a
broad view of infrastructure in private and public clouds, integration with
Infrastructure as Code tooling, and a powerful and intuitive orchestrator, it
truly can simplify the lives of IT professionals.

As Intersight Cloud Orchestrator evolves, expect to see deeper integration
with cloud-native infrastructure, enhancements for AI/ML use cases, and
increased application-layer functionality, allowing administrators and users to
more easily plan for and react to what lies ahead.

Programmability


Introduction

One of the guiding principles of Intersight mentioned in the Foundations
chapter is open programmability. Accordingly, Intersight is architected to be:

• Fully API driven — anything configurable or queryable by a user is
driven by the API

• Built upon an open standard — the API schema leverages the
OpenAPI Specification (version 3 at the time of this writing)

Throughout this book, readers are exposed to the various capabilities that
Intersight offers as a true cloud operations platform. All these capabilities are
driven by the Intersight API as detailed in this chapter, providing the
foundation for discussions in other chapters (e.g., Infrastructure as Code,
Orchestration, etc.) involving the consumption of Intersight resources
through the API.

This API-first approach, coupled with a modern, standards-based OpenAPI
schema, enables Intersight to provide unique benefits such as:

• Browsable, interactive API documentation that is always up to date

• Auto-generated client SDKs for multiple languages including
PowerShell, Python, Ansible, Terraform, Go, etc.

• Advanced query syntax to perform operations such as filter, select,
aggregate, expand, min, max, count, etc., with efficient server-side
processing (see the sketch after this list)
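
A flavor of that query syntax is sketched below in Python. The property
names are examples only (verify them in the API reference for a given
resource), and authentication, covered later in this chapter, is omitted.

# OData-style query parameters accepted by Intersight list endpoints
query = {
    "$filter": "NumCpus gt 2",        # server-side filtering
    "$select": "Name,Model,NumCpus",  # return only these properties
    "$orderby": "Name",
    "$top": "5",                      # page size
    "$count": "true",                 # include the total match count
}
url = "https://intersight.com/api/v1/compute/PhysicalSummaries"
# e.g., requests.get(url, params=query, auth=<HTTP-signature auth>)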

Intersight being a SaaS-based platform means new features are added very
rapidly (almost weekly). Thankfully, the API has been architected in such a
way that it can keep pace with these frequent feature releases by auto-
generating documentation and client SDKs (software development kits)
alongside each newly published API version.


In this chapter, readers will learn the benefits of this open API architecture,
how to leverage the API to programmatically interact with Intersight, and
review examples of common and advanced use cases using various client
SDKs.

OpenAPI
Many readers may know of OpenAPI by its previous name, Swagger. The
Swagger API project dates to 2011, when it was first introduced to use JSON
markup to describe an API (also referred to as a schema). Utilizing a
consistent schema-based approach to documenting APIs “allows both
humans and computers to discover and understand the capabilities of a
service without requiring access to source code, additional documentation,
or inspection of network traffic” (http://spec.openapis.org/oas/v3.0.3).

As the OpenAPI specification evolved, new toolsets were created to
automatically generate:

• Browsable API documentation to display the API

• Client SDKs in various programming languages

• Testing tools to interactively make live requests against the API


A great example of the benefits of these tools in action is the API Reference
section of the Intersight API docs (https://intersight.com/apidocs/apirefs):

Figure 1: API reference

The API reference section provides a fully interactive version of the Intersight
API. This reference site allows browsing and searching for any of the
available API endpoints (left portion of the above figure), reading the
documentation for a given endpoint (middle), and finally, running a live
instance of the selected request including query parameters and post body
data (right).

A more detailed explanation of the API reference tool within Intersight will be
provided later in this chapter, but it is introduced here as an example of what
can be achieved by using a well-documented API.


As mentioned above, the Intersight API is written in version 3 of the OpenAPI
schema and can be downloaded in full from the API docs Downloads tab
(https://intersight.com/apidocs/downloads/).

Versioning
All OpenAPI documents (schemas) are versioned per the standard, so when
Intersight services are updated to support a new capability, the version of
the Intersight OpenAPI document is changed and the corresponding
document is published. Newer versions of the Intersight OpenAPI document
will likely include new API endpoints, schema adjustments, and possibly new
security schemes. The API major version (1.x, 2.x, etc.) is not changed when
backward-compatible changes are deployed.

Backward compatibility
To assist with backward compatibility, the major schema version is always
sent along with any API requests, as will be shown in the examples at the
end of this chapter. To further ensure backward compatibility, API clients
should be written in such a way that additional properties passed by newer
versions of the API can be parsed without error.
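
A minimal sketch of such tolerant parsing in Python: the client reads only
the properties it knows about and silently ignores anything a newer API
version may add.

import json

payload = '{"Moid": "59601f85ae84d80001dcc677", "Name": "server-1", "NewField": 42}'
obj = json.loads(payload)

# Unknown keys (e.g., "NewField") are simply ignored rather than rejected
moid = obj.get("Moid")
name = obj.get("Name")
print(moid, name)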

Information model
Because everything within Intersight is API driven, it is important to
understand some of the nomenclature/building blocks of the underlying
schema.


Managed objects (resources)


The term resource is used in a general sense to describe various
configurable objects within Intersight. Examples of such resource objects
would be:

• UCS servers
• Server components such as DIMMs, CPUs, GPUs, storage controllers,
and Cisco CIMC
• UCS Fabric Interconnects
• Firmware inventory
• HyperFlex nodes and HyperFlex clusters
• VLANs and VSANs
• Server, network, and storage policies
• Alarms, recommendations, and statistics
• Users, roles, and privileges
• Structured and free-text search results
• Collections of other resources

Within the Intersight OpenAPI schema, these resources are referred to as
managed objects.

Referencing objects (Moid)


Each managed object is referenced by a unique identifier called a Managed
Object ID or Moid. The Moid is auto-generated for each managed object and
is used to distinguish a given Intersight resource from all other resources.
Identifying resources by their Moid allows the support of lifecycle operations
such as renaming without affecting the underlying resource.

An example value of a Moid could be "59601f85ae84d80001dcc677".
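
For illustration, a managed object is addressable directly by its Moid in the
REST path. The resource below is an example, and real requests must also
be signed, as covered later in this chapter.

moid = "59601f85ae84d80001dcc677"
url = f"https://intersight.com/api/v1/compute/PhysicalSummaries/{moid}"
print(url)  # GET this URL to retrieve the single object with that Moid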


Managed object tagging

Every resource within the Intersight API supports the concept of custom
tagging. Tags drastically simplify resource searching via the UI and API alike.
A tag consists of a key/value pair, which helps identify resources by
whatever scheme is preferred. Some common tag keys include owner,
environment, line of business, and geographic location.

The JSON document below shows a sample representation of a Managed
Object that has two tags, Site and Owner.

{
  "Moid": "59601f8b3d1f1c0001c11d78",
  "ObjectType": "hyperflex.Cluster",
  "CreateTime": "2017-07-07T23:55:55.559Z",
  "ModTime": "2017-07-08T01:18:38.535Z",
  "Tags": [
    { "Key": "Site", "Value": "Austin" },
    { "Key": "Owner", "Value": "Bob" }
  ]
}

Examples of how to use tags within API calls to achieve a given set of tasks
are provided in the examples section of this chapter.
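
As a quick preview, a tag-based query can be pushed to the server with the
$filter parameter. The expression below follows the OData-style syntax
Intersight uses, but verify the exact form against the API reference;
authentication is omitted here.

query = {
    "$filter": "Tags/any(t: t/Key eq 'Site' and t/Value eq 'Austin')"
}
url = "https://intersight.com/api/v1/hyperflex/Clusters"
# e.g., requests.get(url, params=query, auth=<HTTP-signature auth>)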

Rate limiting
To prevent abuse, the Intersight APIs are rate limited. When too many
requests are received in a given period, the client may receive an HTTP
status code 429 Too Many Requests. The threshold of these rate limits
depends on the end user’s account licensing tier.
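
Clients should therefore be prepared to back off and retry. Below is a
generic Python sketch of that pattern (not an official Intersight SDK
feature).

import time
import requests

def get_with_backoff(url: str, retries: int = 5, **kwargs) -> requests.Response:
    delay = 1.0
    for _ in range(retries):
        resp = requests.get(url, **kwargs)
        if resp.status_code != 429:  # 429 Too Many Requests
            return resp
        time.sleep(delay)
        delay *= 2  # exponential backoff between attempts
    return resp  # still rate limited after all retries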


Client SDKs

Software development kits provide a pre-packaged API client written in a
specific language or toolset. These clients offer an easy way to start
programming against Intersight, with full access to the API, using a language
the user is already comfortable with.

Client SDKs help simplify authentication by including built-in functions that
wrap each request with the necessary security methods. SDKs also typically
include language-native model representations of API features, enabling
conveniences such as tab-to-complete in languages like PowerShell and
intelligent code completion within editing tools such as Visual Studio Code.

As mentioned previously, the Intersight OpenAPI was written in such a way
that client SDKs can be auto-generated along with each new version of the
API. At the time of writing this book, the following SDKs are supported:

• PowerShell
• Python
• Golang
• Ansible
• Terraform

The current list of available client SDKs along with the full JSON schema can
be found at https://intersight.com/apidocs/downloads/. It is always
recommended to check the Downloads tab in case the source of an SDK has
changed. As an example, the PowerShell SDK (as of the writing of this book)
is hosted on a Git repository but is planned to move to the PowerShell
Gallery sometime in the future.

Although new versions of the Intersight API and related client SDKs are
released frequently as part of the Intersight CI/CD development model, the
SDKs are written in such a way that they should not break when talking to a
newer version of the API within the same major version.

While the clients can handle extra properties returned by the newer API
version, they would not be able to configure those newer
properties/capabilities without an update.


Authentication and authorization

All communications with the Intersight API require user authentication.
Intersight supports the following API authorization schemes:

• API keys with HTTP signature
• OAuth2 client credentials grant
• OAuth2 authorization code grant

At the time of this writing, OAuth2-based authorization is primarily used by
the Intersight mobile app and is not commonly used by any of the Cisco-
supported client SDKs. With this in mind, the remainder of this section will
focus on API keys with HTTP signature.

API keys
API keys provide a very secure and simple way for client applications to
authenticate and authorize requests. An API key consists of a key ID and key
secret and is generated under a given Intersight user’s account. Developers
are responsible for storing the key secret (or private key) in a secure
location, as it is never stored within Intersight; the key ID and associated
public key, however, are stored within Intersight under the associated user
account. This approach eliminates the need to send a shared secret to
Intersight, thereby reducing the risk of compromising user credentials.

Multiple keys can be created per user account to represent different client
applications. This is a recommended best practice to monitor and audit API
usage for each client.


A new API key can be created in the Intersight UI by navigating to account
settings:

Figure 2: Navigating to settings


Then navigating to API Keys → Generate API Key.

Figure 3: Navigating to API Keys


Provide a name relevant to the client application and select the OpenAPI
schema (version 3 is typically preferred).

Figure 4: Generating API Key


Lastly, the secret key should be stored in a secure location as it is not stored
within Intersight, as emphasized in the informational box shown in the figure
below. A maximum of three keys can be generated for each user account.

Figure 5: Viewing the generated API Key

The API key ID and secret are used to generate a unique signature string for
each request using one of the supported signing algorithms listed in the
API documentation (https://intersight.com/apidocs/introduction/security/#generating-api-keys).
Detailing how each hashing and signature algorithm works is not within the
scope of this chapter; however, it is important to know that each request
requires a unique signature string, which is typically handled within a given
client SDK.


The Intersight-generated SDKs have authentication built-in, but for tools
such as Postman, an immensely popular tool for modeling and testing API
endpoints, authentication must be provided by the end user. To use Postman
to interact with the Intersight API, a pre-request script can be used within
each request or at the collection level to create the required digest and
signature strings. An example pre-request script is provided in the Walk
phase of the next section.

Privileges and capabilities


Each API key inherits the role-based privileges of the user account under
which it is generated. An API client is not permitted to perform actions
against the API that the associated user account is not authorized to
perform at the time of action execution. Similarly, the capabilities
of the API client are restricted to the features licensed within the account
associated with the user (again, at the point of execution) and the target
resource (a list of features supported with each license level is available at:
http://www.intersight.com/help/supported_systems#intersight_base_features).


Crawl, walk, run

As with learning anything new, it is always best to start small and simple
before trying to tackle a more complex task. The Intersight API comes with
great power, which of course means great responsibility, so learning the
proper approach to programmatically achieve a task is of vital importance.

A great place to start is with a simple use case. Some suggestions would
be:

• Get a list of all triggered alarms for a given account
• Get a list of all the rack servers for a given account
• Find all HyperFlex clusters that contain a specific tag

Once a use case is selected, the steps below can be used as a guide to
becoming familiar with various programmability tools available to accomplish
that task.


Figure 6: Getting familiar with programmability tools

The remainder of this section will focus on the “Get a specific rack
server” and “Turn on the Locator LED” use case.

Crawl
Now that a use case has been selected, the next step is to go into the API
Reference tab of the Intersight API docs and review the endpoints and
methods associated with the selected use case
(https://intersight.com/apidocs/apirefs).

The API Reference tool allows for a quick search of resources within the
Intersight API universe. Users are required to log into Intersight to utilize the
API Reference tool, and it performs API calls using that user’s credentials. It
is important to note that while the API Reference encompasses every API
endpoint, the user will receive an error if trying to perform an action outside
the scope of their current user role or license tier.


Since the selected use case starts with finding a specific rack server, it
makes sense to search for a term such as “rack” as shown in the screenshot
below:

Figure 7: Browsing the API Reference tool

After reviewing the results in the screenshot above,
compute/RackUnits makes the most logical sense for getting a rack server.
Clicking this entry will bring up the model schema on the right side of the
screen.


Figure 8: View a resource (object) schema

The model schema shows the power of a well-documented API. Users can
now fully understand what properties are associated with a rack server
(compute.RackUnit) resource or any other managed object, including the
property Type and a Description.

Since the first objective of the selected use case is to “Get a specific rack
server,” a GET operation would be the most fitting. In the list of methods
under compute.RackUnits on the left side of the API Reference screen, two
available GET operations are shown. This is a common scenario when
interacting with the Intersight API as resources can be searched for via query
parameters (meaning by specific resource properties) or by Moid which
uniquely identifies a given resource (in this case a rack server).


Get a resource by querying for specific properties

Selecting the first GET operation will launch users into the first scenario,
which is “Get a resource by querying for specific properties.” The screen
layout is now comprised of three sections:

1 Resource search/navigation

2 Resource schema documentation

3 Live REST Client

Figure 9: View a resource (object) method


The right side of this screen allows the query parameters to be specified.
Clicking Send with no query parameters specified is allowed and will return a
list of all rack servers that the logged-in account is allowed to see.

Figure 10: API response when requesting a list of all servers


Scrolling through the Response Text shows all the parameters returned for
each object. As shown above, Serial (the server’s serial number) is one of
the parameters returned. To filter this list to just a single server, use the
$filter query option (see below). This option will be covered in more detail,
but for now, click the + Query Parameter blue text and add a Key of $filter
and Value of Serial eq FCH2109V1EB (replacing this serial number with a valid
serial number from the target environment) as shown in the figure below. It
is important to note that both the Key and the Value information are case
sensitive.

Figure 11: API response when filtering the server list by a specific serial number

This will return all the parameters for a single server whose serial number
matches the specified serial number. One of those parameters is Moid.
Saving that Moid allows a developer to interact directly with that server
without having to search for it again. Also noteworthy is the fact that there
are other parameters listed that are pointers to other objects, like
Fanmodules, the LocatorLed, and Psus (power supplies). The details for any of
these components can be located with a subsequent API call, or Intersight
can expand the API response to include them by using the $expand query
option covered later in this chapter.
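
Outside the API Reference tool, the same query can be issued directly against the REST endpoint; a sketch of the request (the filter value would be URL-encoded by most HTTP clients):

GET https://intersight.com/api/v1/compute/RackUnits?$filter=Serial eq FCH2109V1EB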

The next step in the journey to using the API is to turn on the Locator LED for
the given server. The current state of the LED is visible in the previous API
response, but the setting to change the Locator LED is in a ServerSetting
object. Every object in the API has a Moid, from a single power supply to a
software setting. The Moid for the ServerSetting is required to change the
Locator LED.

A ServerSetting contains the details about the server it affects, as shown
below:

"RunningWorkflow": null,
"Server": {
"ClassId": "mo.MoRef",
"Moid": "5fd904a86176752d30ec392c",
"ObjectType": "compute.RackUnit"
},
"ServerConfig": {...

The Moid for the server is nested, so searching for that Moid requires the
use of dot notation. The figure below shows filtering the ServerSetting list by
the Moid of the server:
$filter = Server.Moid eq '5fd904a86176752d30ec392c'

The server setting object is discovered in the API Reference by searching for
“serversetting”.


Figure 12: The response from searching for a server setting by server Moid

Searching through the response shows a parameter called
AdminLocatorLedState and yields the Moid of the ServerSetting.

"Moid": "5fd904a86573732d30e7872d"

The final step is to update the ServerSetting, which is known as a PATCH
operation. The ServerSetting object contains many settings, but this example
will only update the setting for the Locator LED. In the diagram below, the
ServerSetting Moid is used to PATCH the Locator LED setting using the
payload { "AdminLocatorLedState": "On" }. Note that the text for “On” and
“Off” is case-sensitive and the operation will fail if the case is incorrect.
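
Expressed as a raw REST call, the update is a sketch along these lines, using the ServerSetting Moid found above (the endpoint path follows the compute/ServerSettings pattern):

PATCH https://intersight.com/api/v1/compute/ServerSettings/5fd904a86573732d30e7872d

{ "AdminLocatorLedState": "On" }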


Figure 13: Turning on a Locator LED by changing a ServerSetting object

The state of the Locator LED can be confirmed in the Intersight GUI.

Walk
The API Reference tool is a great way to discover the available Intersight
resources, read through the associated resource schemas, and experiment
with basic REST calls. Most programmatic use cases however will require
more than one API call to achieve the desired result. For example, the
selected use case is to “Get a specific rack server” and “Turn on the Locator
LED.”

This use case typically involves a multi-step process. First, finding the server
by specific properties as it is unlikely that the Moid is already known
(Terraform would be a possible exception), and second, pushing a change
to the properties of that server (by Moid) to turn on the Locator LED.

Before jumping straight into a client SDK, it is good practice to model a few
of these linked requests in a tool such as Postman. Postman is a
collaboration platform for API development and testing and is a very popular
tool for modeling API requests.


Installing Postman
OS-specific installation instructions can be found at:
https://www.postman.com/downloads/. Legacy versions of Postman were
offered as a browser plugin for Chrome; however, the current version is
offered as a standalone application supported on multiple operating systems.
A detailed how-to guide for Postman is not within the scope of this book, but
it is worth mentioning some basic definitions:

• Workspace: a logical container of collections of work used to group projects and allow collaboration
• Request: a single API request for a given HTTP method type (this is the most used construct within Postman)
• Variables: Postman offers multiple variable scopes, including globals, environment, collection, and data
  • Globals: variables available globally for a given workspace
  • Environment: a shared set of values, typically representing a specific target environment, that can be used within a collection
  • Collection: variables available to all requests in a given collection
  • Data: variables available during a given runner execution
• Pre-request script: a JavaScript script executed before a given request
• Test: a JavaScript script executed after a request has been executed
• Collection: a grouping of requests into a shared space where common variables, pre-request scripts, and tests can be defined


Getting started
To help newcomers get started with Postman, a pre-built collection is
available at https://github.com/CiscoDevNet/intersight-postman, which
includes the pre-request script to handle authentication along with some
sample API requests for reference.

Postman collections can be exported and imported via a single JSON file
containing everything within that collection including requests, collection
variables, collection pre-request script, collection test script, and collection
documentation. There is no need to clone the Git repo since everything
needed is in the single JSON file for the collection.

Users can download the collection export JSON file from the repository and
import it into Postman. The first step is to click the Import button and browse
to the location where the JSON file is saved as shown in the following two
screenshots.

Figure 14: Postman collection import process


Figure 15: Browsing to the proper file location


Figure 16: Completing the import into Postman


Postman maintains a group of settings and variables that are shared for all
entries in a collection. Those settings and variables can be viewed and
edited by clicking the ellipsis to the right of Intersight-Examples and
selecting Edit as shown below:

Figure 17: Editing collection settings and variables


The collection includes a thorough description that can be edited by the
user.

Figure 18: The Postman collection description


The collection also contains pre-request scripts which are executed before
any API call within the collection. This is used to perform the authentication
for each Intersight API endpoint and is critical to successfully executing any
API calls against Intersight. The JSON file at the GitHub repository
mentioned above already contains a working pre-request script for the
collection as shown below:

Figure 19: Collection pre-request script for Intersight


The Variables tab in the collection settings shows the API server, which is
intersight.com for users consuming Intersight via SaaS. If the Intersight
Connected or Private Virtual Appliance has been deployed instead of using
SaaS, this setting should be changed to the URL of that appliance. All
Postman API calls will be directed to this URL.

Figure 20: A view of the collection variables


Postman includes the ability to specify multiple environments, which are
primarily used for credential management. It is not a good idea to store
credentials in a collection, where they might be accidentally exported and
shared with others. Credentials can be added to an environment by selecting
Manage Environments in the upper right corner of the Postman window as
shown below:

Figure 21: Managing Postman environments


If Postman does not already have a local environment created, the user will
be prompted to add a new one.

Figure 22: Adding a new Postman environment


For working with this Postman Intersight collection, the API key and secret
key must be added as the variables api-key and secret-key. The pre-
request script mentioned above expects those variables to be set properly
and will be unable to sign requests without them.

Figure 23: Setting user credentials in a Postman environment

Once the environment is configured with a valid key and secret, all the
requests should work within the imported collection. Any request sent via
Postman will be bound to the user associated with the API key and secret
within the selected environment. As mentioned previously, the privileges and
capabilities of the API are limited to that of the user and account associated
with this API key credential.


There are several example API requests within the imported collection to
show how to interact with common Intersight resources. The folder named
“Turn on Server Locator LED” within the collection contains all the requests
needed to replicate what was previously done in the Crawl stage. This folder
is highlighted below:

Figure 24: Viewing the Turn on Server Locator LED

The first request in the folder is “Get Rack Server By Serial” which shows
how to format a GET request for a rack server matching the target serial
number used in the Crawl section. Additionally, the $expand operator is used
to expand the LocatorLed object returned as part of the rack server schema.
Without the expand operator, only the object (resource) pointer would be
returned. $expand and $filter are covered in much greater detail in the
Advanced Usage section of this chapter.


Figure 25: Querying for a rack server by Serial with a property expansion

Postman allows for the inclusion of post-execution scripts (written in
JavaScript) to run after a given request. A test script is needed for the “Get
Rack Server By Serial” request to get the Moid of the returned server
resource.


Figure 26: Post execution script for retrieved rack server

if (responseBody.length > 0) {
var jsonData = JSON.parse(responseBody);
var moids = [];
jsonData.Results.forEach(result => {
moids.push(result.Moid);
})
pm.collectionVariables.set('server-moid', `${moids[0]}`);
}

Since the GET request was done via query parameters instead of a direct
Moid, the return value is always a list of results. The post-execution script
above loops through all the resulting servers (there should only be one in this
case since the filter was by serial number) and collects all the Moids in a
new list variable. The script then takes the first entry out of the list and
stores it in a collection variable named server-moid for later use.


The screenshot below shows the returned resource from the API request. As
expected, one compute rack unit (rack server) is returned and because
the $expand property was set for the LocatorLed object, it is easy to observe
the current state of the Locator LED.

Figure 27: Viewing the current LocatorLed state

Now that the server Moid has been stored in a collection variable, the
ServerSettings can also be queried. The LocatorLed configuration state is
stored within a ServerSettings resource which is referenced by the target
rack server. As illustrated in the Walk section, the ServerSettings can be
queried by applying a filter matching the Server.Moid property with the
Moid collected in the previous step.


The {{server-moid}} syntax is how Postman designates a variable for string
substitution. In this case, the server Moid is replaced with the value stored in
the collection variable.
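
With the variable in place, the filter from the Crawl section becomes (a sketch of the Postman query parameter value):

$filter = Server.Moid eq '{{server-moid}}'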

Figure 28: Editing the ServerSettings post-execution script

Another post-execution script is required for this second request to capture
the Moid of the returned ServerSettings. As in the previous request, the
results are in the form of a list (or array) because the query was made via
query parameters rather than Moid.


The last step to accomplish the end goal of turning on the server Locator
LED is to PATCH the server settings with an AdminLocatorLedState set to On.
The screenshot below shows the JSON body syntax for this request. The
{{settings-moid}} collected in the previous request is also used to point
directly to a specific ServerSettings resource.

Figure 29: Changing the LocatorLed state within ServerSettings


The rack server can now be queried again, using the same syntax as in the
first API request with the LocatorLed property set to $expand. The
OperState of the LocatorLed resource should now be set to “on” as shown in
the screenshot below:

Figure 30: Verifying the LocatorLed state is set to “on”

Run
Once the required API endpoints and resource relationships have been
discovered in the Crawl phase, and the request chaining and variable
substitutions have been modeled in the Walk phase, the natural next step is
to choose a client SDK to make this a reusable function. Python is an
immensely popular programming language for both novice and expert
programmers. With this in mind, the Python Intersight client SDK will be used
in this section to write a function that automates the desired use case to
“Get a specific rack server” and “Turn on the Locator LED.”

Because the installation source files and instructions may vary over time, it is
recommended to go to the API docs downloads page for the most up-to-
date instructions for a given SDK: https://intersight.com/apidocs/downloads/.

Virtual environment
A good Python programming practice is to always use a Python virtual
environment when working on a particular project. This helps keep package
dependencies isolated to that specific project. Python 2 was sunsetted on
January 1, 2020, meaning that even security flaws will not be fixed. Thus,
any new projects should be written in Python 3. Virtual environment
capability comes natively built into Python 3 starting with version 3.3. Below
is an example of how to create a Python virtual environment.

python -m venv Intersight

After the above command is executed, a new folder will be created in the
current working directory named “Intersight” containing the virtual
environment. The virtual environment should then be sourced to ensure any
reference to the python command is pointing to the virtual environment and
associated libraries. Sourcing the virtual environment varies based on the
end-user operating system, so reference the official Python documentation
for the latest OS-specific instructions
(https://docs.python.org/3/tutorial/venv.html).
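
As a quick reference, activation typically looks like the following (standard venv behavior, not specific to Intersight):

# macOS / Linux (bash or zsh)
source Intersight/bin/activate

# Windows (PowerShell)
Intersight\Scripts\Activate.ps1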

New versions of Python are continually being released, and moving from one
version to another can easily break projects that were written on the
previous version. Pyenv is a very popular and easy-to-use version
management framework for Python that allows various versions of Python to
be installed and creates virtual environments tied to a particular installed
version. Installing pyenv is not within the scope of this book, but instructions
can be found on its GitHub repo (https://github.com/pyenv/pyenv).


Configuring the client


After the Intersight Python SDK has been installed into a virtual environment,
the next step is figuring out how to properly configure the client to
authenticate with the target Intersight service. Every programmer will
naturally have their preferred method for storing and passing credentials.
Some may choose to use Python’s built-in keyring library to pull secrets
from within their OS-specific secrets manager (like Credential Manager in
Windows or Keychain in MacOS). Others may prefer to use an enterprise-
hosted key management solution such as Vault from HashiCorp or a cloud-
based offering such as Secrets Manager from AWS.

With this in mind, a common practice when working with any type of client
SDK is to create a helper function to reuse in other projects to handle
authentication. In the example below, it is assumed that the API key in clear
text and a path to the secret file will be passed via the command line using
the argparse library.

import argparse
import os
import datetime
import intersight

Parser = argparse.ArgumentParser(description='Intersight SDK credential lookup')


def config_credentials(description=None):
"""config_credentials configures and returns an Intersight api client

Arguments:
description {string}: Optional description used within argparse help

Returns:
apiClient [intersight.api_client.ApiClient]: base intersight api client class
"""
if description != None:
Parser.description = description
Parser.add_argument('--url', default='https://intersight.com')
Parser.add_argument('--ignore-tls', action='store_true')
Parser.add_argument('--api-key-legacy', action='store_true')
Parser.add_argument(
'--api-key-id',
default=os.getenv('INTERSIGHT_API_KEY_ID'))
Parser.add_argument(
'--api-key-file',
default=os.getenv('INTERSIGHT_API_PRIVATE_KEY', '~/Downloads/SecretKey.txt'))

args = Parser.parse_args()

if args.api_key_id:
# HTTP signature scheme.
if args.api_key_legacy:
signing_scheme = intersight.signing.SCHEME_RSA_SHA256
signing_algorithm = intersight.signing.ALGORITHM_RSASSA_PKCS1v15
else:
signing_scheme = intersight.signing.SCHEME_HS2019
signing_algorithm = intersight.signing.ALGORITHM_ECDSA_MODE_FIPS_186_3

configuration = intersight.Configuration(
host=args.url,
signing_info=intersight.HttpSigningConfiguration(
key_id=args.api_key_id,
private_key_path=args.api_key_file,
signing_scheme=signing_scheme,
signing_algorithm=signing_algorithm,
hash_algorithm=intersight.signing.HASH_SHA256,
signed_headers=[intersight.signing.HEADER_REQUEST_TARGET,
intersight.signing.HEADER_CREATED,
intersight.signing.HEADER_EXPIRES,
intersight.signing.HEADER_HOST,
intersight.signing.HEADER_DATE,
intersight.signing.HEADER_DIGEST,
'Content-Type',
'User-Agent'
],
signature_max_validity=datetime.timedelta(minutes=5)
)
)
else:
raise Exception('Must provide API key information to configure ' +
'at least one authentication scheme')

if args.ignore_tls:
configuration.verify_ssl = False

apiClient = intersight.ApiClient(configuration)
apiClient.set_default_header('referer', args.url)
apiClient.set_default_header('x-requested-with', 'XMLHttpRequest')
apiClient.set_default_header('Content-Type', 'application/json')


return apiClient

if __name__ == "__main__":
config_credentials()

The code above can be found in the GitHub repo
(https://github.com/CiscoDevNet/intersight-python-utils) as the file
credentials.py. Future code examples will include this file to eliminate any
inconsistencies when configuring client authentication.

Passing credentials
The imports at the top tell the Python interpreter which packages need to be
included when executing the script. Some important imports to note in this
example are argparse and intersight. The argparse library is a helper class
for defining required and optional arguments to be passed into the script at
execution. argparse also includes advanced capabilities like argument
groups, data validation, and mutual exclusion not shown in this example.

In the argument definition there is a required argument for both the API key
ID (--api-key-id) and the API secret file path (--api-key-file). Below is how
a user would execute this script named example.py (the key has been
truncated for readability):

python example.py --api-key-id 596cc79e5d91 --api-key-file=secret --api-key-legacy

Here the API key ID is passed in clear text while the API secret is stored in a
file in the same working directory as the example.py script. Securing the
contents of the secret file is not within the scope of this book, but the file
should ideally be removed after its use. There are additional arguments in
the credentials.py code for passing an Intersight URL if using a Connected
Virtual Appliance or Private Virtual Appliance, as well as a flag for using a v2
API key.


The argument parser gets the needed authentication data and can be
referenced in other scripts to add additional script-specific arguments. An
example of this will be shown later as part of the selected use case.

Client configuration
When all the necessary arguments needed for authentication have been
defined and parsed, the API client configuration can be created. The details
of the code below should not be analyzed too closely as they may change
over time with the addition of different signing algorithms. What is important
to note, however, is that the algorithms will change based on the version of
the API key being used, as noted in bold in the code below:

if args.api_key_id:
# HTTP signature scheme.
if args.api_key_legacy:
signing_scheme = intersight.signing.SCHEME_RSA_SHA256
signing_algorithm = intersight.signing.ALGORITHM_RSASSA_PKCS1v15
else:
signing_scheme = intersight.signing.SCHEME_HS2019
signing_algorithm = intersight.signing.ALGORITHM_ECDSA_MODE_FIPS_186_3
configuration = intersight.Configuration(
host=args.url,
signing_info=intersight.HttpSigningConfiguration(
key_id=args.api_key_id,
private_key_path=args.api_key_file,
signing_scheme=signing_scheme,
signing_algorithm=signing_algorithm,
hash_algorithm=intersight.signing.HASH_SHA256,
signed_headers=[intersight.signing.HEADER_REQUEST_TARGET,
intersight.signing.HEADER_CREATED,
intersight.signing.HEADER_EXPIRES,
intersight.signing.HEADER_HOST,
intersight.signing.HEADER_DATE,
intersight.signing.HEADER_DIGEST,
'Content-Type',
'User-Agent'
],
signature_max_validity=datetime.timedelta(minutes=5)
)
)


else:
raise Exception('Must provide API key information to configure ' +
'at least one authentication scheme')

if args.ignore_tls:
configuration.verify_ssl = False

Creating a client instance


Finally, an instance of the Intersight API client can be instantiated using the
created configuration from above and returned for use in other code
projects, as shown in future examples.

apiClient = intersight.ApiClient(configuration)
apiClient.set_default_header('referer', args.url)
apiClient.set_default_header('x-requested-with', 'XMLHttpRequest')
apiClient.set_default_header('Content-Type', 'application/json')

return apiClient

Performance note
Performance of the Intersight Python SDK is vastly improved by using a
v3 API key, due to the performance benefits of the elliptical curve algorithm
over RSA. Note, the legacy key is only used below as an example, and to be
consistent with what was shown in the earlier phases.

Building the use case code

The credentials.py file is a great piece of reusable code that can be
incorporated as the foundation for any future programs leveraging the
Intersight Python SDK. Now that connectivity is complete, a new project
named toggle_locator_led.py can be created to handle setting the Locator
LED. The code will begin by leveraging the config_credentials function
shown previously:


import argparse
from datetime import timedelta
import logging
from pprint import pformat
import traceback
from typing import Text, Type
from time import sleep

import intersight
import credentials
from helpers import format_time, print_results_to_table
#* Place script specific intersight api imports here
import intersight.api.compute_api

# Module-level logger used by the functions below
logger = logging.getLogger(__name__)

def main():

client = credentials.config_credentials()

if __name__ == "__main__":
main()

Instead of hard coding the serial number into the query, as was done in the
Walk phase, this value should be passed in as another argument. Supporting
a user-passed serial number requires an additional argparse argument as
shown below:

def main():

# Get existing argument parser from credentials
parser = credentials.Parser
parser.description = 'Toggle locator led of a rack server by serial'

# Place script specific arguments here
parser.add_argument('--serial', required=True, help='server serial number')

client = credentials.config_credentials()

args = parser.parse_args()

try:
# Start main code here
# Toggle locator led for compute rack unit with supplied serial
toggle_locator_led(client, args.serial)


except intersight.OpenApiException as e:
logger.error("Exception when calling API: %s\n" % e)
traceback.print_exc()

if __name__ == "__main__":
main()

As noted in the new argument description, the function used to handle
changing the Locator LED state will not set a static value; rather, it will read
the current Locator LED state and toggle it. This means that if the current
Locator LED state is Off, then the new state will be On, and vice versa. The
serial number can be passed in when executing the script, similar to what
was shown previously (the key has been truncated for readability):

python example.py --api-key-id 596cc79e5d91 --api-key-file=secret --api-key-legacy --serial FCH2109V1EB

Below is the code for the toggle_locator_led function used for toggling the
Locator LED state:

def toggle_locator_led(client: Type[intersight.ApiClient], serial: Text) -> None:
logger.info(f"Toggling locator led for {serial}")

# Get compute class instance
api_instance = intersight.api.compute_api.ComputeApi(client)

# Find rack server resource by Serial
server_query = api_instance.get_compute_rack_unit_list(
filter=f"Serial eq {serial}",
expand="LocatorLed"
)

# Store locator led state to a var and toggle the value
led_state = server_query.results[0].locator_led.oper_state
logger.info(f"Previous led state = {led_state}")
new_state = "On" if led_state == "off" else "Off"


# Get server settings by Server Moid
server_settings_query = api_instance.get_compute_server_setting_list(
filter=f"Server.Moid eq '{server_query.results[0].moid}'"
)

# Update server settings with toggled led value
update_query = api_instance.update_compute_server_setting(
moid=server_settings_query.results[0].moid,
compute_server_setting=dict(admin_locator_led_state=new_state)
)
# Pause for eventual consistency to catch up
new_settings_query = api_instance.get_compute_server_setting_list(
filter=f"Server.Moid eq '{server_query.results[0].moid}'",
expand="LocatorLed"
)
retries = 0
while(retries <= 10 and
new_settings_query.results[0].config_state.lower() == 'applying'):
logger.info("Waiting for eventual consistency to occur")
sleep(2)
# Retrieve new led operational state
new_settings_query = api_instance.get_compute_server_setting_list(
filter=f"Server.Moid eq '{server_query.results[0].moid}'",
expand="LocatorLed"
)
retries += 1

current_state = new_settings_query.results[0].locator_led.oper_state
if current_state.lower() != new_state.lower():
logger.error("Timeout occurred. Led operstate never changed")
else:
logger.info(f"New led state = {current_state}")

The steps within the function should look very similar to what was executed
within Postman during the Walk phase. The only major difference is the use
of an API helper class for the rack server (the ComputeApi class):

api_instance = intersight.api.compute_api.ComputeApi(client)


From here the steps are as follows:

1 Get the target server by serial number

server_query = api_instance.get_compute_rack_unit_list(
filter=f"Serial eq {serial}",
expand="LocatorLed"
)

2 Store the current state of the locator LED into a variable and calculate
the new desired value

led_state = server_query.results[0].locator_led.oper_state
logger.info(f"Previous led state = {led_state}")
new_state = "On" if led_state == "off" else "Off"

3 Retrieve the target server settings by querying via the target server
Moid

server_settings_query = api_instance.get_compute_server_setting_list(
filter=f"Server.Moid eq '{server_query.results[0].moid}'"
)

4 Update the server settings with the new desired locator LED state

update_query = api_instance.update_compute_server_setting(
moid=server_settings_query.results[0].moid,
compute_server_setting=dict(admin_locator_led_state=new_state)
)


5 Verify the new locator LED state

new_settings_query = api_instance.get_compute_server_setting_list(
filter=f"Server.Moid eq '{server_query.results[0].moid}'",
expand="LocatorLed"
)
retries = 0
while(retries <= 10 and
new_settings_query.results[0].config_state.lower() == 'applying'):
logger.info("Waiting for eventual consistency to occur")
sleep(2)
# Retrieve new led operational state
new_settings_query = api_instance.get_compute_server_setting_list(
filter=f"Server.Moid eq '{server_query.results[0].moid}'",
expand="LocatorLed")
retries += 1
current_state = new_settings_query.results[0].locator_led.oper_state
if current_state.lower() != new_state.lower():
logger.error("Timeout occurred. Led operstate never changed")
else:
logger.info(f"New led state = {current_state}")

In step 5, a while loop is used to follow the configuration state of the applied
changes. Intersight provides eventual consistency, so the changes are not
immediately applied. The while loop continues to query the server settings,
looking for the config_state to change from “applying” to something else.
Once a timeout occurs or the configuration state is no longer “applying,” the
final server settings query is used to check the new value of the locator LED.
If the value doesn’t match what is expected, an error is printed to the user.


The output of the above script for the example should look similar to the
following (timestamps were truncated for readability):

11:56:25 [INFO] [toggle_locator_led.py:21] Toggling locator led for FCH2109V1EB
11:56:27 [INFO] [toggle_locator_led.py:31] Previous led state = off
11:56:30 [INFO] [toggle_locator_led.py:44] Waiting for eventual consistency to occur
11:56:33 [INFO] [toggle_locator_led.py:44] Waiting for eventual consistency to occur
11:56:35 [INFO] [toggle_locator_led.py:44] Waiting for eventual consistency to occur
11:56:37 [INFO] [toggle_locator_led.py:44] Waiting for eventual consistency to occur
11:56:40 [INFO] [toggle_locator_led.py:44] Waiting for eventual consistency to occur
11:56:42 [INFO] [toggle_locator_led.py:53] New led state = on


Advanced usage

Traditionally, a developer uses an API to read large amounts of data that are
then processed by the client application. It would be trivial in an environment
with a dozen servers to retrieve all servers and simply loop through them to
find the one server with the desired serial number. This is mildly inefficient at
a small scale but wildly wasteful at a larger scale, negatively affecting
performance.

The Intersight API provides additional query options that can be leveraged to
perform server-side filtering, expansion, and aggregation. In other words,
additional processing, such as counting the number of entries, can be done
by the Intersight servers in the cloud before the payload is transmitted to the
client. This can reduce payload size, reduce client-side processing, and
even reduce the number of required API calls, streamlining the entire effort.

Reduce payload size


$select
The $select query option specifies the desired properties to be returned by
the API operation. For example, the endpoint that retrieves a list of alarms
returns more than 30 properties (some of them with additional nested
properties) for every alarm, and the endpoint that retrieves a list of rack
servers returns more than 80 properties for every server. Using
the $select parameter will shorten that list to only the properties that the
client cares about.


Figure 31: An example of using the $select option query

Note that most GET operations return the ClassId, Moid, and ObjectType
regardless of what is specified by $select. In the above example, the
resulting payload will contain six properties per alarm even though only three
were requested.
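
Through the Python SDK covered earlier in this chapter, the same option is passed as a keyword argument (the SDK drops the leading $ from query option names); a minimal sketch:

import intersight.api.cond_api

# client is an authenticated ApiClient, as built by credentials.py
api_instance = intersight.api.cond_api.CondApi(client)
# Return only these properties for each alarm
alarms = api_instance.get_cond_alarm_list(select="Severity,Description,CreationTime")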


$filter
The $filter query option specifies one or more expressions to use to
restrict which objects will be returned. An Intersight account could have
hundreds or thousands of alarms (hopefully most have a status of
“Cleared”), so it is useful to ask Intersight only to return alarms of specified
severity. This will greatly reduce the amount of data returned to the client.
The example below shows $filter being used to request only alarms of
“Critical” severity.

Figure 32: An example of using the $filter option query

Multiple expressions can be combined using logical operators such as
“and”. For example, the following $filter requests alarms of “Critical”
severity that have occurred since February 1, 2021.

$filter: Severity eq Critical and CreationTime gt 2021-02-01T00:00:00Z

The different comparison and logical operators available and their exact
syntax are detailed in the online user guide at:
https://intersight.com/apidocs/introduction/query/#filter-query-option-filtering-the-resources


$inlinecount
The $inlinecount query option adds a Count property showing the total
number of objects in the returned payload. The payload is unaltered beyond
the addition of this property. The benefit of this query option is that the client
does not have to count the number of items returned from the API call:

Figure 33: An example of using the $inlinecount query option

$count
The $count query option operates much like the $inlinecount query option
except that it replaces the payload with the Count property. An example of
the return payload when $count is set to true is shown below:

{
"ObjectType": "mo.DocumentCount",
"Count": 242
}


Combining query options


Intersight can accept multiple query options with each API call, allowing the
developer to optimize the results. The example below shows combining
$select and a complex $filter to retrieve the CreationTime and
Description for every critical alarm since February 1, 2021.
$inlinecount was added so that Intersight can compute the total number of
alarms that match that criteria, saving the client from having to write that
code.
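
Through the Python SDK, the combination might look like the sketch below (assuming the inlinecount parameter accepts "allpages", mirroring the $inlinecount query option):

alarm_query = api_instance.get_cond_alarm_list(
    filter="Severity eq Critical and CreationTime gt 2021-02-01T00:00:00Z",
    select="CreationTime,Description",
    inlinecount="allpages"
)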

Figure 34: An example of combining multiple query options


Reduce client-side processing


$apply for aggregation
The $apply query option performs aggregation. Those familiar with SQL will
notice a similarity to aggregation in SQL, where the server groups the results
by one or more parameters and then calculates a minimum, maximum, sum,
or count of another parameter. The “as” keyword is used to name the new
calculated field. The complete syntax is maintained here:
https://intersight.com/apidocs/introduction/query/#apply-query-option

The example below shows a single API call that returns the total number of
alarms for each severity level.

Figure 35: An example of simple aggregation

Without server-side aggregation, this would require a lot of client-side code
and processing time. Instead, Intersight returns exactly what the client
needs.


{
"ObjectType": "mo.AggregateTransform",
"Results": [
{
"Severity": "Warning",
"Total": 52
},
{
"Severity": "Info",
"Total": 15
},
{
"Severity": "Cleared",
"Total": 142
},
{
"Severity": "Critical",
"Total": 33
}
]
}
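
A sketch of the same aggregation through the Python SDK, using the CondApi instance shown earlier (the $apply expression here is an assumption modeled on the figure above and the OData aggregation syntax):

severity_totals = api_instance.get_cond_alarm_list(
    apply="groupby((Severity), aggregate($count as Total))"
)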

$orderby for sorting


An API response can be sorted by any parameter (either ascending or
descending) using the $orderby query option. This eliminates the need for
computationally expensive sorting by the client.

The following example shows two levels of sorting alarms: first in ascending
order by Severity, then in descending order by CreationTime (newest alarms
first).
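
Mirroring the use cases later in this chapter, where a leading minus sign indicates descending order, a sketch of that two-level sort through the Python SDK:

sorted_alarms = api_instance.get_cond_alarm_list(orderby="Severity,-CreationTime")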


Figure 36: An example of sorting in both ascending and descending order

Reduce the number of required API calls


Some API responses include references to other objects. For example, the
locator LED has a property called Parent that looks like this (the link property
was truncated for readability):

"Parent": {
"ClassId": "mo.MoRef",
"Moid": "5c8014c07774666a78a029ff",
"ObjectType": "compute.RackUnit",
"link": "https://www.intersight.com/api/v1/compute/RackUnits/5c..."
}

In this case, the parent is a server, and the client application might need the
serial number of that server. Normally this would require an additional call to
Intersight to obtain the server serial number. For a list of 100 servers, that
would require a unique API call to Intersight for each of those 100 servers.

The $expand query option can be used here to instruct Intersight to fetch the
details of each parent and return those results inline. For a list of 100
servers, $expand eliminates those additional 100 API calls.


Figure 37: A simple example showing the usage of the $expand query option

Expanding the parent (which is a server) will add more than 80 parameters
to the new payload. Fortunately, the $select option mentioned in a previous
section can be combined with $expand to limit the information returned by
$expand to only a subset of parameters. The following example limits the
scope of $expand to returning only the serial number of the server (Parent)
for each Locator LED.

Figure 38: An example using the $select and $expand option queries together
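
A sketch of this request through the Python SDK, assuming the locator LED list call lives on an EquipmentApi class per the SDK's naming convention:

import intersight.api.equipment_api

api_instance = intersight.api.equipment_api.EquipmentApi(client)
# Expand each LED's Parent (the server), returning only its Serial
leds = api_instance.get_equipment_locator_led_list(expand="Parent($select=Serial)")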

The SearchItems API can also reduce the number of required API calls.
Every object type (rack server, blade server, fabric interconnect, chassis,
pool, policy, profile, etc.) is accessed by a different API endpoint. To locate
a server by serial number, an administrator would have to query both the
Compute/RackUnits and the Compute/Blades endpoints. To locate hardware
by a tag, an administrator would have to query both of those endpoints and
also the Equipment/Chassis and Equipment/SwitchCard endpoints and
more. Instead, a single API endpoint can be used to search across all
ClassIds within Intersight.

The following example shows how to find a specific serial number without
knowing what kind of device it is.

Figure 39: An example to find a specific serial number

Another common request is to locate every object with a specific tag.
Administrators can search for the presence of a given tag (regardless of its
value) by setting $filter to:

Tags/any(t:t/Key eq 'owner')

Administrators can search for a tag with a specific value by setting
$filter to:

Tags/any(t:t/Key eq 'location' and t/Value eq 'austin')

This syntax is also described under “Lambda Operators” at:
https://intersight.com/apidocs/introduction/query/#filter-query-option-filtering-the-resources
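
A sketch of a tag search through the Python SDK, assuming the search resource is exposed on a SearchApi class per the SDK's naming convention:

import intersight.api.search_api

api_instance = intersight.api.search_api.SearchApi(client)
# Find every object carrying the tag location=austin, regardless of type
results = api_instance.get_search_search_item_list(
    filter="Tags/any(t:t/Key eq 'location' and t/Value eq 'austin')"
)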


Next steps: use cases

The Intersight GUI cannot possibly cover every use case an IT administrator
might encounter. Programmability provides teams with agility and flexibility
beyond the GUI. This section will cover a few use cases that Intersight
customers have encountered that were easily solved with just a few lines of
code. Each use case will be presented with both the Python and PowerShell
SDKs for Intersight.

Lines in PowerShell snippets that are too long to fit on one line will be split
using the PowerShell standard backtick (`) for multiline statements.

Use case 1: Retrieve all critical alarms within the last 7 days

A common request is to export all Intersight alarms into Splunk. This
example, however, will retrieve only critical alarms to illustrate a more
complex use case.

PowerShell
This short script first creates a string that represents seven days ago. Get-
Date will retrieve the current instant in time and AddDays(-7) will subtract
seven days from the current instant in time. The string is in the format that
the Intersight API uses for dates (including fractions of a second).

The second step is to create a string that will be used as a filter in the API
call. The variable $myfilter specifies that alarm severity must be critical and
the creation time of the alarm must be greater than the date string built in
the previous step.


The last step is to simply execute the GET with the $filter set to $myfilter.
There is also a $select option specified to minimize the size of the returned
payload; this is optional.

$mydate = (Get-Date).AddDays(-7).ToString("yyyy-MM-ddTHH:mm:ss.fffZ")
$myfilter = "Severity eq Critical and CreationTime gt $mydate"
(Get-IntersightCondAlarmList `
-VarFilter $myfilter `
-Select "CreationTime,Description" `
).ActualInstance.Results

Python
For the sake of formatting, the example snippet below does not show the
required client configuration discussed previously. Import statements are
specific to just the snippet.

import intersight
import intersight.api.cond_api
from datetime import datetime, timedelta

#* Get condition class instance
api_instance = intersight.api.cond_api.CondApi(client)
search_period = datetime.now() - timedelta(days=7)
# truncate the time string from Python’s 6 digit microseconds to 3 digits
formatted_period = f"{search_period.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3]}Z"
query_filter = f"Severity eq Critical and CreationTime gt {formatted_period}"
#* Only include CreationTime and Description in results
query_select = "CreationTime,Description"

#* Get alarms using query parameters
alarm_query = api_instance.get_cond_alarm_list(
filter=query_filter,
select=query_select
)


Use case 2: Pull valuable details from the audit log
The Intersight audit log contains all the actions performed by every user in
the account including login, create, modify and delete operations. This
makes the audit log a valuable source of information that is not easy to
consume without some aggregation.

Every entry in the audit log has an Event such as Login, Created, Modified, or
Deleted and a MoType (Managed Object Type) such as
hyperflex.ClusterProfile, compute.Blade, or os.Install (not a complete list).

PowerShell
This example retrieves the entire audit log and groups by both email and
MoType. It aggregates the maximum (latest) date. The output of this
command will show the last date each user interacted with each managed
object type, sorting the results by email. One line of code is all that is
required to summarize the entire audit log:

(Get-IntersightAaaAuditRecordList `
-Apply 'groupby((Email,MoType), aggregate(CreateTime with max as Latest))' `
-Orderby Email `
).ActualInstance.Results

Sample output would look like this:

Email Latest MoType
----- ------ ------
user1@cisco.com 11/5/2020 4:06:40 PM recovery.BackupProfile
user1@cisco.com 10/30/2020 8:07:19 PM ntp.Policy
user1@cisco.com 12/12/2018 2:16:12 PM iam.Account
user1@cisco.com 10/30/2020 8:01:21 PM fabric.SwitchProfile
user2@cisco.com 10/8/2020 5:32:12 PM os.Install
user2@cisco.com 11/18/2020 6:00:24 PM macpool.Pool

It also takes only one line of code to show the last time each user logged
into Intersight. This command will filter for only login events, group by email
while saving the latest date, and then sort the results in descending order
(most recent event first):

(Get-IntersightAaaAuditRecordList `
    -VarFilter 'Event eq Login' `
    -Apply 'groupby((Email), aggregate(CreateTime with max as Latest))' `
    -Orderby '-Latest' `
).ActualInstance.Results

Sample output would look like this:

Email            Latest
-----            ------
user1@cisco.com  1/23/2021 2:32:46 AM
user2@cisco.com  1/23/2021 2:19:47 AM
user3@cisco.com  1/22/2021 11:12:20 PM

Python
Import statements are specific to just the snippet.

import intersight
import intersight.api.aaa_api

# Get aaa class instance
api_instance = intersight.api.aaa_api.AaaApi(client)
# Find all aaa records for login attempts
query_filter = "Event eq Login"
# Group results by Email, keeping the latest CreateTime as a property named `Latest`
query_apply = "groupby((Email), aggregate(CreateTime with max as Latest))"
# Sort by Latest in descending order
query_order_by = "-Latest"
aaa_query = api_instance.get_aaa_audit_record_list(
    filter=query_filter,
    apply=query_apply,
    orderby=query_order_by
)
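
The query response exposes the grouped rows through its results property, the same property used in the rack server example later in this chapter. A short usage sketch to print them:

for record in aaa_query.results:
    print(record)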


Use case 3: Apply tags specified in a CSV file


Intersight tags are a powerful construct that makes searching for groups of
devices easy. One Cisco customer wanted to attach location tags (row
number and rack number) to each server in their data center. They had the
information in a spreadsheet but did not want to manually tag each server
through the GUI. How can they create tags in Intersight from locations in a
spreadsheet? Programmability.

Here is a sample of what that spreadsheet looks like. The script must find
each server by serial number and apply three tags (row, rack, location) to
each server.

serial       row  rack  location
FCH2109V2DJ  1    3     austin
FCH2109V2RX  1    4     austin
FCH2109V0JH  2    9     austin
FCH2109V1H3  2    9     austin
FCH2109V0BB  2    10    austin
FCH2109V1FC  2    10    austin

PowerShell
This sample code steps through each line of the CSV file, locating each
server’s Moid by searching for its serial number. It then creates the tag
structure using the values specified in the CSV for that server. Lastly, it sets
the tags. This script does not use hard-coded keys for the tags, instead
using the CSV column headings themselves as keys.


foreach($csv_row in (Import-Csv servers.csv)) {
    # get the server Moid by searching for the server by serial number
    $myfilter = "Serial eq $($csv_row.serial)"
    $response = (Get-IntersightComputeRackUnitList `
        -VarFilter $myfilter `
        -Select "Moid,Tags" `
    ).ActualInstance.Results
    $moid = $response.moid

    # remove serial number because we don't need it anymore
    # and don't want it applied as a tag
    $csv_row.PSObject.Properties.Remove('serial')

    # create tags based on column headings in the CSV file
    $tags = @()
    $csv_row.PSObject.Properties | % {
        $temp = New-Object PSObject -Property @{
            Key="$($_.Name)"; Value="$($_.Value)"
        }
        $tags += $temp
    }

    $settings = @{Tags=$tags}

    Set-IntersightComputeRackUnit -Moid $moid -ComputeRackUnit $settings
}

Python
Import statements are specific to just the snippet.

import intersight
import credentials
import intersight.api.compute_api
from csv import DictReader

# Get existing argument parser from credentials
parser = credentials.Parser
parser.description = 'Intersight script to set rack server tags from csv file'

# Add argument for csv file name
parser.add_argument('--csv_file', required=True, help='Path to csv file')

# Create intersight api client instance and parse authentication arguments
client = credentials.config_credentials()

# Parse arguments again to retrieve csv file path
args = parser.parse_args()

try:
    # Get compute class instance
    api_instance = intersight.api.compute_api.ComputeApi(client)

    with open(args.csv_file, newline='') as csvfile:
        reader = DictReader(csvfile)
        for row in reader:
            # Construct tag values
            tags = []
            for tag_key in [k for k in row.keys() if k != 'serial']:
                tags.append(dict(
                    key=tag_key,
                    value=row[tag_key]
                ))
            # Find rack server resource by Serial
            server_query = api_instance.get_compute_rack_unit_list(
                filter=f"Serial eq {row['serial']}"
            )
            # Update rack server tags
            server_update = api_instance.update_compute_rack_unit(
                moid=server_query.results[0].moid,
                compute_rack_unit=dict(tags=tags)
            )
except intersight.ApiException as exception:
    # except block added to complete the try statement; ApiException is the
    # SDK's standard error type for failed API calls
    print(exception)

Use case 4: Toggle locator LED


The Python version of this use case was covered earlier in this chapter in the
Run section of Crawl, Walk, Run. Here only the PowerShell version will be
shown. As discussed earlier, this script will first locate the appropriate server
and the current state of its Locator LED. It will then assign the opposite state
to the Locator LED and monitor the progress of that assignment.


$server = (Get-IntersightComputeRackUnitList `
    -Select LocatorLed `
    -VarFilter "Serial eq FCH2109VXYZ" `
    -Expand LocatorLed `
).ActualInstance.Results
$setting_moid = ((Get-IntersightComputeServerSettingList `
    -VarFilter "Server.Moid eq '$($server.Moid)'" `
).ActualInstance).Results.Moid

# determine the current state of the LED and set it to the opposite state
if($server.LocatorLed.OperState -like 'on') {
    $setting = ConvertFrom-Json '{"AdminLocatorLedState":"Off"}'
} else {
    $setting = ConvertFrom-Json '{"AdminLocatorLedState":"On"}'
}
# initiate the changing of the locator LED
Set-IntersightComputeServerSetting `
    -Moid $setting_moid `
    -ComputeServerSetting $setting | Out-Null
Write-Host "Previous LED state = $($server.LocatorLed.OperState)"

# wait for LED state to change
while( (Get-IntersightComputeServerSettingByMoid `
    -Moid $setting_moid).ConfigState -like 'applying' )
{
    Write-Host " waiting..."
    Start-Sleep -Seconds 1
}
# display current state of locator LED
(Get-IntersightEquipmentLocatorLedByMoid -Moid $server.LocatorLed.Moid).OperState

Use case 5: Configuring Proactive Support


There are certain simple, one-time administrative actions that need to be
performed directly against the Intersight API and these do not merit the use
of an external client script or toolset. A great example of such an action is
setting the Proactive Support email address for a given Intersight account as
mentioned in the Infrastructure Operations chapter.


Opting in to Proactive RMAs


The first step in this configuration is to determine the Moid of the Intersight account for the organization. The Account ID is located in the Account Details section of the Intersight Settings (http://intersight.com/an/settings/manageaccount/) or it can be gathered from the API Reference tool (http://intersight.com/apidocs/apirefs/api/v1/iam/Accounts/get/) using the GET method. The $select query parameter with the Tags value can be utilized to remove some unnecessary fields and make the AccountMoid easier to locate. The figure below shows how to obtain the Moid.

Figure 40: Determine account Moid

Before configuring a default email address for Proactive Support notifications, it is recommended to check if an address has already been specified with a tag.


This can be verified using a GET method in the API Reference tool (https://intersight.com/apidocs/apirefs/api/v1/iam/Accounts/get/):

• The Key field should be set to $select

• The Value field should be set to Tags

If an email address has not been specified, the Tags field in the response will be blank, as shown below:

Figure 41: Determine if Proactive Support email address is configured

To explicitly configure the email address to be associated with both the Support Case and the RMA, an account administrator can create or set the following tag:

{"Key":"AutoRMAEmail","Value":"UserName@AcmeCo.com"}


This can be performed using the API Reference tool with a POST operation (https://intersight.com/apidocs/apirefs/api/v1/iam/Accounts/%7BMoid%7D/post/) as shown in the figure below:

Figure 42: Configure Proactive Support email address

As shown in the figure above, tags are created by setting the Tags property.


{
  "Tags": [
    {
      "Key": "AutoRMAEmail",
      "Value": "UserName@AcmeCo.com"
    }
  ]
}
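
The same tag could also be set with one of the SDKs instead of the API Reference tool. The sketch below is an assumption rather than a documented recipe: it presumes the PowerShell SDK exposes a Set-IntersightIamAccount cmdlet following the same Set-* pattern used in Use case 3, and that $account_moid holds the account Moid found above.

# assumption: Set-IntersightIamAccount follows the SDK's standard Set-* pattern
$settings = @{Tags=@(@{Key="AutoRMAEmail"; Value="UserName@AcmeCo.com"})}
Set-IntersightIamAccount -Moid $account_moid -IamAccount $settings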

Opting out of Proactive RMAs

Opting out of Proactive RMAs can be configured at the account level. To opt out, the tag must be set as shown below:

{"Key":"AutoRMA","Value":"False"}

This can be performed using the API Reference tool with the POST method (https://intersight.com/apidocs/apirefs/api/v1/iam/Accounts/%7BMoid%7D/post/) as shown in the figure below:


Figure 43: Opt-out of Proactive Support

As shown in the figure above, tags are created by setting the Tags property.

{
  "Tags": [
    {
      "Key": "AutoRMA",
      "Value": "False"
    }
  ]
}

To opt back into Proactive RMAs, the tag value can be changed to True; removing the tag entirely results in the same outcome.

Infrastructure as Code

Introduction

When discussing Intersight, much of the conversation is naturally focused on the graphical user interface. However, as mentioned in the Foundations chapter, there are multiple interfaces to the platform, which is not merely a nice-to-have but a core capability of a cloud operations platform. The Intersight API, which was covered in depth in the Programmability chapter, is another interface into Intersight and the linchpin for the capabilities of this cloud operations platform.

This API provides a model where all the infrastructure resources in an organization's cloud can be codified. The process of modeling cloud resources in code is known as Infrastructure as Code and allows both physical and virtual resources to be managed.

Intersight is unique in that it serves as a bridge between different IT teams. Rather than introducing new tools, and often dissension among teams, Intersight offers a new paradigm that allows traditional infrastructure to be both operated and maintained with the agility of cloud-native infrastructure, while at the same time providing much of the stability and governance principles of traditional infrastructure to the newer, often cloud-native infrastructure that is required for business agility.

Several of the previous chapters covering such topics as Infrastructure Operations, Server Operations, and Storage Operations leaned toward more traditional infrastructure types. The remainder of this chapter will pivot and move into Intersight's capabilities that apply to more modern or cloud-native application environments. Specifically, Cisco and HashiCorp are building solutions to support all aspects of bringing up and configuring infrastructure across any workload, in any location, and any cloud.


HashiCorp is the industry leader in multicloud infrastructure automation software. Their Terraform product provides a consistent CLI workflow to manage hundreds of cloud services. Terraform codifies infrastructure APIs into declarative configuration files and integrates with Cisco Intersight through both traditional providers and exciting, co-developed native capabilities.


What is Infrastructure as Code?

IT infrastructure has long been thought of as an inhibitor to change, and for good reason. When working with end user requests on a traditional data center, IT admins usually must manually create resources based on tickets that have been filed. Each request typically involves various touchpoints, including network, storage, compute, operating systems, naming conventions, and tagging, among many others. More touchpoints mean more potential for error, which means more potential for an outage. This typically leads to additional processes (red tape) to thoroughly vet any proposed changes, which in turn often leads to the introduction of shadow IT.

This manual process may work for a small set of resources but quickly
becomes unmanageable at scale. Application velocity drives business value,
and all applications need infrastructure. To keep pace, IT organizations look
to employ automation into their processes to help reduce time to market
while also attempting to reduce risks and optimize operations.

Automation is the goal but how does an organization ensure their processes
are repeatable, standardized, versioned, and above all, thoroughly tested?
The term Infrastructure as Code (IaC) refers to an automation approach for
managing IT resources that applies the same iterative, repeatable,
documented process of modern software development to infrastructure
components. Adopting an IaC practice enables IT infrastructure to quickly
support and enable change. The foundational principle of IaC is this:

Treat all infrastructure as configuration code, or more simply put, codify everything.

IaC allows organizations to manage their infrastructure, including networks, virtual machines, servers, load balancers, storage, and more, in a declarative and human-readable model. This model then supports software development techniques to eliminate config drift and apply extensive testing before committing any changes. Continuous Integration/Continuous Delivery (CI/CD) toolchains automatically test, deploy, and track configuration changes to the infrastructure, eliminating the need for IT operators or developers to manually provision or manage infrastructure.

How does IaC work?

Figure 1: IaC flow

The flow of IaC involves the following (as noted in the image above):

1 Infrastructure Ops or Developers write the specifications for their infrastructure in human-readable code in a domain-specific language

2 These specifications are written to a file and stored in a version control system such as GitHub, GitLab, or Bitbucket

3 When executed, the code takes the necessary actions to create and configure the resources

Modeling infrastructure configuration as code comes in one of two forms: declarative or imperative. A declarative model describes what needs to be provisioned, versus an imperative model which describes how it is provisioned. In the case of IaC, a declarative approach would be to specify a list of resources that need to be created, whereas an imperative approach would specify each of the commands to create the resources.

One way to think about this is to look at how things operate at an airport.
The air traffic control system is a good example of a declarative control
system. Air traffic controllers tell pilots to take off or land in particular places
but they do not describe how to reach them. Flying the plane, adjusting the
airspeed, flaps, landing gear, etc. falls on the intelligent, capable, and
independent pilot. In a system managed through declarative control,
underlying objects handle their own configuration state changes and are
responsible only for passing exceptions or faults back to the control system.
This approach reduces the burden and complexity of the control system and
allows greater scale.

The example of a declarative model shown below deploys a VM in AWS with Terraform code. Simply replacing the “example” value will provision a new resource.

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}

Cisco's UCS and Application Centric Infrastructure (ACI), along with HashiCorp's Terraform, all use declarative models where the end user writes what they want, rather than what to do.

On the other hand, the imperative way to deliver this resource would be to run the AWS CLI to launch a VM.
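
For comparison, a minimal sketch of that imperative approach is shown below, assuming the AWS CLI is installed and credentials are already configured:

# Imperative: explicitly issue the command that launches the instance
aws ec2 run-instances \
    --image-id ami-0c55b159cbfafe1f0 \
    --instance-type t2.micro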


Another IaC principle to consider is the difference between mutable and immutable infrastructure. Mutable infrastructure, as the name implies, can be modified once it has been provisioned, whereas immutable infrastructure cannot be changed once provisioned. If the resource needs to be modified, it must be replaced with a new resource. This property of IaC eliminates the possibility of configuration drift.

Benefits of Infrastructure as Code


Speed and simplicity
With IaC, infrastructure is no longer bound to the operational inefficiencies of
physical configuration and deployment. Resources can now be modeled in a
schema-defined, vendor-specific language and applied via automation. This
process promotes repeatable, well-tested configuration used to quickly
create identical development, test, and production environments. For backup
and recovery, this capability eases the staging of a new environment to
restore into, all by writing and executing code.

Minimize risk
With version control, the entire configuration of the infrastructure is self-
documented and all changes are tracked over time. Version control tooling
allows operators to easily review the deltas for proposed configuration
changes, which typically involves applying a set of tests to evaluate the
expected outcome. This process drastically reduces the risk of unexpected
changes and, because all changes are tracked, offers the ability to quickly
revert to the desired state should problems arise. Likewise, should a
developer who wrote the code choose to leave, this configuration has been
saved in a controlled way, minimizing the risk of losing this intellectual
property.

Audit compliance
Traditional infrastructure requires a frenzy of activity by administrators to provide an auditor with what is requested, usually involving custom scripts to pull together the latest version information across their servers, only to discover this contains just a fraction of what they are looking for. When the entire configuration of the infrastructure is self-documented, IT can provide the network diagram in the form of configuration files that are as up to date as the last time they were applied.

Avoid deployment inconsistencies

Human error is inevitable. Even if the process is extremely well documented, any system that requires manual touch is at risk for an unexpected mistake. Standardizing the setup of infrastructure using code configuration templates helps eliminate inconsistencies and promotes repeatability across environments.

Increase developer efficiency

Using the methodologies that developers are already comfortable with, the CI/CD lifecycle can be applied directly to infrastructure. With a consistent approach to delivery used by both Development and Operations, the process of making infrastructure available to those who need it is accelerated. Systems can be brought online in each of the Development/Test/Production environments in the same manner, without time wasted on the nuances of each as would be done traditionally.

Lower costs
With the adoption of IaC, an IT operator's time can be better spent working on forward-looking tasks, such as planning for the next upgrade or new features, reducing the cost of wasted engineering time.
longer needed, IaC can assist in ensuring these parts are pulled out of
operation. In an on-demand environment, such as a public cloud, costs can
stop accruing immediately, rather than waiting on human intervention. For
environments where the cost has already been incurred in acquiring the
hardware, such as a private data center, the cost savings lie in the ability to
reuse the resources more quickly.


Infrastructure as Code tools


Many public cloud providers have created their own declarative tools to help
with the ease of automation. Organizations should weigh the pros and cons
of each tool to determine which one works best for them. Tools provided by
cloud providers typically only work on their own infrastructure but other tools
can span both public and private clouds. Below are a few examples of both
cloud-specific and cloud-agnostic tools.

AWS CloudFormation
AWS CloudFormation is Amazon's response to Infrastructure as Code. It is an easy way to define a collection of resources and their dependencies in a template to launch and configure them across multiple AWS Cloud regions or accounts. These templates are available in either JSON or YAML format.
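
As an illustrative sketch (the resource name is arbitrary), the same minimal VM from the earlier Terraform example would look like this in a CloudFormation YAML template:

Resources:
  # A single EC2 instance declared as a CloudFormation resource
  ExampleInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0c55b159cbfafe1f0
      InstanceType: t2.micro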

Azure Resource Manager


Azure provides the use of Azure Resource Manager (ARM) templates to
deploy and manage resources in Azure. These templates allow organizations
to easily deploy and manage resources together and repeat the deployment
tasks. The ARM template uses the JSON format to define the configurations
of the resources that need to be built.

Google Cloud Deployment Manager


Google Cloud’s approach to a declarative model is Cloud Deployment
Manager. This model uses YAML to specify the resources required for the
application thus providing an easy and repeatable way to manage resources
in the Google Cloud Platform.

Terraform
The above solutions are tied to their respective cloud providers and will not work for organizations that operate in a hybrid cloud model. Terraform, by contrast, works well for both on-premises data centers and public cloud providers. We will dive deeper into Terraform in the next section.


HashiCorp Terraform

Terraform is a well-adopted tool in the industry that allows organizations to declare what infrastructure they want to build, then adjust it safely and efficiently through versioned changes.

Terraform provides a consistent schema-based approach to representing a given piece of infrastructure's capabilities and configuration. An easy-to-read markup language called HashiCorp Configuration Language (HCL) is
used to represent what infrastructure is needed in a series of configuration
files. Based on these configuration files, Terraform builds a plan that lays out
the desired state, then automates the necessary changes to reach that
desired state. Terraform’s framework includes an extensive set of tooling to
make traditionally complex tasks, such as testing, much simpler.

HashiCorp employs the same approach with Terraform as it does with its
many other developer-centric tools by using a single Go-based
binary/executable. This makes development, installation, and portability
incredibly simple.

Terraform offerings
While the core of Terraform is distributed as a single binary/executable, its
consumption may take many different forms.

As of this publication, there are three different offerings of Terraform available:

• Terraform Open Source
  • Single binary/executable supported on multiple operating systems
  • Terraform CLI driven
  • State locking and distribution handled by the end user

• Terraform Cloud
  • Multi-tenant SaaS platform
  • Team focused
  • A consistent and reliable environment
  • Includes easy access to other required components: shared state, access controls, private registry, etc.

• Terraform Enterprise
  • A self-hosted version of Terraform Cloud
  • Targeted for organizations with requirements such as security, compliance, etc.
  • No resource limits
  • Additional enterprise-grade features such as audit logging and SAML SSO


Concepts in Terraform

Figure 2: Terraform concept

The figure above outlines the Core Terraform workflow, which consists of
three basic steps:

1 Write — Author Infrastructure as Code

2 Plan — Preview changes before applying

3 Apply — Provision reproducible infrastructure

Plan
One of the steps in deploying IaC with Terraform is a planning phase, where an execution plan is created. Terraform looks at the desired state described in the configuration files, then determines which actions are necessary to reach that state.

Apply
The action that commits the configuration file to the infrastructure based on
the steps determined in the Plan step, building out everything that has been
defined.
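
From the CLI, this core workflow maps to a handful of commands, sketched below (terraform init prepares the working directory by downloading the providers the configuration references):

# Initialize the working directory and download required providers
terraform init

# Preview the changes Terraform would make (the Plan step)
terraform plan -out=tfplan

# Apply the saved plan to provision the infrastructure (the Apply step)
terraform apply tfplan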


Providers
A Terraform provider is an abstraction of a target infrastructure provider's
API/service allowing all the capabilities within that API to be configured in a
consistent, schema-defined IaC model. Terraform hosts a public registry,
known as the Terraform Registry, to allow for browsing available providers
and modules (mentioned next) at https://registry.terraform.io/.

Using a provider typically requires some sort of configuration data, such as an API key or credential file, and is used to hook into the resources available from different platforms such as Cisco, AWS, Azure, Kubernetes, etc. All the documentation describing a given provider's capabilities, including configuration examples, can be found within the Terraform Registry.

Modules
A module is a logical container for defining groups of related resources in an
easy-to-consume package. Modules are a great way to share configuration
best practices and mask the complexity of delivering an architectural use
case. The full list of community and vendor contributed modules can be
found on the Terraform Registry, inclusive of documentation and example
usage.
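
As an illustrative sketch (the module name, version, and inputs below are examples from the public registry, not requirements), consuming a module looks like this:

# Consume a community VPC module from the Terraform Registry
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.14.0"

  # Module inputs; the module encapsulates the underlying resources
  name = "example-vpc"
  cidr = "10.0.0.0/16"
}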

State
One of the most unique capabilities within Terraform is its ability to model
and store the state of managed infrastructure and configuration. Modeling
state in a consistent, ongoing manner allows Terraform to automatically
correlate resource dependencies so that configuration changes are
executed in the appropriate order, helps to keep track of useful metadata,
and helps to improve performance in larger environments.
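
The state Terraform tracks can also be inspected directly from the CLI, for example:

# List every resource currently tracked in state
terraform state list

# Show the recorded attributes of a single resource
terraform state show aws_instance.example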


Cisco and Infrastructure as Code

Cisco and HashiCorp are working together to deliver IaC capabilities across
public and private cloud infrastructure by leveraging their similar and
complementary, declarative models. Together they have delivered unique
providers that can be leveraged to completely deploy private cloud
infrastructures along with private cloud resources. The joint solution
combines the power of Terraform, the provisioning tool for building,
changing, and versioning infrastructure safely and efficiently, with the power
of Intersight and ACI. This combination simplifies, automates, optimizes, and
accelerates the entire application deployment lifecycle across the data
center, edge, colo, and cloud.


Figure 3: Complex example of automation workflow

This powerful IaC solution enables organizations with automated server, storage, networking, virtualization, and container infrastructure provisioning, which can dramatically increase the productivity of both the IT operations and the development organizations.

Cisco ACI Provider


Cisco Application Centric Infrastructure (ACI) allows application
requirements to define the network. This architecture simplifies, optimizes,
and accelerates the entire application deployment life cycle. ACI’s
controller-based architecture provides a scalable multi-tenant fabric with a
unified point of automation for management, policy programming,
application deployment, and health monitoring. An ACI provider for
Terraform allows for a consistent, template-driven configuration of ACI resources. Full documentation for the provider can be found at https://registry.terraform.io/providers/CiscoDevNet/aci/latest.

Cisco MSO Provider


Cisco ACI Multi-Site Orchestrator (MSO) is a policy manager for
organizations with multiple often geographically dispersed ACI fabrics. The
MSO Terraform provider is responsible for provisioning, health monitoring,
and managing the full lifecycle of Cisco ACI networking policies and tenant
policies across these ACI sites. Full documentation for the MSO provider can
be found at https://registry.terraform.io/providers/CiscoDevNet/mso/latest/docs.

Cisco Intersight Provider


The breadth of Intersight’s capabilities as a cloud operations platform, as
well as the benefits and architecture of the Intersight API, have been
covered throughout this book in great detail. Having all those capabilities
behind a single API would typically require a very complex Terraform
provider to fully represent all the possible resources an IaC operator would
need for querying and configuration. Thankfully, the Intersight Terraform
provider is auto-generated as part of the API version release cycle, as
described in the Programmability chapter of this book.

With this model, operators can always have access to the latest features and
capabilities that Intersight provides by using the latest version of the
Intersight Terraform provider. Before getting into the details of using the
provider, it is good to review when it does and does not make sense to
manage Intersight resources with Terraform.

When to use the Provider


Terraform allows administrators to manage cloud resources using IaC
principles with all the benefits described in the earlier portions of this chapter. One of the unique and noteworthy features of Terraform is its ability
to track the state of a resource. Tracking the state of cloud infrastructure
helps Terraform automatically determine the dependencies within a set of
resources for lifecycle operations, as well as store infrastructure metadata in
a consistent manner, making it easier to reference existing resources along
with their properties when creating new infrastructure.

This makes Terraform an excellent tool for managing cloud infrastructure whose configuration may change over time or may be referenced by other infrastructure as part of a dependency relationship. However, it does NOT necessarily make sense to use Terraform when a one-time action is being performed or a resource is being created that will never change nor ever be referenced by another resource.

A prime example of a task that is not a great fit for the Intersight provider is
an OS install on a server. OS installs are a one-time action, and although a
resource is created, no modifications can be made to that resource directly.
The state is essentially unused in this scenario. It would make more sense to
have a workflow within Intersight Cloud Orchestrator that took in the
required parameters and kicked off the OS install.

Where it does make sense to use the Intersight Terraform provider is for
pools, policies, profiles, and any other Intersight resource that requires
frequent changes or is frequently referenced by other resources. Managing
such resources as code allows operators to deploy new infrastructure
consistently and quickly, continuously track configuration changes over time
using a source control manager such as GitHub, and leveraging CI/CD pipelines to ensure the quality of the code being submitted. The next
section will cover specific examples of how to use the Intersight provider to
manage Intersight resources.

Setting up the Provider


The Intersight provider is hosted on the Terraform Registry, making it quite easy to include in a project. Always refer to the registry for the latest documentation on capabilities and usage (https://registry.terraform.io/providers/CiscoDevNet/intersight/latest).


With Terraform release 0.13+, a new syntax was introduced for providers to
support different namespaces within the registry. This capability allows
organizations to publish their own modules and providers to the Terraform
registry without having to worry about maintaining their own private instance.
Below is an example of the required provider syntax.

terraform {
  required_providers {
    intersight = {
      source  = "CiscoDevNet/intersight"
      version = "0.1.4"
    }
  }
}

Please note that the version listed above was the latest at the time of writing and will continuously change as newer versions are published. Always check the Terraform Registry for the latest version. If the markup used above looks foreign, it is HashiCorp Configuration Language (HCL), which is used throughout HashiCorp's product offerings. HCL is meant to be both easy to read and machine friendly.

The provider then needs to be configured with the Intersight endpoint, user
API key, and secret, which can be made available as a variable or
referenced to a file, as shown in the below example for defining the
provider:

variable "api_key" {
type = string
description = "Intersight API key id"
}

variable "api_secret_path" {
type = string
description = "Path to Intersight API secret file"
}

provider "intersight" {
apikey = var.api_key
secretkeyfile = var.api_secret_path

Cisco Intersight: A Handbook for Intelligent Cloud Operations


Infrastructure as Code 401

endpoint = "intersight.com"}

In the HCL markup above, variables are first defined representing the API
key and secret. Terraform, like most HashiCorp products, is built as a Go
binary and is thus strictly typed (which is a positive attribute). Using variables
in this manner avoids storing confidential credentials in code that is meant to
be tracked in a source control system. Variables can be passed manually via
the Terraform CLI or stored in a file and referenced. Below is an example
.tfvars file for the examples provided in this section.

api_key = "596cc79e5d91b400010d15ad/5f7b3f297564612d33dddbf9/6001a79b7564612d331fb23b"
api_secret_path = "intersight_api_secret"

organization = "default"
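
As a usage sketch (the file name here is illustrative), a .tfvars file can be passed explicitly on the command line when planning or applying:

# Reference the variables file explicitly
terraform plan -var-file="example.tfvars"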

Creating resources
One of Intersight’s core features is the ability to define infrastructure by
using policies, profiles, and templates. One basic example of a policy would
be an NTP policy. NTP configuration is a common best practice for most
types of infrastructure and, unfortunately, config drift is common, presenting
a painful challenge for IT operators. Corporate NTP servers typically vary based on geographic location and are often bypassed in favor of the externally hosted servers configured into infrastructure by default.

Intersight allows the configuration of NTP settings as a policy that can then
be referenced by other resources. Below is an example of how to create an
NTP policy resource using the provider.

variable "organization" {
type = string
description = "Name of organization for target resources"
}

// Get organization
data "intersight_organization_organization" "org" {
name = var.organization
}

Cisco Intersight: A Handbook for Intelligent Cloud Operations


402 Infrastructure as Code

resource "intersight_ntp_policy" "ntp1" {


name = "exampleNtpPolicy"
enabled = true
ntp_servers = [
"172.16.1.90",
"172.16.1.91"
]
timezone = "America/Chicago"

organization {
object_type = "organization.Organization"
moid = data.intersight_organization_organization.org.moid
}

tags {
key = "location"
value = "austin"
}
}

Almost all resources are tied to one of the organizations which were covered
in the Security chapter of this book. To ensure flexibility, the organization is
passed in from the example variables file shown earlier in this section. The
data designation in the code above represents a data source. Data sources
are used to retrieve information about a resource not locally configured.
Since the default organization was not created by this Terraform code, it can
be retrieved via the data source syntax.

Once queried, a dynamically generated variable is created to access the properties of that resource and can be referenced by other resources as shown in the code snippet below:

organization {
  object_type = "organization.Organization"
  moid        = data.intersight_organization_organization.org.moid
}

The resource definition above creates a new NTP policy resource within
Intersight and adds a custom tag representing the location for which this
policy applies.


Tagging is a recommended best practice for any resources created within Intersight, as it makes it much easier to find and reference resources by meaningful metadata specific to an organization. An example resource that would reference this NTP policy would be a Server Profile. Server Profiles reference many different policy types and pools to create a unique server configuration that can then be applied to a target server via Intersight.

Below is a very minimal example of defining a Server Profile using the Intersight provider.

resource "intersight_server_profile" "server1" {
  name   = "exampleServerProfile"
  action = "No-op"

  tags {
    key   = "location"
    value = "austin"
  }

  organization {
    object_type = "organization.Organization"
    moid        = data.intersight_organization_organization.org.moid
  }
}

The profile definition is again tagged and references the same organization as the NTP policy. All that currently exists within this profile is a name and a tag. The NTP policy resource contains an optional property called profiles, which has not been configured yet. A set, in the case of Terraform code, is an unordered list or array and can contain a simple variable type like a string or a complex variable type such as an object. The tags property shown above is an example of a set variable that contains objects of a defined structure.


The code below shows how the created Server Profile can be linked to the
NTP policy.

resource "intersight_ntp_policy" "ntp1" {


name = "exampleNtpPolicy"
enabled = true
ntp_servers = [
"172.16.1.90",
"172.16.1.91"
]
timezone = "America/Chicago"

organization {
object_type = "organization.Organization"
moid = data.intersight_organization_organization.org.moid
}

tags {
key = "location"
value = "austin"
}

profiles {
moid = intersight_server_profile.server1.id
object_type = "server.Profile"
}
}

Referencing the created Server Profile resource uses a slightly different syntax than a data source, as seen in the profiles block above. One of the nice benefits of how Terraform performs its state modeling is its ability to map out a proper dependency tree so that the order of the resources does not unintentionally break resource configuration. Because the NTP policy references a property of the created Server Profile, Terraform knows the Server Profile configuration must be handled first in the case of a create or update lifecycle event.


Wrapping up

Cisco has taken a giant leap forward with Terraform to bring consistent IaC automation to the edge, public clouds, and private data centers. The Cisco providers for Terraform, along with Intersight, allow different IT teams to adopt IaC practices without having to become programming experts or completely re-tool their operations. For example, once the infrastructure has been provisioned, a call to run a system health check can be made and a ticketing system can be updated with details of the operation, all within a single workflow.

With these capabilities, traditional infrastructure can be operated and maintained with the agility of cloud-native infrastructure, and the possibilities are endless for Cisco Intersight: integrations with Terraform Cloud through APIs and single authentication mechanisms, securely creating and managing resources on-premises, additional orchestration hooks such as Ansible, and eventually wrapping this with policies and governance.

Acknowledgments

The authors would like to express their thanks to Cynthia Johnson, Sean
McGee, Vance Baran, and David Cedrone for their unwavering support and
encouragement from the initial idea through the final production of this book.

In addition, Bhaskar Jayakrishnan, Sebastien Rosset, Vijay Venugopal, Chandra Krishnamurthy, and Dan Hanson have provided outstanding guidance and expertise for both the business and technical aspects of the topics covered.

Justin Barksdale and Michael Doherty served as technical contributors and reviewers for much of the Kubernetes and Infrastructure as Code content. The authors greatly appreciated their time, expertise, and witty interactions.

Throughout the creation of the book, the entire Product Management and
Technical Marketing teams for Intersight served as subject matter experts
and encouragers. The authors would like to especially thank David Soper,
Jeff Foster, Jeff New, Gregory Wilkinson, Matt Ferguson, Michael
Zimmerman, Andrew Horrigan, Vishwanath Jakka, Meenakshi Kaushik, Joost
Van Der Made, Jacob Van Ewyk, Gautham Ravi, Matthew Faiello, Chris
Atkinson, Ravi Mishra, Chris O'Brien, John McDonough, and Eric Williams.

A special thanks to the world-class Sales, Marketing, and Engineering teams at Turbonomic.com for their longstanding partnership, mentorship, and contributions to the Workload Optimization content.

Also, Faith Bosworth from Book Sprints (@booksprints) showed a superhuman amount of patience, a wonderful sense of humor, and strong leadership when needed to bring this book to fruition; it would not have been possible without her, and the authors are extremely grateful for all her efforts.

Finally, and most importantly, the authors would like to express their
gratitude for the support and patience of their families during this time-
intensive process. Thank you!


Author           Title                          LinkedIn
Matthew Baker    Technical Solutions Architect  https://www.linkedin.com/in/mattbake
Brandon Beck     Technical Solutions Architect  https://www.linkedin.com/in/brandon-b-70149695
Doron Chosnek    Technical Solutions Architect  https://www.linkedin.com/in/doronchosnek
Jason McGee      Principal Architect            www.linkedin.com/in/jason-mcgee-b6348894/
Sean McKeown     Technical Solutions Architect  https://www.linkedin.com/in/seanmckeown
Bradley TerEick  Technical Solutions Architect  https://www.linkedin.com/in/bradtereick
Mohit Vaswani    Technical Solutions Architect  https://www.linkedin.com/in/mohitvaswani

This book was written using the Book Sprints (https://www.booksprints.net) method, a strongly facilitated process for collaborative authorship of books. The illustrations and cover were created by Henrik van Leeuwen and Lennart Wolfert, Raewyn Whyte and Christine Davis undertook the copy editing, and the HTML book design was conducted by Manu Vazquez.

Cover photo: Cisco intersight.com

Fonts: Iosevka (Belleve Invis), CiscoSans (Cisco Systems)
