
Data Center Deployment Guide

Revision: H1CY11

The Purpose of this Guide

This guide is a snap-in replacement for the Server Room module in the Cisco Smart Business Architecture (SBA) for Midsize Organizations Borderless Networks Foundation Deployment Guide, which accommodates up to 24 physical servers. If your requirements are growing beyond that number of servers, the Data Center for Midsize Organizations design described in this guide provides additional scalability while also enabling more advanced business operations and applications. This architecture accommodates growth of the server environment up to a combination of 300 physical and virtual servers.

As with the Cisco SBA for Midsize Organizations Borderless Networks Foundation Deployment Guide, this guide is a prescriptive reference that provides step-by-step instructions for deploying the design. It is based on enterprise best-practice principles, yet delivered in a design and cost structure for midsize organizations that are growing and expanding. Based on feedback from customers and partners, Cisco has developed a solid network foundation as a flexible platform that does not require reengineering to include additional network or user services.

This guide is organized into modules. You can start at the beginning or jump to any module. Each part of the guide is designed to stand alone, so you can deploy the Cisco technology for that section without having to complete the previous module. The specific products that make up this design are listed at the end of this document for your convenience. The Cisco SBA for Midsize Organizations Data Center Configuration Files Guide contains the specific configuration files from the products used in the Cisco lab testing and is located on

Tech Tip
If this design does not scale to meet your needs, please refer to the Cisco Validated Designs (CVD) for larger data center deployment models. CVDs can be found on



Who Should Read This Guide

This guide is intended for a reader who:
- Has already read the Cisco SBA for Midsize Organizations Borderless Networks Foundation Deployment Guide
- Has an existing server room and is looking to solve business problems that require technologies more typically found in a data center
- Wants to expand from a few dozen servers to a combined total of up to 300 physical and virtualized servers
- Is interested in expanding the port capacity and overall throughput of the Ethernet environment supporting their application servers
- Is interested in moving to a centralized storage model for increased flexibility and ease of provisioning
- Is interested in increasing compute capacity while reducing the operational complexity of supporting their server environment
- Wants to ensure the security of critical application data
- Wants to improve application availability

Related Documents
Before reviewing this guide, you may find it helpful to be familiar with the following related documents:

Foundation Design Overview

Foundation Deployment Guide

Foundation Configuration Files Guide

Data Center Design Overview


Table of Contents
SBA Overview
Introduction
Architecture Overview
Physical Environment
  Business Overview
Ethernet Infrastructure
  Business Overview
  Technology Overview
  Deployment Details
Storage Infrastructure
  Business Overview
  Technology Overview
  Deployment Details
Network Security
  Business Overview
  Technology Overview
  Deployment Details
Computing Resources
  Business Overview
  Technology Overview
  Deployment Details
Virtual Switching
  Business Overview
  Technology Overview
  Deployment Details
Application Resiliency
  Business Overview
  Technology Overview
  Deployment Details
Appendix A: Product List
Appendix B: SBA for Midsize Organizations Document System



SBA Overview
The Cisco Smart Business Architecture (SBA) is a comprehensive design for networks with up to 2500 users. This out-of-the-box design is simple, fast, affordable, scalable, and flexible. There are three options based on your scaling needs: up to 600 users, up to 1000 users, and up to 2500 users. The Cisco SBA for Midsize Organizations incorporates LAN, WAN, wireless, security, WAN optimization, and unified communication technologies tested together as a solution. This solution-level approach simplifies the system integration normally associated with multiple technologies, allowing you to select the modules that solve your organization's problems rather than worrying about the technical details.

We have designed the Cisco Smart Business Architecture to be easy to configure, deploy, and manage. This architecture:
- Provides a solid network foundation
- Makes deployment fast and easy
- Accelerates your ability to easily deploy additional services
- Avoids the need to re-engineer the core network

By deploying the Cisco Smart Business Architecture, your organization can gain:
- A standardized design, tested and supported by Cisco
- Optimized architectures for midsize organizations with up to 2500 users
- A WAN with up to 75 remote sites, with a headquarters site, regional site, and approximately 25 users per remote site
- A flexible architecture to help ensure easy migration as the organization grows
- Seamless support for quick deployment of wired and wireless network access for data, voice, teleworker, and wireless guest
- Security and high availability for corporate information resources, servers, and Internet-facing applications
- Improved WAN performance and cost reduction through the use of WAN optimization
- Simplified deployment and operation by IT workers with CCNA certification or equivalent experience
- Cisco enterprise-class reliability in products designed for midsize organizations

Guiding Principles
We divided the deployment process into modules according to the following principles:
- Ease of use: A top requirement of Cisco SBA was to develop a design that could be deployed with the minimal amount of configuration and day-two management.
- Cost-effectiveness: Another critical requirement as we selected products was to meet the budget guidelines for midsize organizations.
- Flexibility and scalability: As the organization grows, so too must its infrastructure. Products selected must have the ability to grow or be repurposed within the architecture.
- Reuse: We strived, when possible, to reuse the same products throughout the various modules to minimize the number of products required for spares.

[Figure: The three layers of the architecture: User Services (voice, video, web meetings); Network Services (security, WAN optimization, guest access); Network Foundation (routing, switching, wireless, and Internet)]

The Cisco Smart Business Architecture can be broken down into the following three primary, modular, yet interdependent components for the midsize organization:
- Network Foundation: A network that supports the architecture
- Network Services: Features that operate in the background to improve and enable the user experience without direct user awareness
- User Services: Applications with which a user interacts directly


Midsize organizations encounter many challenges as they work to scale their information-processing capacity to keep up with demand. In a new organization, a small group of server resources may be sufficient to provide necessary applications such as file sharing, e-mail, database applications, and web services. Over time, demand for increased processing capacity, storage capacity, and distinct operational control over specific servers can cause a growth explosion commonly known as server sprawl. A midsize organization can then use some of the same data center technologies that larger organizations use to meet expanding business requirements in a way that keeps capital and operational expenses in check. This deployment guide provides a reference architecture to facilitate rapid adoption of these data center technologies by using a common, best-practices configuration.

The Cisco SBA Data Center for Midsize Organizations architecture provides an evolution from the basic server room infrastructure illustrated in the SBA Midsize Foundation Deployment Guide. The Data Center for Midsize Organizations is designed to address four primary business challenges:
- Supporting rapid application growth
- Managing growing data storage requirements
- Optimizing the investment in server processing resources
- Securing the organization's critical data

Managing Growing Data Storage Requirements

As application requirements grow, the need for additional data storage capacity also increases. This can initially cause issues when storage requirements for a given server increase beyond the physical capacity of the server hardware platform in use. As the organization grows, the investment in additional storage capacity is most efficiently managed by moving to a centralized storage model. A centralized storage system can provide disk capacity across multiple applications and servers providing greater scalability and flexibility in storage provisioning. A dedicated storage system provides multiple benefits beyond raw disk capacity. Centralized storage systems can increase the reliability of disk storage, which improves application availability. Storage systems allow increased capacity to be provided to a given server over the network without needing to physically attach new devices to the server itself. More sophisticated backup and data replication technologies are available in centralized storage systems, which helps protect the organization against data loss and application outages.

Optimizing the Investment in Server Processing Resources

As a midsize organization grows, physical servers are often dedicated to single applications to increase stability and simplify troubleshooting. However, these servers do not operate at high levels of processor utilization for much of the day. Underutilized processing resources represent an investment by the organization that is not being leveraged to its full potential. Server virtualization technologies allow a single physical server to run multiple virtual instances of a guest operating system, creating virtual machines (VMs). Running multiple VMs on server hardware helps to more fully utilize the organization's investment in processing capacity, while still allowing each VM to be viewed independently from a security, configuration, and troubleshooting perspective. Server virtualization and centralized storage technologies complement one another, allowing rapid deployment of new servers and reduced downtime in the event of server hardware failures. Virtual machines can be stored completely on the centralized storage system, which decouples the identity of the VM from any single physical server. This allows the organization great flexibility when rolling out new applications or upgrading server hardware. The architecture defined in this guide is designed to facilitate easy deployment of server virtualization, while still providing support for the existing installed base of equipment.

Supporting Rapid Application Growth

As applications scale to support a larger number of users, or new applications are deployed, the number of servers required to meet the needs of the organization often increases. The first phase of the server room evolution is often triggered when the organization outgrows the capacity of the existing server room network. Many factors can limit the capacity of the existing facility, including rack space, power, cooling, switching throughput, or basic network port count to attach new servers. The architecture outlined in this guide is designed to allow the organization to smoothly scale the size of the server environment and network topology as business requirements grow.


Securing the Organization's Critical Data

With communication and commerce in the world becoming increasingly Internet-based, network security quickly becomes a primary concern of a growing organization. Often organizations will begin by securing their Internet edge connection, considering the internal network a trusted entity. However, an Internet firewall is only one component of building security into the network infrastructure. Frequently, threats to an organization's data may come from within the internal network. These may come in the form of on-site vendors, contaminated employee laptops, or existing servers that have already become compromised and may be used as a platform to launch further attacks. With the data center typically being the centralized repository of the organization's most critical data, security is no longer considered an optional component of a complete data center architecture plan. The SBA Midsize Data Center Architecture illustrates how to cleanly integrate network security capabilities such as firewall and intrusion prevention, protecting areas of the network housing critical server and storage resources. The architecture provides the flexibility to secure specific portions of the data center or insert firewall capability between tiers of a multi-tier application according to the security policy agreed upon by the organization.


Architecture Overview
The SBA Midsize Data Center Architecture is designed to allow organizations to take an existing server room environment to the next level of performance, flexibility, and security. Figure 1 provides a high-level overview of this architecture.

Figure 1. SBA Midsize Data Center Architecture


The SBA Midsize Data Center Architecture is designed to connect to one of the SBA Layer-3 Ethernet core solutions, as documented in the SBA Midsize Foundation Deployment Guide. The following technology areas are included within this reference architecture:
- Ethernet Infrastructure: Resilient Layer-2 Ethernet networking to support 10 Gigabit and 1 Gigabit Ethernet connectivity.
- Storage Networking: Take advantage of IP-based storage access, Fibre Channel over Ethernet, or a full Fibre Channel SAN solution, depending on your requirements.
- Network Security: Integrate firewall and intrusion prevention services into the data center to protect critical application data.
- Computing Resources: Leverage powerful computing platforms in both blade server and rack-mount formats.
- Virtual Switching: Deploy a centrally managed distributed virtual switching system for your VMware environment.
- Application Resiliency: Server load-balancing solutions providing high availability, scalability, and security for your key applications.

This architecture is designed to allow a midsize organization to position its network for growth while controlling both equipment costs and operational costs. The deployment processes documented in this guide provide concise step-by-step instructions for completing basic configuration of the components of the architecture. This approach allows you to take advantage of some of the newer technologies being used in the data centers of very large organizations without encountering a steep learning curve for the IT staff. Although this architecture has been designed and validated as a whole, the modular nature of this guide allows you to perform a gradual migration by choosing specific elements of the architecture to implement first. The remaining sections of this guide detail the various technologies that comprise this architecture.


Physical Environment
Business Overview
When building or changing a network, you have to carefully consider the location where you will install the equipment. When building a server room, a switch closet, or even a midsize data center, take three things into consideration: power, cooling, and racking. Know your options in each of these categories, and you will minimize surprises and moving of equipment later on.

Know what equipment will be installed in the area. You cannot plan electrical work if you do not know what equipment is going to be used. Some equipment requires standard 110-volt outlets that may already be available. Other equipment has much greater power requirements. Does the power need to be on all the time? In most cases the answer will be yes if servers and storage are involved. Applications don't react very well when the power goes out, so an uninterruptible power supply (UPS) is needed to prevent this. The UPS switches the current load over to a set of internal or external batteries. Some UPSs are online, which means the power is filtered through the batteries all the time; others are switchable, meaning they use batteries only during power losses. UPSs vary by how much load they can carry and for how long. Careful planning is required to make sure the correct UPS is purchased, installed, and managed correctly. Most UPSs provide for remote monitoring and the ability to trigger a graceful server shutdown for critical servers if the UPS is going to run out of battery.

Distributing the power to the equipment can change the power requirements as well. There are many options available to distribute the power from the outlet or UPS to the equipment. One example is a power strip that resides vertically in a cabinet, usually with an L6-30 input and C13/C19 outlets with output voltage in the 200-240 volt range. These strips should be at a minimum metered so one does not overload the circuits. The meter provides a current reading of the load on the circuit. This is critical, because a circuit breaker tripping due to overload will bring down everything plugged into it with no warning, causing business downtime and possible data loss. For complete remote control, power strips are available with full remote control of each individual outlet from a web browser. These vertical strips also assist in proper cable management of the power cords. Short C13/C14 and C19/C20 power cords can be used instead of much longer cords to multiple 110-volt outlets or multiple 110-volt power strips.

With power comes the inevitable conversion of power into heat. Without going into great detail, power in equals heat out. Planning for cooling of one or two servers and a switch with standard building air may work. Multiple servers and blade servers (along with storage, switches, etc.) need more than building air for proper cooling. Be sure to at least plan with your facilities team what the options are for current and future cooling. Many options are available: in-row cooling, overhead cooling, raised floor with underfloor cooling, and wall-mounted cooling.

Equipment Racking
Where to put the equipment is an important detail not to overlook. Proper placement and planning allow for easy growth. With power and cooling properly evaluated, racks or cabinets need to be installed. Most servers are fairly deep and, with network connections and power connections, take up even more space. Most servers will fit in a 42-inch-deep cabinet, and deeper cabinets give more flexibility for cable and power management within the cabinet. Be aware of what rails are required by your servers. Most servers now come with rack mounts that use the square-hole style of vertical cabinet rails. Not having the proper rails can mean having to use adapters or shelves, making management of servers and equipment difficult, if not impossible, without removing other equipment or sacrificing space. Data center racks should use the square-rail mounting options in the cabinets. Cage nuts can be used to provide threaded mounts for devices such as routers, switches, and shelves that may be needed.

The physical environmental requirements for a data center require careful planning to provide for efficient use of space, scalability, and ease of operational maintenance. Working towards deployment of the Smart Business Architecture allows you to plan the physical space for your data center with a vision towards the equipment you will be installing over time, even if you begin with a smaller scale. For additional information on data center power, cooling, and equipment racking, contact Cisco partners in the area of data center environmental products, such as Panduit and APC.


Ethernet Infrastructure
Business Overview
As your midsize organization grows, you may outgrow the capacity of the basic server-room Ethernet switching stack illustrated in the SBA Midsize Foundation Architecture. It is important to be prepared for the ongoing transition of available server hardware from 1 gigabit Ethernet attachment to 10 gigabit. Multi-tier applications often divide browser-based client services, business logic, and database layers into multiple servers, increasing the amount of server-to-server traffic and driving performance requirements higher. As the physical environment housing the organizations servers grows to multiple racks, it also becomes more challenging to elegantly manage the cabling required to attach servers to the network. 10 Gigabit Ethernet connections help to improve overall network performance, while reducing the number of physical links required to provide the bandwidth.

Virtual Port Channel

The Cisco Nexus 5000 Series switch pair providing the central Ethernet switching fabric for the SBA Midsize Data Center Architecture is configured using the Virtual Port Channel (vPC) feature. This capability allows the two switches to be used to build resilient, loop-free topologies that forward on all connected links, instead of requiring Spanning Tree Protocol (STP) blocking for loop prevention. This feature enhances ease of use and simplifies configuration for the data center switching environment. Neighboring devices such as the SBA Midsize core switches can leverage a port channel configuration, which bundles multiple links into a single logical link, with all available physical bandwidth being used to forward traffic. Allowing all links to forward traffic, instead of requiring spanning tree to block the parallel paths for loop prevention, provides greater bandwidth capacity to increase application performance.
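As a minimal sketch of how the two switches might be joined into a vPC domain (the domain ID, port-channel number, and peer-keepalive addresses shown here are illustrative assumptions, not values from this guide), the configuration on one switch could look like:

```
feature vpc

vpc domain 10
  ! Keepalive runs over the mgmt0 interfaces; addresses are examples
  peer-keepalive destination 10.10.63.11 source 10.10.63.10

interface port-channel 10
  switchport mode trunk
  ! Designates this bundle as the vPC peer-link between the switches
  vpc peer-link

interface Ethernet1/17-18
  switchport mode trunk
  channel-group 10 mode active
```

The mirror-image configuration, with the keepalive source and destination addresses swapped, would be applied on the peer switch.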

Ethernet Fabric Extension

The Cisco Nexus 2000 Series Fabric Extender (FEX) delivers cost-effective and highly scalable 1 gigabit and 10 gigabit Ethernet environments. Fabric extension allows you to aggregate a group of physical switch ports at the top of each server rack, without needing to manage these ports as a separate logical switch. You can provide network resiliency by dual-homing servers into two separate fabric extenders, each of which is single-homed to one member of the Cisco Nexus 5000 Series switch pair. To provide high availability for servers that only support single-homed network attachment, the FEX itself may instead be dual-homed into the two members of the central switch pair. Our reference architecture example, shown in Figure 2, illustrates single-homed and dual-homed FEX configurations. Each FEX includes dedicated fabric uplink ports that are designed to connect to upstream Cisco Nexus 5000 Series switches for data communication and management; these are shown as ports F1 and F2 in the illustration. The port designations on the Cisco Nexus 5000 side reflect the example ports used for reference configurations contained in this guide; any 10 gigabit Ethernet port on the Cisco Nexus 5000 switch may be used for FEX connection.
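As an illustrative sketch of the single-homed FEX attachment described above (the FEX number 102 is an assumption; ports Ethernet 1/15-16 follow the example connections used later in this guide), the association on one Nexus 5000 switch might look like:

```
feature fex

interface Ethernet1/15-16
  ! Fabric uplinks to the FEX are bundled into a port channel
  channel-group 102

interface port-channel 102
  switchport mode fex-fabric
  ! Assigns FEX number 102 to the fabric extender on this uplink
  fex associate 102
```

Once associated, the FEX host ports appear on the Nexus 5000 switch as additional interfaces (for example, Ethernet102/1/1), managed like remote line cards.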

Technology Overview
The foundation of the Ethernet network in the SBA Midsize Data Center Architecture is a resilient pair of Cisco Nexus 5000 Series switches. These switches offer the ideal platform for building a scalable, high-performance data center supporting both 10 gigabit and 1 gigabit Ethernet attached servers. Optional Fibre Channel modules are available to allow integration with a Fibre Channel-based Storage Area Network (SAN). The Cisco Nexus 5000 Series also supports the Cisco Nexus 2000 Series Fabric Extenders. Fabric Extenders allow the switching fabric of the resilient switching pair to be physically extended to provide port aggregation at the top of multiple racks, reducing cable management issues as the server environment expands. The Fabric Extenders are all managed centrally from the Cisco Nexus 5000 Series switch pair, where they appear as remote line cards to the primary data center switches. This centralized management provides easy port provisioning and keeps operational costs down for the growing organization. The SBA Midsize Data Center Architecture leverages many advanced features of the Cisco Nexus 5000 Series switch family to provide a central switching fabric for the data center environment that is easy to deploy. This section provides an overview of the key features used in this topology and illustrates the specific physical connectivity that applies to the example configurations provided in the deployment section.


Deployment Details

Tech Tip
The first 8 ports of a Cisco Nexus 5010 and the first 16 ports of a Cisco Nexus 5020 switch may be configured at either 10 Gigabit or 1 Gigabit Ethernet speeds. If you have requirements for 1 Gigabit Ethernet connections directly to the switch, ensure that you are reserving the correct ports and apply the speed command to control this setting. All ports of the Cisco Nexus 5548 switch are configurable for 10 Gigabit or 1 Gigabit Ethernet.

Figure 2. Ethernet Switching Fabric Physical Connections
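For example, configuring one of these ports for 1 Gigabit Ethernet operation is a single interface-level command (the port number here is only an illustration):

```
interface Ethernet1/3
  ! Forces this port (one of the first 8 on a Nexus 5010) to 1 Gbps
  speed 1000
```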

The following configuration procedures are required to configure the Ethernet switching fabric for the SBA Midsize Data Center Architecture:
1. Establish physical connectivity
2. Perform initial device configuration
3. Complete vPC configuration
4. Create port channel to SBA core
5. Configure fabric extender connections


Initial Setup

1. Establish Physical Connectivity
2. Perform Initial Device Configuration

Procedure 1

Establish Physical Connectivity

Complete the physical connectivity of the Cisco Nexus 5000 Series switch pair according to the illustration in Figure 2 (or according to the specific requirements of your implementation).

Step 1: Connect ports Ethernet 1/17 and 1/18 (or other available Ethernet ports) between the two Cisco Nexus 5000 Series switches. This link will be used as the vPC peer-link, which allows the peer connection to form and supports forwarding of traffic between the switches if necessary during a partial link failure of one of the vPC port channels.

Step 2: Connect the Management 0 ports to the SBA core switch pair (or an alternate location where they can be connected to the data center management VLAN). In our example configurations, the data center management VLAN is 163.

Step 3: Connect ports Ethernet 1/19 and 1/20 (or other available Ethernet ports) on each Cisco Nexus 5000 Series switch to the SBA core to build the port channel that will carry production data traffic. Four 10 gigabit Ethernet connections will provide an aggregate throughput of 40 Gbps to carry data back and forth to client machines, or to be Layer-3 switched between servers by the SBA core if required.
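As a hedged sketch of where the Step 3 cabling leads (the port-channel and vPC numbers here are assumptions, not values from this guide), the links to the SBA core would later be bundled on each Nexus 5000 switch roughly as follows:

```
interface Ethernet1/19-20
  switchport mode trunk
  channel-group 50 mode active

interface port-channel 50
  switchport mode trunk
  ! The same vPC number on both switches presents one logical
  ! port channel to the SBA core
  vpc 50
```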


Step 4: To support a dual-homed FEX with single-homed servers, connect fabric uplink ports 1 and 2 on the FEX to port Ethernet 1/13 (or another available Ethernet port), one on each Cisco Nexus 5000 Series switch. These ports will operate as a port channel to support the dual-homed FEX configuration.

Step 5: Support single-homed FEX attachment by connecting fabric uplink ports 1 and 2 on each FEX to ports Ethernet 1/15 and 1/16 (or other available Ethernet ports) on only one member of the Cisco Nexus 5000 Series switch pair. These ports will be a port channel, but will not be configured as a vPC port channel, because they have physical ports connected to only one member of the switch pair.
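The dual-homed FEX attachment in Step 4 might later be configured roughly as follows, applied identically on both Nexus 5000 switches (the FEX and port-channel numbers are assumptions; adding the vpc keyword to the fabric port channel is what makes the FEX dual-homed):

```
interface Ethernet1/13
  channel-group 103

interface port-channel 103
  switchport mode fex-fabric
  fex associate 103
  ! vpc on the fabric port channel splits the FEX uplinks
  ! across both members of the switch pair
  vpc 103
```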

Procedure 2

Perform Initial Device Configuration

Step 1: Connect a terminal cable to the console port of the first Cisco Nexus 5000 Series switch, and power on the system to enter the initial configuration dialog.

Step 2: Follow the Basic System Configuration Dialog for initial device configuration of the first Cisco Nexus 5000 Series switch, as shown in the terminal capture below:

Do you want to enforce secure password standard (yes/no): y
Enter the password for admin:
Confirm the password for admin:
---- Basic System Configuration Dialog ----
This setup utility will guide you through the basic configuration of the system. Setup configures only enough connectivity for management of the system.
Please register Cisco Nexus 5000 Family devices promptly with your supplier. Failure to register may affect response times for initial service calls. Nexus devices must be registered to receive entitled support services.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): y
Create another login account (yes/no) [n]: n
Configure read-only SNMP community string (yes/no) [n]: n
Configure read-write SNMP community string (yes/no) [n]: n
Enter the switch name : dc3-5k-1
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: y
Mgmt0 IPv4 address :
Mgmt0 IPv4 netmask :
Configure the default gateway? (yes/no) [y]: y
IPv4 address of the default gateway :
Enable the telnet service? (yes/no) [n]: y
Enable the http-server? (yes/no) [y]: y
Enable the ssh service? (yes/no) [y]: y
Type of ssh key you would like to generate (dsa/rsa) : rsa
Number of key bits <768-2048> : 768
Configure the ntp server? (yes/no) [n]: y
NTP server IPv4 address :
Enter basic FC configurations (yes/no) [n]: n
The following configuration will be applied:
switchname dc2-5k-1
interface mgmt0
ip address
no shutdown
exit
vrf context management
ip route
exit
telnet server enable
feature http-server
ssh key rsa 768 force
ssh server enable
ntp server use-vrf management
Would you like to edit the configuration? (yes/no) [n]: n
Use this configuration and save it? (yes/no) [y]: y
[########################################] 100%
Nexus 5000 Switch
dc2-5k-1 login:

Step 3: On the second Cisco Nexus 5000 Series switch, repeat Step 2. Use a unique device name and address for the mgmt0 interface.

Before beginning the remainder of the configuration, enable the common required features in the NX-OS software. The example configurations shown in this guide use the following features:

feature telnet
feature private-vlan
feature udld
feature interface-vlan
feature lacp
feature vpc
feature lldp
feature fex

Tech Tip
If Fibre Channel-specific features such as Fibre Channel over Ethernet (FCoE) or N-Port Virtualization (NPV) are required, they should be enabled prior to applying any additional configuration to the switch. If the NPV feature is later enabled or disabled, you must re-apply any existing configuration commands to the switch.

Step 5: Create required VLANs. For our example configuration, we are using VLANs 148-163 for various roles within the data center, and VLAN 999 as a dummy VLAN to define as native on trunks to mitigate the risk of any VLAN hopping by untagged traffic. It is helpful to assign names to VLANs as they are created; this makes the switch configuration more self-documenting and can assist later if troubleshooting is required.

vlan 148
  name servers1
vlan 149
  name servers2
vlan 163
  name dc-management
vlan 999
  name native

Step 6: Enable jumbo frame support. Jumbo frames can improve data throughput between key end nodes in the data center that are able to negotiate larger packet sizes, such as iSCSI-based storage systems and their associated servers.

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
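As a check on the jumbo frame configuration, the applied network-qos policy and the effective per-interface MTU can be displayed from the CLI. This is a sketch using the example interface numbering from this guide; command output formats vary by NX-OS release, so verify against the documentation for your platform.

```
show policy-map system type network-qos
show queuing interface ethernet 1/19
```

Look for an MTU of 9216 applied to the default class. End hosts and storage arrays must also be configured for the larger MTU before jumbo frames can be used end to end.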




Configure Inter-Device Links

1. Complete vPC Configuration
2. Create Port Channel to SBA Core
3. Configure Fabric Extender Connections
4. Configure End Node Ports

Procedure 1

Complete vPC Configuration

Before you can add port channels to the switch in vPC mode, basic vPC peering must be established between the two Cisco Nexus 5000 Series switches.

Step 1: Define a vPC domain number to identify the vPC domain to be shared between the switches in the pair. The value 10 is shown in the example.

vpc domain 10

Step 2: Define the role priority for each switch. The default value is 32667. The switch with the lower priority will be elected as the vPC primary switch. If the peer link fails, the vPC peer will detect whether the peer switch is alive through the vPC peer-keepalive link. If the vPC primary switch is alive, the vPC secondary switch will suspend its vPC member ports to prevent potential looping, while the vPC primary switch keeps all its vPC member ports active. The example configuration shows a lowered value to be applied to the vPC primary switch, and must be applied in vPC configuration mode after entering the vpc domain command.

role priority 16000

Step 3: Configure the peer-keepalive destination and source addresses. The example configuration uses the management (mgmt0) interfaces on each switch in the pair for the peer-keepalive link. The peer-keepalive is an alternate physical path between the two vPC switches to ensure that they are aware of one another's health even in the case where the main peer link fails. The peer-keepalive source IP address should be the address being used on the mgmt0 port of the switch currently being configured; the destination address is that being used on the vPC peer. Apply the peer-config-check-bypass command to allow newly configured vPCs, or existing vPCs that may get flapped, to be brought up even when a peer link is down and the vPC switch role has been determined to be primary. This can ensure connectivity is maintained in certain partial-outage scenarios, such as unstable power facilities.

peer-keepalive destination [peer mgmt0 address] source [local mgmt0 address]
peer-config-check-bypass

Step 4: Create a port channel interface to be used as the peer link between the two vPC switches. The peer link is the primary link for communications and for forwarding of data traffic to the peer switch if required.

interface port-channel10
  switchport mode trunk
  vpc peer-link
  spanning-tree port type network

Step 5: Add physical interfaces to the trunk. A minimum of two physical interfaces is recommended for link resiliency. Different 10 gigabit Ethernet ports (as required by your specific implementation) may replace the interfaces shown in the example.

interface Ethernet1/17
  description vpc peer link
  switchport mode trunk
  channel-group 10 mode active
interface Ethernet1/18
  description vpc peer link
  switchport mode trunk
  channel-group 10 mode active

Step 6: Before moving on to the next procedure, ensure that the vPC peer relationship has formed successfully using the show vpc command.

dc3-5k-1# show vpc
Legend:
  (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id : 10
Peer status   : peer adjacency formed ok


vPC keep-alive status            : peer is alive
Configuration consistency status : success
vPC role                         : secondary
Number of vPCs configured        : 86
Peer Gateway                     : Disabled
Dual-active excluded VLANs       : -

vPC Peer-link status
---------------------------------------------------------------
id   Port   Status   Active vlans
--   ----   ------   ------------------------------------------
1    Po10   up       1,148-151,154-155,159-163,520,999

Look for the peer status of peer adjacency formed ok and the keep-alive status of peer is alive to verify successful configuration. If the status does not indicate success, double-check the IP addressing assigned for the keep-alive destination and source addresses, as well as the physical connections.

Procedure 2

Create Port Channel to SBA Core

A port-channel interface needs to be created to carry traffic back and forth to the network core, which provides forwarding to client machines and Layer-3 forwarding between the different IP subnets carried on different VLANs. We recommend at least two physical interfaces from each vPC peer switch connected to the network core, for a total port channel of four resilient physical 10 gigabit Ethernet links and 40 Gbps of throughput. Defining a vPC port channel is identical to defining a standard port channel interface, with the addition of the vpc [port-channel number] command added to the interface configuration.

interface port-channel60
  description link to core
  switchport mode trunk
  vpc 60
  switchport trunk native vlan 999
  switchport trunk allowed vlan 148-151,154-163

interface Ethernet1/19
  description link to core
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 148-151,154-163
  channel-group 60 mode active

interface Ethernet1/20
  description link to core
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 148-151,154-163
  channel-group 60 mode active
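Once the uplinks are configured on both switches, the aggregate link to the core can be checked from either vPC peer. A sketch using the example port-channel number from this guide; exact output varies by NX-OS release:

```
show port-channel summary
show interface port-channel 60
```

In the show port-channel summary output, member ports Ethernet1/19 and 1/20 should appear as up and bundled in port-channel 60. If a member stays down, verify that the core switch side is also configured for LACP (channel-group mode active).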

Tech Tip
Do not be concerned about the (*) - local vPC is down, forwarding via vPC peer-link statement at the top of the command output. Once you have vPC port channels defined, this is a legend to show the meaning of an asterisk next to your port channel in the listing if one of its member links is down.
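Beyond show vpc, two additional commands are useful when the peer relationship does not form as expected. A sketch using commands available on Cisco Nexus 5000 NX-OS; output formats vary by release:

```
show vpc peer-keepalive
show vpc consistency-parameters global
```

The first confirms that keepalive messages are being exchanged over the mgmt0 path; the second lists the Type-1 parameters (such as spanning-tree mode and MTU) that must match on both peers before vPC port channels will come up.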

Procedure 3

Configure Fabric Extender Connections

Fabric extender connections are also configured as port channel connections on the Cisco Nexus 5000 Series. If the FEX is to be single-homed to only one member of the switch pair, it is configured as a standard port channel. If the FEX is to be dual-homed to both members of the vPC switch pair to support single-homed servers, it is configured as a vPC port channel. Dual- and single-homed examples are illustrated in Figure 2, with configuration examples shown below.



Tech Tip
When assigning FEX numbering, you have the flexibility to use a numbering scheme (different from the example) that maps to some other identifier, such as a rack number that is specific to your environment.

Step 1: Configure port channel interfaces to support FEX attachment.

interface port-channel100
  description single-homed 2248
  switchport mode fex-fabric
  fex associate 100

interface port-channel102
  description dual-homed 2248
  switchport mode fex-fabric
  vpc 102
  fex associate 102

Step 2: Assign physical interfaces to the port channels supporting FEX attachment. The switchport mode fex-fabric command informs the Cisco Nexus 5000 Series switch that a fabric extender should be at the other end of this link.

interface Ethernet1/13
  fex associate 102
  switchport mode fex-fabric
  channel-group 102

interface Ethernet1/15
  fex associate 100
  switchport mode fex-fabric
  channel-group 100

interface Ethernet1/16
  fex associate 100
  switchport mode fex-fabric
  channel-group 100

Once these configuration steps are completed, you can verify the status of the fabric extender modules by using the show fex command, looking for a state of Online for each unit:

dc2-5k-1# sh fex
FEX     FEX          FEX     FEX
Number  Description  State   Model            Serial
---------------------------------------------------------
100     FEX0100      Online  N2K-C2148T-1GE   JAF1313AFEM
102     FEX0102      Online  N2K-C2148T-1GE   JAF1319BDPJ

Procedure 4

Configure End Node Ports

Step 1: Assign physical interfaces to support servers or devices that belong in a single VLAN as access ports. Setting the spanning-tree port type to edge allows the port to provide immediate connectivity on the connection of a new device.

interface Ethernet102/1/1
  switchport access vlan 163
  spanning-tree port type edge

Step 2: Assign physical interfaces to support servers or devices that require a VLAN trunk interface to communicate with multiple VLANs. Most virtualized servers will require trunk access to support management access, plus user data for multiple virtual machines. Setting the spanning-tree port type to edge allows the port to provide immediate connectivity on the connection of a new device.

interface Ethernet100/1/1
  switchport mode trunk
  switchport trunk allowed vlan 148-151,154-163
  spanning-tree port type edge trunk

Summary

The configuration procedures provided in this section allow you to get a resilient Ethernet switching fabric up and running for your data center environment. This allows you to immediately take advantage of the high-performance, low-latency, and high-availability characteristics inherent in the Cisco Nexus 5000 and 2000 Series products. To investigate more advanced features for your configuration, please refer to the Cisco Nexus 5000 Series configuration guide series on



Storage Infrastructure
Business Overview
There is a constant demand for more storage in organizations today. Storage for servers can be physically attached directly to the server or connected over a network. Direct Attached Storage (DAS) is physically attached to a single server and is difficult to use efficiently because it can only be used by the host attached to it. Storage Area Networks (SANs) allow multiple servers to share a pool of storage over a Fibre Channel or Ethernet network. This capability allows storage administrators to easily expand capacity for servers supporting data-intensive applications.

Network Attached Storage (NAS) is a general term used to refer to a group of common file access protocols; the most common implementations use Common Internet File System (CIFS) or Network File System (NFS). CIFS originated in the Microsoft network environment and is a common desktop file sharing protocol. NFS is a multi-platform protocol that originated in the UNIX environment and can be used for shared hypervisor storage. Both NAS protocols provide file-level access to shared storage resources. Most organizations will have applications for multiple storage access technologies. For example, Fibre Channel for high-performance database and production servers, and NAS for desktop storage access.

Fibre Channel Storage

Fibre Channel allows servers to connect to storage across a fiber-optic network, across a data center, or even across a WAN. Multiple servers can share a single storage array. This SBA Data Center design uses the Cisco MDS 9148 Multilayer Fabric Switch for Fibre Channel connectivity. The Cisco MDS 9148 Multilayer Fabric Switch is ideal for a small SAN fabric, providing 48 line-rate 8-Gbps Fibre Channel ports and cost-effective scalability. In a SAN, a fabric consists of servers and storage connected to a Fibre Channel switch (Figure 3). It is standard practice in SANs to create two completely separate physical fabrics, providing two distinct paths to the storage. Fibre Channel fabric services operate independently on each fabric, so when a server needs resilient connections to a storage array, it connects to two separate fabrics. This design prevents failures or misconfigurations in one fabric from affecting the other fabric.

Technology Overview
IP-based Storage Options

Many storage systems provide the option for access using IP over the Ethernet network. This approach allows a growing organization to gain the advantages of centralized storage without needing to deploy and administer a separate Fibre Channel network. Options for IP-based storage connectivity include Internet Small Computer System Interface (iSCSI) and Network Attached Storage (NAS). iSCSI is a protocol that enables servers to connect to storage over an IP connection and is a lower-cost alternative to Fibre Channel. iSCSI services on the server must contend for CPU and bandwidth along with other network applications, so you need to ensure that the processing requirements and performance are suitable for a specific application. iSCSI has become a storage technology that is supported by most server, storage, and application vendors. iSCSI provides block-level storage access to raw disk resources, similar to Fibre Channel. Some network interface cards can also offload iSCSI processing to a separate processor to increase performance.



Figure 3. Dual Fabric SAN with a Single Disk Array

Virtual Storage Area Networks (VSANs)

The VSAN is a technology created by Cisco that is modeled after the Virtual Local Area Network (VLAN) concept in Ethernet networks. VSANs provide the ability to create many logical SAN fabrics on a single Cisco MDS 9100 Family switch. Each VSAN has its own set of services and address space, which prevents an issue in one VSAN from affecting other VSANs. In the past, it was a common practice to build physically separate fabrics for production, backup, lab, and departmental environments. VSANs allow all of these fabrics to be created on a single physical switch with the same amount of protection provided by separate switches.

Zoning

The terms target and initiator will be used throughout this section. Targets are disk or tape devices. Initiators are servers or devices that initiate access to disk or tape. Zoning provides a means of restricting visibility and connectivity between devices connected to a SAN. The use of zones allows an administrator to control which initiators can see which targets. Zoning is a service that is common throughout the fabric, and any changes to a zoning configuration are disruptive to the entire connected fabric. Initiator-based zoning allows zoning to be port-independent by using the World Wide Name (WWN) of the end host. If a host's cable is moved to a different port, it will still work if the port is a member of the same VSAN.

Each server or host on a SAN connects to the Fibre Channel switch with a multi-mode fiber cable from a Host Bus Adapter (HBA). For resilient connectivity, each host connects a port to each of the fabrics. Each port has a port worldwide name (pWWN), which is the port's address that uniquely identifies it on the network. An example of a pWWN is 10:00:00:00:c9:87:be:1c. In data networking, this is comparable to the MAC address of an Ethernet adapter.



Device Aliases

When configuring features such as zoning, quality of service (QoS), and port security on a Cisco MDS 9000 Family switch, WWNs must be specified. The WWN naming format is cumbersome, and manually typing WWNs is error prone. Device aliases provide a user-friendly naming format for WWNs in the SAN fabric (for example, p3-c210-1-hba0-a instead of 10:00:00:00:c9:87:be:1c). Use a naming convention that makes initiator and target identification easy. For example, the alias p3-c210-1-hba0-a in this setup identifies:

P3      Rack location
C210    Host type
1       Host number
hba0    HBA number
a       Port on HBA

Storage Array Tested

The storage arrays used in the testing and validation of this deployment guide are the EMC CX4-120 and the NetApp FAS3140. The specific storage array configuration may vary; please consult the installation instructions from the specific storage vendor. The Cisco interoperability support matrix can be found here:

Deployment Details
Deployment examples documented in this section include:

• Configuration of a Cisco MDS-based SAN network to support Fibre Channel-based storage.
• Fibre Channel over Ethernet (FCoE) access to storage from Cisco UCS C-Series servers using the Cisco Nexus 5000.


Configuring the Cisco MDS 9148 Switch

1. Complete the Initial Setup
2. Configure VSANs
3. Configure Fibre Channel Ports
4. Configure Device Aliases
5. Configure Zoning
6. Configure Zoning with Cisco Fabric Manager
7. Troubleshoot the Configuration

Complete each of the following procedures to configure the Cisco MDS 9148 switch.

Procedure 1

Complete the Initial Setup

The following is required to complete this procedure:

Tech Tip
Specific interfaces, addresses, and device aliases are examples from the lab. Your WWN addresses, interfaces, and device aliases will likely be different.

• Management IP address
• Defined management upstream port
• Secure password

a. The characteristics for strong passwords include the following:
   • At least 8 characters long
   • Does not contain many consecutive characters (such as abcd)
   • Does not contain many repeating characters (such as aaabb)
   • Does not contain dictionary words
   • Does not contain proper names



   • Contains both uppercase and lowercase characters
   • Contains numbers

b. The following are examples of strong passwords:
   • If2COM18
   • 2004AsdfLkj30

When initially powered on, a new Cisco MDS 9148 switch starts a setup script when accessed from the console.

Step 1: Follow the prompts in the setup script to configure login, out-of-band management, Telnet, SSH, clock, time zone, Network Time Protocol, switch port modes, and default zone policies.

Tech Tip

When the administrative login is configured, a Simple Network Management Protocol Version 3 (SNMPv3) user is created automatically. This login is used by Cisco Fabric Manager to manage the switch. Also note that you will want to configure the secure password standard. The secure password standard does not allow the creation of insecure passwords and should be used for all production Cisco MDS switches.

---- System Admin Account Setup ----
Do you want to enforce secure password standard (yes/no) [y]:
Enter the password for admin:
Confirm the password for admin:
---- Basic System Configuration Dialog ----
This setup utility will guide you through the basic configuration of the system. Setup configures only enough connectivity for management of the system.
Please register Cisco MDS 9000 Family devices promptly with your supplier. Failure to register may affect response times for initial service calls. MDS devices must be registered to receive entitled support services.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): y
Create another login account (yes/no) [n]:
Configure read-only SNMP community string (yes/no) [n]:
Configure read-write SNMP community string (yes/no) [n]:
Enter the switch name : p3-mds9148-1
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]:
Mgmt0 IPv4 address :
Mgmt0 IPv4 netmask :
Configure the default gateway? (yes/no) [y]:
IPv4 address of the default gateway :
Configure advanced IP options? (yes/no) [n]:
Enable the ssh service? (yes/no) [y]:
Type of ssh key you would like to generate (dsa/rsa) [rsa]:
Number of rsa key bits <768-2048> [1024]:
Enable the telnet service? (yes/no) [n]: y
Enable the http-server? (yes/no) [y]:
Configure clock? (yes/no) [n]:
Configure timezone? (yes/no) [n]:
Configure summertime? (yes/no) [n]:
Configure the ntp server? (yes/no) [n]: y
NTP server IPv4 address :
Configure default switchport interface state (shut/noshut) [shut]: noshut
Configure default switchport trunk mode (on/off/auto) [on]:
Configure default switchport port mode F (yes/no) [n]:
Configure default zone policy (permit/deny) [deny]:
Enable full zoneset distribution? (yes/no) [n]:
Configure default zone mode (basic/enhanced) [basic]:
The following configuration will be applied:
  switchname p3-mds9148-1
  interface mgmt0
    ip address
    no shutdown
  ip default-gateway
  ssh key rsa 1024 force
  feature ssh
  feature telnet
  feature http-server
  ntp server
  no system default switchport shutdown
  system default switchport trunk mode on
  no system default zone default-zone permit
  no system default zone distribute full
  no system default zone mode enhanced
Would you like to edit the configuration? (yes/no) [n]: n
Use this configuration and save it? (yes/no) [y]: y
[########################################] 100%

Tech Tip
Cisco Fabric Manager is a Java application available for download from or from the CD that ships with the Cisco MDS 9100 Family switch. Managing more than one switch at the same time requires licensing.

Tech Tip
Java runtime environment (JRE) is required to run Cisco Fabric Manager and Device Manager and should be installed before accessing either application.

Tech Tip
Network Time Protocol (NTP) is critical to troubleshooting and should not be overlooked.

Step 2: The Cisco MDS Device Manager (Figure 4) provides a graphical interface to configure a Cisco MDS 9100 Family switch. To access the Device Manager, connect to the management address via HTTP, or access it directly through Cisco Fabric Manager.

Figure 4. Device Manager

Procedure 2

Configure VSANs

By default, all ports are assigned to VSAN 1 at initialization of the switch. It is a best practice to create a separate VSAN for production and to leave VSAN 1 for unused ports. By not using VSAN 1, you can avoid future problems with merging of VSANs when combining other existing switches that may be set to VSAN 1. To create a VSAN, use the command-line interface (CLI) or Device Manager.

Step 1: To create VSAN 4 and add it to port fc1/4 with the name General-Storage, enter the following from the command line:

vsan database
  vsan 4 name General-Storage
  vsan 4 interface fc1/4

Using Device Manager, select FC > VSANS.



The Create VSAN General window appears. Select the VSAN id as 4 and enter the name General-Storage in the Name field.

Figure 5. Create VSAN

Tech Tip
A separate VSAN should be created for each fabric. Example: Fabric A has the General-Storage VSAN with the VSAN number 4, Fabric B would have the General-Storage VSAN number configured as VSAN 5.

Procedure 3

Configure Fibre Channel Ports

By default, the ports are configured for port mode Auto, and this setting should not need to be changed for most devices that are connected to the fabric.

Step 1: To change the port mode via Device Manager, right-click the port you want to configure.

Figure 7. Device Manager

Figure 6. Select Interfaces

Step 2: Select the interface members by clicking the button after Interface Members (Figure 5). Figure 6 illustrates interface fc1/4 being selected.

Step 3: Click Create to create the VSAN. You can add additional VSAN members in the Membership tab of the main VSAN window.
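Whether the VSAN was created from the CLI or Device Manager, membership can be confirmed from the command line. A sketch using the example VSAN and interface numbering from this section:

```
show vsan
show vsan membership
show vsan 4 membership
```

Interface fc1/4 should be listed under VSAN 4 (General-Storage); ports still listed under VSAN 1 have not yet been assigned.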



The General tab appears.

Figure 8. Interface Configuration

p3-mds9148-1# show flogi database
---------------------------------------------------------------------------
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
---------------------------------------------------------------------------
fc1/1      4     0x050000  50:06:01:60:3c:e0:60:e2  50:06:01:60:bc:e0:60:e2
fc1/3      4     0x050100  10:00:00:00:c9:92:80:1c  20:00:00:00:c9:92:80:1c
fc1/4      4     0x050200  10:00:00:00:c9:91:d5:6c  20:00:00:00:c9:91:d5:6c
fc1/5      4     0x050300  10:00:00:00:c9:8c:60:b4  20:00:00:00:c9:8c:60:b4
fc1/6      4     0x050400  10:00:00:00:c9:86:44:80  20:00:00:00:c9:86:44:80
fc1/7      4     0x050500  10:00:00:00:c9:87:be:1c  20:00:00:00:c9:87:be:1c
fc1/15     4     0x050600  20:41:00:05:9b:76:b2:80  20:04:00:05:9b:76:b2:81
fc1/16     4     0x050700  20:42:00:05:9b:76:b2:80  20:04:00:05:9b:76:b2:81

Procedure 4

Configure Device Aliases

Device aliases map the long WWNs for easier zoning and identification of initiators and targets. An incorrect device name may cause unexpected results. Device aliases can be used for zoning, port security, QoS, and show commands. There are two ways to configure device aliases: the CLI and Device Manager.

To configure device aliases using the CLI, complete the following steps:

Step 1: Apply the following configuration:

device-alias database
  device-alias name emc-a0 pwwn 50:06:01:60:3c:e0:60:e2
  device-alias name p3-3rd-1-hba0-a pwwn 10:00:00:00:c9:8c:60:b4
  device-alias name p3-3rd-2-hba0-a pwwn 10:00:00:00:c9:86:44:80
  device-alias name p3-3rd-3-hba0-a pwwn 10:00:00:00:c9:87:be:1c
  device-alias name p3-c210-1-cna-a pwwn 21:00:00:c0:dd:11:28:29
  device-alias name p3-dc-6100-fc-1 pwwn 20:41:00:05:9b:76:b2:80
  device-alias name p3-dc-6100-fc-2 pwwn 20:42:00:05:9b:76:b2:80
  device-alias name p3-c200-1-hba0-a pwwn 10:00:00:00:c9:92:80:1c
  device-alias name p3-c210-1-hba0-a pwwn 10:00:00:00:c9:91:d5:6c
exit
device-alias commit
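After the device-alias commit, the alias database can be reviewed to confirm that the names were applied. A sketch; these commands are standard on Cisco MDS switches, though output formats vary by release:

```
show device-alias database
show device-alias pwwn 10:00:00:00:c9:91:d5:6c
```

The second form resolves a single pWWN from the example back to its alias (p3-c210-1-hba0-a), which is useful when checking a specific host port.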

Step 2: Connect devices to the Fibre Channel ports and activate the ports. When the initiator or target starts up, it automatically logs into the fabric. Upon login, the initiator or target WWN is made known to the fabric. To display this fabric login database, enter the following command through the Cisco MDS 9000 switch CLI:



Step 2: Aliases are now visible when you enter the show flogi database command.

p3-mds9148-1# show flogi database
---------------------------------------------------------------------------
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
---------------------------------------------------------------------------
fc1/1      4     0x050000  50:06:01:60:3c:e0:60:e2  50:06:01:60:bc:e0:60:e2
                           [emc-a0]
fc1/3      4     0x050100  10:00:00:00:c9:92:80:1c  20:00:00:00:c9:92:80:1c
                           [p3-c200-1-hba0-a]
fc1/4      4     0x050200  10:00:00:00:c9:91:d5:6c  20:00:00:00:c9:91:d5:6c
                           [p3-c210-1-hba0-a]
fc1/5      4     0x050300  10:00:00:00:c9:8c:60:b4  20:00:00:00:c9:8c:60:b4
                           [p3-3rd-1-hba0-a]
fc1/6      4     0x050400  10:00:00:00:c9:86:44:80  20:00:00:00:c9:86:44:80
                           [p3-3rd-2-hba0-a]
fc1/7      4     0x050500  10:00:00:00:c9:87:be:1c  20:00:00:00:c9:87:be:1c
                           [p3-3rd-3-hba0-a]
fc1/15     4     0x050600  20:41:00:05:9b:76:b2:80  20:04:00:05:9b:76:b2:81
                           [p3-dc-6100-fc-1]
fc1/16     4     0x050700  20:42:00:05:9b:76:b2:80  20:04:00:05:9b:76:b2:81
                           [p3-dc-6100-fc-2]

To configure device aliases using Device Manager, complete the following steps:

Step 1: Access the Device Alias window.

Figure 9. Advanced > Device Alias Window

Step 2: Click Create.

Figure 10. Create Device Alias Window



Step 3: Enter a device alias name, and paste in or type the WWN of the host.

Step 4: Click CFS > Commit when complete.

Figure 11. Zoning Example with Two Fabrics

Procedure 5

Configure Zoning

Zoning can be configured from the CLI and from Fabric Manager; both methods are shown. Leading practices for zoning:

• Configure zoning between a single initiator and a single target per zone. A single initiator can also be configured to multiple targets in the same zone.
• Zone naming should follow a simple naming convention of initiator_x_target_x:
  p3-c100-1-hba0-a_SAN-A
  p3-c210-1-hba0-a_SAN-A
• Limiting zoning to a single initiator with a single or multiple targets helps prevent disk corruption and data loss.

To create a zone with the CLI, complete the following steps:

Step 1: In configuration mode, enter the zone name and VSAN number.

zone name p3-c210-1-hba0-a-SAN-A vsan 4

Device members can be specified by WWN or device alias.

member device-alias p3-c210-1-hba0-a
member pwwn 10:00:00:00:c9:91:d5:6c

Step 2: Create a zoneset. A zoneset is a collection of zones (Figure 11). Zones are members of a zoneset. After you add all the zones as members, you must activate the zoneset. There can be only one active zoneset per VSAN.

zoneset name Zoneset1 vsan 4

Step 3: To add members to the zoneset, enter the following commands.

member p3-c210-1-hba0-a-SAN-A
member p3-c200-1-hba0-a-SAN-A

Step 4: Once all the zones for VSAN 4 are created and added to the zoneset, activate the configuration.

zoneset activate name Zoneset1 vsan 4

Procedure 6

Configure Zoning with Cisco Fabric Manager

Step 1: Select Zone Edit Local Full Zone Database in the Cisco Fabric Manager menu.



Figure 12. Zone Edit Local Full Zone Database Window

Figure 14. Create Zoneset

Step 2: On the left side of the zone database window are two sections, Zonesets and Zones. Across the top, the current VSAN and switch are displayed. The two sections on the right side list zones and zone members. To create a zone, right-click the zone folder.

Figure 13. Create Zone

Step 5: When finished, to activate the configured zoneset, click Activate in the bottom right of the window.

Figure 15. Save Configuration

Step 3: Highlight the new zone. At the bottom of the right-hand side of the database window, targets and initiators that have logged into the fabric for the selected VSAN are available to be added to the new zone. Highlight initiators or targets to add to the zone, and click Add to Zone.

Step 4: Right-click Zoneset to insert a new zoneset. Drag the zones just created from the zone box to the new zoneset folder.



Procedure 7

Troubleshoot the Configuration

Step 1: To check the fabric configuration for proper zoning, use the show zoneset active command to display the active zoneset. Each zone that is a member of the active zoneset is displayed with an asterisk (*) to the left of the member. If there is not an asterisk to the left, the host is either down and not logged into the fabric, or there is a misconfiguration of the port VSAN or zoning. Use the show zone command to display all configured zones on the Cisco MDS 9000 Family switch.

p3-mds9148-1# show zoneset active
zoneset name Zoneset1 vsan 4
  zone name p3-c210-1-hba0-a-SAN-A vsan 4
  * fcid 0x050200 [pwwn 10:00:00:00:c9:91:d5:6c] [p3-c210-1-hba0-a]
  * fcid 0x050000 [pwwn 50:06:01:60:3c:e0:60:e2] [emc-a0]
  zone name p3-c200-1-hba0-a-SAN-A vsan 4
  * fcid 0x050000 [pwwn 50:06:01:60:3c:e0:60:e2] [emc-a0]
  * fcid 0x050100 [pwwn 10:00:00:00:c9:92:80:1c] [p3-c200-1-hba0-a]

Step 2: In a Fibre Channel fabric, each host or disk requires a Fibre Channel ID (FC ID). When a fabric login (FLOGI) is received from the device, this ID is assigned by the fabric. If the required device is displayed in the FLOGI table, the fabric login is successful.

Step 3: Test Fibre Channel reachability using the fcping command, and trace the routes to the host using the fctrace command. Cisco created these commands to provide storage networking troubleshooting tools that are familiar to individuals who use ping and traceroute. Examples are below:

fcping device-alias p3-c210-1-hba0-a vsan 4
28 bytes from 10:00:00:00:c9:91:d5:6c time = 714 usec
28 bytes from 10:00:00:00:c9:91:d5:6c time = 677 usec
28 bytes from 10:00:00:00:c9:91:d5:6c time = 700 usec
28 bytes from 10:00:00:00:c9:91:d5:6c time = 705 usec
28 bytes from 10:00:00:00:c9:91:d5:6c time = 699 usec
5 frames sent, 5 frames received, 0 timeouts
Round-trip min/avg/max = 677/699/714 usec

fctrace device-alias p3-c210-1-hba0-a vsan 4
Performing path discovery.
Route present for : 10:00:00:00:c9:91:d5:6c
20:00:00:05:9b:70:40:20(0xfffc05)


Configuring Cisco UCS Rack Mount Servers for FCoE
1. Enable FCoE
2. Enable NPV Mode
3. Configure VSAN
4. Configure F-Port-Channeling and Trunk Protocols
5. Configure a Trunking Port Channel
6. FCoE QoS Differences on the Cisco Nexus 5548
7. Configure Host-Facing FCoE Ports
8. Verify FCoE Connectivity

Physical Setup and Connectivity

Cisco UCS rack servers ship with onboard 10/100/1000 Ethernet adapters and a Cisco Integrated Management Controller (CIMC) with a 10/100 port. To get the most out of the rack servers and minimize cabling in the SBA Unified Computing architecture, the Cisco UCS C210 rack-mount server is connected to a unified fabric. The Cisco Nexus 5000 Series switch that connects the Cisco UCS 5100 Series Blade Server Chassis to the network can also be used to extend Fibre Channel traffic over 10-Gigabit Ethernet. The Cisco Nexus 5000 Series switch consolidates I/O onto one set of cables, eliminating redundant adapters, cables, and ports. A single card and set of cables connects servers to the Ethernet and Fibre Channel networks and also allows the use of a single cabling infrastructure within server racks.

In the SBA Midsize Data Center architecture, the Cisco UCS C210 rack-mount server is configured with a dual-port converged network adapter (CNA). Cabling the Cisco UCS C210 with a CNA limits the cables to three: one for each port on the CNA and one for the CIMC connection. A standard server without a CNA could have a few Ethernet connections or multiple Ethernet and Fibre Channel connections. Figure 16 shows a topology with mixed unified fabric and standard Ethernet and Fibre Channel connections.

Storage Infrastructure


Figure 16 . Unified Fabric and Non Unified Fabric

Figure 17 . Cisco MDS 9148 and Cisco Nexus 5548 Connectivity

The Cisco UCS C210 is connected to both Cisco Nexus 5000 Series switches from the CNA with twinax cabling. The CIMC 10/100 management port connects to an Ethernet port on the Cisco Nexus 2248 fabric extender. The Cisco Nexus 5548 switch has one available expansion slot, which can be used to add Fibre Channel ports or additional 10-Gigabit Ethernet ports. The available expansion module options are:
Split 8-port 10-Gigabit Ethernet/8-port 8-Gbps Fibre Channel card
16-port 10-Gigabit Ethernet card
The Cisco Nexus 5548 with a Fibre Channel expansion card can act as a standard Fibre Channel switch, or it can operate in N-Port Virtualization (NPV) host mode, in which all Fibre Channel traffic is managed at the upstream Cisco MDS. NPV allows the storage to be configured the same as in the previous Cisco UCS chassis configuration. In the SBA Unified Computing architecture, all Fibre Channel zoning and switching occurs upstream on the Cisco MDS 9148 switches. The Fibre Channel connectivity is configured as shown, with two connections from each Cisco Nexus 5000 Series switch to each SAN fabric.

Two new features are introduced here: F-port trunking and F-port channeling. F-port trunking allows multiple VSANs to be carried across the link and prepares for future expansion on the Cisco Nexus 5548. If, for example, more than a single VSAN is needed for a virtualized host, the VSAN can simply be added to the trunk without disrupting other traffic or requiring new physical connections. F-port channeling allows the Fibre Channel ports between the Cisco Nexus 5548 and the Cisco MDS 9148 to be placed into a port channel, providing redundancy and added bandwidth. This example shows a two-port Fibre Channel port channel with 16 Gbps of aggregate bandwidth (two 8-Gbps member links). The Cisco MDS 9148 switch must be configured for N-Port Identifier Virtualization (NPIV) when the Cisco Nexus 5000 Series switches are configured for N-Port Virtualization (NPV) mode.
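As a quick sanity check on the bandwidth claim above, the aggregate bandwidth of a port channel is the sum of the operational speeds of its member links. A minimal sketch (the helper function is hypothetical, not from the guide):

```python
def port_channel_bandwidth_gbps(member_speeds_gbps):
    """Aggregate bandwidth of a Fibre Channel port channel:
    the sum of the operational speeds of its member links."""
    return sum(member_speeds_gbps)

# Two 8-Gbps members, as in this design's san-port-channel.
full = port_channel_bandwidth_gbps([8, 8])   # 16 Gbps aggregate
degraded = port_channel_bandwidth_gbps([8])  # 8 Gbps if one member fails
print(full, degraded)
```

The degraded case illustrates the redundancy benefit: a single-link failure reduces bandwidth rather than breaking connectivity.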



Nexus 5000 Configuration for FCoE

The Cisco Nexus 5000 Series switch supports FCoE. To configure it for unified fabric and support of the second-generation CNAs, you must configure the following items:
FCoE functionality
NPV (optional)
Fibre Channel VSAN creation
Fibre Channel uplink
Trunking port channel creation
VLAN association to Fibre Channel VSAN
Creation of a virtual Fibre Channel interface
VSAN assignment to the virtual Fibre Channel interface
Ethernet port and trunking configuration

Note: Configuration is similar across both of the Cisco Nexus 5000 Series switches, with the exception of the VSAN configured for SAN fabric A and for SAN fabric B. You must perform this procedure once for each of the two Cisco Nexus 5000 Series switches.

Procedure 1

Enable FCoE

Step 1: Enable FCoE on each Cisco Nexus 5000 Series switch.
feature fcoe
A temporary 180-day license is activated when you enter the feature fcoe command. For long-term use, you must install a permanent license. For more information, please see the Cisco NX-OS Licensing Guide at licensing/guide/b_Cisco_NX-OS_Licensing_Guide.html.

Procedure 2

Enable NPV Mode

Note: Enabling NPV erases the working configuration and reboots the switch. You must then reconfigure the switch over the console interface. The only information that remains is the admin username and password. Please understand the impact of this change on a production network device. If you do not enable NPV, the Cisco Nexus 5000 Series switch operates as a full Fibre Channel switch; all zoning and Fibre Channel configuration of the Cisco Nexus 5000 Series switches is then similar to the Cisco MDS 9100 Series switch zoning and configuration in the storage section of this guide.

Step 1: Enable NPV on the Cisco Nexus 5548 switches.
feature npv
Step 2: Enable NPIV on the Cisco MDS 9148 SAN switches.
feature npiv
Step 3: Verify NPIV status on the Cisco MDS 9148:
p3-mds9148-1# sh npiv status
NPIV is enabled

Procedure 3

Configure VSAN

Step 1: Configure the VSAN on the Cisco Nexus 5548 switch and assign it to the interfaces connected to the Cisco MDS.
vsan database
 vsan 4 name General-Storage
 vsan 4 interface fc2/1-2
 exit



Step 2: Configure and bring up the Fibre Channel ports that are connected to the Cisco MDS 9100 Series switch.
interface fc 2/1
 no shut
 exit
interface fc 2/2
 no shut
 exit
Note: The ports must also be enabled on the Cisco MDS and have the correct VSAN assigned. Please refer to the Smart Business Architecture for Midsize Networks guide for more information on configuring the Cisco MDS 9100.
Step 3: Use the show interface brief command on the Cisco Nexus 5000 Series switch to view the operating mode of the interface. For example, in the output below, the operating mode is NP (proxy N-Port). Because the default port configuration on the Cisco MDS 9148 Series switch is auto and the NPIV feature has previously been enabled in the Cisco UCS Fabric Configuration, the switch negotiates as an NP port.
DC-5000a# show interface brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status  SFP  Oper  Oper    Port
                 Mode   Trunk               Mode  Speed   Channel
                        Mode                      (Gbps)
-------------------------------------------------------------------------------
fc2/1      4     NP     off    up      swl  NP    4       --
fc2/2      4     NP     off    up      swl  NP    4       --
Step 4: Check the Fibre Channel interface on the corresponding Cisco MDS 9148 switch.
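The Step 3 verification amounts to confirming that every fabric uplink is up and negotiated NP mode. Expressed as a small sketch (a hypothetical helper over records parsed from the output above, not a Cisco tool):

```python
# Uplink records as read from `show interface brief` (values from the output above).
uplinks = [
    {"interface": "fc2/1", "vsan": 4, "oper_mode": "NP", "status": "up"},
    {"interface": "fc2/2", "vsan": 4, "oper_mode": "NP", "status": "up"},
]

def uplinks_in_np_mode(records):
    """True only if all uplinks are up and operating as proxy N-Ports (NP)."""
    return all(r["status"] == "up" and r["oper_mode"] == "NP" for r in records)

print(uplinks_in_np_mode(uplinks))  # True when NPV/NPIV negotiation succeeded
```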

Procedure 4

Configure F-Port-Channeling and Trunk Protocols

Step 1: Configure F-port channeling and trunking on the Cisco MDS 9148. This can also be turned on with Device Manager.
feature fport-channel-trunk
Step 2: Enable the trunk protocol on the Cisco Nexus 5548.
trunk protocol enable

Procedure 5

Configure a Trunking Port Channel

For added redundancy, a port channel is set up between the Cisco Nexus 5548 and Cisco MDS 9148 Fibre Channel ports, and trunking is enabled on it.
Step 1: Run Fabric Manager's Port Channel wizard.
Figure 18 . Port Channel Wizard

Step 2: Select the radio button for NPV-enabled switches, and then select the Cisco MDS 9148 and Cisco Nexus 5548 pair.



Figure 19 . Select Switch Pair

Figure 20 . Select Switch Ports

Step 3: Select the correct ports between the switches and click Next.

Step 4: Select a port channel ID for both sides of the link (in this example, 256). Select the trunking radio button. Select 4 as the port VSAN (for fabric A in this example). Click Finish.



Figure 21 . Create Port Channel

Figure 22 . Activate Port Channel

Step 5: This port-channel creation may be disruptive if there is traffic across the link. This configuration assumes traffic is not yet configured. Click Yes to continue.

Step 6: Verify with a show interface brief on both the Cisco MDS and the Cisco Nexus.
dc3-5k-1# show interface brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status    SFP  Oper  Oper    Port
                 Mode   Trunk                 Mode  Speed   Channel
                        Mode                        (Gbps)
-------------------------------------------------------------------------------
fc2/1      4     NP     on     trunking  swl  TNP   8       256
fc2/2      4     NP     on     trunking  swl  TNP   8       256
-------------------------------------------------------------------------------
Interface             Vsan  Admin  Status    Oper  Oper    IP
                            Trunk            Mode  Speed   Address
                            Mode                   (Gbps)
-------------------------------------------------------------------------------
san-port-channel 256  4     on     trunking  TNP   16
-------------------------------------------------------------------------------
With the Fibre Channel configuration complete between the Cisco Nexus 5000 Series switch and the Cisco MDS 9148 Series switch, connectivity to the host can begin.



Procedure 6

FCoE QOS Differences on the Cisco Nexus 5548

The Cisco Nexus 5548, unlike the first-generation Cisco Nexus 5000 Series switches, does not preconfigure QoS for FCoE traffic.
Step 1: Four QoS statements map the existing system QoS policies for FCoE. Without these commands, the virtual Fibre Channel interface will not come up when activated. Enter the following in global configuration mode:
system qos
 service-policy type qos input fcoe-default-in-policy
 service-policy type queuing input fcoe-default-in-policy
 service-policy type queuing output fcoe-default-out-policy
 service-policy type network-qos fcoe-default-nq-policy
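A way to reason about this requirement: the vfc interface comes up only when all four default FCoE service policies are attached under system qos. A minimal sketch (a hypothetical audit helper, not a Cisco tool) that compares configured policies against the required set:

```python
# The four FCoE service policies required under `system qos` on the Nexus 5548.
REQUIRED_FCOE_POLICIES = {
    ("qos", "input", "fcoe-default-in-policy"),
    ("queuing", "input", "fcoe-default-in-policy"),
    ("queuing", "output", "fcoe-default-out-policy"),
    ("network-qos", None, "fcoe-default-nq-policy"),  # network-qos has no direction
}

def missing_fcoe_policies(configured):
    """Return the required FCoE policies absent from the configured set."""
    return REQUIRED_FCOE_POLICIES - set(configured)

# Example: only the network-qos policy has been applied so far.
partial = {("network-qos", None, "fcoe-default-nq-policy")}
print(sorted(name for _, _, name in missing_fcoe_policies(partial)))
```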

Procedure 7

Configure Host-Facing FCoE Ports

On the Cisco Nexus 5548 switch, configure the Ethernet ports connected to the CNA in the host.
Step 1: Create a VLAN that will carry FCoE traffic to the host. In this example, VLAN 304 is mapped to VSAN 4. VLAN 304 carries all VSAN 4 traffic to the CNA over the trunk.
vlan 304
 fcoe vsan 4
 exit
Step 2: Create a virtual Fibre Channel (vfc) interface for Fibre Channel traffic and bind it to the corresponding host Ethernet interface.
interface vfc1
 bind interface Ethernet 1/3
 no shutdown
 exit
Step 3: Add the virtual Fibre Channel interface to the VSAN database.
vsan database
 vsan 4 interface vfc 1
 exit
Step 4: Configure the Ethernet interface to operate in trunk mode. Also configure the interface with the FCoE VLAN and any data VLANs required by the host, and configure the spanning-tree port type as trunk edge.
interface Ethernet 1/3
 switchport mode trunk
 switchport trunk allowed vlan 148,304
 spanning-tree port type edge trunk
 no shut

Procedure 8

Verify FCoE Connectivity

Step 1: Use the show interface command to verify the status of the virtual Fibre Channel interface. The interface should now be up, as seen below, if the host is properly configured to support the CNA. Host configuration is beyond the scope of this guide; please see the CNA documentation for specific host drivers and configurations.
dc3-5k-1# show interface vfc 1
vfc1 is trunking (Not all VSANs UP on the trunk)
 Bound interface is Ethernet1/3
 Hardware is Virtual Fibre Channel
 Port WWN is 20:00:00:05:73:ab:27:3f
 Admin port mode is F, trunk mode is on
 snmp link state traps are enabled
 Port mode is TF
 Port vsan is 4
 Trunk vsans (admin allowed and active) (1,4)
 Trunk vsans (up)                       (4)
 Trunk vsans (isolated)                 ()
 Trunk vsans (initializing)             (1)
 1 minute input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
 1 minute output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
 28 frames input, 3552 bytes
  0 discards, 0 errors
 28 frames output, 3456 bytes
  0 discards, 0 errors
 last clearing of show interface counters never
 Interface last changed at Thu Oct 21 03:34:44 2010
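The relationships being verified here — the FCoE VLAN mapped to the VSAN, that VLAN allowed on the host trunk, and the vfc bound to the host port — can be sketched as a consistency check. This is a hypothetical helper using the values from this example, not part of the guide:

```python
# Configuration facts from this example.
fcoe_vlan_to_vsan = {304: 4}           # vlan 304 / fcoe vsan 4
trunk_allowed_vlans = {148, 304}       # switchport trunk allowed vlan 148,304
vfc_bindings = {"vfc1": "Ethernet1/3"} # bind interface Ethernet 1/3
vfc_vsan = {"vfc1": 4}                 # vsan 4 interface vfc 1

def fcoe_path_consistent(vfc):
    """Check that the vfc's VSAN has an FCoE VLAN allowed on the host trunk
    and that the vfc is bound to a host-facing Ethernet interface."""
    vsan = vfc_vsan.get(vfc)
    vlans = [vl for vl, vs in fcoe_vlan_to_vsan.items() if vs == vsan]
    return (bool(vlans)
            and all(vl in trunk_allowed_vlans for vl in vlans)
            and vfc in vfc_bindings)

print(fcoe_path_consistent("vfc1"))  # True for this configuration
```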



Step 2: On the Cisco Nexus 5500 Series switches, display the FCoE addresses.
dc3-5k-1# sh fcoe database
-------------------------------------------------------------------------------
INTERFACE  FCID      PORT NAME                MAC ADDRESS
-------------------------------------------------------------------------------
vfc1       0x050b00  21:00:00:c0:dd:11:28:29  00:c0:dd:11:28:29
Step 3: The addresses appear in the current Fibre Channel login database on the Cisco MDS 9100 Series switch. The first line below is the Cisco Nexus 5548 Series switch; the second entry is the host on the vfc 1 interface.
p3-mds9148-1# sh flogi da
-------------------------------------------------------------------------------
INTERFACE         VSAN  FCID      PORT NAME                NODE NAME
-------------------------------------------------------------------------------
port-channel 256  1     0xb41300  25:00:00:05:73:ab:27:00  20:01:00:05:73:ab:27:01
port-channel 256  4     0x050b00  21:00:00:c0:dd:11:28:29  20:00:00:c0:dd:11:28:29
Step 4: The Fibre Channel name server database differentiates the Cisco Nexus 5548 Series switch WWN from the actual host WWN. The switch appears as type NPV, and the host, as expected, shows up as an initiator.
p3-mds9148-1# show fcns database
VSAN 4:
--------------------------------------------------------------------------
FCID      TYPE  PWWN (VENDOR)                     FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x050600  N
0x050b00  N     21:00:00:c0:dd:11:28:29 (Qlogic)  scsi-fcp:init
                [p3-c210-1-cna-a]
Zoning and device aliases are configured as in the Fibre Channel section.
Note: Much of the configuration of the Cisco Nexus 5000 Series switch can also be done from within Device Manager; however, Device Manager cannot be used to configure VLANs or Ethernet trunks on the Cisco Nexus 5000 Series switches.
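The vendor tag in the fcns output comes from the OUI embedded in the port WWN. A small sketch of extracting it (the OUI-to-vendor map here is illustrative, not exhaustive):

```python
def wwn_oui(wwn):
    """Extract the 3-byte vendor OUI from a Fibre Channel WWN.
    For NAA type 1 and 2 WWNs (e.g., 1x:xx:... or 2x:xx:...),
    the OUI occupies bytes 2-4."""
    octets = wwn.lower().split(":")
    return ":".join(octets[2:5])

# Illustrative map; 00:c0:dd is registered to QLogic.
OUI_VENDORS = {"00:c0:dd": "Qlogic"}

pwwn = "21:00:00:c0:dd:11:28:29"  # host CNA from the fcns output above
print(OUI_VENDORS.get(wwn_oui(pwwn), "unknown"))  # → Qlogic
```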



Network Security
Business Overview
In today's business environment, the data center contains some of the organization's most valuable assets. Customer and personnel records, financial data, email stores, and intellectual property must be maintained in a secure environment to ensure confidentiality and availability. Additionally, portions of networks in specific business sectors may be subject to industry or government regulations that mandate specific security controls to protect customer or client information. To protect the valuable electronic assets located in the data center, network security ensures the facility is protected from automated or human-operated snooping and tampering, and it prevents compromise of hosts by resource-consuming worms, viruses, or botnets.

Figure 23 . Secure Data Center Overview and Logical Topology

Technology Overview
While worms, viruses, and botnets pose a substantial threat to centralized data, particularly from the perspective of host performance and availability, servers must also be protected from employee snooping and unauthorized access. Statistics have consistently shown that the majority of data loss and network disruptions have occurred as the result of human-initiated activity (intentional or accidental) carried out within the boundaries of the business's network. To minimize the impact of unwanted network intrusions, firewalls and intrusion prevention systems (IPSs) should be deployed between clients and centralized data resources.

The Data Center Security design employs a pair of Cisco Adaptive Security Appliance (ASA) 5500s for data center firewall security. One configuration option consists of a pair of Cisco ASA 5585-20s, each connected directly to a Cisco IPS 4260 appliance. This configuration offers up to 5 Gbps of firewall throughput. The Cisco IPS 4260s in this design support 1-2 Gbps of throughput via gigabit connections between the Cisco ASA 5585 and the Cisco IPS 4260. If 10-Gigabit Ethernet connectivity is required, a Cisco IPS 4270 may be employed for up to 4 Gbps of throughput. A similar configuration can be deployed with a pair of Cisco ASA 5580-20s, each connected directly to a Cisco IPS 4260 appliance; this configuration also offers up to 5 Gbps of firewall throughput. If applications do not require IDS/IPS protection, or if bandwidth requirements exceed the IPS 4270's performance capabilities, you can provision a 10-Gigabit Ethernet connection from the secure side of the firewall to the server-room VLANs.



Figure 24 . Cisco ASA 5585/5580 and IPS 4260 Physical Topology and Traffic Path

Security Topology Design

The network defines two secure VLANs in the data center. The number of secure VLANs is arbitrary; the design is an example of how to create multiple secured networks to host services that require separation. High-value applications, such as Enterprise Resource Planning and Customer Relationship Management, may need to be separated from other applications in their own VLAN.
Figure 25 . Deploy Multiple Secure Data Center VLANs

The pair of ASAs is configured for active-standby high availability to ensure that access to the data center is minimally impacted by outages caused by software maintenance or hardware failure. The Cisco ASAs are configured in routing mode; as a result, the secure network must be in a separate subnet from the client subnets. IP subnet allocation would be simplified if the Cisco ASA were deployed in transparent mode; however, hosts might inadvertently be connected to the wrong VLAN, where they would still be able to communicate with the network, incurring an unwanted security exposure.

The data center IPSs monitor for and mitigate potential malicious activity that is otherwise allowed by the security policy defined on the ASAs. The IPS sensors are deployed in promiscuous intrusion detection system (IDS) mode so that they only monitor and log abnormal traffic. As a more advanced option, they can be deployed inline to fully engage their intrusion prevention capabilities, wherein they mitigate traffic that violates the configured policy.

As another example, services that are indirectly exposed to the Internet (via a proxy in the Demilitarized Zone) should be separated from other services, if possible, to prevent Internet-borne compromise of some servers from spreading to other services that are not exposed. Traffic between VLANs should be kept to a minimum, unless your security policy dictates service separation. Keeping traffic between servers intra-VLAN will improve performance and reduce load on network devices.

Security Policy Development

A business should have an IT security policy as a starting point in defining its firewall policy. If there is no companywide security policy, it will be very difficult to define an effective policy for the business while maintaining a secure computing environment. To effectively deploy security between the various functional segments of a business's network, you should seek the highest level of detail possible regarding the expected network behaviors. The greater the detail you have about those expectations, the better positioned you will be to define a security policy that enables the business's application traffic and performance requirements while optimizing security.



Reader Tip
A detailed examination of regulatory compliance considerations exceeds the scope of this document; you should include industry regulation in your network security design. Noncompliance may result in regulatory penalties such as fines or business-activity suspension.

Network security policies can be broken down into two basic categories: whitelist policies and blacklist policies. A whitelist security policy offers a higher implicit security posture, blocking all traffic except that which must be allowed (at a sufficiently granular level) to enable applications. Whitelist policies are generally better positioned to meet regulatory requirements because only traffic that must be allowed to conduct business is allowed. Other traffic is blocked and does not need to be monitored to assure that unwanted activity is not occurring, reducing the volume of data that will be forwarded to an IDS or IPS, as well as minimizing the number of log entries that must be reviewed in the event of an intrusion or data loss.
Figure 26 . Whitelist Security Policy

Inversely, a blacklist policy denies only the traffic that specifically poses the greatest risk to centralized data resources. A blacklist policy is simpler to maintain and less likely to interfere with network applications; a whitelist policy is the best-practice option if you have the opportunity to examine the network's requirements and adjust the policy to avoid interfering with desired network activity.
Figure 27 . Blacklist Security Policy

Cisco ASA firewalls implicitly end access lists with a deny-all rule. Blacklist policies include an explicit rule, prior to the implicit deny-all rule, to allow any traffic that is not explicitly allowed or denied.

Whether you choose a whitelist or blacklist policy basis, consider IDS or IPS deployment for controlling malicious activity on otherwise trustworthy application traffic. At a minimum, IDS or IPS can aid with forensics to determine the origin of a data breach. In the best circumstances, IPS may be able to detect and prevent attacks as they occur and provide detailed information to track the malicious activity to its source. IDS or IPS may also be required by the regulatory oversight to which a network is subject (for example, PCI 2.0).

A blacklist policy that blocks high-risk traffic offers a lower-impact but less-secure option (compared to a whitelist policy) in cases where a detailed study of the network's application activity is impractical, or if the network availability requirements prohibit application troubleshooting. If identifying all of the application requirements is not practical, you can apply a blacklist policy with logging enabled, so that a detailed history of the policy is generated. With network-behavior details in hand, development of a whitelist policy is much easier and more effective.
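The first-match behavior described above — explicit rules evaluated in order, an implicit deny-all terminating a whitelist, and an explicit permit-any inserted before it in a blacklist — can be modeled in a few lines. This is a conceptual sketch, not ASA code:

```python
def evaluate_acl(rules, packet):
    """First-match ACL evaluation. Each rule is (action, predicate);
    an implicit deny-all terminates the list, as on Cisco ASA."""
    for action, matches in rules:
        if matches(packet):
            return action
    return "deny"  # implicit deny-all

web = {"dst_port": 443}
telnet = {"dst_port": 23}

# Whitelist: permit only required services; everything else hits the implicit deny.
whitelist = [("permit", lambda p: p["dst_port"] in (80, 443, 53))]
# Blacklist: deny high-risk services, then explicitly allow all remaining traffic.
blacklist = [("deny", lambda p: p["dst_port"] in (23, 161)),
             ("permit", lambda p: True)]

print(evaluate_acl(whitelist, web), evaluate_acl(whitelist, telnet))
print(evaluate_acl(blacklist, web), evaluate_acl(blacklist, telnet))
```

Both policies block the Telnet packet here, but for different reasons: the whitelist by omission, the blacklist by explicit rule.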



Deployment Details
Data Center Security Deployment is addressed in three discrete processes:
1. Cisco ASA firewall connectivity, which establishes network connections between the Cisco ASA firewalls and the Cisco Catalyst core.
2. Cisco ASA firewall policy discussion and configuration, which outlines the process needed to identify security policy needs and apply a configuration to meet requirements.
3. Cisco IPS connectivity and policy configuration, which integrates connectivity and policy configuration in one process.

Figure 28 . Secure Data Center Connectivity


Configure Cisco ASA Firewall Connectivity
1. Configure VLANs
2. Configure Cisco ASA Dynamic Routing
3. Configure the ASA for Active-Standby High Availability

Tech Tip
These examples illustrate configuration for the Cisco ASA 5585, which employs a fixed interface configuration. The Cisco ASA 5580 allows user-configurable options for Gigabit and 10-Gigabit Ethernet interfaces. If an application-specific configuration for Cisco ASA 5580 appliances is applied, consult the product documentation for interface configuration guidance.

All interfaces on the Cisco ASA have a security-level setting. The higher the number, the more secure the interface, relative to other interfaces. Inside interfaces are typically assigned 100, the highest security level. Outside interfaces are generally assigned 0. By default, traffic can pass from a high-security interface to a lower-security interface. In other words, traffic from an inside network is permitted to an outside network, but not conversely.

Step 1: Configure the LAN-side Cisco ASA interfaces to connect to the Cisco Catalyst VSS:
interface TenGigabitEthernet0/7
 description dc-c6504 T*/4/6
 nameif outside
 security-level 0
 ip address standby

Complete the following procedures to configure connectivity between the Cisco ASA Firewalls and the core. Note that this design describes a configuration wherein both sides (LAN-side and DC-side) of the firewall connect to the core, to provide a central VLAN fanout point for all of the Data Center VLANs.

Procedure 1

Configure VLANs

10-Gigabit Ethernet links connect the ASA to the core Cisco Catalyst VSS. Connections to the secure VLANs (also on the core Cisco Catalyst VSS) are made with one Gigabit Ethernet port per ASA for each secured subnet. This interface is then assigned an IP address, which will be the default gateway for the subnet. The Cisco Catalyst VSS core switch does not carry an IP address on the secured VLANs as it does for other core VLANs. If some VLANs require higher-bandwidth connectivity through the firewall to the data center than the IPS will permit, a separate VLAN may be provisioned that is not passed to the IPS appliance. This is particularly applicable for 10-Gigabit Ethernet links.



Step 2: Configure the DC-side Cisco ASA interfaces to connect to secure VLANs on the Cisco Catalyst VSS via an IDS sensor:
interface GigabitEthernet0/0
 description dc-ips4260 G2/2
 nameif DCVLAN154
 security-level 100
 ip address standby
Step 3: Add the LAN-side configuration to Cisco Catalyst VSS ports that connect to the Cisco ASA security appliances:
interface TenGigabitEthernet1/4/5
 description Access port for DC-5585a
 switchport
 switchport access vlan 153
 switchport mode access
 spanning-tree portfast edge
Step 4: Add the DC-side configuration to Cisco Catalyst VSS ports that connect to the Cisco IPS security appliances:
interface GigabitEthernet1/2/6
 description Access port for DC-IPS4260a
 switchport
 switchport access vlan 154
 switchport mode access
 spanning-tree portfast edge

Procedure 3

Configure the ASAs for Active-Standby High Availability

The ASA and IPS appliances are configured for active-standby high availability. When ASA appliances are configured in active-standby mode, the standby appliance does not handle traffic, so the primary device must be sized to provide enough throughput to address connectivity requirements between the core and the data center.
Step 1: Configure one interface on each ASA as the state-synchronization interface that the ASAs use to share configuration updates, determine which device is active in the high-availability (HA) pair, and exchange state information for active firewall sessions.
interface GigabitEthernet0/1
 description LAN/STATE Failover Interface
!
failover
failover lan unit primary
failover lan interface failover GigabitEthernet0/1
failover polltime unit msec 200 holdtime msec 800
failover polltime interface msec 500 holdtime 5
failover key [key]
failover replication http
failover link failover GigabitEthernet0/1
failover interface ip failover standby
The failover key value must match on the two devices that are configured in the active-standby HA pair. This key is used for two purposes: to authenticate the two devices to each other when failover is established, and to secure the state-synchronization messages between the devices that enable the ASA pair to maintain service for existing connections when the standby ASA becomes primary.
The two lines of the configuration that begin with failover polltime reduce the failover timers from the defaults to achieve subsecond failover. Improved failover times reduce application and user impact during outages. Reducing the failover timer intervals below these values is not recommended.
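To see why these timers yield subsecond failover: the holdtime is the window of missed hellos after which the peer is declared failed. A quick check with the unit timers from the configuration above (simple arithmetic on the stated timer semantics, not an ASA-internal formula):

```python
# Unit failover timers from the configuration above (milliseconds).
unit_poll_ms = 200  # hello interval between the HA peers
unit_hold_ms = 800  # peer declared failed after this long without hellos

# Consecutive hellos that can go missing before failover is triggered.
missed_hellos = unit_hold_ms // unit_poll_ms
subsecond = unit_hold_ms < 1000

print(missed_hellos, subsecond)
```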

Procedure 2

Configure Cisco ASA Dynamic Routing

Because the ASAs are the gateway to the secure VLANs in the server room, the ASA pair must be configured to participate in the network's EIGRP updates to advertise the connected secure subnets into the LAN. This way, the servers connected to the secure VLANs will be reachable.
Step 1: Enter the following text at the command line to configure the ASA pair:
router eigrp 1
 no auto-summary
 network
 network
 network
 passive-interface DCVLAN154
 passive-interface DCVLAN154




Evaluate and Deploy Security Policy
1. Evaluate Security Policy Requirements
2. Deploy the Appropriate Security Policy

This section describes the steps required to evaluate which type of policy fits an organization's Data Center security requirements and provides the procedures necessary to apply these policies.

Procedure 1

Evaluate Security Policy Requirements

Step 1: Evaluate security policy requirements by answering these questions:
What applications will be served from the Secure Data Center?
Can the applications' traffic be characterized at the protocol level?
Is a detailed description of application behavior available to facilitate troubleshooting if the security policy interferes with the application?
What is the network's baseline performance expectation between the controlled and uncontrolled portions of the network?
What is the peak level of throughput that security controls will be expected to handle, including bandwidth-intensive activity such as workstation backups or data transfers to a secondary data replication site?
Step 2: For each data center VLAN, determine which security policy enables application requirements. Each VLAN that requires a firewall will need either a permissive (blacklist) or restrictive (whitelist) security policy.

Procedure 2

Deploy the Appropriate Security Policy

Network security policy configuration varies to suit the policy and management requirements of an organization. Thus, the examples here should be used as a basis for security policy configuration.

Procedure 2a

Deploy a Whitelist Security Policy

Step 1: A basic whitelist data-service policy can be applied to allow common business services such as HTTP, HTTPS, and DNS, and other services typically seen in Microsoft-based networks. Enter the following configuration to control access so only specific hosts may be accessed:
name Secure-Subnets
!
object-group network Application-Servers
 description HTTP, HTTPS, DNS, and MSExchange
 network-object host BladeWeb1Secure
 network-object host BladeWeb2Secure
!
object-group service MS-App-Services
 service-object tcp eq domain
 service-object tcp eq www
 service-object tcp eq https
 service-object tcp eq netbios-ssn
 service-object udp eq domain
 service-object udp eq nameserver
 service-object udp eq netbios-dgm
 service-object udp eq netbios-ns
!
access-list outside_access_in extended permit object-group MS-App-Services any object-group Application-Servers
!
access-group outside_access_in in interface outside
Step 2: IT management staff or network users may need access to certain resources. In this example, management hosts in an IP address range are allowed SSH and SNMP access to Data Center subnets:
name Secure-Subnets
name Mgmt-host-range
 description Address pool for IT users
!
object-group service Mgmt-traffic
 service-object tcp eq ssh
 service-object udp eq snmp
!
access-list outside_access_in extended permit object-group Mgmt-traffic Mgmt-host-range Secure-Subnets
!
access-group outside_access_in in interface outside



Step 3: A bypass rule allows wide-open access to hosts that are added to the appropriate network object group. The bypass rule must be carefully defined to avoid opening access to hosts or services that must otherwise be blocked. In a whitelist policy, the bypass rule is typically disabled, and it is only called into use when firewall policy troubleshooting requires allowing access to an application. The following policy defines two hosts and applies them to the bypass rule:
name BladeWeb1Secure
name BladeWeb2Secure
!
object-group network Bypass-Rule
 description Open Policy for Server Access
 network-object host BladeWeb1Secure
 network-object host BladeWeb2Secure
access-list outside_access_in extended permit ip any object-group Bypass-Rule
access-group outside_access_in in interface outside

Step 1: Network administrative users may need to issue SNMP queries from desktop computers to monitor network activity. The first portion of the policy explicitly allows SNMP queries for a specific address range that will be allocated for IT staff. Enter the following commands: name Secure-Subnets name Mgmt-host-range description Address pool for IT users access-list outside_access_in remark Access from mgmt-host pool to both secure subnets via ssh and snmp. access-list outside_access_in extended permit udp Mgmt-hostrange Secure-Subnets eq snmp Step 2: Block Telnet and SNMP to all other hosts with the following command: object-group service Mgmt-traffic service-object tcp eq ssh telnet service-object udp eq snmp access-list outside_access_in extended deny object-group Mgmt-traffic any any Step 3: Configure a bypass rule to allow any application traffic through that was not specifically denied. Note that logging is disabled on this policy to prevent the firewall from having to log all accesses to the server network. access-list outside_access_in extended permit ip any objectgroup Bypass-Policy log disable

Tech Tip
The bypass rule group is useful for troubleshooting or providing temporary access to services on the host that must be opened for maintenance or service migration.

Procedure 2b

Deploy a Blacklist Security Policy

If an organization does not have the desire or resources to maintain a granular, restrictive policy to control access between centralized data and the user community, a simpler, easy-to-deploy policy that limits only the highest-risk traffic may be more attractive. This policy is typically configured so that access to only specific services is blocked; all other access is handled by the bypass rule discussed in the previous section.



Figure 29 . ASA Firewall Policy configuration in ASDM

From a security standpoint, intrusion prevention systems are complementary to firewalls, because firewalls are generally access-control devices built to block access to an application. In this way, a firewall can be used to remove access to a large number of application ports, reducing the threat to the servers. IPS watches the network and application traffic that is permitted through the firewall, looking for attacks. If it detects an attack, the traffic is blocked, preventing the attack, and an alert is sent to inform the organization about the activity. IDS is similar to IPS except that it only provides alerts and does not block attacks.

Promiscuous versus Inline

There are two primary deployment options for IPS devices: promiscuous (IDS) or inline (IPS). The choice between them depends on risk tolerance and fault tolerance. In an IPS deployment, the device sits in the actual network packet flow and inspects the real packets. With IDS, the device inspects only copies of packets, which prevents it from stopping a malicious packet when it sees one. The advantage of IPS mode is that the sensor, when it detects malicious behavior, can simply drop the traffic, giving it a much greater capacity to actually prevent attacks. An IDS device has to rely on another inline enforcement device to drop the traffic, which means that for single-packet attacks (such as Slammer over UDP), an IDS cannot prevent the attack from occurring. However, an IDS can offer great value when identifying and cleaning up infected hosts. The disadvantage of IPS mode is that, because it is an inline device, it must keep up with the traffic load on the network, including handling bursts.
Reasons for using IDS:
• No impact to the network (latency, availability)
• Easier to deploy than IPS (no network changes)
Reasons for using IPS:
• Higher security than IDS (ability to drop bad packets)
• Reduced false positives and false negatives
Deploying Cisco Intrusion Prevention System (IPS)
1. Complete Initial Configuration
2. Complete Basic Configuration
3. Define and Tune the IPS Policy
4. Configure IPS Signature Updates
5. Monitor IDS or IPS Events
6. Troubleshoot the IPS
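The distinction between the two sensor modes described above can be sketched as follows (an illustrative simplification, not Cisco sensor code): an inline sensor sits in the forwarding path and can drop a malicious packet itself, while a promiscuous sensor only sees a copy, so the original packet is delivered regardless of what the sensor decides.

```python
# Hypothetical sketch contrasting inline (IPS) and promiscuous (IDS) handling.
def inline_sensor(packet, is_malicious):
    """In the data path: a malicious packet is dropped and an alert raised."""
    alerts = []
    if is_malicious(packet):
        alerts.append(packet)
        return None, alerts          # packet dropped before delivery
    return packet, alerts            # packet forwarded unchanged

def promiscuous_sensor(packet, is_malicious):
    """Off the data path: the sensor inspects a copy, so it can only alert."""
    alerts = []
    if is_malicious(packet):
        alerts.append(packet)        # alert only; delivery already happened
    return packet, alerts
```

This is why a single-packet attack gets through an IDS even when it fires an alert, while an IPS can stop it outright.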





For the headquarters design using the ASA 5585/5580 and IPS 4260, two global policies were built:
• One for IPS, which is enabled and sends all traffic to the IPS appliance in inline IPS mode, except for traffic designated as bypass traffic.
• One for IDS, which is currently disabled and sends all traffic to the IPS appliance in promiscuous IDS mode, except for traffic designated as bypass traffic.
An organization may choose an IPS or IDS deployment, depending on regulatory and application requirements. Cisco recommends that you start with an IDS (promiscuous) design for initial deployment and then move to IPS once the traffic profile at the deployment is known and the organization is comfortable that no production traffic will be affected. The primary reason for using IPS is that the sensor blocks malicious activity instead of just alerting.
Figure 30 . Service Policy Rules

Figure 31 . Traffic Inspection Mode Map

IPS Deployment Options

As described in the previous section, the two methods for deploying Cisco IPS are inline (IPS), where the sensor is in the actual packet flow, and promiscuous mode (IDS), where the sensor sees copies of the packets. Beyond that, other deployment options for inline IPS mode are defined by how the device is put into the traffic flow. The appliance version of the IPS has two inline options (Figure 31: Traffic Inspection Mode Map):
• VLAN pairing, where traffic comes in and out of the IPS on the same physical interface. This requires a trunk port on the attached device because the sensor will move packets in the specified VLAN pair from VLAN X to VLAN Y and conversely.
• Interface pairing, where traffic comes to the sensor on one physical interface and leaves on another. This type of deployment acts most like a physical wire.

Changing configuration from promiscuous to inline generally involves some cabling changes and switch configurations to set up the actual monitor port. However, the connectivity changes allow the appliance to accommodate multiple functions at the same time, such as IDS inspection in front of the firewall and IPS inspection behind the firewall, or IDS in one network and IPS in an entirely different network.

Procedure 1

Complete Initial Configuration

The first step in configuring an IPS sensor is to use the console to access the sensor and set up basic networking information such as the IP address, gateway, and access lists to allow remote access. Once these critical pieces of data are entered, the rest of the configuration is easily accomplished using a GUI tool such as IPS Manager Express or IPS Device Manager. Unlike the Cisco ASA firewalls used in the SBA design, IPS appliances use an out-of-band management connection for configuration and monitoring. The sensor's management port is connected to a VLAN from which the sensors can route to or directly reach the management station.



Step 1: Gain access to the 4260 console by following the directions at this link: guide/cli/cli_logging_in.html#wp1032737
Step 2: After you gain access, log in to the IPS device. The default username and password are both cisco, and the password must be changed after the first login.
Step 3: After login, enter the setup command as follows:
sensor# setup
--- Basic Setup ---
--- System Configuration Dialog ---
At any point you may enter a question mark ? for help.
Use ctrl-c to abort configuration dialog at any prompt.
Default settings are in square brackets [].
Current time: Mon Oct 12 23:31:38 2009
Setup Configuration last modified: Mon Oct 12 23:22:27 2009
Enter host name [sensor]: p3-dc-ips4260a
Enter IP interface [,]: ,
Modify current access list? [no]: yes
Current access list entries:
  No entries
Permit:
Permit:
Use DNS server for Global Correlation? [no]:
Use HTTP proxy server for Global Correlation? [no]:
Modify system clock settings? [no]:
Participation in the SensorBase Network allows Cisco to collect aggregated statistics about traffic sent to your IPS.
SensorBase Network Participation level? [off]:
The following configuration was entered.
service host
network-settings
host-ip ,
host-name p3-dc-ips4260a
telnet-option disabled
access-list
ftp-timeout 300
no login-banner-text
dns-primary-server disabled
dns-secondary-server disabled
dns-tertiary-server disabled
http-proxy no-proxy
exit
time-zone-settings

offset 0
standard-time-zone-name UTC
exit
summertime-option disabled
ntp-option disabled
exit
service global-correlation
network-participation off
exit
[0] Go to the command prompt without saving this configuration.
[1] Return to setup without saving this configuration.
[2] Save this configuration and exit setup.
[3] Continue to Advanced setup.
Enter your selection [3]: 2
Warning: DNS or HTTP proxy is required for global correlation inspection and reputation filtering, but no DNS or proxy servers are defined.
--- Configuration Saved ---
Complete the advanced setup using CLI or IDM. To use IDM, point your web browser at https://<sensor-ip-address>.

Procedure 2

Complete Basic Configuration

Once the setup is complete, you can continue configuration using IPS Manager Express (IME). The basic steps to configure a sensor after running setup are:
• Configure time settings
• Enable interfaces
• Build interface pairs or VLAN pairs
• Assign interfaces to virtual sensors



Step 1: To configure the time settings, access Sensor Setup > Time. From there, configure the time zone, summertime, and NTP server for the device (as shown in Figure 32).
Figure 32 . IPS Time Configuration Window

Step 2: Enable the interfaces by accessing Interfaces > Interfaces and enabling the interfaces needed for IPS inspection use.
Figure 33 . IPS Interface Configuration Window

Step 3: To build interface pairs, access Interfaces > Interface Pairs or VLAN Pairs. This design applies interface pairing; two pairs are created, which allows one interface pair to be used for ASA interface DCVLAN154 and one pair for interface DCVLAN155.
Figure 34 . IPS Interface-Pair Configuration Window

Step 4: To assign interfaces to virtual sensors, access Policies > IPS Policies. For each appliance, assign the interfaces to the virtual sensor, and then click OK and Apply.
Figure 35 . Edit IPS Virtual Sensor Policy Window



Procedure 3

Define and Tune the IPS Policy

Figure 36 . Configure Event Action Window

By default, Cisco IPS has an event action override configured to deny any attack that has a risk rating of 90 or higher. This allows the sensor to block a fairly substantial percentage of the most serious attacks with signatures that are accurate out of the box. Policy changes on the sensor can be made on a case-by-case basis, but it is far easier to make them using the policy table or the event action rules. Modifying the event action rules from the default setting of 90 changes IPS behavior as follows:
• At 100, the sensor's policy is changed so that only the most accurate and highest-severity signatures take the default deny-packet action on the sensor.
• At 85 or lower, the sensor blocks a greater range of events. This makes the sensor's behavior more aggressive, blocking more attacks, but also possibly blocking a small amount of legitimate traffic if the sensor has not been tuned to the environment in which it operates.
IDS/IPS tuning is a process rather than a one-time event. Review the events being blocked and determine whether their denial was appropriate. If not, follow these steps to lessen their impact.
Step 1: Use Event Action Filters to remove an action or alert due to a specific event.

Step 2: Disable or retire a signature, or remove an action from the signature-specific settings. This prevents the signature from triggering (retiring also prevents it from taking up resources, but it takes longer to bring it back online if needed later).
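The tuning behavior described above can be sketched as follows (a hedged illustration, not the sensor's actual algorithm): an event action override adds a deny action to any event whose risk rating meets the threshold, and an event action filter can subtract the deny action for a specific signature and source to clear a false positive.

```python
# Illustrative model of event action overrides and filters; the signature IDs
# and addresses below are hypothetical examples.
def actions_for(event, deny_threshold=90, filters=()):
    """Return the set of actions the sensor would take for an event."""
    actions = {"alert"}
    # Event action override: deny when the risk rating meets the threshold.
    if event["risk_rating"] >= deny_threshold:
        actions.add("deny")
    # Event action filters: (signature id, source ip, action to remove).
    for sig, src, removed in filters:
        if event["sig_id"] == sig and event["src"] == src:
            actions.discard(removed)
    return actions
```

Lowering the threshold (for example, to 85) makes more events qualify for the deny action, which is why the guide describes that setting as more aggressive; a filter entry then serves as the surgical fix for any legitimate traffic caught by the broader policy.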



Figure 37 . Edit Active IPS Policies Window

Figure 38 . Configure IPS Auto-Update Settings

Procedure 4

Configure IPS Signature Updates

Tech Tip
Note that using the auto update feature will only update the sensor's engine files and signature files. Major and minor code versions and service packs are not updated with this mechanism.

IPS devices are generally only as good as their last update, so keeping the sensors updated is important. To this end, the easiest solution is to configure each sensor to retrieve signature updates directly from Cisco.com.
Step 1: To configure IPS signature updates in IME, access Configuration > IPS > Sensor Management > Auto/Cisco.com Update.

IPS software is available here (note this requires a valid login): Redirect.x?mdfid=268439591. To receive automatic notifications of code version releases and other IPS news, sign up for Cisco Threat Defense Bulletins here: 380&keyCode=123668_4.



Procedure 5

Monitor IDS or IPS Events

Step 1: Right-click a specific event to get more data. Figure 40 . Viewing IPS Event Details

When deploying IDS or IPS, it is important to set up a monitoring solution to retrieve, store, and display alerts. IPS Manager Express (IME) is one such solution for Cisco IPS. Cisco IME is a complete management and monitoring solution that allows a user to set up, configure, monitor, and tune an IPS/IDS deployment. Cisco IME is a standalone package that includes a service to configure and monitor activity from up to 10 sensors (as of IME 7.0.2). Cisco IME is available at no extra cost in the same web location as Cisco IPS software updates and upgrades. Figure 39 . Viewing IPS Events with Cisco IME



Procedure 6

Troubleshoot the IPS

Figure 41 . Troubleshooting IPS Health with Cisco IME

If network errors are suspected to be the result of the IPS device blocking legitimate traffic, the IPS policy can be checked and adjusted to eliminate the problem.
Step 1: The first step in identifying whether the IPS is blocking legitimate traffic is to take the sensor out of the processing path. To do this, navigate to IDM and add the affected devices to the Bypass policy group, which removes all inspection, including firewall inspection, for those IP addresses. Alternatively, removing the sensor from the processing path can be accomplished by putting the sensor into Bypass On mode, which passes traffic around the inspection engine and prevents any IPS inspection from occurring. If enabling IPS bypass solves the problem, then more detailed troubleshooting is in order at a time when it is possible without impacting network traffic. If it does not solve the problem, the issue is unlikely to be IPS-related and you can focus troubleshooting efforts elsewhere.
Step 2: Follow-on steps might include checking IME to see whether the sensor is healthy and responsive. If not, a TAC case might be needed to determine the problem.

Step 3: If the sensor is working fine, check the event logs to see if the sensor is firing events with deny actions related to the IP addresses or services being impacted. If the sensor is firing events that are blocking traffic, the sensor is either seeing real attacks and blocking them, or it is firing on false positives.
Step 4: Filter out false positives using the Event Action Filters in the lower right of the Sensor Configuration > Policies screen. Adding a filter to remove the deny action for the event being fired incorrectly should solve the problem.
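The triage order in the steps above can be sketched as a simple decision chain (illustrative only; the outcomes paraphrase the guide's recommendations):

```python
# Hypothetical triage helper mirroring the troubleshooting flow above.
def triage(bypass_fixes_it, sensor_healthy, deny_events_match):
    if not bypass_fixes_it:
        return "not an IPS issue; troubleshoot elsewhere"
    if not sensor_healthy:
        return "sensor unhealthy; consider opening a TAC case"
    if deny_events_match:
        return "real attack or false positive; add an event action filter if false"
    return "IPS-related but no matching deny events; inspect further"
```

The key ordering point is that enabling bypass first cleanly separates IPS-related problems from everything else before any policy changes are made.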



Computing Resources
Business Overview
As a midsize organization begins to grow, the number of servers required to handle the information processing tasks of the organization grows as well. Using the full capabilities of the investment in server resources can help an organization add new applications while controlling costs as it moves from a small server room environment into a midsize data center. Server virtualization has become a common approach to allow an organization to better utilize its investment in processing capacity. Streamlining the management of server hardware and its interaction with networking and storage equipment is another important component of using this investment in an efficient manner.

Scaling a data center with conventional servers, networking equipment, and storage resources can pose a significant challenge to a growing organization. Multiple hardware platforms and technologies must be integrated to deliver the expected levels of performance and availability to application end users. These components in the data center also need to be managed and maintained, typically with a diverse set of management tools with different interfaces and approaches.

In larger organizations, multiple teams of people are often involved in managing applications, servers, storage, and networking. In a midsize organization, the lines between these tasks are blurred, and a single, smaller team, or even one individual, may need to handle many of these tasks in a single day.

The primary computing platforms targeted for the SBA Unified Computing reference architecture are Cisco UCS B-Series Blade Servers and Cisco UCS C-Series Rack-Mount Servers. The Cisco UCS Manager graphical interface provides ease of use that is consistent with the goals of the Smart Business Architecture. When deployed in conjunction with the SBA Data Center network foundation, the environment provides the flexibility to support the concurrent use of the Cisco UCS B-Series Blade Servers, Cisco UCS C-Series Rack-Mount Servers, and third-party servers connected to 1 and 10 gigabit Ethernet connections.

Cisco UCS Blade Chassis System Components

The Cisco UCS Blade Chassis system has a unique architecture that integrates compute, data network access, and storage network access into a common set of components under a single-pane-of-glass management interface. The primary components included within this architecture are as follows:
• Cisco UCS 6100 Series Fabric Interconnects: Provide both network connectivity and management capabilities to the other components in the system.
• Cisco UCS 2100 Series Fabric Extenders: Logically extend the fabric from the fabric interconnects into each of the enclosures for Ethernet, FCoE, and management purposes.
• Cisco UCS 5100 Series Blade Server Chassis: Provides an enclosure to house up to eight half-width or four full-width blade servers, their associated fabric extenders, and four power supplies for system resiliency.
• Cisco UCS B-Series Blade Servers: Available in half-width or full-width form factors, with a variety of high-performance processors and memory architectures to allow customers to easily customize their compute resources to the specific needs of their most critical applications.
• Cisco UCS B-Series Network Adapters: A variety of mezzanine adapter cards that allow the switching fabric to provide multiple interfaces to a server.
The following figure shows an example of the physical connections required within a UCS Blade Chassis system to establish the connection between the fabric interconnects and a single blade chassis. Different physical port numbers may be used to scale the system to support multiple chassis or for other implementation-specific requirements. The links between the blade chassis and the fabric interconnects carry all server data traffic, centralized storage traffic, and management traffic generated by Cisco UCS Manager.

Technology Overview
Consistent with the SBA approach, Cisco offers a simplified reference model for managing a small server room as it grows into a full-fledged data center. This model benefits from the ease of use offered by the Cisco Unified Computing System (UCS). Cisco UCS provides a single graphical management tool for the provisioning and management of servers, network interfaces, storage interfaces, and their immediately attached network components. Cisco UCS treats all of these components as a cohesive system, which simplifies these complex interactions and allows a midsize organization to deploy the same efficient technologies as larger enterprises, without a dramatic learning curve.



Figure 42 . Cisco UCS Blade System Component Connections

throughput requirements of the applications or virtual machines in use and the number of network interface cards installed per server. The following figure shows a detailed example of dual-homed connections from Cisco UCS C-Series servers to the single-homed FEX providing 1 gigabit Ethernet connections. 10 gigabit connections are available either through the Cisco Nexus 2232 Fabric Extender or by using 10 gigabit ports directly on the Cisco Nexus 5000 Series switch pair. Figure 43 . Example Cisco UCS C-Series FEX Connections

Cisco UCS Manager

Cisco UCS Manager is embedded software resident on the fabric interconnects, providing complete configuration and management capabilities for all of the components in the UCS system. This configuration information is replicated between the two fabric interconnects, providing a highly available solution for this critical function. The most common way to access Cisco UCS Manager for simple tasks is to use a web browser to open the Java-based graphical user interface (GUI). For command-line or programmatic operations against the system, a command-line interface (CLI) and an XML API are also included with the system.
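For programmatic access, the XML API mentioned above accepts XML documents posted to the fabric interconnect over HTTPS. The offline sketch below only builds a login request and parses a sample response; the method and attribute names (aaaLogin, outCookie) follow the UCS XML API, but the response string is fabricated for illustration and no real endpoint is contacted.

```python
# Hedged sketch of building and parsing UCS XML API login messages offline.
import xml.etree.ElementTree as ET

def build_login(username, password):
    """Build the aaaLogin request document as a string."""
    el = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(el, encoding="unicode")

def parse_cookie(response_xml):
    """Extract the session cookie from an aaaLogin response."""
    root = ET.fromstring(response_xml)
    return root.get("outCookie")

# Fabricated example response, shaped like an aaaLogin reply.
sample_response = '<aaaLogin response="yes" outCookie="1300000000/abcd" />'
```

In a real deployment, the request would be posted to the cluster IP address assigned during initial setup, and the returned cookie would accompany subsequent queries.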

Cisco UCS C-Series Rack Servers

Cisco UCS C-Series Rack-Mount Servers balance simplicity, performance, and density for production-level virtualization, web infrastructure, and data center workloads. Cisco UCS C-Series servers extend Unified Computing innovations and benefits to rack-mount servers.

The Cisco UCS 6100 Series Fabric Interconnects provide connectivity for Cisco UCS Blade Server systems. The following figure shows a detailed example of the connections between the fabric interconnects and the Cisco Nexus 5000 Series switch pair. The default and recommended configuration for the fabric interconnects is end-host mode, which means they do not operate as full LAN switches, and rely on the upstream data center switching fabric. In this way, the UCS system appears to the network as a virtualized compute cluster with multiple physical connections. Individual server traffic is pinned to specific interfaces, with failover capability in the event of loss of the primary link.

UCS System Network Connectivity

Both Cisco UCS B-Series Blade Servers and C-Series Rack Mount Servers integrate cleanly into the SBA Midsize Data Center Architecture. The Cisco Nexus switching fabric provides connectivity for 10 gigabit or 1 gigabit Ethernet attachment for Cisco UCS C-Series servers, depending on the



Figure 44 . UCS Fabric Interconnect Ethernet Detail

Step 1: Connect the two fabric interconnects together using the integrated ports labeled L1/L2. These ports are used for replication of cluster information between the two fabric interconnects, not the forwarding of data traffic.
Step 2: Attach the management Ethernet ports from each fabric interconnect to a management network or appropriate Ethernet segment where they can be accessed for overall administration of the system.
Step 3: Populate each blade chassis with two fabric extenders (I/O modules) to provide connectivity back to the fabric interconnects.
Step 4: Cable one I/O module to the first fabric interconnect, and cable the other I/O module to the second fabric interconnect. After you have configured the fabric interconnects, they will be designated as the A and B fabrics. You can connect the I/O modules to the fabric interconnects by using one, two, or four cables per module. For system resiliency and throughput, we recommend a minimum of two connections per I/O module. Ensure that all of the connections from a given I/O module attach to only one of the fabric interconnects; I/O modules themselves are not dual-homed.
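The cabling rules in the steps above can be captured in a small checker (a hypothetical helper, not Cisco tooling): each I/O module uses one, two, or four links, all links from one module land on a single fabric interconnect, and two links per module are the recommended minimum.

```python
# Illustrative validator for I/O module cabling; module names are examples.
def check_iom_cabling(links):
    """links: list of (iom, fabric) tuples, one tuple per cable."""
    problems = []
    fabrics_per_iom = {}
    counts = {}
    for iom, fabric in links:
        fabrics_per_iom.setdefault(iom, set()).add(fabric)
        counts[iom] = counts.get(iom, 0) + 1
    for iom, fabrics in fabrics_per_iom.items():
        if len(fabrics) > 1:
            problems.append(f"{iom} is dual-homed; attach it to one fabric only")
        if counts[iom] not in (1, 2, 4):
            problems.append(f"{iom} has {counts[iom]} links; use 1, 2, or 4")
        elif counts[iom] < 2:
            problems.append(f"{iom} has a single link; two are recommended")
    return problems
```

A correctly cabled chassis (for example, two cables from each I/O module, each module to its own fabric) returns no problems.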

Deployment Details
This section will provide information on basic initial configuration of a Cisco UCS Blade Chassis system. For more extensive detail on UCS Blade System setup, and UCS C-Series Fibre Channel over Ethernet configuration, please refer to the SBA Unified Computing Deployment Guide.

Procedure 2

Complete Initial Fabric Interconnect Setup


You can easily accomplish the initial configuration of the fabric interconnects through the Basic System Configuration dialog that launches when you power on a new or unconfigured unit.
Step 1: Connect a terminal to the console port of the first system to be configured and press Enter.
Step 2: In the Basic System Configuration dialog that follows, enter console, setup, and yes, and then establish a password for the admin account.
---- Basic System Configuration Dialog ----
This setup utility will guide you through the basic configuration of the system. Only minimal configuration including IP connectivity to the Fabric interconnect and its clustering mode is performed through these steps.
Type Ctrl-C at any time to abort configuration and reboot system. To back track or make modifications to already entered values, complete input till end of section and answer no when prompted to apply configuration.
Enter the configuration method. (console/gui) ? console
Enter the setup mode; setup newly or restore from backup.

Completing the Initial System Setup
1. Complete Physical Setup and Ensure Connectivity
2. Complete Initial Fabric Interconnect Setup

Procedure 1

Complete Physical Setup and Ensure Connectivity

The Cisco UCS Fabric Interconnect acts as the concentration point for all cabling to and from the UCS Blade Chassis.



(setup/restore) ? setup
You have chosen to setup a new Fabric interconnect. Continue? (y/n): y
Enter the password for admin: xxxxxxxx
Confirm the password for admin: xxxxxxxx
Step 3: Next you are prompted to create a new cluster or add to an existing cluster. The Cisco UCS cluster consists of two fabric interconnects, with all associated configuration replicated between the two for all devices in the system. Enter yes to create a new cluster.
Do you want to create a new cluster on this Fabric interconnect (select no for standalone setup or if you want this Fabric interconnect to be added to an existing cluster)? (yes/no) [n]: yes
Step 4: Each fabric interconnect has a unique physical IP address. There is also a shared cluster IP address that is used to access Cisco UCS Manager after the system initialization is completed. The fabric interconnects are assigned one of two unique fabric IDs for both Ethernet and Fibre Channel networking. Choose fabric A for the first fabric interconnect that you are setting up.
Enter the switch fabric (A/B) []: a
Step 5: The system name is shared across both fabrics, so -a or -b is automatically appended to the name that you specify in the Basic System Configuration dialog when you set up one of the units.
Enter the system name: sba-ucs-10

Step 6: Apply the following example settings as you respond to the prompts, or use settings specific to your implementation.
Physical Switch Mgmt0 IPv4 address :
Physical Switch Mgmt0 IPv4 netmask :
IPv4 address of the default gateway :
Cluster IPv4 address :
Configure the DNS Server IPv4 address? (yes/no) [n]: yes
DNS IPv4 address :
Configure the default domain name? (yes/no) [n]: yes
Default domain name : cisco.local
Step 7: The Basic System Configuration dialog displays a summary of the configuration options that you chose. Verify the accuracy of the settings. Unless the settings require correction, enter yes to apply the configuration. The system assumes the new identity that you configured.
Following configurations will be applied:
Switch Fabric=A
System Name=sba-ucs-10
Physical Switch Mgmt0 IP Address=
Physical Switch Mgmt0 IP Netmask=
Default Gateway=
Cluster Enabled=yes
Cluster IP Address=
Apply and save the configuration (select no if you want to re-enter)? (yes/no): yes
Applying configuration. Please wait.
Configuration file Ok
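The setup dialog above assigns each fabric interconnect its own Mgmt0 address plus one shared cluster address. The sketch below (hypothetical addresses; the guide's actual values are site-specific and were elided) checks the usual constraints: all three addresses are distinct and sit in the same management subnet as the default gateway.

```python
# Illustrative sanity check for fabric interconnect management addressing.
from ipaddress import ip_address, ip_network

def check_cluster_addressing(mgmt0_a, mgmt0_b, cluster, gateway, prefix_len):
    addrs = [ip_address(a) for a in (mgmt0_a, mgmt0_b, cluster)]
    if len(set(addrs)) != 3:
        return "fabric and cluster addresses must be unique"
    # Derive the management subnet from the gateway and netmask length.
    subnet = ip_network(f"{gateway}/{prefix_len}", strict=False)
    if not all(a in subnet for a in addrs):
        return "all management addresses must share the gateway's subnet"
    return "ok"
```

This mirrors what the dialog expects: two physical addresses (one per fabric) and a cluster address that UCS Manager answers on, all reachable through the same gateway.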



Step 8: After the system is reset, you can add the second fabric interconnect to the cluster. Because you have already defined the cluster, you only need to acknowledge the prompts to add the second fabric interconnect to the cluster and set a unique IP address.
Enter the configuration method. (console/gui) ? console
Installer has detected the presence of a peer Fabric interconnect. This Fabric interconnect will be added to the cluster. Continue (y/n) ? y
Enter the admin password of the peer Fabric interconnect:
Connecting to peer Fabric interconnect... done
Retrieving config from peer Fabric interconnect... done
Peer Fabric interconnect Mgmt0 IP Address:
Peer Fabric interconnect Mgmt0 IP Netmask:
Cluster IP address:
Physical Switch Mgmt0 IPv4 address :
Apply and save the configuration (select no if you want to re-enter)? (yes/no): yes
Applying configuration. Please wait.
Configuration file Ok
From this point forward, the Cisco UCS Manager GUI may be used for primary management of the system; however, you should be familiar with the console in case you need very low-bandwidth remote access or a separate mode of access for administrative tasks such as code upgrades or system troubleshooting.

UCS Manager is the GUI client, which is Java-based and is launched from a web browser using Java Web Start. Any computer that you want to use to run the Cisco UCS Manager client must meet or exceed the following minimum system requirements:
• Sun JRE 1.6 or later
• One of the following web browsers: Microsoft Internet Explorer 6.0 or higher, or Mozilla Firefox 3.0 or higher
• One of the following operating systems: Microsoft Windows XP, Microsoft Windows Vista, or Red Hat Enterprise Linux 5.0 or higher

Procedure 1

Launch UCS Manager

Step 1: Using a browser, access the cluster IP address that you assigned during initial setup, and choose Launch to download the UCSM Java application. Authenticate with the configured username and password and view the initial screen. Figure 45 . UCS Manager GUI Initial Screen


Getting Started with UCS Manager
1. Launch UCS Manager
2. Discover System Components
3. Define Ethernet Uplink Ports
4. Add a Management IP Address Pool
Cisco UCS Manager (UCSM) is the management service for all of the components in a Cisco UCS instance. Cisco UCS Manager runs on the fabric interconnects and keeps configuration data synchronized between the resilient pair. The primary access method covered here for using Cisco



The Cisco UCS Manager GUI consists of a navigation pane on the left side of the screen and a work pane on the right side of the screen. The navigation pane allows you to browse through containers and objects and to drill down easily through layers of the system management. In addition, the following tabs appear across the top of the navigation pane:
• Equipment: Inventory of hardware components and hardware-specific configuration
• Servers: Service profile configuration and related components such as policies and pools
• LAN: LAN-specific configuration for Ethernet and IP networking capabilities
• SAN: SAN-specific configuration for Fibre Channel networking capabilities
• VM: Configuration specific to linking to external server virtualization software, currently supported for VMware
• Admin: User management tasks, fault management, and troubleshooting
The tabs displayed in the navigation pane are always present as you move through the system and, in conjunction with the tree structure shown within the pane itself, are the primary mechanisms for navigating the system. After you choose a section of the GUI in the navigation pane, information and configuration options appear in the work pane on the right side of the screen. In the work pane, tabs divide information into categories. The work pane tabs that appear vary according to the context chosen in the navigation pane.

Step 1: Configure the number of physical links used to connect each chassis to each fabric. Click the Equipment tab in the navigation pane, and then choose the Policies tab in the work pane. Within the Policies tab, another set of tabs appears.
Step 2: By default, the Global Policies tab displays the Chassis Discovery Policy. This may be set to 1, 2, or 4 links per fabric, as shown in the following figure. Choose the appropriate number of links for your configuration, and then click Save Changes at the bottom of the work pane.
Figure 46 . Set Chassis Discovery Policy

Procedure 2

Discover System Components

On a newly installed system, one of your first tasks in the Cisco UCS Manager GUI is to define how many physical ports are attached to the I/O modules in each chassis. This allows Cisco UCS Manager to discover the attached system components and build a view of the entire system. These ports are referred to as server ports. The specific ports being used as server ports must also be defined to the system.

Step 3: In the navigation pane, choose the Equipment tab, and then expand Fabric Interconnects > Fabric Interconnect A > Fixed Module > Unconfigured Ports. Objects are displayed representing each of the physical ports on the base fabric interconnect system.

Step 4: Select the desired port by clicking the port object. Optionally, choose several sequential ports by clicking a second port while holding the Shift key.

Step 5: Right-click the selected port or group of ports and choose Configure as Server Port from the pop-up menu as shown.

Figure 47 . Configuring Server Ports on the Fabric Interconnects

Step 6: Acknowledge this operation.

Step 7: In a similar manner, expand the tree to Fabric Interconnect B, and apply the corresponding configuration for the resilient links from Fabric B.

Computing Resources

Procedure 3

Define Ethernet Uplink Ports

In the SBA Unified Computing reference design, Ethernet uplink ports connect the fabric interconnects to the Cisco Nexus 5000 switches via 10-Gigabit Ethernet links. In alternate designs, these links may be attached to any 10-Gigabit Ethernet switch that provides access to the core of the network. These links carry IP-based client-server traffic, server-to-server traffic between IP subnets, and Ethernet-based storage access such as iSCSI or NAS traffic. Ports from either the base fabric interconnect or expansion modules may be used as uplink ports.

Step 1: In the Equipment tab of the navigation pane, locate the ports that are physically connected to the upstream switches. These ports are initially listed as unconfigured ports in the tree view.

Step 2: For each port, right-click and choose Configure as Uplink Port.

Figure 48 . Configure Uplink Ports

Tech Tip
If the system gets out of synchronization for any reason during the chassis discovery process, you can clear up most issues by acknowledging the chassis. Right-click the chassis in the navigation pane and choose Acknowledge Chassis.

After Cisco UCS Manager has discovered each of the chassis attached to your system, you can use the Equipment tab in the navigation pane to verify that each chassis, I/O module, and server is properly reflected in the display. Allow some time for the full system discovery of all of the components to occur.

Step 3: If you implemented a port-channel configuration in the upstream switches (such as the Cisco Nexus 5000 Series switches in this example), a corresponding port-channel configuration is required for the Ethernet uplink ports in the Cisco UCS Manager GUI. Choose the LAN tab in the navigation pane, expand LAN > LAN Cloud > Fabric A, and select the Port Channels container.

Computing Resources


Step 4: Click the Add button (marked with a green plus sign) to create a new port-channel definition.

Step 5: Provide an ID and name for the new port channel, as shown in Figure 49.

Figure 49 . Set Port Channel Name

Tech Tip
Pay close attention to the Slot ID column when you select the ports to add to the channel. Integrated ports are listed with a slot ID of 1. If using an expansion module, scroll down to find ports listed with a slot ID of 2.

Step 6: Click Next, and then select the Ethernet ports in use as uplinks from the list on the left of the screen, as shown in Figure 50.

Step 7: Use the arrow button to add them to the list of ports in the channel.

Figure 50 . Add Ports to the Port Channel

Step 8: Click Finish to complete creation of the Ethernet uplink port channel for Fabric A.

Step 9: Repeat this procedure to create a port channel for Fabric B, using a unique port-channel ID value.

Step 10: After you have created the port channels, you must enable them before they will become active. In the navigation pane, expand Port Channels.

Step 11: For each port channel, select and then right-click the port channel name, and choose Enable Port Channel.

Figure 51 . Enable the Port Channels



Tech Tip
Port channel IDs are locally significant to each device; therefore, the ID used to identify a port channel to the fabric interconnect does not have to match the ID used for the channels in the Cisco Nexus 5000 configuration. In some cases, it may be beneficial for operational support to use consistent numbering to represent these channels.

Procedure 4

Add a Management IP Address Pool

The Cisco UCS Manager GUI provides a launching point to direct Keyboard-Video-Mouse (KVM) access to control each of the blade servers within the system. To facilitate this remote management access, you must allocate a pool of IP addresses to the blade servers within the system. These addresses are used by the Cisco UCS KVM Console application to communicate with the individual blade servers. You must allocate this pool of addresses from the same IP subnet as the addresses assigned to the management interfaces of the fabric interconnects, because a common default gateway is used for their communication.

Step 1: Choose the Admin tab in the navigation pane, expand All > Communication Management, and select Management IP Pool.

Step 2: In the work pane, under Actions, click Create Block of Addresses, as shown in Figure 52.

Figure 52 . Add Management IP Pool

Step 3: Allocate a contiguous block of IP addresses by specifying the starting address in the From field, and then use the corresponding fields to specify the Size of the block, the Subnet Mask, and the Default Gateway.

Figure 53 . Create Management IP Block

Step 4: Click OK to finish creating the block of addresses.
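Because the management pool must share the fabric interconnects' subnet, a quick sanity check of a planned block can be helpful. This is a hypothetical helper using Python's standard `ipaddress` module, not a Cisco tool; the addresses shown are placeholders, so substitute values from your own management subnet:

```python
# Sketch: expand a management IP pool block (From + Size, as in Step 3) and
# verify every address lies in the same subnet as the default gateway.
import ipaddress

def expand_mgmt_block(start: str, size: int, mask: str, gateway: str) -> list[str]:
    net = ipaddress.ip_network(f"{gateway}/{mask}", strict=False)
    first = ipaddress.ip_address(start)
    block = [first + i for i in range(size)]  # contiguous block from "From"
    for addr in block:
        if addr not in net:  # would be unreachable via the shared gateway
            raise ValueError(f"{addr} is outside management subnet {net}")
    return [str(a) for a in block]

# Placeholder example values:
pool = expand_mgmt_block("10.10.48.40", 4, "255.255.255.0", "10.10.48.1")
print(pool)  # ['10.10.48.40', '10.10.48.41', '10.10.48.42', '10.10.48.43']
```

A block that straddles a subnet boundary would raise an error here, mirroring the reason the guide requires the pool and the fabric interconnect management interfaces to share one subnet.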




Creating an Initial Service Profile for Local Boot
1. Access the Service Profile Wizard
2. Create UUIDs
3. Configure Storage
4. Complete Networking Configuration
5. Define the Server Boot Order Policy
6. Assign the Server

One of the core concepts of Cisco UCS Manager is the service profile. A service profile defines all characteristics that are normally presented by a physical server to a host operating system or a hypervisor, including the presence of network interfaces and their addresses, host adapters and their addresses, boot order, disk configuration, and firmware versions. The profile can be assigned to one or more physical server blades within the chassis. In this way, what is traditionally thought of as the personality of a given server or host is tied to the service profile rather than to the physical server blade where the profile is running. This is particularly true if network-based or SAN-based boot is configured for the profile. If local boot is configured for the profile, the boot images installed on the local hard drives of the physical blade do tie the identity of the service profile to a given physical server blade.

There are multiple supporting objects within the Cisco UCS Manager GUI that streamline the creation of a service profile. These objects contain items such as pools of MAC addresses for Ethernet, World Wide Port Names (WWPNs) for Fibre Channel, disk configurations, VLANs, and VSANs. The system stores these objects so that they may be referenced by multiple service profiles, and you do not need to redefine them as each new profile is created.

Walking through the process of creating an initial service profile on a brand-new system may be used as a system-initialization exercise, providing a structured path through the process and the option to create these reusable configuration objects along the way, to be referenced by additional service profiles in the future. This process provides an example of how to create a basic service profile for initial installation and boot of a host operating system or a hypervisor. Throughout this process, we explore the creation of reusable system objects to facilitate faster creation of additional profiles with similar characteristics. For simplicity, configuration of a basic boot policy using local mirrored disks is shown. Later sections in this guide show options for network-based or SAN-based boot. The configuration of this initial profile creates the base system setup upon which additional, more advanced profiles can be built.
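The pool-and-profile relationship described above can be sketched conceptually. All class and field names below are invented for illustration only; they are not Cisco UCS Manager APIs. The point is simply that pools are defined once and referenced by many profiles, each drawing unique identifiers:

```python
# Conceptual sketch: reusable identity pools shared by multiple service
# profiles, so identities travel with the profile, not the hardware.
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    values: list       # pre-allocated identifier block
    _next: int = 0

    def allocate(self) -> str:
        value = self.values[self._next]  # hand out the next free identifier
        self._next += 1
        return value

@dataclass
class ServiceProfile:
    name: str
    mac: str
    uuid_suffix: str

# Example (placeholder) pools, defined once:
mac_pool = Pool("mac-pool-A", ["00:25:B5:00:00:01", "00:25:B5:00:00:02"])
uuid_pool = Pool("uuid-suffix-A", ["0001-000000000001", "0001-000000000002"])

# Two profiles reference the same pools; each receives unique identities.
profiles = [ServiceProfile(n, mac_pool.allocate(), uuid_pool.allocate())
            for n in ("profile-1", "profile-2")]
print(profiles[0].mac, profiles[1].mac)  # distinct MACs from one shared pool
```

If a profile moves to a different blade, its allocated identifiers move with it, which is exactly why pool-based identities are preferred over hardware-derived ones.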

Procedure 1

Access the Service Profile Wizard

Step 1: Select the Servers tab in the navigation pane, expand the containers underneath Service Profiles, and select the Root container.

Step 2: On the General tab within the work pane, click Create Service Profile (expert). An initial pop-up window displays, as shown in Figure 54. Assign a name to the service profile.

The following sections walk you through the wizard-based process for defining these attributes of the first service profile:
Identification/UUIDs
Storage
Networking
vNIC/vHBA Placement
Server Boot Order
Server Assignment
Operational Policies



Figure 54 . Initial Service Profile (Expert) Screen

Procedure 2

Create UUIDs

Step 1: Click Create UUID Suffix Pool to add a Universally Unique Identifier (UUID) suffix pool to the system.

Tech Tip
A UUID suffix pool is a collection of SMBIOS UUIDs that are available to be assigned to servers. The first set of digits, which constitute the prefix of the UUID, are fixed. The remaining digits, the UUID suffix, are variable. A UUID suffix pool avoids conflicts by ensuring that these variable values are unique for each server associated with a service profile that uses that particular pool.

Step 2: Assign a name and description to the pool, as shown in Figure 55. To use a UUID prefix derived from hardware, leave derived selected in the Prefix box.

Figure 55 . Set UUID Pool Name and Description

Step 3: Click Next to proceed to defining the UUID blocks to be included in the pool, and then click Add.

Step 4: In the From field, enter a unique, randomized base value as a starting point, as shown in Figure 56.

Tech Tip
UUID generation tools compliant with RFC 4122 are available on the Internet. For an example, see uuidgen.

Step 5: Set the Size field to exceed the number of servers or service profiles that will require the same pool. If future expansion is required, you can add multiple UUID suffix blocks to the same pool. For a base system startup, a simple, small pool is sufficient.
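If you prefer not to hunt for an online generator, Python's standard `uuid` module produces RFC 4122 UUIDs and can serve the same purpose as the uuidgen tool mentioned above when picking a randomized base value. The prefix/suffix split shown is only for illustration of the UUID's shape:

```python
# Sketch: generating an RFC 4122 UUID and splitting off the trailing
# NNNN-NNNNNNNNNNNN portion, which has the same shape as a UCS UUID suffix.
import uuid

u = uuid.uuid4()   # random RFC 4122 (version 4) UUID
s = str(u)         # canonical 36-character form: 8-4-4-4-12 hex groups

prefix, suffix = s[:18], s[19:]   # first three groups / last two groups
print(suffix)      # 16 hex digits plus one hyphen, e.g. 'a1b2-c3d4e5f60718'
```

Any value of this shape that is unique within your environment is suitable as the From field of a suffix block.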



Figure 56 . Create UUID Suffix Blocks

Step 6: After you enter the starting point and size of the suffix block, click OK to create the suffix block, and then click Finish to complete the creation of the pool.

Step 7: Click Next to proceed to the storage section.

Procedure 3

Configure Storage

The local disk configuration policy allows the service profile to define how block storage is structured on the local disks installed in each Cisco UCS blade server. A common configuration is to have two locally installed disks in each blade, set up for mirrored storage. To speed configuration of profiles and ensure consistency, a local disk configuration policy may be created and saved as a reusable object.

Step 1: Click Create Local Disk Configuration Policy.

Step 2: Provide a name and description for the policy, and choose RAID Mirrored from the Mode drop-down list, as shown in Figure 57.

Figure 57 . Create Local Disk Configuration Policy

Step 3: Click OK to create the policy, acknowledge the creation, and then choose the name of the newly created disk policy from the Local Storage drop-down list on the storage screen.

Step 4: The next item on the storage screen is the scrub policy, which controls what the system does with the data stored on the disks of blades being disassociated from a profile. For purposes of this example, leave the scrub policy at default. Also for this example, we do not show Virtual Host Bus Adapter (vHBA) setup, which is specific to a Fibre Channel SAN configuration.

Step 5: To proceed with creating a basic service profile for local boot, choose No vHBAs next to the How would you like to configure SAN connectivity? prompt.

Step 6: Click Next to proceed to the screen for Networking configuration.



Procedure 4

Complete Networking Configuration

The networking configuration screen allows you to define virtual network interface cards (vNICs) that the system presents to the installed operating system, in the same way that a standalone physical server presents hardware NICs installed in a PCI bus. The type of mezzanine card installed in the blade server affects how many vNICs may be defined in the profile and presented to the server operating system. Leave the Dynamic vNIC Connection Policy drop-down list at its default setting for this example.

Tech Tip
Dynamic vNICs apply only to configurations that use the Cisco UCS M81KR Virtual Interface Card. Such configurations are discussed in the Service Profiles Using Multiple vNICs section under Advanced Configurations.

Tech Tip
See the Creating Service Profiles for SAN Boot process for detailed information on enabling a service profile to access Fibre Channel attached storage over a SAN.

Step 1: Click Expert next to the How would you like to configure LAN connectivity? prompt. The expert mode allows you to walk through the creation of a MAC address pool, instead of using the default MAC pool, which does not contain any address blocks on a new system.

Step 2: Click Add at the bottom of the screen to define the first vNIC in the profile.

Step 3: Assign a name to the vNIC. The example configuration uses eth0 as the interface name, representing Ethernet 0, as shown in Figure 58.

Figure 58 . Create vNIC

Step 4: Click Create MAC Pool to add a pool of MAC addresses to be used by vNIC interfaces in service profiles. Using a pool of MAC addresses instead of hardware-based MAC addresses allows a service profile to retain the same MAC address for its network interfaces, even when it is assigned to a new blade server in the system.

Step 5: Set a name and description for the MAC pool, as shown in Figure 59.



Figure 59 . Naming the MAC Address Pool

Step 6: Click Next to continue with adding MAC addresses to the pool.

Step 7: Click Add at the bottom of the window to add a block of MAC addresses.

Step 8: The dialog box for creating a block of MAC addresses allows you to define the starting address and the number of addresses to include in the block. Create a block of addresses large enough to allocate one address to each vNIC that will exist in the system.

Tech Tip
The MAC address is a hexadecimal value with 12 characters. The first six characters are usually a registered manufacturer ID, to assist with maintaining uniqueness on the network. The last six characters are usually the starting point for the pool. MAC addresses must be unique within an Ethernet broadcast domain or IP subnet. Consider using multiple MAC address blocks with specific numbering conventions relevant to your implementation to assist with troubleshooting.

Step 9: Enter a starting address for the MAC address block and a quantity of addresses to allocate, as shown in Figure 60.

Figure 60 . Create a MAC Address Block

Step 10: Click OK to add the new block into the MAC address pool, and then click Finish and OK to acknowledge creation of the pool.

Step 11: Use the MAC Address Assignment drop-down list to select the name of the MAC address pool that you created.

The next section of the Create vNIC screen allows you to define the vNIC traffic path through the fabric interconnects and which VLANs are present on the vNIC. The Cisco UCS system has the capability to present multiple vNICs to the installed operating system and pass the traffic from a specific vNIC to either fabric interconnect A or B. In addition, a fabric-failover capability is available on specific NIC hardware to allow the system to continue forwarding traffic through the other fabric interconnect if the primary selection has failed. For this basic service profile example, select fabric A as the primary traffic path and enable failover.

Tech Tip
Fabric failover is appropriate for configurations with a single host operating system installed directly on the blade server. For a virtualized environment, we recommend presenting multiple vNICs to the hypervisor and allowing the hypervisor system to manage the failover of traffic in the event of a loss of connection to one of the fabrics.
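The starting-address-plus-size block definition used for MAC pools can be expanded programmatically. This is an illustrative sketch only, not a Cisco UCS Manager API; the 00:25:B5 base shown is just an example vendor prefix:

```python
# Sketch: enumerate the MAC addresses a block yields from a starting
# address and size, mirroring the From/Size fields of the block dialog.

def mac_block(start: str, size: int) -> list[str]:
    base = int(start.replace(":", ""), 16)      # 12 hex characters -> integer
    out = []
    for i in range(size):
        v = f"{base + i:012X}"                  # back to 12 hex digits
        out.append(":".join(v[j:j + 2] for j in range(0, 12, 2)))
    return out

print(mac_block("00:25:B5:00:00:0F", 3))
# ['00:25:B5:00:00:0F', '00:25:B5:00:00:10', '00:25:B5:00:00:11']
```

Note how the block increments in hexadecimal; a conventions-based numbering scheme (for example, encoding fabric or site in a fixed digit) makes addresses easier to recognize when troubleshooting, as the Tech Tip above suggests.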



See the Service Profiles Using Multiple vNICs discussion under Advanced Configurations for more information on configurations with multiple vNICs.

Step 12: Next to Fabric ID, choose Fabric A and select Enable Failover, as shown in Figure 61.

Figure 61 . vNIC Fabric Selection

Step 13: The Cisco UCS system also allows vNICs to connect to hosts as 802.1Q VLAN trunks. In this basic example, we are placing this vNIC on a single VLAN in the Ethernet switching fabric; therefore, next to VLAN Trunking, leave No selected.

Step 14: To receive traffic from the server vNICs, you must define each VLAN needed in the Cisco UCS system. Click Create VLAN to identify the new VLAN number to the system.

Step 15: The Create VLAN(s) window allows you to create multiple VLANs. The number of each VLAN created is appended to the Name/Prefix entered. For example, the entries shown in Figure 62 would result in VLANs called SBA-VLAN28 through SBA-VLAN29. Enter the desired name and group of VLANs to create, and click OK.

Figure 62 . Create VLANs Window

Step 16: From the Create vNIC main screen, you can now choose the newly created VLAN from the Select VLAN drop-down list, as shown in Figure 63.

Step 17: When running a single VLAN from a host that transmits traffic without 802.1Q VLAN tags, select Native VLAN to ensure that the untagged traffic can be properly forwarded by the fabric interconnects.
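The naming behavior of the Create VLAN(s) window described in Step 15 can be sketched as follows. This is only a model of the prefix-plus-ID convention (mirroring the SBA-VLAN28/29 example), not Cisco software:

```python
# Sketch: the VLAN ID is appended to the Name/Prefix for each VLAN in the
# requested range, producing one named VLAN per ID.

def vlan_names(prefix: str, first_id: int, last_id: int) -> dict[int, str]:
    return {vid: f"{prefix}{vid}" for vid in range(first_id, last_id + 1)}

print(vlan_names("SBA-VLAN", 28, 29))
# {28: 'SBA-VLAN28', 29: 'SBA-VLAN29'}
```

Keeping the ID visible in the name, as this convention does, makes it easy to correlate VLANs between the fabric interconnects and the upstream Nexus switches.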



Figure 63 . Select VLAN for the vNIC

Step 18: Leave the remainder of the fields on the Create vNIC screen at the default settings, and click OK. The next screen shows the resulting created vNIC, its fabric association, and the VLANs on which it forwards traffic.

Step 19: Verify the information displayed regarding the new vNIC, and then click Next to continue the service profile creation process.

Step 20: Click Next on the vNIC/vHBA Placement screen to let the system perform the placement of the virtual interfaces on the physical interfaces that exist on the blade servers to which this profile will be associated.

Procedure 5

Define the Server Boot Order Policy

The server boot order policy allows you to control the priority of the different boot devices to which the server will have access. A basic configuration is to boot from removable media first, such as an attached CD/DVD drive, and then from the internal disk. More advanced configurations allow boot from LAN or boot from SAN. Having a preconfigured policy as a reusable object promotes consistency between similar service profile configurations.

Step 1: On the Server Boot Order screen, click Create Boot Policy. This launches the Create Boot Policy screen, which allows you to assign various boot sources, in order, to a named policy. The lower left side of this screen has three containers for boot sources: Local Devices, vNICs, and vHBAs.

Step 2: Click the down arrows on the Local Devices container to display the choices.

Step 3: Click Add CD-ROM first, to add a removable media source to the list.

Step 4: Click Add Local Disk to add the locally installed drives on the blade server itself as the next boot source.

Step 5: The order of the devices in the list is displayed as a number in the Order column of the table. Assign a name and description to the policy in the spaces provided, verify the choices, and click OK to create the named boot policy, as shown in Figure 64.

Figure 64 . Create Boot Policy Screen

Step 6: On the Server Boot Order screen, use the Boot Policy drop-down list to select the name of the policy just created to be applied to this profile, as shown in Figure 65.

Figure 65 . Server Boot Policy Selection

Step 7: Click Next to proceed with the service profile creation.

Procedure 6

Assign the Server

Cisco UCS has the ability to assign a service profile directly to a specific server, pre-provision an unused chassis slot, assign the profile to a pool of servers, or assign the profile to a physical blade server later.

Step 1: To simplify creation of this basic initial service profile, choose Assign Later from the Server Assignment drop-down list, as shown in Figure 66.

Figure 66 . Server Assignment

Step 2: Leave the power state set to up, to ensure that the physical blade server will be powered on when the service profile is eventually applied. Click Next to proceed.

Step 3: The final screen in the Create Service Profile (expert) wizard allows configuration of access by Intelligent Platform Management Interface (IPMI) clients and Serial over LAN (SoL) access, as shown in the following figure. Detail on these tools is beyond the scope of this guide. For more information, please refer to the Cisco UCS product guides at en/US/partner/products/ps10281/tsd_products_support_series_home.html

Figure 67 . Operational Policies Screen


Applying Service Profiles to Physical Servers
1. View Your Profile
2. Associate Profile with Server
3. Complete Basic OS Installation

You've now completed the basic initial service profile creation process. This process shows you how to apply this profile to a physical blade server and install a base operating system on the locally installed disks.

Step 4: Click Finish to complete creation of the initial service profile on the system. The system displays a message indicating successful completion of the Create Service Profile (expert) wizard, as shown in Figure 68.

Figure 68 . Create Service Profile Success

Procedure 1

View Your Profile

After you complete the service profile creation wizard, view the resulting profile in the Cisco UCS Manager GUI.

Step 1: In the navigation pane, choose the Servers tab, expand Service Profiles, and select the working service profile. The work pane shows multiple tabs that roughly correspond to the sections of service profile configuration that you walked through in the Create Service Profile (expert) wizard. On the General tab in the work pane is an Actions area with a list of links for performing various tasks associated with the profile, as shown in Figure 69.



Figure 69 . View Service Profile in UCS Manager

Figure 70 . Associate Existing Server Selection

Procedure 2

Associate Profile with Server

After you finish viewing the service profile and verifying the settings, you are ready to associate the profile with a physical blade server.

Step 1: Click Change Service Profile Association to launch the Associate Service Profile screen.

Step 2: From the Server Assignment drop-down list, choose Select existing Server, as shown in Figure 70. By default, the Associate Service Profile screen shows only available servers, that is, servers that are not already associated with another service profile. The available servers are displayed in a table sorted by chassis ID, slot number, and characteristics.

Step 3: In the Select column, choose a server, as shown in Figure 71.



Figure 71 . Available Server Selection

Step 4: Click OK to initiate the process of associating the service profile with the selected server.

Step 5: To track the progress of the association, choose the service profile by name under the Servers tab in the navigation pane, and view either Overall Status on the General tab or the progress of the specific operation on the FSM tab.

Procedure 3

Complete Basic OS Installation

To complete a simple operating system installation with a traditional, non-scripted approach, CD or DVD media must be available during the server boot process and defined first in the server boot order, as shown in our example service profile creation.

Step 1: To view the state of the server as it boots, select the service profile name in the Servers tab of the navigation pane, and then view the General tab in the work pane.

Figure 72 . Associated Service Profile General Tab

Step 2: Access the KVM Console for the server by clicking KVM Console in the Actions area of the work pane.

Step 3: Removable CD/DVD media may be presented to the system in two primary ways. The first approach is to use the USB connection provided by the console port on the front of each blade server to connect an external drive. This approach provides the fastest file access for initial operating system installation. An alternate approach for presenting installation media to a server is to use the virtual media capability of the KVM Console utility. In the KVM Console window, choose Tools > Launch Virtual Media, as shown in Figure 73. The Virtual Media Session window opens.



Figure 73 . KVM Launch Virtual Media

Tech Tip
When using virtual media, the media does need to travel across the network from the UCSM client machine through the fabric interconnects to the server. Install times may be slightly longer with this approach, depending on the speed and latency of the connection between the computer running the client and the blade server.

The Virtual Media feature allows the server being viewed within the KVM Console to access the removable media drives of the computer running the Cisco UCS Manager client. The mapped drive is seen as a locally attached device for purposes of boot policy, allowing easy installation of operating systems without needing to manually load media locally on the blade server. Map ISO disk images present on the computer running the Cisco UCS Manager client as virtual media by clicking Add Image.

Step 4: To map the local drive to the active blade server being viewed, select Mapped next to the drive of your choice, as shown in Figure 74. After it is mapped, the blade server can boot from the virtual media session just as it would from a USB-attached CD/DVD player attached to the physical console port.

Figure 74 . Virtual Media Session Mapping

Step 5: After the blade server associated to the service profile has booted from the provided install media, the installation process proceeds in the same way as a typical standalone rack-mount server. You can use the KVM Console interface for any ongoing interaction required to complete the installation.

The procedures in this section have detailed how to establish initial connectivity and a base configuration for a Cisco UCS Blade Server system. This section also covered configuration of a basic service profile, as well as association of that profile to a blade server for operating system installation. For further detail on Cisco UCS Blade Server systems, such as creating more advanced Service Profiles, Templates, and configurations that boot from LAN or SAN storage, please refer to the SBA Unified Computing Deployment Guide.



Virtual Switching
Business Overview
Midsize organizations are increasingly using server virtualization or hypervisor technology to optimize their investment in computing resources. Virtualization of a server allows multiple logical server instances running distinct guest operating systems to share the same physical hardware, increasing the use of the investment in each server and reducing overall equipment costs. Virtual machines may easily be migrated from one hardware platform to another and, in conjunction with centralized storage, improve availability and reduce downtime for the organization. However, server virtualization does introduce its own level of complexity to the data center architecture. What was previously a clearly defined demarcation between server configuration and network configuration is now blended, as elements of the network environment reside in software on the physical server platform. In a basic VMware configuration, port settings must be defined on a per-virtual-machine basis, which can become repetitive and potentially error-prone for new server initialization.

Tech Tip
As of Cisco Nexus 1000V software version 4.0(4)SV1(3), VMware ESX/ESXi 3.5 U2 or later, or ESX/ESXi 4.0 or later, is required. Enterprise Plus licensing from VMware is also required.

The Cisco Nexus 1000V virtual switch provides Layer 2 data center access switching to VMware ESX and ESXi hosts and their associated VMs. The two primary components of the solution are the Virtual Supervisor Module (VSM), which provides the central intelligence and management of the switching control plane, and the Virtual Ethernet Module (VEM), which resides within the hypervisor of each host. Together, the VSM and multiple VEMs comprise a distributed logical switch, similar to a physical chassis-based switch with redundant supervisors and multiple physical line cards. This model echoes the common distributed architectural approach of the Cisco Nexus 5000/2000 Series, as well as the Cisco UCS fabric interconnects and I/O modules. A logical view of the Nexus 1000V architecture is shown in the following figure.

Figure 75 . Nexus 1000V Logical View

Technology Overview
The Cisco Nexus 1000V Virtual Distributed Switch is a software-based switch designed for hypervisor environments that implements the same Cisco NX-OS operating system as the Cisco Nexus 5000 Series switching platforms that comprise the primary Ethernet switching fabric for the SBA Midsize Data Center Architecture. This allows a consistent method of operation and support for both the physical and virtual switching environments. The Cisco Nexus 1000V allows for policy-based VM connectivity using centrally defined port profiles that may be applied to multiple virtualized servers, simplifying the deployment of new hosts and virtual machines. As virtual machines are moved between hardware platforms, either to balance workloads or to implement new hardware, their port configuration migrates right along with them, increasing the ease of use of the overall solution. The Cisco Nexus 1000V is currently supported with hypervisor software from VMware as an integrated part of the vSphere server virtualization environment.



Nexus 1000V VEM

The Cisco Nexus 1000V Virtual Ethernet Module executes as part of the VMware ESX or ESXi kernel and provides a richer alternative feature set to the basic VMware virtual switch functionality. The VEM leverages the VMware vNetwork Distributed Switch (vDS) API, which was developed jointly by Cisco and VMware, to provide advanced networking capability to virtual machines. This level of integration ensures that the Cisco Nexus 1000V is fully aware of all server virtualization events, such as VMware VMotion and Distributed Resource Scheduler (DRS). The VEM takes configuration information from the Virtual Supervisor Module and performs Layer 2 switching and advanced networking functions:
Port channels
Quality of service (QoS)
Security: private VLAN, access control lists, port security, DHCP snooping
Monitoring: NetFlow, Switch Port Analyzer (SPAN), Encapsulated Remote SPAN (ERSPAN)
In the event of loss of communication with the Virtual Supervisor Module, the VEM has nonstop forwarding capability and continues to switch traffic based on the last known configuration. In short, the Nexus 1000V brings data center switching and its operational model into the hypervisor to provide a consistent network management model from the core to the virtual machine network interface card. The Cisco Nexus 1000V provides centralized configuration of switching capabilities for VEMs supporting multiple hosts and VMs, allowing you to enable features or profiles in one place instead of reconfiguring multiple switches.

Virtual Supervisor Module

The Cisco Nexus 1000V Series Virtual Supervisor Module (VSM) controls multiple VEMs as one logical modular switch. Instead of physical line card modules, the VSM supports multiple VEMs running in software inside the physical servers. Configuration is performed through the VSM and is automatically propagated to the VEMs.
Instead of configuring soft switches inside the hypervisor on a host-by-host basis, administrators can define configurations for immediate use on all VEMs being managed by the Virtual Supervisor Module from a single interface. The VSM may be run as a VM on an ESX/ESXi host, or on the dedicated Cisco Nexus 1010 hardware platform.

By using the capabilities of Cisco NX-OS, the Cisco Nexus 1000V Series provides these benefits:
Flexibility and Scalability: Port Profiles, a new Cisco NX-OS feature, provide configuration of ports by category, enabling the solution to scale to a large number of ports. Common software can run all areas of the data center network, including the LAN and SAN.
High Availability: Synchronized, redundant Virtual Supervisor Modules enable rapid, stateful failover and ensure an always-available virtual machine network.
Manageability: The Cisco Nexus 1000V Series can be accessed through the Cisco command-line interface, Simple Network Management Protocol (SNMP), the XML API, Cisco Data Center Network Manager, and CiscoWorks LAN Management Solution (LMS). The Virtual Supervisor Module is also tightly integrated with VMware vCenter Server so that the virtualization administrator can take advantage of the network configuration in the Cisco Nexus 1000V.

Nexus 1000V Port Profiles

To complement the ease of creating and provisioning VMs, the Cisco Nexus 1000V includes the Port Profile feature to address configuration consistency challenges, providing lower operational costs and reduced risk. Port Profiles enable you to define reusable network policies for different types or classes of VMs from the Cisco Nexus 1000V VSM, and then apply the profiles to individual VM virtual NICs through VMware's vCenter.

Nexus 1010 Virtual Services Appliance

The Cisco Nexus 1010 Virtual Services Appliance provides a dedicated hardware platform for the VSM. The Nexus 1010 is based on a Cisco UCS C-Series server platform and can host up to four instances of the VSM. Because the Cisco Nexus 1010 provides dedicated hardware for the VSM, it makes virtual access switch deployment and ongoing operations much easier for the network administrator. It also has the capacity to support the Cisco Nexus 1000V NAM Virtual Service Blade, as well as other new service blades in the future.
Using the Nexus 1010 platform also simplifies the setup of redundancy between the two VSM modules. Deployment procedures for both the Nexus 1010 and software VM-based VSM are provided in this guide. Two separate Nexus 1010 units are required to provide full physical redundancy for the VSM.
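To make the Port Profile concept described above concrete, the following is a minimal sketch of a vethernet profile as it might be defined on the VSM. The profile name and VLAN number here are illustrative assumptions only; the actual profiles used in this design are defined in the deployment procedures.

port-profile type vethernet WEB-VLAN-ACCESS
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled

Once defined, a profile such as this appears in vCenter as a port group that the virtualization administrator can assign to any VM virtual NIC, with the network policy applied consistently across every host running a VEM.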

Virtual Switching


Deployment Details
The following sections provide instructions for installation and basic setup of the Nexus 1000V on both the Nexus 1010 platform and as a basic VM. Also, migration examples are provided for configuring hosts in vCenter to utilize the Nexus 1000V for switching services.

Install and Set Up the Nexus 1010

1. Configure the Cisco Integrated Management Controller
2. Configure the VSA
3. Install the Cisco Nexus 1000V on the VSA
4. Configure the VSM to Communicate with the vCenter

You need to configure several IP addresses. The following figure shows the addressing required for the primary Cisco Nexus 1010. The example setup uses all of the following addresses:

- Cisco Nexus 1010 CIMC controller IP address (primary)
- Cisco Nexus 1010 CIMC controller IP address (secondary)
- Cisco Nexus 1010 VSA primary address
- Cisco Nexus 1010 VSA secondary address
- Cisco Nexus 1000V VSM address for the Cisco Nexus 1010
- Cisco Nexus 1000V VSM address for the VM install (primary)
- Cisco Nexus 1000V VSM address for the VM install (secondary; note: this is a temporary address)
- VMware vCenter IP address

Figure 76. IP Addressing for Primary Cisco Nexus 1010

Procedure 1

Configure the Cisco Integrated Management Controller

The Cisco Nexus 1010 is a virtual services appliance (VSA) that hosts up to four Cisco Nexus 1000V Virtual Supervisor Modules and a Cisco Network Analysis Module (NAM). VSMs that were hosted on VMware virtual machines can now be hosted on a Cisco Nexus 1010 appliance. This allows network administrators to install and manage the VSM like a standard Cisco switch.

To set up the Virtual Services Appliance, you first need to set up the Cisco Integrated Management Controller (CIMC).

Step 1: Connect a KVM console to the Nexus 1010 appliance, and on bootup press F8 to access the CIMC.

Step 2: Enter the following for the CIMC controller:

- Dedicated NIC mode
- IP address
- Subnet mask
- Default gateway
- Default password (the default username is admin)



Note: If you are using a trunk port, be sure to fill out the VLAN section.

Press F10 to save and reboot the server.

Figure 77. CIMC Configuration Utility

Figure 78. Nexus 1010 KVM Console Screen

The configured Cisco Nexus 1010 console screen looks like Figure 78. You cannot configure the Virtual Services Appliance from here.

Step 3: Begin an HTTP session to the address just configured for the CIMC.

Figure 79. Nexus 1010 CIMC Login Screen

Step 4: Log in with the username admin and the password configured during CIMC setup.

Step 5: Click the Server tab on the left, and then click Remote Presence.

Step 6: On the Serial over LAN tab, check Enabled, and then click Save Changes.



Figure 80 . Enable Serial Over LAN

Figure 81 . VSA Secure Shell Terminal Window

There are now two ways to connect to the VSA for configuration: an SSH session to the CIMC address, or the available serial port. This example uses SSH (Secure Shell) to log in to the CIMC address and configure the VSA appliance.

Procedure 2

Configure the VSA

Step 1: Using an SSH client, connect to the previously configured CIMC address.

Step 2: Log in with the username admin and the password configured for the CIMC.

Step 3: The prompt ucs-c2xx# appears. Enter connect host.

Step 4: The scripted VSA installation prompts for configuration information:

CISCO Serial Over LAN:
Close Network Connection to Exit

Enter the password for admin:
Weak Password entered
Please enter a Strong Password.
*******************************************************
Strong Password should not be easy to decipher
Short and Easy-to-decipher passwords are not encouraged
*******************************************************
Enter the password for admin:
Confirm the password for admin:
Enter HA role[primary/secondary]: primary

Be sure to configure a strong password that is at least eight characters long, contains both upper and lower case letters, and contains numbers.

Figure 82. Nexus 1010 Physical Connectivity

Step 5: VSA Management Configuration begins here:

---- Basic System Configuration Dialog ----
This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): y
Create another login account (yes/no) [n]:
Configure read-only SNMP community string (yes/no) [n]:
Configure read-write SNMP community string (yes/no) [n]:
Enter the VSA name : p3-vsa-1010-1
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]:
Mgmt0 IPv4 address :
Mgmt0 IPv4 netmask :
Configure the default gateway? (yes/no) [y]:
IPv4 address of the default gateway :
Configure advanced IP options? (yes/no) [n]:
Enable the telnet service? (yes/no) [y]:
Enable the ssh service? (yes/no) [n]: y
Type of ssh key you would like to generate (dsa/rsa) : rsa
Number of key bits <768-2048> : 1024
Configure the ntp server? (yes/no) [n]: y
NTP server IPv4 address :
The following configuration will be applied:
  switchname p3-vsa-1010-1
  interface mgmt0
    ip address
    no shutdown
  vrf context management
    ip route
  telnet server enable
  ssh key rsa 1024 force
  ssh server enable
  ntp server
Would you like to edit the configuration? (yes/no) [n]:
Use this configuration and save it? (yes/no) [y]: y
[########################################] 100%
System is going to reboot to configure network uplinks

The Network Uplink Type section gives several options. Given the number of ports available to the VSAs in this setup, network uplink type 4 was chosen. Each pair of ports is connected to two separate Nexus 2248 Fabric Extenders. Port speeds are 1 Gigabit Ethernet.

Enter network-uplink type <1-4>:
  1. Ports 1-2 carry all management, control and data vlans
  2. Ports 1-2 management and control, ports 3-6 data
  3. Ports 1-2 management, ports 3-6 control and data
  4. Ports 1-2 management, ports 3-4 control, ports 5-6 data
4
Enter control vlan <1-3967, 4048-4093>: 160
Enter the domain id<1-4095>: 300
Enter management vlan <1-3967, 4048-4093>: 163
Saving boot configuration. Please wait...
[########################################] 100%



Step 6: Repeat the same process for the secondary VSA. Use the same domain ID, and select the secondary HA role. The appliance is now configured and ready for modules to be installed. To manage the host, use an SSH client to connect to the management address of the primary VSA.

Procedure 3

Install the Cisco Nexus 1000V on the VSA

Step 1: Copy a Cisco Nexus 1000V ISO image from the downloaded Cisco Nexus 1000V software package to the VSA repository. For this example, we are using the bundle from CCO. Retrieve the .iso image from the directory Nexus-1000v.4.0.4.SV1.3b->VSM->Install. Place this image in a location that can be reached by the file transfer protocol of your choice. TFTP, SCP, SFTP, and FTP are available.

p3-vsa-1010-1# copy tftp: bootflash:repository/
Enter source filename: nexus-1000v.4.0.4.SV1.3b.iso
Enter vrf (If no input, current vrf default is considered): management
Enter hostname for the tftp server:
Trying to connect to tftp server......
Connection to Server Established.
TFTP get operation was successful

Step 2: Create a virtual service blade on the VSA. With the primary and secondary VSA configured, the VSA automatically creates the secondary VSM on the secondary VSA without any user intervention.

p3-vsa-1010-1# conf t
p3-vsa-1010-1(config)# virtual-service-blade vsm-1000v-1010

Step 3: Attach an ISO image to this blade. Enter the ISO image file name previously copied to the repository.

p3-vsa-1010-1(config-vsb-config)# virtual-service-blade-type new nexus-1000v.4.0.4.SV1.3b.iso

Step 4: Check the status of your current blade creation.

p3-vsa-1010-1(config-vsb-config)# show virtual-service-blade summary
-------------------------------------------------------------------------------
Name            Role       State            Nexus1010-Module
-------------------------------------------------------------------------------
vsm-1000v-1010  PRIMARY    VSB NOT PRESENT  Nexus1010-PRIMARY
vsm-1000v-1010  SECONDARY  VSB NOT PRESENT  Nexus1010-SECONDARY

Step 5: Assign the control and packet VLANs created on the VSA previously. The management VLAN is automatically inherited from the management VLAN configured on the VSA.

p3-vsa-1010-1(config-vsb-config)# interface control vlan 160
p3-vsa-1010-1(config-vsb-config)# interface packet vlan 159

Step 6: Enable the service blade. In the dialog that appears, configure the management IP address for the VSM, host name, and password.

p3-vsa-1010-1(config-vsb-config)# enable
Enter vsb image: [nexus-1000v.4.0.4.SV1.3b.iso]
Enter domain id[1-4095]: 100
Management IP version [V4/V6]: [V4]
Enter Management IP address:
Enter Management subnet mask:
IPv4 address of the default gateway:
Enter HostName: vsm-1000v-1010
Enter the password for admin: c1sco123



Step 7: No shut the service blade, exit, and save the configuration for the VSA. This turns on the primary and secondary VSM.

p3-vsa-1010-1(config-vsb-config)# no shut
p3-vsa-1010-1(config-vsb-config)# end
p3-vsa-1010-1# copy running-config startup-config
[########################################] 100%

Step 8: Track the blade startup process with the show virtual-service-blade summary command. The first instance of the command below shows the blade starting, and the second shows successful completion of the primary and secondary VSM configuration.

p3-vsa-1010-1# show virtual-service-blade summary
-------------------------------------------------------------------------------
Name            Role       State                   Nexus1010-Module
-------------------------------------------------------------------------------
vsm-1000v-1010  PRIMARY    VSB NOT PRESENT         Nexus1010-PRIMARY
vsm-1000v-1010  SECONDARY  VSB DEPLOY IN PROGRESS  Nexus1010-SECONDARY

p3-vsa-1010-1# show virtual-service-blade summary
-------------------------------------------------------------------------------
Name            Role       State           Nexus1010-Module
-------------------------------------------------------------------------------
vsm-1000v-1010  PRIMARY    VSB POWERED ON  Nexus1010-PRIMARY
vsm-1000v-1010  SECONDARY  VSB POWERED ON  Nexus1010-SECONDARY

Step 9: Log in to the newly configured VSM with a secure shell client.

Figure 83 . VSM Secure Shell Terminal Window



Step 10: Verify redundancy of the VSM.

vsm-1000v-1010# show module
Mod  Ports  Module-Type                Model       Status
---  -----  -------------------------  ----------  ----------
1    0      Virtual Supervisor Module  Nexus1000V  ha-standby
2    0      Virtual Supervisor Module  Nexus1000V  active *

Mod  Sw             Hw
---  -------------  ---
1    4.0(4)SV1(3b)  0.0
2    4.0(4)SV1(3b)  0.0

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA

Mod  Server-IP  Server-UUID  Server-Name
---  ---------  -----------  -----------
1               NA           NA
2               NA           NA

* this terminal session

vsm-1000v-1010# show system redundancy status
Redundancy role
---------------
administrative: secondary
operational:    secondary

Redundancy mode
---------------
administrative: HA
operational:    HA

This supervisor (sup-2)
-----------------------
Redundancy state:  Active
Supervisor state:  Active
Internal state:    Active with HA standby

Other supervisor (sup-1)
------------------------
Redundancy state:  Standby
Supervisor state:  HA standby
Internal state:    HA standby
vsm-1000v-1010#

Procedure 4

Configure the VSM to Communicate with the vCenter

Step 1: In your web browser, open the newly configured management address of the VSM installed on the VSA. This brings up a window that offers several choices. Choose the Cisco Nexus 1000V Installer Application. Virtual Ethernet Modules are also available here for the different versions of ESX and ESXi listed.

Figure 84. VSM Web Interface Window

Step 2: Enter the VSM password, and then click Next.



Figure 85 . Enter VSM Credentials

Figure 86 . Enter vCenter Credentials

Step 3: Enter the vCenter credentials. Enter the correct IP address, username, and password, and then click Next.

Step 4: Enter the IP address of the just-configured VSM, along with a new password. This can be the same password as entered in the initial 1000V configuration on the Nexus 1010. Select the Datacenter Name that you want the 1000V to be a member of, and then click Next.



Figure 87 . Provide VSM Config Options

Figure 88 . Summary Configuration

Step 5: On the summary window, verify the information, and then click Finish.

Step 6: Two status windows appear while the install application registers the extension with vCenter and verification completes. Click Close to complete the configuration.



Figure 89 . Installation Progress

Figure 90 . Successful Installation

Step 7: At the command line, verify the svs connection with the show svs connections command.

vsm-1000v-1010# show svs connections
connection vcenter:
    ip address:
    remote port: 80
    protocol: vmware-vim https
    certificate: default
    datacenter name: P3-DC-1000v
    DVS uuid: 1f b3 31 50 a8 53 ef 49-08 83 0b d9 a0 ec 58 39
    config status: Enabled
    operational status: Connected
    sync status: Complete
    version: VMware vCenter Server 4.1.0 build-258902
vsm-1000v-1010#




Deploying the Nexus 1000V on an ESXi Host as a Virtual Machine

1. Install the VM-based VSM
2. Configure the Nexus 1000V and Connect to the vCenter
3. Add a Redundant Nexus 1000V VM

You can also deploy the 1000V separately as a virtual machine hosted on a vSphere Hypervisor. Unlike the VSA, the virtual machine version requires the secondary VSM to be installed manually. The most direct and straightforward way to do this is with an OVF template provided in the download for the Cisco 1000V code.

Procedure 1

Install the VM-based VSM

Step 1: From the vSphere client, select File > Deploy OVF Template.

Figure 91. Deploy OVF Template

Step 2: Select the file location of the Cisco Nexus 1000V OVF file.

Figure 92. Select OVF File Location

Step 3: The OVF template details are displayed. Verify that the correct version and product have been selected, and then click Next.



Figure 93 . OVF Template Details

Figure 94 . License Agreement

Step 4: Read and accept the end user license agreement to continue, and then click Next.

Step 5: Name the 1000V virtual machine. This is the name that will appear in the vSphere client inventory screen.



Figure 95 . Name VSM and Choose Location

Figure 96 . Deployment Configuration

Step 6: The configuration will default to the Nexus 1000V Installer. Click Next to continue.

Step 7: Just like with a regular virtual machine, select a server location for the VSM to reside on. Click Next to continue.



Figure 97 . Select Host for VSM

Figure 98 . Select Datastore

Step 8: Select a datastore to hold the VSM virtual machine storage. Click Next to continue.

Step 9: Select thick or thin provisioning. The size of the VSM is estimated at 3GB. With this small size, thick provisioning is selected because there is not much to gain through thin provisioning. Click Next to continue.



Figure 99 . Select Thick Provisioning

Figure 100 . Network Mapping

Step 10: Select the correct networks for the Control, Packet and Management interfaces for the VSM by clicking on the Destination Networks to get a drop-down menu. Click Next to continue.

Step 11: On the Properties page, enter a password, management IP address, subnet mask, and default gateway for the VSM management interface. Click Next to continue.



Figure 101 . Assign IP Addressing and Password

Figure 102 . Verify Settings

Step 12: Verify the settings are correct and click Finish to deploy the VSM virtual machine.

Step 13: The Deploying 1000V window appears. You can track progress in the status window of the vSphere client. Figure 103 . Monitor Status

After the template is deployed, the VSM shows in the inventory window under the machine it was installed on.



Figure 104. VSM Displayed in Inventory

Step 14: To power on the VSM, right-click the virtual machine and select Power On.

Figure 105. Power On Virtual Machine

Step 15: In a console window, you will see the 1000V boot and present a login prompt. You can configure from here or from a Secure Shell client.

Figure 106. Initial VSM Login Prompt

Step 16: Properly configured, the VSM should now be accessible with a secure shell client at the management address configured during deployment of the VSM template.

Figure 107. VSM Secure Shell Terminal Window

Procedure 2

Configure the Nexus 1000V and Connect to the vCenter

Step 1: In a web browser, enter the address of the newly installed 1000V virtual machine. Select Launch Installer Application. This starts a Java applet that continues configuring the 1000V.



Figure 108. VSM Web Interface Window

Step 2: Enter the VSM password configured when setting up the Cisco 1000V with the OVF template. Click Next to continue.

Figure 109. Enter VSM Credentials

Step 3: Enter the vCenter credentials. This allows the setup program to install the plug-in into vCenter for this Cisco Nexus 1000V. Click Next to continue.



Figure 110 . Enter vCenter Credentials

Figure 111 . Select Host

Step 4: Select the VSM's host that was selected during the OVF template installation, and then click Next.

Tech Tip
Ensure that the host chosen for the VSM is already provisioned with access to the control, packet, and management VLANs required for the 1000V environment.

Step 5: Select the VSM port groups. This can be done at the command line of the VSM or here in the configuration utility. If port groups for control and packet already exist on the ESXi host, they will be available in a pull-down menu for each port group. In this example, the port groups for VLAN 159 (Packet) and VLAN 160 (Control) are already created on the ESXi host. If they do not exist on the ESXi host, select Create Port Group for the Packet, Management, and Control port groups. Creating a port group requires a name, a VLAN ID, and the vSwitch on which the group is available. Select the appropriate groups, and then click Next to proceed.



Figure 112 . Select the VSM VM and Port Groups

Figure 113 . Provide VSM Config Options

Step 6: Enter the switch name, password, IP address, subnet mask, gateway IP address, domain ID, SVS datacenter name, and native VLANs for the vSwitches. The domain ID is used by the primary and secondary VSMs to identify each other. The VSM VM must run on the same IP subnet as the ESX 4.0 hosts that it manages. In our example, the ESXi hosts and the VSM are all on VLAN 163.
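For reference, the domain ID and VLAN values entered in this step map to an svs-domain configuration on the VSM. A sketch using this guide's example values (domain ID 111, control VLAN 160, packet VLAN 159) might look like the following:

svs-domain
  domain id 111
  control vlan 160
  packet vlan 159
  svs mode L2

Because the primary and secondary VSMs pair up by domain ID, the same value must be used on both supervisors.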

Step 7: With the VSM installation complete, migration is available. This will be covered in another section. Select No and click Finish.



Figure 114 . Configure DVS Migration Options

Figure 115 . Installation Complete

Installation is now complete.

Procedure 3

Add a Redundant Nexus 1000V VM

Step 1: Using a secure shell client, SSH to the VSM just installed. At the exec prompt, enter:

system redundancy role primary

Step 2: Save the configuration.

copy running-config startup-config

Step 3: In vCenter, deploy another VSM as shown earlier with the vSphere client. Select File > Deploy OVF Template. This VSM will use its assigned IP address only temporarily, until the VSM is configured to be the secondary.

Step 4: As shown earlier, power on the secondary VSM VM.



Step 5: Using a secure shell client, SSH to the address of the newly installed secondary VSM. In configuration mode, enter the following commands to provision the domain ID and required VLANs.

svs-domain
  domain id 111
  control vlan 160
  packet vlan 159
  svs mode L2
end

Exit configuration mode. Enter system redundancy role secondary at the exec prompt, and then save the running configuration.

system redundancy role secondary
copy running-config startup-config

Step 6: Use the reload command to reload both the primary and redundant VSMs. The VSM changes to a redundant role and resets the SSH connection. At this point it syncs with the primary and shares its IP address.

Step 7: Log in to the primary VSM and check the redundancy status.

show system redundancy status

It should look similar to this:

vsm1# show system redundancy status
Redundancy mode
---------------
administrative: HA
operational:    HA

This supervisor (sup-1)
-----------------------
Redundancy state:  Active
Supervisor state:  Active
Internal state:    Active with HA standby

Other supervisor (sup-2)
------------------------
Redundancy state:  Standby
Supervisor state:  HA standby
Internal state:    HA standby

System start time:         Tue Nov 16 04:03:40 2010
System uptime:             0 days, 0 hours, 4 minutes, 49 seconds
Kernel uptime:             0 days, 0 hours, 5 minutes, 44 seconds
Active supervisor uptime:  0 days, 0 hours, 4 minutes, 49 seconds

Show module will show the two VSMs.

vsm1# show module
Mod  Ports  Module-Type                Model       Status
---  -----  -------------------------  ----------  ----------
1    0      Virtual Supervisor Module  Nexus1000V  active *
2    0      Virtual Supervisor Module  Nexus1000V  ha-standby

Mod  Sw             Hw
---  -------------  ---
1    4.0(4)SV1(3b)  0.0
2    4.0(4)SV1(3b)  0.0

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA

Mod  Server-IP  Server-UUID  Server-Name
---  ---------  -----------  -----------
1               NA           NA
2               NA           NA

* this terminal session




Configure Virtualized Hosts to Use the Nexus 1000V

1. Configure VMware Update Manager for the Nexus 1000V VEM
2. Create Port Profiles
3. Migrate Hosts

Procedure 1

Configure VMware Update Manager to Install the Cisco Nexus 1000V Virtual Ethernet Module

This example illustrates use of the VMware Update Manager for vCenter (VUM). VUM provides automated patch updates to managed hosts. In this setup, it is used for installing the Cisco Nexus 1000V Virtual Ethernet Module extension. For instructions on initial installation of VUM or information on other VUM uses, refer to

Tech Tip

vCenter 4.1 installs with a 64-bit DSN. VUM will not install on this, because it requires a 32-bit DSN. Using Windows utilities, create a 32-bit DSN for VUM.

Step 1: Create a new baseline. In Update Manager from the main vSphere screen, click Create a new baseline.

Figure 116. VUM Basic Tasks

Step 2: The New Baseline wizard opens. Enter a name for the baseline, and then select Host Extension as the baseline type. Click Next.

Figure 117. Select Baseline Name and Type



Step 3: Select the correct extension for the host. This example uses ESXi 4.1 hosts. The Product column describes the supported vSphere Hypervisors. Select the correct extension for your system, and then click the down arrow (in the middle of the page) to add the extension to the bottom box. Click Next.

Figure 118. Select Extensions

Step 4: Click Finish.

Step 5: Create a New Baseline Group (Figure 116).

Step 6: In the Baseline Group Wizard, select the type Host Baseline Group and assign a name. Click Next.

Figure 119. Name and Type

Step 7: The Upgrades window appears next. Click Next without changing anything on this window.



Figure 120. Upgrades

Step 8: The Patches window follows and is not used in this example. Click Next.

Figure 121. Patches

Step 9: Select the extension baseline created earlier, and then click Next.



Figure 122. Extensions

Step 10: Review the settings in the next window, and then click Finish.

Step 11: In the Configuration tab of VUM, the Patch Download Sources window appears. For this example, only the Custom type is required; the others are not selected. If a proxy is required, that information can be entered here as well. Click Apply, and then click Download Now to retrieve the VEM source.

Figure 123. Patch Download Sources

Step 12: Returning to the main window in Figure 116, select Go To Compliance View.

Step 13: Click Create to apply the extension baseline.

Figure 124. Compliance View

Step 14: Select the extension baseline created earlier and the baseline group. Click Attach.



Figure 125. Attach Baseline

Step 15: The Compliance window now looks like the following figure. Click Scan with the vCenter selected in the inventory window. VUM now scans all hosts for compliance with the attached baseline. If they are not compliant, click Remediate to install the Cisco Nexus VEM.

Figure 126. Compliance Window with Baselines Attached

Step 16: To remediate a host, select the host in the Compliance window, and then click Stage to move the software to the host. Click Remediate to install the staged software.

Figure 127. Stage and Remediate

Procedure 2

Create Port Profiles

This example illustrates creation of port profiles using the CLI of the VSM.

Step 1: Access the VSM CLI and enter configuration mode by typing configure terminal.

Step 2: Add the VLANs that will be used for VM data transport to the VSM configuration. The packet and control VLANs were automatically added during the previous sections. Also add the management VLAN to the configuration of the VSM. The example configuration uses VLAN 163 for management traffic.

vlan 148-151
vlan 154-163



Step 3: Configure a port profile to be assigned to the physical uplink ports on each virtualized host. Uplink interfaces are commonly configured as VLAN trunks, to allow a number of different VLANs to be carried down to the individual virtual interfaces of the VMs. Define the VLANs you used earlier for packet, control, and management as system VLANs.

port-profile type ethernet DC-ETH-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 148-151,154-163
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 159-160,163
  state enabled

Procedure 3

Migrate Hosts

By default, a virtualized host that existed prior to the installation of the Cisco Nexus 1000V, or one that is newly created, begins with a standard vSphere virtual switch. To convert these hosts to use the 1000V virtual Distributed Switch (vDS), follow the steps in this procedure. Our example host is a B-Series Blade Server in a Cisco UCS chassis environment, with two physical vmnic interfaces defined.

Step 1: Within vSphere, go to Inventory > Networking to display the vDSs that exist within your vCenter.

Figure 128. Display Available vDS

Tech Tip
The example shown above uses the channel-group auto mode on mac-pinning command to enable vPC Host Mode, which is required for a Cisco UCS Blade Server environment. For Cisco C-Series servers directly connected to upstream switches that support LACP, use the channel-group auto mode active command to enable LACP negotiation.
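To illustrate the LACP alternative named in the tech tip, an uplink profile for a directly connected C-Series server might look like the following sketch. The profile name is an illustrative assumption; the VLAN ranges are the ones used in this guide's reference setup.

port-profile type ethernet DC-ETH-UPLINK-LACP
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 148-151,154-163
  channel-group auto mode active
  no shutdown
  system vlan 159-160,163
  state enabled

The only change from the uplink profile shown in Step 3 is the channel-group mode, which lets the VEM negotiate an LACP port channel with the upstream switch instead of pinning MAC addresses to individual uplinks.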

Step 4: Configure port profiles to allow individual VM virtual interfaces to be assigned to the appropriate VLANs for their traffic requirements. The example below shows two of the VLAN numbers used in our reference setup; create the appropriate profiles for your implementation.

port-profile type vethernet DC-148-ACCESS
  vmware port-group
  switchport mode access
  switchport access vlan 148
  no shutdown
  state enabled
port-profile type vethernet DC-149-ACCESS
  vmware port-group
  switchport mode access
  switchport access vlan 149
  no shutdown
  state enabled

Step 2: Highlight the vDS that you would like to use to manage the switching for the host, and then choose Add a host from the Getting Started tab.

Step 3: Check the selection boxes for one of the physical adapters to provide connectivity between the vDS and the host, choose the uplink port group you created from the pull-down list, and then click Next.



Figure 129 . Select Physical Adapters and Uplink Port Group

Figure 130 . Assign Virtual Adapters to a Port Group

Step 4: Assign virtual adapters to an appropriate port group for the traffic that they are carrying. In our example, the vmk0 interface is assigned to VLAN 163, where it can access the management network subnet.

Step 5: If you have virtual machines already assigned to the host, click the check box to migrate virtual machine networking. In our example configuration, two existing virtual machines require interfaces in VLAN 148, which has been pre-defined as a required port group.



Figure 131 . Migrate Virtual Machine Networking

Figure 132 . View New vDS Settings

Step 6: Settings for the new vNetwork Distributed Switch are displayed. Existing interfaces from other hosts are included in the display, because it represents a switch that is distributed across multiple hosts. Click Finish to exit this screen.

Tech Tip
We recommend using VMware Update Manager (VUM) to enable automatic installation of the 1000V VEM software onto the host. If VUM is not being used, the VEM software first must be manually downloaded and installed to the hypervisor of the host.

Step 7: Monitor the remediation of the host in the Recent Tasks window of the vSphere Client. When the host has completed the Update Network Configuration task, view the host underneath the Inventory > Hosts and Clusters section of vSphere. Highlight the host name and choose Networking under the Configuration Tab. Click the vNetwork Distributed Switch button, and view the results of the configuration.



Figure 133. vNetwork Distributed Switch View

Step 8: Click Manage Physical Adapters from the vNetwork Distributed Switch screen in order to migrate the additional physical adapter over to the vDS. Choose the Click to Add NIC link underneath UpLink1.

Figure 134. Manage Physical Adapters

Step 9: Choose vmnic1 in the Physical Adapter box underneath the vSwitch0 Adapters heading, and click OK. Click Yes when asked if you want to move the physical adapter from the standard vSwitch to the new vDS, and then click OK to confirm your changes.

Figure 135. Choose vmnic to Add

Step 10: After a short initialization period, the second uplink port is shown as green in the vDS display, indicating dual active uplinks.



Figure 136 . Completed Dual-Uplink Migration

The configuration procedures provided in this section allow you to establish a basic, functional Nexus 1000V setup for your network. The virtual switch configuration and port profiles allow for vastly simplified deployment of new virtual machines with consistent port configurations. For more details on Cisco Nexus 1000V configuration, please see the Cisco Nexus 1000V configuration guides on



Application Resiliency
Business Overview
The network is playing an increasingly important role in the success of a business. Key applications such as enterprise resource planning, e-commerce, email, and portals must be available around the clock to provide uninterrupted business services. However, the availability of these applications is often threatened by network overloads as well as server and application failures. Furthermore, resource utilization is often out of balance, resulting in the low-performance resources being overloaded with requests while the high-performance resources remain idle.

Application performance, as well as availability, directly affects employee productivity and the bottom line of a company. As more users work more hours while using key business applications, it becomes even more important to address application availability and performance issues to ensure achievement of business processes and objectives. Several factors make applications difficult to deploy and deliver effectively over the network.

Inflexible Application Infrastructure

Application design has historically been done on an application-by-application basis. This means the infrastructure used for a particular application is often unique to that application. This type of design tightly couples the application to the infrastructure and offers little flexibility. Because the application and infrastructure are tightly coupled, it is difficult to partition resources and levels of control to match changing business requirements.

Server Availability and Load

The mission-critical nature of applications puts a premium on server availability. Despite the benefits of server virtualization technology, the number of physical servers continues to grow based on new application deployments, which in turn increases power and cooling requirements.
Application Security and Compliance
Many of the new threats to network security are the result of application- and document-embedded attacks that compromise application performance and availability. Such attacks can also cause the loss of vital application data while leaving networks and servers unaffected.

One possible solution to improve application performance and availability is to rewrite the application completely to make it network-optimized. However, this requires application developers to have a deep understanding of how different applications respond to factors such as bandwidth constraints, delay, jitter, and other network variances. In addition, developers would need to accurately predict each end-user's foreseeable access method. This is simply not feasible for every business application, particularly traditional applications that took years to write and customize.

Technology Overview
The idea of improving application performance began in the data center. The Internet boom ushered in the era of the server load balancer (SLB). SLBs balance the load on groups of servers to improve their response to client requests, and they have since evolved to take on additional responsibilities, such as acting as application proxies and performing complete Layer 4 through 7 application switching.

The Application Control Engine (ACE) is the latest SLB offering from Cisco. Its main role is to provide Layer 4 through 7 switching, but the ACE also provides an array of acceleration and server offload benefits, including TCP processing offload, Secure Socket Layer (SSL) offload, compression, and various other acceleration technologies. Cisco ACE sits in the data center in front of the web and application servers and provides a range of services to maximize server and application availability, security, and asymmetric (from server to client browser) application acceleration. As a result, Cisco ACE gives IT departments more control over application and server infrastructure, which enables them to manage and secure application services more easily and improve performance.

Cisco's Application Control Engine is a next-generation Application Delivery Controller that provides server load-balancing, SSL offload, and application acceleration capabilities. There are four key benefits provided by Cisco ACE:

• Scalability: ACE scales the performance of a server-based program, such as a web server, by distributing its client requests across multiple servers, known as a server farm. As traffic increases, additional servers can be added to the farm. With the advent of server virtualization, application servers can be staged and added dynamically as capacity requirements change.

• High Availability: ACE provides high availability by automatically detecting the failure of a server and repartitioning client traffic among the remaining servers within seconds, while providing users with continuous service.

• Application Acceleration: ACE improves application performance and reduces response time by minimizing latency and data transfers for any HTTP-based application, for any internal or external end user.

• Server Offload: ACE offloads TCP and SSL processing, which allows servers to serve more users and handle more requests without increasing the number of servers.

ACE hardware is always deployed in pairs for highest availability: one primary and one secondary. If the primary ACE fails, the secondary ACE takes control. Depending on how session state redundancy is configured, this failover may take place without disrupting the client-to-server connection.

Cisco ACE uses both active and passive techniques to monitor server health. By periodically probing servers, the ACE rapidly detects server failures and quickly reroutes connections to available servers. A variety of health-checking features are supported, including the ability to verify web servers, SSL servers, application servers, databases, FTP servers, streaming media servers, and a host of others.

Cisco ACE can also be used to partition components of a single web application across several application server clusters. For example, the two URLs www. and order.jsp could be located on two different server clusters even though the domain name is the same. This partitioning allows the application developer to easily scale the application across several servers without numerous code modifications. Furthermore, it maximizes the cache coherency of the servers by keeping requests for the same pages on the same servers. Additionally, ACE can push requests for cacheable content, such as image files, to a set of caches that can serve them more cost-effectively than the application servers.
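The partitioning idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not ACE syntax: the cluster names and path prefixes are invented for the example.

```python
# Hypothetical sketch of URL-based partitioning across server clusters.
# Pinning each path prefix to one cluster maximizes cache coherency by
# keeping requests for the same pages on the same servers.
from urllib.parse import urlparse

# Illustrative mapping of URL path prefixes to back-end clusters.
PARTITION_RULES = [
    ("/orders", "order-cluster"),
    ("/quotes", "quote-cluster"),
]
DEFAULT_CLUSTER = "web-cluster"

def select_cluster(url: str) -> str:
    """Return the cluster that should serve this URL."""
    path = urlparse(url).path
    for prefix, cluster in PARTITION_RULES:
        if path.startswith(prefix):
            return cluster
    return DEFAULT_CLUSTER
```

Even though every request uses the same domain name, a Layer 7 rule like this lets different parts of one application live on different server clusters without code changes.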
Running SSL on the web application servers is a tremendous drain on server resources. By offloading SSL processing, those resources can be applied to traditional web application functions. In addition, because the persistence information used by content switches is inside the HTTP header, this information is no longer visible when carried inside SSL sessions. By terminating these sessions before applying content-switching decisions, all of the persistence options previously discussed become available for secure sites.

There are several ways to integrate ACE into the data center network. Logically, the ACE is deployed in front of the application cluster. Requests to the application cluster are directed to a virtual IP address (VIP) configured on the ACE. The ACE receives connections and HTTP requests and routes them to the appropriate application server based on configured policies.

Physically, the network topology can take many forms. One-armed mode is the simplest deployment method, where the ACE is connected off to the side of the Layer 2/Layer 3 infrastructure. It is not directly in the path of traffic flow and only receives traffic that is specifically intended for it. Traffic that should be directed to the ACE is steered to it through careful design of VLANs, virtual server addresses, server default-gateway selection, or policy routes on the Layer 2/Layer 3 switch.

Figure 137. Resilient Server Overview
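The VIP concept can be illustrated with a minimal round-robin sketch: connections arrive at one virtual address and are spread across the real servers in a farm. The server names are hypothetical, and ACE's actual load-balancing predictors are richer than simple round robin.

```python
# Minimal sketch of load distribution behind a VIP: one virtual address,
# many real servers. Round robin is the simplest predictor.
from itertools import cycle

class ServerFarm:
    def __init__(self, servers):
        # cycle() yields servers in order, repeating forever
        self._rotation = cycle(servers)

    def pick_server(self):
        """Choose the real server for the next incoming connection."""
        return next(self._rotation)

farm = ServerFarm(["webserver1", "webserver2"])
assigned = [farm.pick_server() for _ in range(4)]
# Connections alternate between the two real servers behind the VIP.
```

Clients only ever see the VIP; which real server answers a given connection is decided by the predictor, so servers can be added to the farm without any client-side change.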

Deployment Details
In our deployment example, we will first configure the ACE appliance with the required parameters to be recognized on the network. Then we will define the policies for directing the traffic. While the first part of the configuration is typically performed at the CLI when booting ACE, both parts can be configured via the ACE GUI. We have chosen to use the CLI commands for both network and application policy configuration. When setting up the ACE for the first time, the default password for the admin account must be changed.




Configuring ACE

1. Add the ACE to the Network
2. Configure a Load-Balancing Policy


Add the ACE to the Network

Step 1: Connect a console cable to the ACE appliance to perform initial configuration of the admin user, then exit from the initial configuration dialog at the prompt.

switch login: admin
Password: admin
Admin user is allowed to log in only from console until the default password is changed.
www user is allowed to log in only after the default password is changed.
Enter the new password for user admin:
Confirm the new password for user admin:
admin user password successfully changed.
Enter the new password for user www:
Confirm the new password for user www:
www user password successfully changed.

Cisco Application Control Software (ACSW)
TAC support:
Copyright 1985-2009 by Cisco Systems, Inc. All rights reserved. The copyrights to certain works contained herein are owned by other third parties and are used and distributed under license. Some parts of this software are covered under the GNU Public License. A copy of the license is available at http://www.gnu.org/licenses/gpl.html.

This script will perform the configuration necessary for a user to manage the ACE Appliance using the ACE Device Manager. The management port is a designated Ethernet port that has access to the same network as your management tools, including the ACE Device Manager. You will be prompted for the Port Number, IP Address, Netmask, and Default Route (optional).

Enter ctrl-c at any time to quit the script
ACE> Would you like to enter the basic configuration dialog (yes/no) [y]: n
switch/Admin#

Step 2: Before proceeding with any additional configuration, set up the basic network security policies to allow management access into the ACE.

access-list ALL line 8 extended permit ip any any

class-map type management match-any remote_access
  2 match protocol xml-https any
  3 match protocol icmp any
  4 match protocol telnet any
  5 match protocol ssh any
  6 match protocol http any
  7 match protocol https any
  8 match protocol snmp any

policy-map type management first-match remote_mgmt_allow_policy
  class remote_access
    permit

Step 3: Ethernet VLAN trunks connect the ACE appliances to the network's switching resources. Configure two Gigabit Ethernet ports on each ACE to trunk to the core switch as follows:

interface gigabitEthernet 1/1
  channel-group 1
  no shutdown
interface gigabitEthernet 1/2
  channel-group 1
  no shutdown
interface port-channel 1
  switchport trunk allowed vlan 148
  no shutdown



Tech Tip
You can tailor the amount of bandwidth available to applications managed by the ACE by using a larger or smaller number of physical ports in the port channel. Evaluate your application throughput requirements to size the port channel accordingly.
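As a rough sizing aid for the Tech Tip above, the following sketch computes the smallest port channel that covers a given peak application throughput. The traffic figure in the example is invented for illustration.

```python
# Back-of-the-envelope port-channel sizing: pick enough member ports so
# their combined capacity covers peak application throughput.
import math

def ports_needed(peak_gbps: float, port_gbps: float = 1.0) -> int:
    """Smallest number of member ports whose combined capacity covers the peak."""
    return max(1, math.ceil(peak_gbps / port_gbps))

# e.g. a (hypothetical) 1.5 Gb/s peak needs a 2-port channel of 1-Gb links
two_ports = ports_needed(1.5)
```

This ignores protocol overhead and the hashing of flows across members, so real-world sizing should leave headroom above the computed minimum.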

The switch ports that connect to the security appliances must be configured as members of the same secure VLANs, and must forward secure traffic to the switches that offer connectivity to servers and other appliances in the server room. The ACE appliances are configured for active-standby high availability. In active-standby mode, the standby appliance does not handle traffic, so the primary device must be sized to provide enough throughput to address connectivity requirements between the core and the server room.

Step 4: Configure a fault-tolerant (FT) VLAN, a dedicated VLAN used by the redundant ACE pair to communicate heartbeat and state information. All redundancy-related traffic is sent over this FT VLAN, including TRP protocol packets, heartbeats, configuration sync packets, and state replication packets.

ft interface vlan 12
  ip address
  peer ip address
  no shutdown
ft peer 1
  heartbeat interval 300
  heartbeat count 10
  ft-interface vlan 12
ft group 1
  peer 1
  peer priority 110
  associate-context Admin
  inservice

Step 5: For the ACE to begin passing traffic, create a VLAN interface and assign an IP address to it. Because we are employing one-armed mode, a NAT pool must be created as well.

interface vlan 148
  ip address
  peer ip address
  access-group input ALL
  nat-pool 1 netmask pat
  service-policy input remote_mgmt_allow_policy
  service-policy input int148
  no shutdown
ip route

At this point, the ACE should be reachable on the network. Now we can begin configuring a load-balancing policy.
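As a worked example of the FT heartbeat settings: assuming the heartbeat interval is expressed in milliseconds, a peer that misses the configured count of consecutive heartbeats is declared failed after roughly interval × count.

```python
# Worked example: failover detection time from the FT heartbeat settings.
# With "heartbeat interval 300" (assumed milliseconds) and
# "heartbeat count 10", the peer is declared down after ~3 seconds.
def detection_time_seconds(interval_ms: int, count: int) -> float:
    """Approximate time to declare the redundant peer failed."""
    return interval_ms * count / 1000.0

detect = detection_time_seconds(300, 10)  # 3.0 seconds
```

Lowering the interval or count speeds up failover detection at the cost of more sensitivity to transient congestion on the FT VLAN.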


Configure a Load-Balancing Policy

Step 1: To start, define the application servers that require load balancing.

rserver host webserver1
  ip address
  inservice
rserver host webserver2
  ip address
  inservice

Step 2: Next, create a simple HTTP probe to test the health of the web servers.

probe http http-probe
  interval 15
  passdetect interval 60
  request method head
  expect status 200 200
  open 1

Step 3: Place the web servers and the probe into a server farm.

serverfarm host webfarm
  probe http-probe
  rserver webserver1 80
  inservice
  rserver webserver2 80
  inservice
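The configured probe can be approximated in Python to show what the ACE is checking: a periodic HEAD request where only a status inside the expected range (here 200–200) counts as healthy. The URL is a placeholder, and this is an illustration of the probe logic, not the ACE implementation.

```python
# Rough analogue of the HTTP probe above: HEAD request, expect status 200.
import urllib.error
import urllib.request

def is_healthy(status: int, expect_min: int = 200, expect_max: int = 200) -> bool:
    """Mirror of 'expect status 200 200': pass only in-range codes."""
    return expect_min <= status <= expect_max

def head_probe(url: str, timeout: float = 5.0) -> bool:
    """Send a HEAD request; unreachable servers are simply marked failed."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return is_healthy(resp.status)
    except (urllib.error.URLError, OSError):
        return False
```

In the ACE configuration, "interval 15" would correspond to running this check every 15 seconds against an in-service server, and "passdetect interval 60" to re-testing a failed server every 60 seconds.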



Step 4: Now configure the load-balancing policy and assign it to the VLAN interface.

class-map match-all http-vip
  2 match virtual-address tcp eq www

policy-map type loadbalance first-match http-vip-l7slb
  class class-default
    serverfarm webfarm

policy-map multi-match int148
  class http-vip
    loadbalance vip inservice
    loadbalance policy http-vip-l7slb
    loadbalance vip icmp-reply active
    nat dynamic 1 vlan 148

interface vlan 148
  service-policy input int148

At this point, the application should be accessible via the VIP we created, with requests distributed between the two web servers.
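A minimal sketch of the classification step above: only traffic addressed to the VIP on TCP port 80 (www) matches the http-vip class and is handed to the load-balancing policy; everything else bypasses it. The addresses used are documentation placeholders, not values from this deployment.

```python
# Hypothetical analogue of 'class-map match-all http-vip' with
# 'match virtual-address ... tcp eq www': traffic matches only when both
# the destination address equals the VIP and the destination port is 80.
WWW_PORT = 80

def matches_http_vip(dst_ip: str, dst_port: int, vip: str) -> bool:
    """Return True if this connection should be load balanced by the policy."""
    return dst_ip == vip and dst_port == WWW_PORT
```

Because the multi-match policy is applied as a service policy on VLAN interface 148, only connections arriving on that interface are ever evaluated against this class.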

IT organizations face significant challenges associated with the delivery of applications and critical business data with adequate service levels to a globally distributed workforce. Application-delivery technologies help IT organizations improve availability, performance, and security of all applications. The Cisco Application Control Engine provides core-server load-balancing services, advanced application acceleration, and security services to maximize application availability, performance, and security. It is coupled with unique virtualization capabilities, application-specific intelligence, and granular role-based administration to consolidate application infrastructure, reduce deployment costs, and minimize operational burdens.



Appendix A: Product List

The following products and software versions have been validated for the Cisco Smart Business Architecture:

Functional Area          Product       Part Numbers          Software Version
Ethernet Infrastructure  Nexus 5010    N5K-C5010P-BF         NX-OS 4.2(1)N1(1)
                         Nexus 5548P   N5K-C5548P-FA         NX-OS 5.0(2)N1(1)
                         Nexus 2148T   N2K-C2148T-1GE
                         Nexus 2248TP  N2K-C2248TP-1GE
                         Nexus 2232PP  N2K-C2232PP-10GE
Storage Infrastructure   MDS 9148      DS-C9148D-8G16P-K9    NX-OS 5.0(1a)
                         MDS 9124      DS-C9124-K9
                         MDS 9134      DS-C9134-K9
Network Security         ASA 5580-40   ASA5580-40-10GE-K8    ASA: 8.2.3
                         ASA 5585-X    ASA5585-S40-K9        IPS: 7.0.2E4
                         IPS 4260      IPS-4260-K9



Functional Area         Product                                         Part Numbers      Software Version
Computing Resources     UCS 6120XP 20-port Fabric Interconnect          N10-S6100         Cisco UCS Release version 1.3
                        6-port 8Gb FC/Expansion module/UCS 6100 Series  N10-E0060
                        UCS 5108 Blade Server Chassis                   N20-C6508
                        UCS 2104XP Fabric Extender                      N20-I6584
                        UCS B200 M2 Blade Server                        N20-B6625-1
                        UCS B250 M2 Blade Server                        N20-B6625-2
                        UCS M81KR Virtual Interface Card                N20-AC0002
                        UCS C200 M2 Server                              R200-1120402W
                        UCS C210 M2 Server                              R250-2480805W
Virtual Switching       Nexus 1010 Appliance                            N1K-C1010         4.0(4)SP1(1)
Application Resiliency  Cisco ACE 4710 Appliance                        ACE-4710-0.5F-K9  A3(2.6)



Appendix B: SBA for Midsize Organizations Document System

Design Guides:
  Design Overview

Deployment Guides:
  Data Center (You are Here)
  Unified Computing

Supplemental Guides:
  Advanced Server Load-Balancing
  Configuration Files
  NetApp Storage
  Network Management
  UCS C-series and VMware


Americas Headquarters Cisco Systems, Inc. San Jose, CA

Asia Pacific Headquarters Cisco Systems (USA) Pte. Ltd. Singapore

Europe Headquarters Cisco Systems International BV Amsterdam, The Netherlands

Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco Website at
Cisco and the Cisco Logo are trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and other countries. A listing of Cisco's trademarks can be found at Third party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1005R)

C07-572789-03 1/11