Data Center Networking: Internet Edge Design Architectures

Solutions Reference Network Design, March 2003

Corporate Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 526-4100

Customer Order Number: 956484 Text Part Number:

THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS. THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY. The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB’s public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California. NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED “AS IS” WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

CCIP, the Cisco Arrow logo, the Cisco Powered Network mark, the Cisco Systems Verified logo, Cisco Unity, Follow Me Browsing, FormShare, iQ Breakthrough, iQ Expertise, iQ FastTrack, the iQ Logo, iQ Net Readiness Scorecard, Networking Academy, ScriptShare, SMARTnet, TransPath, and Voice LAN are trademarks of Cisco Systems, Inc.; Changing the Way We Work, Live, Play, and Learn, Discover All That’s Possible, The Fastest Way to Increase Your Internet Quotient, and iQuick Study are service marks of Cisco Systems, Inc.; and Aironet, ASIST, BPX, Catalyst, CCDA, CCDP, CCIE, CCNA, CCNP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, the Cisco IOS logo, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Empowering the Internet Generation, Enterprise/Solver, EtherChannel, EtherSwitch, Fast Step, GigaStack, Internet Quotient, IOS, IP/TV, LightStream, MGX, MICA, the Networkers logo, Network Registrar, Packet, PIX, Post-Routing, Pre-Routing, RateMUX, Registrar, SlideCast, StrataView Plus, Stratm, SwitchProbe, TeleRouter, and VCO are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and certain other countries. All other trademarks mentioned in this document or Web site are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0208R)

Data Center Networking: Internet Edge Design Architectures Copyright © 2003, Cisco Systems, Inc. All rights reserved.

CONTENTS

Preface
    Intended Audience
    Document Organization
    Document Conventions
    Obtaining Documentation
        World Wide Web
        Documentation CD-ROM
        Ordering Documentation
        Documentation Feedback
    Obtaining Technical Assistance
        Cisco.com
        Technical Assistance Center
            Cisco TAC Web Site
            Cisco TAC Escalation Center

Chapter 1  Internet Edge Overview
    What is the Enterprise Internet Edge?
        Edge Connectivity
        Edge Routing
            Routing Protocols Overview
        Edge Security
        Server Farms
            Extranet/Intranet Server Farms
            Internet Server Farms
        Caveats
    Redundancy
        Device Redundancy
        Site Availability and Route Redundancy
    Scalability
        Memory Considerations
        Physical and Logical Scalability
        Device Provisioning
    Internet Edge Design Considerations
        Autonomous System Considerations
        Address Allocation
        NAT Considerations
    Topology Considerations
        Small Office Home Office Deployments
        Small/Medium Business Deployments
            Single ISP Design
            Co-Location ISP Design
        Enterprise Deployments
            Dual ISP Design
            Co-Location Design
            Central Back-End Process Designs
    Summary

Chapter 2  Internet Edge Security Design Principles
    Security Design Requirements
        Security Policy Definition
        Host Addressing
        Application Definition
        Usage Guidelines
        Topology/Trust Model
    Stateful Traffic Inspection
        Engine Performance Considerations
        Resiliency
    Intrusion Detection
        Network-Based Intrusion Detection (NIDS)
        Host-Based Intrusion Detection (HIDS)
        Variance-Based Capture Systems (Honey Pots)
        IDS Implementation and Performance Considerations
    Performance Considerations
        Scalability Requirements
            Bandwidth
            Connection Rate
            Total Connections
        Asymmetry Concerns
            Forwarding
            Translation
    Security Considerations
        Element Security
        Identity Services
    Common Internet Edge Security Policies

Chapter 3  Internet Edge Security Implementation
    Basic Security Policy Functions
    Broadband Design
        Configuration
        Basic Forwarding
        Security Policy Functional Deployment
        NAT Issues
        DMZ Design
        Intrusion Detection Capabilities
        Network Management
    Basic Design
        Configuration
        Basic Forwarding
        Security Policy Functional Deployment
        NAT Issues
        DMZ Design
        Intrusion Detection Capabilities
        Network Management
    Partially Resilient Design
        Configuration
        Basic Forwarding
        Security Policy Functional Deployment
        NAT Issues
        DMZ Design
        Intrusion Detection Capabilities
        Network Management
    Fully Resilient Design
        Configurations
        Basic Forwarding
        Security Policy Functional Deployment
        NAT Issues
        DMZ Design
        Intrusion Detection Capabilities
        Network Management

Chapter 4  Single Site Multi Homing
    Internet Edge Design Guidance
        High Availability
        Scalability
        Intelligent Network Services
            HSRP
        Internal Routing
        Edge Routing
        Design Caveats
    Design Recommendations
        Internet Edge Design Fundamentals
            Border Routers
            Layer 2 Switching Layer
            Firewall Layer
            Layer 3 Switching Layer
    Implementation Details
        Single Site Multi-Homing Topology
        Internet Cloud Router BGP
        Primary Customer Configurations
        Secondary Customer Configurations
        BGP Attributes
            Controlling Outbound Routes
            Controlling Inbound Routes
    Security Considerations

Chapter 5  Scaling the Internet Edge: Firewall Load Balancing
    Network Topology
        One Arm Topology
        Sandwich Topology
    System Components
        Hardware Requirements and Software Requirements
        Features
    Configuration Tasks
        Configuring FWLB with the CSM
            VLAN Configuration on the CSM
            Server Farm Configuration
            Virtual Service Configuration
            Probe Definitions
    High Availability
        Configuring CSM Failover
        FT VLAN Configuration
        Convergence Results
    Performance Benchmarks
    Complete Configurations

Chapter 6  Multi Site Multi Homing
    Overview
    Multi-Site Multi-Homing Design Principles
        High Availability
        Scalability
        Intelligent Network Services
            HSRP
        Routing Protocol Technologies
        Edge Routing - BGP
        Design Caveats
        Work Arounds
    Multi-Site Multi-Homing Design Recommendations
        Border Router Layer
        Internet Data Center Core Switching Layer
        Firewall Layer
        Data Center Core Switching Layer
    Implementation Details
        Multi-Site Multi-Homing Topology
        Internet Cloud Router Configurations
        Internet Edge Configurations
        Edge Switching Layer Configurations
        Core Switching Layer Configurations
        BGP Attribute Tuning
    Security Considerations

Chapter 7  High Availability via BGP Tunneling
    Overview
    Configuration Sequence and Tasks
        Edge Router BGP Configurations
        Firewall Configurations
        Internal Routing Layer BGP Configurations
        IGP Router Configurations
            OSPF Configurations
            EIGRP Configurations
    Convergence Results

Index

Preface
This Solution Reference Network Design (SRND) provides a description of the design issues related to the connection between an Enterprise and the Internet, referred to as the Internet Edge. It provides information on the core design principles associated with key elements of a traditional Internet Edge design. Such elements include:
• Network infrastructure (such as edge routers and Layer 2 switches)
• Security infrastructure (such as ACLs, firewalls, and Intrusion Detection Systems [IDS])
• Other technology likely used in conjunction with server farms or to control Internet access (such as caching and content switching)

This document focuses on the unique requirements that are relevant to Internet Edge topologies. Internet Edge solutions must be highly scalable: the topology must be able to accommodate more traffic or more connections without compromising functionality or overall design principles as resources are added. For example, you cannot compromise the security of a firewall to increase the throughput at the head end of your network topology. The solution as a whole must also remain manageable from the perspective of controlling routes and accessibility to network resources from external networks.

Intended Audience
This document is intended for network design architects and support engineers who are responsible for planning, designing, implementing, and operating networks.

Document Organization
This document contains the following chapters:

Chapter 1, “Internet Edge Overview”: Provides an overview of the need for security at the Internet Edge.

Chapter 2, “Internet Edge Security Design Principles”: Provides an overview of the basic principles involved in the design of Internet Edge security.

Chapter 3, “Internet Edge Security Implementation”: Presents four basic Internet Edge security designs.

Chapter 4, “Single Site Multi Homing”: Clarifies and identifies typical single-site Internet Edge designs.

Chapter 5, “Scaling the Internet Edge: Firewall Load Balancing”: Provides design guidance for implementing a firewall load-balancing architecture using the Content Switching Module (CSM).

Document Conventions
This guide uses the following conventions to convey instructions and information:
Table 1 Document Conventions

boldface font: Commands and keywords.
italic font: Variables for which you supply values.
[ ]: Keywords or arguments that appear within square brackets are optional.
{x | y | z}: A choice of required keywords appears in braces separated by vertical bars. You must select one.
screen font: Examples of information displayed on the screen.
boldface screen font: Examples of information you must enter.
< >: Nonprinting characters, for example passwords, appear in angle brackets.
[ ]: Default responses to system prompts appear in square brackets.

Note: Means reader take note. Notes contain helpful suggestions or references to material not covered in the manual.

Timesaver: Means the described action saves time. You can save time by performing the action described in the paragraph.

Tips: Means the following information will help you solve a problem. The tips information might not be troubleshooting or even an action, but could be useful information, similar to a Timesaver.

Caution: Means reader be careful. In this situation, you might do something that could result in equipment damage or loss of data.

Obtaining Documentation
The following sections explain how to obtain documentation from Cisco Systems.

World Wide Web
You can access the most current Cisco documentation on the World Wide Web at the following URL: http://www.cisco.com Translated documentation is available at the following URL: http://www.cisco.com/public/countries_languages.shtml

Documentation CD-ROM
Cisco documentation and additional literature are available in a Cisco Documentation CD-ROM package, which is shipped with your product. The Documentation CD-ROM is updated monthly and may be more current than printed documentation. The CD-ROM package is available as a single unit or through an annual subscription.

Ordering Documentation
Cisco documentation is available in the following ways:

• Registered Cisco.com users (Cisco direct customers) can order Cisco product documentation from the Networking Products MarketPlace: http://www.cisco.com/cgi-bin/order/order_root.pl
• Registered Cisco.com users can order the Documentation CD-ROM through the online Subscription Store: http://www.cisco.com/go/subscription
• Nonregistered Cisco.com users can order documentation through a local account representative by calling Cisco corporate headquarters (California, USA) at 408 526-7208 or, elsewhere in North America, by calling 800 553-NETS (6387).

Documentation Feedback
If you are reading Cisco product documentation on Cisco.com, you can submit technical comments electronically. Click the Fax or Email option under the “Leave Feedback” section at the bottom of the Cisco Documentation home page. You can also e-mail your comments to bug-doc@cisco.com.

To submit your comments by mail, use the response card behind the front cover of your document, or write to the following address: Cisco Systems Attn: Document Resource Connection 170 West Tasman Drive San Jose, CA 95134-9883 We appreciate your comments.

Obtaining Technical Assistance
Cisco provides Cisco.com as a starting point for all technical assistance. Customers and partners can obtain documentation, troubleshooting tips, and sample configurations from online tools by using the Cisco Technical Assistance Center (TAC) Web Site. Cisco.com registered users have complete access to the technical support resources on the Cisco TAC Web Site.

Cisco.com
Cisco.com is the foundation of a suite of interactive, networked services that provides immediate, open access to Cisco information, networking solutions, services, programs, and resources at any time, from anywhere in the world. Cisco.com is a highly integrated Internet application and a powerful, easy-to-use tool that provides a broad range of features and services to help you to
• Streamline business processes and improve productivity
• Resolve technical issues with online support
• Download and test software packages
• Order Cisco learning materials and merchandise
• Register for online skill assessment, training, and certification programs

You can self-register on Cisco.com to obtain customized information and service. To access Cisco.com, go to the following URL: http://www.cisco.com

Technical Assistance Center
The Cisco TAC is available to all customers who need technical assistance with a Cisco product, technology, or solution. Two types of support are available through the Cisco TAC: the Cisco TAC Web Site and the Cisco TAC Escalation Center. Inquiries to Cisco TAC are categorized according to the urgency of the issue:
• Priority level 4 (P4)—You need information or assistance concerning Cisco product capabilities, product installation, or basic product configuration.
• Priority level 3 (P3)—Your network performance is degraded. Network functionality is noticeably impaired, but most business operations continue.

• Priority level 2 (P2)—Your production network is severely degraded, affecting significant aspects of business operations. No workaround is available.
• Priority level 1 (P1)—Your production network is down, and a critical impact to business operations will occur if service is not restored quickly. No workaround is available.

Which Cisco TAC resource you choose is based on the priority of the problem and the conditions of service contracts, when applicable.

Cisco TAC Web Site
The Cisco TAC Web Site allows you to resolve P3 and P4 issues yourself, saving both cost and time. The site provides around-the-clock access to online tools, knowledge bases, and software. To access the Cisco TAC Web Site, go to the following URL: http://www.cisco.com/tac All customers, partners, and resellers who have a valid Cisco services contract have complete access to the technical support resources on the Cisco TAC Web Site. The Cisco TAC Web Site requires a Cisco.com login ID and password. If you have a valid service contract but do not have a login ID or password, go to the following URL to register: http://www.cisco.com/register/ If you cannot resolve your technical issues by using the Cisco TAC Web Site, and you are a Cisco.com registered user, you can open a case online by using the TAC Case Open tool at the following URL: http://www.cisco.com/tac/caseopen If you have Internet access, it is recommended that you open P3 and P4 cases through the Cisco TAC Web Site.

Cisco TAC Escalation Center
The Cisco TAC Escalation Center addresses issues that are classified as priority level 1 or priority level 2; these classifications are assigned when severe network degradation significantly impacts business operations. When you contact the TAC Escalation Center with a P1 or P2 problem, a Cisco TAC engineer will automatically open a case. To obtain a directory of toll-free Cisco TAC telephone numbers for your country, go to the following URL: http://www.cisco.com/warp/public/687/Directory/DirTAC.shtml Before calling, please check with your network operations center to determine the level of Cisco support services to which your company is entitled; for example, SMARTnet, SMARTnet Onsite, or Network Supported Accounts (NSA). In addition, please have available your service agreement number and your product serial number.

Chapter 1

Internet Edge Overview
An Enterprise connection to the Internet is a critical resource. Whether the Internet is used as a research tool, a medium to increase branding and mind share, a support structure for products and services, or the means to generate revenue through any form of electronic commerce, the need for resilient access is undeniable. Resilient access comes from a well-thought-out and executed plan to architect the connection to the Internet. Factors to consider in the actual design of the connection to the Internet are:
• High availability
• Scalability
• Security
• Manageability

In understanding the critical nature of the connection to the Internet, consider the following points:
• Proper access to the network infrastructure as well as devices in the topology
• Proper scalability provisioning in reference to overall topology capacity; downtime equates to revenue loss
• What happens if… Planning for the unknown

This chapter presents an overview of the issues related to designing, deploying, and operating the Enterprise Internet Edge architecture.

What is the Enterprise Internet Edge?
An Enterprise Internet Edge is defined as the area containing the network infrastructure required for a resilient connection to the Internet. The scope of the area is highly dependent on how the Enterprise is using the connection to the Internet. Typical functions are:
• Edge Connectivity
• Edge Routing
• Edge Security
• Server Farms
• Redundancy
• Scalability

Figure 1-1 presents a high-level topological view of the network elements that form the Internet Edge.

Figure 1-1 Network Edge Elements (Internet via SP1/SP2: edge connectivity, edge routing, edge security, and server farm architectures)

Edge Connectivity
Edge connectivity includes all connections in the Enterprise: Headquarters connections to Internet Service Providers (ISPs), small or remote office connections that use VPN tunnels to access the Intranet/Internet, and connections to distributed data center resources, such as server farms. These common Internet Edge topology deployments use similar design principles. These principles are general design practices and are equally applicable to all the different topologies in areas such as Layer 3 and Layer 2 infrastructure, security, routing functions, Network Address Translation (NAT), Internet server farms, and Internet access.

Edge Routing
Edge routing is a key aspect of the Internet Edge function. Routing functionality, whether static or dynamic, ensures access to the Internet and defines the degree of availability the edge router supports. Use static routing for small enterprises that have a single connection to a single ISP. There are no advantages to deploying dynamic routing in this instance, therefore there is no need to run routing protocols and their associated processes. Use dynamic routing when there are multiple connections to one or more ISPs. The de facto standard Internet routing protocol is Border Gateway Protocol (BGP). BGP propagates routing tables through the Internet, determines routing conditions for path selection, and converges upon failures. With the average routing table on an edge router including between 100,000 and 115,000 routes, managing your routes to the Internet could be a full-time activity. Implementing a dynamic routing protocol, such as BGP, makes manual updates to Internet-sized routing tables unnecessary. You must have a good understanding of routing protocols when setting up your edge routers. The Routing Protocols Overview provides a basic lesson in routing protocols.

Routing Protocols Overview
There are two primary types of routing protocols: distance vector and link state. Distance vector protocols carry a vector, or list of hop counts, with their destination prefixes. Each node calculates the best path to each destination prefix within its routing table and then exchanges routing tables with all other nodes. Common distance vector protocols are Routing Information Protocol (RIP) versions 1 and 2, Interior Gateway Routing Protocol (IGRP), and Enhanced IGRP (EIGRP). Each of these protocols has subtle differences that make it more or less attractive based on your infrastructure needs. Link state protocols work on the premise that each routing node in the topology exchanges link state information with the other routing nodes. This exchange contains information about adjacent routers and network prefixes, as well as the metrics associated with each link. Link state routers do not exchange routing tables; instead, upon a topology change, a link state routing node notifies all adjacent nodes of the link failure. Common link state protocols are Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System (IS-IS). BGP is a path vector protocol, which advertises routes along with the list of Autonomous Systems (AS) traversed to reach them. This implies that a BGP routing table contains AS paths to every other known AS. There is duplicate AS information in the table, but this is purely based on the ISP-defined peering relationships. BGP allows you to apply tuning metrics to control the egress and ingress traffic on your network. The egress control mechanism uses local preference, which defines the external link the I-BGP peers use to exit the AS. The ingress control method prepends your AS to your route updates: each time you advertise your routes outbound, you add another instance of your own AS, which increases the path length seen by external peers.
These metrics and many others are covered in more detail in the remaining chapters of this document.
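As a sketch of the egress (local preference) and ingress (AS prepending) controls just described, a border router configuration might look like the following. The AS numbers, neighbor addresses, and route-map names are purely illustrative, not taken from the designs in this document.

```
router bgp 64512
 ! Prefer the primary ISP for outbound traffic (higher local preference wins)
 neighbor 192.0.2.1 remote-as 64601
 neighbor 192.0.2.1 route-map PRIMARY-IN in
 ! Discourage inbound traffic via the secondary ISP by lengthening the AS path
 neighbor 198.51.100.1 remote-as 64602
 neighbor 198.51.100.1 route-map SECONDARY-OUT out
!
route-map PRIMARY-IN permit 10
 set local-preference 200
!
route-map SECONDARY-OUT permit 10
 set as-path prepend 64512 64512
```

Local preference influences only your own AS (it is exchanged between I-BGP peers), whereas the prepended AS path is visible to, and acted upon by, external networks.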

Default Routing (Static)
An Enterprise that connects to a single ISP over a single link is a good candidate for default routing. In this topology, the edge router points to a single upstream device with a single IP address as the next hop. This route could be a single physical route upstream or two redundant routes with Hot Standby Router Protocol (HSRP) enabled for dynamic redundancy. In either case, the primary and secondary edge routers both point to the same defined IP address.
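A minimal sketch of this arrangement on the primary edge router might look as follows: a static default route toward the ISP next hop, with HSRP presenting a single gateway address to the inside. All addresses and the HSRP group number are hypothetical.

```
! Static default route toward the single ISP next hop
ip route 0.0.0.0 0.0.0.0 192.0.2.1
!
! HSRP on the inside interface; the secondary router is configured
! with the same standby address and a lower priority
interface FastEthernet0/0
 ip address 10.1.1.2 255.255.255.0
 standby 1 ip 10.1.1.1
 standby 1 priority 110
```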

Dynamic Routing: BGP
Dynamic routing is recommended when there is more than one link or more than one ISP and the externally advertised address space must be reachable through either provider or either link. This creates the need for a routing protocol with the ability to determine network conditions when forwarding traffic. Determining the path to the proper destination at any time is best handled by a routing protocol, which, considering the current size of the Internet routing table and its dynamic nature, is a more manageable approach than static routing. Another important aspect of dynamic routing is the need to share specific destination prefixes, redistribute routes, and advertise internally routed networks. This is determined either by the routing protocol redistribution mechanisms within your border routers or by static routes entered in the routers themselves. Redistribution is controlled between BGP and the specific Interior Gateway Protocols (IGPs) running on the intranet.
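The division of labor described above, BGP at the border and an IGP on the intranet, can be sketched as follows; rather than redistributing the full Internet table into the IGP, only a default route is originated inward. The addresses, AS numbers, and OSPF process number are hypothetical.

```
router bgp 64512
 neighbor 192.0.2.1 remote-as 64601
 ! Advertise only the enterprise aggregate outbound
 network 203.0.113.0 mask 255.255.255.0
!
router ospf 100
 ! Originate a default route into the IGP instead of
 ! redistributing the full Internet routing table
 default-information originate
```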

Edge Security
This section provides a brief summary of the security functions supported within Internet Edge designs. Chapter 2, “Internet Edge Security Design Principles,” provides a more in-depth discussion of Internet Edge security. The security functions at the Internet Edge include:
• Element Security: The secure configuration and management of the devices that collectively define the Internet Edge.
• Identity Services: The inspection of IP traffic across the Internet Edge requires the ability to identify the communicating endpoints. Although this can be accomplished with explicit user/host session authentication mechanisms, usually IP identity across the Internet Edge is based on header information carried within the IP packet itself. Therefore, IP addressing schemes, address translation mechanisms, and application definition (IP protocol/port identity) play key roles in identity services.
• IP Anti-Spoofing: This includes support for RFC 2827, which requires enterprises to protect their assigned public IP address space, and RFC 1918, which allows for the use of private IP address spaces within enterprise networks.
• Basic Filtering and Application Definition: Derived from enterprise security policies, Access Control Lists (ACLs) explicitly permit and/or deny the IP traffic that may traverse between the areas (Inside, Outside, DMZ, etc.) defined to exist within the Internet Edge.
• Stateful Inspection: Provides the ability to establish and monitor session states of traffic permitted to flow across the Internet Edge, and to deny traffic that fails to match the expected state of an existing or allowed session.
• Intrusion Detection: The ability to promiscuously monitor network traffic at discrete points within the Internet Edge, and to alarm and/or take action upon detecting suspect behavior that may threaten the enterprise network.
• Demilitarized Zones (DMZ): A basic security policy for enterprise networks is that internal network hosts should not be directly accessible from hosts on the Internet (as opposed to replies from Internet hosts to internally initiated sessions, which are statefully permitted). For those hosts, such as web servers, mail servers, and VPN devices, that must be directly accessible from the Internet, it is necessary to establish quasi-trusted network areas between, or adjacent to, the Internet and the internal enterprise network. Such DMZs allow internal hosts and Internet hosts to communicate with DMZ hosts, but the separate security policies between each area prevent direct communication originating from Internet hosts from reaching internal hosts.
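As an illustration of the RFC 2827/RFC 1918 anti-spoofing function described above, an ACL applied inbound on the Internet-facing interface might resemble the following sketch. Here 203.0.113.0/24 stands in for the enterprise's assigned public block, and the interface name is hypothetical.

```
! Deny inbound packets that claim RFC 1918 private sources
! or the enterprise's own public block as their source
access-list 110 deny ip 10.0.0.0 0.255.255.255 any
access-list 110 deny ip 172.16.0.0 0.15.255.255 any
access-list 110 deny ip 192.168.0.0 0.0.255.255 any
access-list 110 deny ip 203.0.113.0 0.0.0.255 any
access-list 110 permit ip any any
!
interface Serial0/0
 ip access-group 110 in
```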

Server Farms
Server farms located at the Internet Edge are characteristic of external facing services where the client base is the Internet at large. An Enterprise's external presence is typically off the DMZ and is protected by a security perimeter. Enterprises place servers that are accessed from the Internet in the DMZ. These servers offer the following services:
• DNS (Domain Name System)
• FTP (File Transfer Protocol)
• SMTP (Simple Mail Transfer Protocol)
• HTTP (Hypertext Transfer Protocol)

There are two main types of server farms found at the Internet edge: extranet/intranet and Internet. Although these server farms have different functions, they commonly share similar design principles and topologies.

Extranet/Intranet Server Farms
Large Enterprises typically deploy Extranet/Intranet server farms, as depicted in Figure 1-2. These server farms support the connectivity of partner or dedicated customers to an Enterprise infrastructure as well as the internal application connectivity for the corporate users. For example, in the financial industry, Enterprise customers rely on market data from many different providers. Although some of these connections are on dedicated circuits, some can be carried across the Internet. This requires another network topology to terminate and secure these connections. Once the connections are secured, the transactions can be routed to the internal network. It is also important to be able to perform order processing and inventory management in an Extranet environment. This is particularly useful in cases where an e-business partner on the front end processes the service or product request for an Enterprise. These partners generally have back-end or extranet connectivity to access the Enterprise’s database and remove the service or product from inventory. An Intranet DMZ design facilitates the connectivity of corporate users to internal applications and resources. This follows design principles similar to those of the Extranet design, but there is no partner connectivity. The users primarily access the intranet for internal applications only.

Figure 1-2 Extranet/Intranet Server Farm Design (Internet via SP1/SP2, with partner cloud connectivity)

Internet Server Farms
Internet server farms, shown in Figure 1-3, may have topologies similar to an extranet/intranet design, but they have completely different functions. In an Internet server farm design, the resources being made available are external DNS, FTP, and mail services, as well as front-end web services such as the Enterprise homepage. This requires a strict security posture, as these servers would be the first stage of a security attack on the network.


Figure 1-3 Internet Server Farm Design (artwork shows server farm connectivity to the Internet through SP1 and SP2)

Common Protocols
Common protocols and their associated ports related to Internet server farms are:
• DNS, which uses UDP port 53 for queries (and TCP port 53 for zone transfers)
• FTP, which uses TCP port 21 for the control channel and TCP port 20 for the data channel
• SMTP, which uses TCP port 25
• POP (Post Office Protocol) versions 2 and 3, which use TCP ports 109 and 110, respectively
• HTTP and similar protocols, which use TCP port 80

Although these protocols make up the majority of traffic within most Internet server farms, the access that must be permitted depends on the services and applications deployed.
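As a rough sketch, the service list above could translate into an inbound ACL at the DMZ edge. The server addresses, ACL number, and interface below are illustrative only and do not appear in the original design:

```
! Illustrative DMZ ACL permitting only the published services.
! The 192.0.2.x addresses stand in for the DMZ servers.
access-list 110 permit udp any host 192.0.2.10 eq 53      ! DNS queries
access-list 110 permit tcp any host 192.0.2.20 eq 21      ! FTP control channel
access-list 110 permit tcp any host 192.0.2.30 eq 25      ! SMTP
access-list 110 permit tcp any host 192.0.2.40 eq 80      ! HTTP
access-list 110 deny   ip any any                         ! implicit deny, shown for clarity
!
interface FastEthernet0/0
 description Inbound from the Internet edge
 ip access-group 110 in
```

Note that active-mode FTP also requires the TCP port 20 data channel back toward the client, which a stateful firewall handles automatically.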


Caveats
Common issues when deploying this architecture are the high-throughput NAT requirements and security within a partner network. The Enterprise customer is liable for partner interconnectivity; therefore, it is imperative that the Enterprise ensure that partners are unable to resolve or see each other's real addresses.

In addition, these architectures must account for the timeout values associated with some applications. It is important to define a topology that is sensitive to application timeout values and does not incur additional downtime for the application itself. For example, when implementing HSRP in a DMZ network, a substantial re-convergence can occur when preemption is enabled, causing TCP retransmission timeouts and, in turn, loss of application connectivity. Loss of application connectivity is unacceptable in certain environments, and network administrators must be aware of this behavior.

Redundancy
Redundancy is a necessity in all network architectures that require high availability. Redundancy includes not only device redundancy, but also route and ISP redundancy. A company's success on the Internet depends heavily on its ability to define and deploy a scalable, available network architecture that meets customer demands.

Device Redundancy
Keeping the network infrastructure up and running is of utmost importance in these Internet Edge topologies. Therefore, it is common practice to build device redundancy into the topology. You must account for the failure points that exist at the primary device layers, such as the CSU/DSU, power, cabling, and the line connectivity itself. Device redundancy pays off when the Enterprise is dual-homed, whether to multiple ISPs or to a single upstream provider; if the Enterprise runs on a single circuit or feed from a single ISP, then device redundancy provides no added benefit. Unforeseen software bugs can be a site destroyer as well. Development teams do their best to stay ahead of this curve, but nothing beats a hardened QA lab that regression-tests new routing code before it is inserted into the network. Staying up to date on incident mailers and bug tracking lists is also a good idea.

Site Availability and Route Redundancy
Site availability is defined as the continuity, or reachability, of the site or services offered by the network architecture, and it can be accomplished in many ways. Site availability is directly related to the redundancy set forth at the device level as well as at the route layer. For route-layer redundancy, you should define proper peering relationships with multiple providers or, when staying with a single provider, ensure that it offers circuit or Ethernet handoffs on completely separate hardware. To be specific, in a co-location situation, the handoffs must come from two different physical switches with different upstream routes. For terrestrial circuits, the demarcation points must be terminated on different rings or circuit switches. This type of redundancy offers a higher availability ratio in the event of unexpected failures.


Scalability
Scalability at the device layer in Internet Edge topologies is just as important as the ability to handle more traffic requests and higher throughput requirements. This is apparent in the BGP Internet routing tables, which grow on a monthly basis.

Memory Considerations
As mentioned above, a router with insufficient memory is a burden on any network topology. For example, routers today run the full BGP routing table, which has approximately 110,000 route entries. A router with 32 MB of memory cannot handle the average Internet routing table. This means that if a router terminates multiple BGP routing tables, as well as IGP routing tables and its own IOS, then the router must have at least 128 MB of memory to handle the load. Most ISP routers today run 256 MB of memory to keep up with these growing memory requirements.

Physical and Logical Scalability
There must be a common link between physical and logical scalability. Installing a router that merely meets today's throughput and physical connectivity requirements is not a best common practice: if a newly installed device needs to be upgraded quickly, proper provisioning did not take place. The upgrade charges and the revenue lost to scheduled outages alone outweigh the cost of provisioning a sufficient platform in the initial network design.

Device Provisioning
The discussion of physical scalability assumes that proper device provisioning has occurred. This means that when defining the Enterprise's Internet Edge topology, you should not overlook the possible growth of the web site or services offered. When defining the scope of an Enterprise's Internet Edge topology, make sure you define a physical architecture that scales to meet the needs of your applications and customer requirements.

Internet Edge Design Considerations
Autonomous System Considerations
When deploying BGP at the Internet Edge, the following two key functions of the protocol must be clearly understood:
• Internal BGP (IBGP)—The peering relationship and the associated information exchange between routers that are part of the same AS.
• External BGP (EBGP)—The peering relationship and information exchange between routers in different ASs.

Barriers to deployment can arise depending on the size of your organization and its ability to obtain an AS number from the American Registry for Internet Numbers (ARIN).


Address Allocation
Address allocation is a key consideration when designing and deploying the Internet Edge. The address space at the Internet Edge is used both by Enterprise clients accessing the Internet and by Internet clients accessing the Enterprise's publicly advertised services.

The demand for IP addresses from ISPs has exposed the fact that the current class-based routing system does not scale. This demand is driving the need for larger address spaces, such as IPv6 support, which increases the address size from 32 bits to 128 bits. Another solution is classless interdomain routing (CIDR), also known as supernetting. CIDR defines an IP address block in a routing table down to a prefix match. For example, the range represented as 192.168.0.0/16 encompasses 192.168.0.0/24, 192.168.1.0/24, and so on. This representation is referred to as an aggregate route.

When creating redundancy at the Internet Edge, you might assume that the architecture is contiguous. The problem of disparate IP address blocks does not become apparent until you try to introduce two different ISPs into one architecture: each ISP provides an address block to identify you within its network, and with today's IP address depletion problems, you are forced to define a non-contiguous Internet Edge network topology. ISPs can opt to give portions of their address blocks to Enterprises for Internet access purposes; the Enterprise is then identified to the Internet community as being on those network ranges. When a second ISP is involved, there are potential caveats related to which address identifies the Enterprise, who owns the address space, who advertises the address space, and what happens when the primary link goes away.
For example, if ISP A owns and advertises the address space and the link to ISP A fails, the Enterprise has a choice to make: either ISP B advertises the address space, or a different address range is used. This type of deployment causes future problems when you decide to build redundancy into your topology, or when you decide that you need to be free from reliance on a single ISP. Owning your own address space gives you the freedom to change ISPs in the future.
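As an illustration of advertising owned address space as a single CIDR aggregate, the sketch below reuses the document's 192.168.0.0/16 example prefix; the AS numbers and neighbor address are invented for the example:

```
! Illustrative aggregate advertisement from the Enterprise edge.
router bgp 64500
 neighbor 203.0.113.1 remote-as 64496       ! EBGP peering toward ISP A
 network 192.168.0.0 mask 255.255.0.0       ! advertise the /16 aggregate
!
! Anchor the aggregate in the routing table so it continues to be
! advertised even when individual subnets flap.
ip route 192.168.0.0 255.255.0.0 Null0
```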

Note

The testing portion of this document defines a best-practices scenario. In the event that this problem arises, the solution or best common practice will be documented in future releases.

NAT Considerations
When an Enterprise uses private IP addresses internally, those addresses must be translated into routable addresses in order to access the public network. The translation process is called network address translation (NAT), and it takes place at the Internet Edge. NAT is typically required when an Enterprise does not own a block of public addresses, or when it wishes to keep its internal IP address block, even a public one, hidden from the rest of the Internet.

NAT is also used to control how traffic traverses inbound and outbound paths at the Internet Edge. If a network administrator deploys a contiguous network segment for NAT at each egress point, then traffic that leaves at a given adjacency must return via the same route, which allows for load distribution in the future.

NAT at the edge of the Enterprise network serves many functions, including applying routable addresses for the Internet backbone, hiding internal address space for security reasons, and handling the increasingly common problem of receiving two different address blocks from your ISPs. All of these have become prevalent in Enterprise backbones, yet the main reason for using NAT remains preserving address space on the Internet.
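A minimal NAT sketch for this edge role, assuming a 10.0.0.0/8 block inside and an invented public pool; the interface names and addresses are illustrative:

```
! Illustrative NAT at the Internet Edge: translate internal
! 10.0.0.0/8 sources to a small public pool, with overload (PAT).
ip nat pool EDGE-POOL 203.0.113.10 203.0.113.20 netmask 255.255.255.0
access-list 10 permit 10.0.0.0 0.255.255.255
ip nat inside source list 10 pool EDGE-POOL overload
!
interface FastEthernet0/0
 description Inside (Enterprise)
 ip nat inside
interface Serial0/0
 description Outside (ISP)
 ip nat outside
```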


In response to the persistent IP address depletion problem, RFC 1918 was developed to define private address allocation. RFC 1918 defines a number of address blocks, referred to as private addresses, which are reserved for internal use. Many companies use the same IP addresses concurrently, because these blocks are not publicly routable on the Internet. The private networks are 10.0.0.0 through 10.255.255.255, 172.16.0.0 through 172.31.255.255, and 192.168.0.0 through 192.168.255.255. Additional information is available in RFC 1918 on the IETF website.
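Because these private ranges are never legitimate source addresses arriving from the Internet, a common companion practice (not described in the original text) is to filter them inbound at the edge; the ACL number and interface are illustrative:

```
! Illustrative inbound filter dropping packets with RFC 1918 sources.
access-list 120 deny   ip 10.0.0.0 0.255.255.255 any
access-list 120 deny   ip 172.16.0.0 0.15.255.255 any
access-list 120 deny   ip 192.168.0.0 0.0.255.255 any
access-list 120 permit ip any any
!
interface Serial0/0
 description Link from the ISP
 ip access-group 120 in
```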

Topology Considerations
Internet Edge solutions touch many different customer types and therefore can take many different forms, ranging from a remote office connection to a major ISP peering point. Maintaining common design principles allows you to carry these recommendations into almost all Internet Edge topologies.

Small Office Home Office Deployments
Although Internet Edge topologies are in every Internet-facing network, the scalability of these topologies is often different. For example, in a small office home office (SOHO) environment, the routing device at the Internet edge may be a DSL gateway, a cable gateway, or an ISDN router. Furthermore, these designs may not have redundancy principles applied. These environments are typically deployed in conjunction with VPN technologies. The ISP connection at many SOHOs is the transport mechanism for network connectivity between the remote SOHO office and the central office or Headquarters. A typical SOHO deployment is depicted in Figure 1-4.
Figure 1-4 SOHO VPN Network Topologies (artwork shows a single-box option, a 905 with a PIX 501, and a two-box option, an 806/1710, each reaching the head end across the cable backbone through a third-party cable modem. Variations: a VPN 3002 can be used in place of the PIX 501 if a firewall is not required; a 925 can be used in place of the 906.)

Small/Medium Business Deployments
Despite the differences in the actual device provisioning, the common topologies and design principles remain the same for small/medium businesses (SMB) and large Enterprises.


Single ISP Design
In the single ISP topology, the need for redundancy at the edge is moot: if the primary edge router fails, the Internet connection is down. Therefore, defining redundancy at the edge of the network has no beneficial effect unless the provider supplies two terrestrial circuits, as depicted in Figure 1-5.
Figure 1-5 Single ISP Design (artwork shows a single ISP environment: connectivity to the Internet through SP1, with an E-BGP instance toward the provider and an I-BGP instance between the edge routers)

Co-Location ISP Design
In an SMB co-location environment, redundancy is common. Even though there is a single provider, the ISP generally offers dual connections to the SMB. The typical deployment uses an HSRP default gateway upstream toward the provider network.
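An HSRP default-gateway pair of this kind might be sketched as follows; the addresses, group number, and priority are invented for the example. Note the earlier caveat that preemption can trigger a disruptive re-convergence:

```
! Illustrative HSRP configuration on the primary co-location router.
interface FastEthernet0/0
 ip address 203.0.113.2 255.255.255.0
 standby 1 ip 203.0.113.1        ! virtual gateway address shared with the peer
 standby 1 priority 110          ! higher than the peer's default of 100
 standby 1 preempt               ! reclaim the active role after recovery
```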


Enterprise Deployments
Enterprise deployments apply the same design principles to their architectures as the SMB. Some differences in circuit provisioning are probable in enterprise networks, but the design principles are nevertheless similar.

Dual ISP Design
A dual ISP design can be cumbersome if not properly provisioned. Deploying a dual ISP environment generally means introducing BGP into the network architecture, which can be done in many different ways. The Enterprise can choose the type of route updates it receives from each upstream provider: either partial routes or the full BGP routing table. The full routing-table update is more process-intensive and requires more memory in the router. Depending on the providers chosen, the Enterprise might opt for full BGP routing table updates so that the network has full knowledge of all upstream BGP routes, as well as the full routes of its IBGP peers. This option offers the best path selection based on AS-path length. Figure 1-6 illustrates an example of a dual ISP topology.
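A skeleton of such a dual-ISP BGP deployment is sketched below, using example private-use AS numbers and addresses (none are taken from the original document):

```
! Illustrative edge router in a dual-ISP design.
router bgp 64500
 neighbor 203.0.113.1 remote-as 64496       ! EBGP to ISP A (full table)
 neighbor 198.51.100.1 remote-as 64511      ! EBGP to ISP B (full table)
 neighbor 10.1.1.2 remote-as 64500          ! IBGP to the second edge router
 neighbor 10.1.1.2 next-hop-self            ! keep IBGP next hops reachable
 network 192.0.2.0 mask 255.255.255.0       ! the Enterprise's advertised block
```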


Figure 1-6 Dual ISP Topology (artwork shows a dual ISP environment: connectivity to the Internet through SP1 and SP2, with E-BGP instances toward the providers and an I-BGP instance between the edge routers)

Co-Location Design
The co-location design for the Enterprise is similar to that of the SMB; the difference is that a larger Enterprise usually has a second, first-tier provider in the topology. Along with the usual dual connectivity at the co-location facility, the Enterprise has another terrestrial circuit dropped into the cage for provider redundancy. This design also requires that BGP be activated between all internal edge routers. Figure 1-7 illustrates an example of a co-location topology.


Figure 1-7 Co-Location Design (artwork shows connectivity to the Internet through SP1 and SP2, with E-BGP instances toward the providers and an I-BGP instance between the internal edge routers)

Central Back-End Process Designs
Integration strategies involving a central back-end database process require the internal or back-end routers to run I-BGP between the peers to scale properly and route destined traffic back to the central site. Although this is possible, a more common design is to run dedicated private circuits with an interior gateway protocol (IGP) between these sites.

Summary
An examination of Internet Edge routing and scalability issues makes it apparent that proper planning and provisioning are necessary to create a topology and solution that is bulletproof, easily scaled, and easily managed.


Complete testing of some of the more common BGP attributes, as well as further testing of IGP and BGP integration, is scheduled by ESE. Future revisions of this document will contain the results of such testing.


Chapter 2: Internet Edge Security Design Principles

This chapter provides an overview of the basic principles involved in the design of Internet edge security. It discusses:
• Security Design Requirements
• Performance Considerations
• Security Considerations

Security Design Requirements
Network security is based on the following basic principles:
• Identity—The ability to challenge the credentials offered by a user or host and, based on their validity, determine which network resources they are authorized to access.
• Trust—The determination of whether information from another device should be accepted. Within network designs, trust is based on the inherent or explicitly constructed ability of the network to forward traffic between hosts.
• Enforcement—The ability to enforce security policy using four basic types of mechanisms: trust limiting/definition, monitoring capabilities, audits, and security management tools and procedures.
• Risk—The determination of the likelihood of the network being compromised. Network security is defined relative to the potential risk associated with both known and unknown threats to the network, from internal and external sources.
• Assessment—The continual evaluation of the validity and effectiveness of the security policy and its implementation. Network security is an ongoing process, and network security policies must undergo periodic review to determine how to improve their enforcement and to ensure their viability as the network grows and changes.

Because of the transitional nature of the Internet edge, which represents the outer perimeter of the enterprise, there is no area of network design in greater need of network security expertise.

Security Policy Definition
Security policy combines the five principles of security (Identity, Trust, Enforcement, Risk, Assessment) into a series of statements, which are used as guidance in developing and implementing a network security design.


For example, take the case of a network administrator who determines that, due to the lack of authentication security, Telnet sessions should be prohibited in any direction across the Internet edge. The administrator then crafts the following example security policy:

“Due to authentication security concerns, particularly the potential for username/password information to be captured by third parties in cleartext format, Telnet traffic (TCP port 23) will be blocked in all directions by the corporate Internet firewall. This policy includes session attempts initiated by any internal user. The use of non-standard TCP ports by Telnet applications is not an authorized alternative. Corporate users impacted by this policy should migrate affected applications to use the SSH protocol. Exceptions to this policy will require the CIO's written approval.”

Reviewing this example shows that it meets the principles of security defined above. Although the author defines enforcement using a firewall, they recognize that the use of non-standard TCP ports by Telnet applications is a potential risk. They may not be able to prevent users from running Telnet over TCP port 80, but by indicating that this is not authorized, they make known the possibility of corrective action if non-standard Telnet usage is detected.

This section discusses the elements that embody effective security policies and the security mechanisms that can be deployed at the Internet edge, including:
• Host Addressing
• Application Definition
• Usage Guidelines
• Topology/Trust Model
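The example Telnet policy could be enforced at a router boundary roughly as follows; the ACL number and interface are illustrative, and a PIX firewall would express the same rule in its own syntax:

```
! Illustrative enforcement of the example Telnet policy.
access-list 130 deny   tcp any any eq 23    ! block Telnet on its standard port
access-list 130 permit ip any any           ! all other traffic unaffected
!
interface Serial0/0
 description Internet edge link
 ip access-group 130 in
 ip access-group 130 out
```

As the policy itself acknowledges, this blocks only the standard port; Telnet over non-standard ports must be handled through monitoring and corrective action.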

Figure 2-1 provides a view of design factors and threat factors that are key elements in developing policies.
Figure 2-1 Security Policy Elements (artwork shows the design factors host addressing, application definition, usage guidelines, and topology/trust model, and the threat factors reconnaissance, vulnerabilities, denial of service, and misuse, all feeding into the policy)

Host Addressing
One of the basic tenets of Cisco's SAFE blueprint for network security is the use of modular and hierarchical network designs that allow segregation of network hosts based on organizational and functional boundaries. To that end, structured host IP addressing and subnet design can be among the most powerful tools in developing and enforcing network security policies.


Tip

For more information about SAFE, see the white paper titled SAFE: A Security Blueprint for Enterprise Networks, located at: http://www.cisco.com/warp/public/cc/so/cuso/epso/sqfr/safe_wp.htm

Host Addressing Guidelines
When developing security policies, use the following IP addressing guidelines:
1. Wherever possible, hosts of dissimilar organizations or network functionality should not co-exist within the same subnet. For example, servers and user workstations should not be placed within the same subnet boundary, as the policy rules written for each would be significantly different.

If a server is placed in the same subnet as user workstations, a Layer 2 (Ethernet MAC, etc.) trust association exists between that server and its clients. Security between these clients and the server is then based purely on host configuration; the network does not participate. However, if the user workstations are placed in a different IP subnet, a Layer 3+ (IP, TCP, UDP, etc.) device exists between the server and the clients, and some level of policy enforcement can be applied at the network level.


Figure 2-2 Bounded and Unbounded Hosts Within a Common Subnet (artwork shows users B through E (.129 to .132), server F (.2), and printer G (.3) sharing subnet 192.168.1.0/24 behind router A (.1); a vulnerability exploit arriving from external networks compromises server F, which then launches secondary exploits within the subnet)

In Figure 2-2, the users, the printer, and the file server exist within a common subnet, 192.168.1.0/24. Server F is vulnerable to external security exploits. If compromised, Server F could be used to launch secondary attacks against other hosts within the subnet. Because trust exists at Layer 2 (no Layer 3 boundary separates these hosts), the attacks can be Layer 2 or Layer 3 in nature. Examples of Layer 2 attacks include ARP spoofing and MAC flooding, either of which would allow the compromised Server F to observe and collect traffic flowing across the entire segment. Even if router A were shielding the user hosts and the printer from potential IP vulnerabilities via ACLs, those vulnerabilities could still be launched from the compromised server. The reverse is true as well: a compromised user host or low-level server (for example, a printer) can potentially be used to attack the server and/or other hosts within the subnet.


Figure 2-3 Bounded and Unbounded Hosts on Separate Subnets (artwork shows the same hosts segregated: users B through E (.130 to .133) on 192.168.1.128/25, server F (.2) on 192.168.1.0/26, and printer G (.3) on 192.168.1.64/26, with router A providing each subnet's gateway; the vulnerability exploit still reaches server F, but the secondary exploits fail)

In Figure 2-3, the same network exists except that the servers and printers are segregated on separate subnets. The actual implementation uses VLANs to provide three separate logical segments on the edge switch and requires a single physical interface on the router. By using ACLs to filter IP traffic, you significantly mitigate the probability of secondary exploitations between bounded and unbounded hosts. Use ACLs between:
– The users and the server, limited to file and print server protocols
– The server and the printer (assuming the server provides print spooling services), limited to print server protocols
– The users and the printer (assuming the printer provides its own print spooling services), limited to print server protocols
2. Within a subnet's address design, hosts of differing network properties should be grouped along bit-wise boundaries so that they can be represented by a single access control list (ACL) statement.

By design, each subnet includes network devices as well as various hosts; this guideline assumes that every IP subnet has at least one device that acts as a default gateway (and in many cases two such devices exist). It may also be more administratively efficient to have hosts of differing network properties share a common set of Layer 3 interfaces on network devices. In such cases, hosts of similar properties within a subnet should be grouped within bit-wise boundaries in that subnet.

For example, consider a 254-host subnet that is required to support two IP gateways, 100 DHCP-configured hosts (50 workstations and 50 IP phones), and 4 hosts with fixed IP addresses (for example, teleconferencing stations and printers). One possible bit-wise addressing solution, assuming a network IP address of 192.168.1.0, could include:


– An IP address range of 192.168.1.1 to 192.168.1.6 (falling within the bit-wise range of 192.168.1.0/29) for use by the two gateways (.2 and .3), an explicit HSRP address for the gateways to share (.1), and unused addresses available for management of other network devices (such as switches) within the subnet.
– An IP address range of 192.168.1.8 to 192.168.1.14 (falling within the bit-wise range of 192.168.1.8/29) for use by the fixed IP address hosts, with two additional addresses available for growth.
– An IP address range of 192.168.1.129 to 192.168.1.254 (falling within the bit-wise range of 192.168.1.128/25) for use by DHCP hosts. If the DHCP servers can differentiate pools by requestors' MAC prefixes, two ranges of 192.168.1.128/26 and 192.168.1.192/26 could be used for further segmentation of IP phones and workstations.

Note

Although the first and last addresses of these bit-wise ranges can be assigned to hosts (only the first and last IP addresses of the subnet itself, the network and broadcast addresses, are reserved), you should avoid using them. This allows for ease of migration should there be a need to break out of these ranges in the future.
Figure 2-4 Class C Network Space

Mask bits   Subnets x hosts   Subnet mask
24 bits     1 x 254           255.255.255.0
25 bits     2 x 126           255.255.255.128
26 bits     4 x 62            255.255.255.192
27 bits     8 x 30            255.255.255.224
28 bits     16 x 14           255.255.255.240
29 bits     32 x 6            255.255.255.248
30 bits     64 x 2            255.255.255.252

Figure 2-4 provides a breakdown of a Class C (254-host, 24-bit mask) network space. Note that this diagram assumes classless routing. (Older classful routing structures disregard the first and last subnet of a given classful space, which makes 25-bit subnetting unavailable for a Class C address space.) By addressing like-function hosts within a subnet along bit-wise boundaries, it is possible to craft ACLs in a meaningful fashion. Using the bit-wise addressing solution above, apply the following policy rules:

• Since all network devices use other interfaces for network management purposes, no traffic should be allowed to access these devices from within the subnet.


• The four fixed hosts communicate externally only with hosts on the 192.168.100.0/24 subnet.
• Users are not allowed to use Telnet for any reason.

The following is the resulting IOS configuration:

interface FastEthernet0/0
 ip address 192.168.1.1 255.255.255.0
 ip access-group 101 in
!
access-list 101 deny   ip any 192.168.1.0 0.0.0.7
access-list 101 permit ip 192.168.1.8 0.0.0.7 192.168.100.0 0.0.0.255
access-list 101 deny   ip 192.168.1.8 0.0.0.7 any
access-list 101 deny   tcp 192.168.1.128 0.0.0.127 any eq telnet
access-list 101 permit ip 192.168.1.0 0.0.0.255 any

While discipline in developing IP addressing conventions can be readily achieved in new network designs, one of the major obstacles that network security administrators face is inconsistency of existing host IP addressing. One solution is to use available private IP address spaces (as defined by the IETF in RFC 1918) to establish 'green-field' address spaces, into which existing hosts are migrated to achieve a desired IP address structure. However, private IP address spaces require the use of network address translation (NAT) devices to allow the hosts to communicate with the Internet.

Application Definition
The application definition process for the Internet edge appears straightforward. Many applications are relatively simple and make use of well-known TCP/UDP ports or IP protocols; for example, you expect HTTP traffic to use TCP/80. Others, particularly multimedia applications, make use of a complex combination of ports and protocols. Examples are IPSec, which makes use of UDP/500 for Internet Key Exchange (IKE), IP protocol 50 for Encapsulating Security Payload (ESP), and IP protocol 51 for Authentication Header (AH); and H.323, which includes dynamically created RTP streams using high UDP port numbers.

Defining applications is the process of researching and observing network applications to determine the IP networking environment that must exist for the application to function properly. The end result of this process gives administrators the information they need to correctly configure their network security enforcement mechanisms for the supported IP network applications, without impacting the productivity of those applications.

Unfortunately, many commercial applications do not provide the level of IP information necessary to develop a clear application definition. Therefore, administrators are often required to test applications and solutions within a test network to develop the full application definition. A full application definition includes:
• A description of the IP protocols used, including a clean capture of a typical session delineating their use.
• Reports from auditing tools identifying open ports and protocols on workstations, servers, and other network devices used to host the proposed solution. The purpose of each active port and protocol should be clearly identified, and the results used to disable unnecessary services (to minimize risk).
• An operation, maintenance, and management plan for the proposed solution. Network management protocols must be included in the protocol definitions previously stated.

For example, a network administrator is tasked with setting up a basic public web server. The resulting application definition is shown in Figure 2-5.

Data Center Networking: Internet Edge Design Architectures 956484

2-7

Chapter 2 Security Design Requirements

Internet Edge Security Design Principles

Figure 2-5    Basic HTTP Application

(Figure 2-5 shows web client B on the enterprise network and public web server A. The request flow is Src IP: B, Dst IP: A, TCP, Src Port: high, Dst Port: 80; the response flow is Src IP: A, Dst IP: B, TCP, Src Port: 80, Dst Port: the same high port.)
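The port pattern in Figure 2-5 can be demonstrated with a short script. This is an illustrative sketch only; a loopback socket pair stands in for client B and web server A, and shows that the client side of a TCP session is assigned a high (ephemeral) source port:

```python
import socket

# "Server A": listen on a port the OS picks for us on loopback.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
server_port = server.getsockname()[1]

# "Client B": connect to the server; the OS assigns the source port.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server_port))
client_port = client.getsockname()[1]  # the ephemeral source port

# Ephemeral source ports are allocated above the well-known range (0-1023).
assert client_port > 1023

client.close()
server.close()
```

A firewall rule permitting this application therefore matches on the fixed destination port (80) while allowing any high source port on the client side.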

Note

Although the full TCP handshake is not shown, it is assumed that this session fully conforms to the requirements for a normal TCP session.

In addition to the basic function shown in Figure 2-5, the application definition should also include the following information flows, which are required for proper operation of the solution:
• The DNS request/reply originating from user B's network to the network's DNS service.
• Flows of additional applications required to monitor and maintain the web server.
• Flows supporting back-end applications that tie into the web-server front end.
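A full application definition like this one can be captured in a small, machine-readable form that enforcement rules are then derived from. The sketch below is illustrative only (the field names and flow entries are assumptions, not part of this guide); it models the web-server example along with its DNS and maintenance flows:

```python
# Illustrative application-definition record for the basic public web
# server of Figure 2-5. Field names and the SSH maintenance flow are
# hypothetical, not taken from this guide.
APP_DEFINITION = {
    "name": "public-web-server",
    "flows": [
        # (source, destination, protocol, src_port, dst_port, purpose)
        ("client-net", "web-server", "tcp", "high", 80, "HTTP request/response"),
        ("client-net", "dns-server", "udp", "high", 53, "hostname resolution"),
        ("mgmt-net", "web-server", "tcp", "high", 22, "maintenance (assumed SSH)"),
    ],
}

def permitted(proto: str, dst_port: int) -> bool:
    """Return True if a flow toward a defined host matches the definition."""
    return any(f[2] == proto and f[4] == dst_port
               for f in APP_DEFINITION["flows"])
```

A security administrator would permit only traffic matching these tuples, e.g. `permitted("tcp", 80)` is true while `permitted("tcp", 23)` is false, so Telnet toward the server would be denied.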

Note

Enterprises typically implement a wide variety of applications for employee productivity. Therefore, security administrators should adopt the practice of participating in the planning and implementation of those applications and explicitly defining how they are supported within the enterprise network.

Usage Guidelines
Although every Internet edge offers the ability to communicate with other hosts on the Internet, many sites may want to restrict those communications to productive use. By publishing Internet usage guidelines, administrators provide users with a way to self-regulate their Internet usage, thus helping the site maintain maximal utilization of limited Internet access resources. Define usage criteria as enforceable guidelines. This is important for a number of reasons:

• Without the ability for the Internet edge to enforce usage guidelines, users may choose to maintain or ignore such guidelines when convenient. For example, a departing employee may choose to divulge sensitive company information. Enforcement is a basic principle of security, and the best security policies have defined enforcement mechanisms to deter malicious activity.
• Many hosts, such as Internet-accessible servers, must operate autonomously. For these hosts, usage guidelines must be enforceable in an equally autonomous fashion at the Internet edge.
• Users and hosts on the Internet are not subject to defined usage guidelines without enforceability. Although an application definition provides most of this usage structure, the Internet edge can offer additional protection for hosts accessing and accessible by the Internet.


One of the significant challenges of network security design is discriminating between malicious or non-productive traffic and legitimate traffic based on content. Unfortunately, attacks based on malicious code (viruses, worms, malformed requests) appear innocuously as e-mail attachments, HTTP requests, and potentially other traffic types that are allowed to flow through the perimeter of the network. While the use of third-party application filters (e-mail scanning, URL filtering, and so on) and content networking engines aids in mitigating such attacks, intrusion detection systems and anomalous data analysis capabilities are needed to effectively detect and thwart such threats.

Topology/Trust Model
Perhaps the most significant design factor in developing security policies is trust within the network design. Host reachability and the success of networked (and Internet) applications are based on the existence of a trusted forwarding path between hosts.
Figure 2-6    Levels of Trust in Network Designs

(Figure 2-6 depicts the four levels of trust: Subnet, Internet, Application/Session, and Operational.)

Figure 2-6 illustrates the four basic levels of trust within network designs:
1. Subnet—Trust between hosts potentially reachable within the same Layer 2 media.
2. Internet—Trust between hosts on different subnets, potentially reachable via IP forwarding mechanisms.
3. Application/Session—Trust between hosts on the same or different IP subnets, based on operating a common set of network applications or protocols.
4. Operational—Trust based on the user's judgement and processes in accessing the network, as well as the ability to manage network functions.

Within a subnet, trust between hosts is based on Layer 2 mechanisms. In nearly all cases, Ethernet-based Layer 2 rules apply, due to the preponderance of that media within network designs. Hosts respond to both unique (unicast) and shared (broadcast/multicast) MAC addresses. The ability to forward traffic between hosts within the same subnet is based upon knowledge of the MAC address.

With shared media (hubs), subnets represent pure trust zones within networks, because each host sees all traffic within the subnet. With switched media, however, port-level bridging functions provide basic discrimination of unicast Layer 2 traffic, so that hosts connected to one interface do not see unicast traffic destined to hosts on other interfaces. Switches can provide additional Layer 2 traffic discrimination mechanisms, both standards-based and proprietary, including:

• VLAN tagging (802.1Q and ISL), which can provide virtual segmentation of traffic across common physical media.


• Port-level security mechanisms, which control the learning and inclusion of MAC addresses, as well as port-level traffic-rate limitations.
• Bridge-level filtering mechanisms, including MAC filtering, multicast support, and Layer 2-based ACLs (VACLs).
• MAC-independent Layer 2 forwarding rules, such as the use of private VLANs on Catalyst switches, which can be used to fashion port-level forwarding rules based on the intended function of a port rather than the specific MAC addresses of hosts accessible via that port.

The importance of Layer 2 trust may not seem relevant to the functions of the Internet edge, which by its nature deals predominantly with Internet, Application/Session, and Operational trust issues. However, network security designers must be aware of the consequences of a security breach, and of the ability of intruders to make use of existing trust (including Layer 2 trust) to perform secondary exploitations on other hosts within the enterprise. For example, if Internet-accessible servers of different functions exist on a common DMZ subnet, the compromise of one server could result in the secondary exploitation, and compromise, of the other servers. If one of these compromised servers has a trust relationship with an internal host, that host could also be compromised. In designing the Internet edge, administrators must not fall into the trap of relying on perimeter security mechanisms alone, and must consider the internal threat posed by potentially compromised hosts.

Internet-level trust is based on the reachability of hosts by means of their IP addresses. The foremost requirement here is a viable IP forwarding path between two hosts. This is a concern associated with routing protocols, static routes, and other policy-based forwarding mechanisms. The ability to limit Internet-level trust is based on two mechanisms:
• The ability to limit IP forwarding by discriminatingly disabling a forwarding path. There are a number of route poisoning mechanisms available to do this.
• The ability to limit IP forwarding based on source or destination IP address filtering. In addition to strict address and network definition by actual IP address, advanced mechanisms are available to limit access by hostname resolution.

In addition to unicast IP traffic, Internet trust must deal with broadcast and multicast traffic. With regard to IP broadcast, directed broadcast to other IP subnets should be prohibited. While broadcasts have legitimate local significance in IP-to-Layer 2 address resolution, inter-subnet broadcast applications are virtually non-existent and represent a significant potential for Denial-of-Service (DoS) attacks. The many-to-one nature of multicast traffic requires a high trust model, and multicast traffic transiting the Internet edge is usually encapsulated within unicast traffic (GRE), in order to allow security to be established bi-directionally between multicast routers (but not between multicast sources and receivers).

Another aspect of Internet trust is associated with 1:1 NAT. NAT is a useful (and often necessary) tool to provide forwarding to internal IP hosts, particularly those using private IP addressing (RFC 1918). NAT devices are most often located within the Internet edge.
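The 1:1 (static) NAT behavior described here amounts to a fixed, bidirectional translation table. The sketch below is a toy model of that idea, not a description of any particular NAT implementation; the addresses are illustrative (RFC 1918 private space mapped to documentation addresses):

```python
# Minimal sketch of a static (1:1) NAT table mapping RFC 1918 private
# addresses to globally routable addresses. All addresses illustrative.
STATIC_NAT = {
    "10.1.1.10": "192.0.2.10",   # internal web server -> public address
    "10.1.1.11": "192.0.2.11",
}
REVERSE_NAT = {pub: priv for priv, pub in STATIC_NAT.items()}

def translate_outbound(src_ip: str) -> str:
    """Rewrite an internal source address on its way to the Internet."""
    return STATIC_NAT.get(src_ip, src_ip)

def translate_inbound(dst_ip: str) -> str:
    """Rewrite a public destination address back to the internal host."""
    return REVERSE_NAT.get(dst_ip, dst_ip)
```

Because the mapping is one-to-one and static, inbound connections to the public address can reach the internal host, which is why such mappings are typically reserved for servers placed behind the Internet edge.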

Stateful Traffic Inspection
The firewalls used in this design guide are stateful inspection firewalls. This means that for TCP-based traffic initiated on trusted interfaces, the firewall tracks the state of those connections and allows correctly formatted TCP responses from remote hosts to pass through to the internal hosts that originated the sessions. As many common Internet applications are TCP-based, the use of stateful inspection firewalls provides effective protection while maintaining high performance.

However, it is important not to become overly reliant on the defensive mechanisms offered by stateful inspection firewalls. Although TCP-based traffic can be monitored statefully, UDP- and ICMP-based traffic is connectionless and cannot be tracked, and other IP protocols (GRE, IPSec ESP and AH) are not only stateless, but may impact the ability to implement address translation mechanisms. Stateful


inspection firewalls rely on simple access control to limit these traffic types, offering only timers for responses matching internally originated requests. For these reasons, Cisco recommends that you strictly limit ICMP, UDP, and other IP protocol firewall rules. The obvious exception is DNS traffic (UDP/53), which is generally required for hostname resolution of Internet hosts.

The other potential drawback of stateful inspection firewalls is their limited ability to detect application-level vulnerabilities. For example, if a firewall rule allows IPSec traffic to pass through (IP protocol 50 for ESP and 51 for AH, plus UDP/500 for IKE), the firewall cannot peek into the tunneled data stream to determine whether malicious activity is occurring. Firewalls can make use of internal or third-party application filters that help detect and mitigate application-level attacks (such as URL filters and SMTP content inspection engines). Cisco firewalls also make use of protocol fixups, which can monitor and predict the behavior of specific applications. This is useful for complex applications that make use of additional TCP/UDP ports for multi-session data transfers (such as H.323 for Voice over IP), and allows administrators to support these applications without opening large ranges of ports.

Stateful inspection firewalls provide the ability to limit permitted traffic to only that required for supported network applications. While deploying firewalls is no substitute for effective edge device and internal network security, they are the cornerstone of Internet edge design.
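The core of stateful inspection can be illustrated with a toy connection table: sessions initiated from the trusted interface create state, and only return traffic matching that state is admitted. This sketch models the general idea only; it is not how any particular firewall is implemented, and the addresses and ports are illustrative:

```python
# Toy model of stateful TCP inspection. Connections initiated from the
# trusted (inside) interface create state; return traffic is permitted
# only if it matches an existing entry. Purely illustrative.
class StatefulFirewall:
    def __init__(self):
        # (inside_ip, inside_port, remote_ip, remote_port)
        self.connections = set()

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        """Inside host opens a session; record its state."""
        self.connections.add((src_ip, src_port, dst_ip, dst_port))

    def inbound_allowed(self, src_ip, src_port, dst_ip, dst_port):
        """Permit a reply only if it matches a tracked connection."""
        return (dst_ip, dst_port, src_ip, src_port) in self.connections

fw = StatefulFirewall()
fw.outbound("10.1.1.5", 49152, "198.51.100.7", 80)  # inside -> web server
assert fw.inbound_allowed("198.51.100.7", 80, "10.1.1.5", 49152)     # reply
assert not fw.inbound_allowed("203.0.113.9", 80, "10.1.1.5", 49152)  # unsolicited
```

Note what the model omits: real firewalls also validate TCP flags and sequence numbers and age entries out with idle timers, and, as the text notes, connectionless UDP/ICMP traffic cannot be tracked this way at all.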

Engine Performance Considerations
When placing a firewall in a security design, it is important to consider the firewall's performance, as this determines how well your design withstands DoS attacks. In addition to forwarding IP traffic, stateful inspection firewalls must maintain connection tables. Factors such as connection rates, the number of total and embryonic (half-open) connections allowed, and the maximum time that state is maintained on an idle connection directly impact the ability of the firewall to deal with periods of heavy demand. For more information, see Performance Considerations, page 2-15.

Resiliency
When developing resiliency into Internet edge designs, consider the following:
• How quickly can a fault be detected and the backup, or standby, unit take up the load?
• Can statefulness be maintained in recovering from a fault condition?
• How does the failed unit impact unaffected upstream/downstream devices?
• How are administrators advised of the failed condition and deployment of resiliency mechanisms?

Cisco PIX firewalls provide failover mechanisms, including stateful failover capabilities, which can be used to develop resilient Internet edge designs. Routers that use the Cisco IOS software Firewall Feature Set do not offer explicit firewall failover mechanisms, but they can make use of router resiliency mechanisms, such as HSRP/VRRP, and routing protocol configurations to provide basic failover capabilities.

In addition to firewall resiliency, the basic routers and switches used in the design can also be deployed in a resilient fashion. For Layer 2 resiliency, it is important to avoid relying on the spanning-tree protocol (IEEE 802.1D loop detection), as its reconvergence time precludes the ability to maintain stateful failover mechanisms. Although faster Layer 2 convergence mechanisms exist and continue to emerge, their use must be fully tested against the Layer 3 mechanisms (router/firewall) required to maintain statefulness.


Intrusion Detection
At the Internet edge, intrusion detection capabilities play an active role in defending the enterprise from Internet-based attacks. It is not the intention of this design guide to examine the intricacies of intrusion detection system (IDS) design, deployment, and management. However, it is important to touch on the fact that Internet edge designs offer the ability to employ effective intrusion detection capabilities. Generally, there are three basic intrusion detection systems deployed at the Internet edge:
1. Network-Based Intrusion Detection (NIDS)
2. Host-Based Intrusion Detection (HIDS)
3. Variance-Based Capture Systems (Honey Pots)

Additional or future systems (such as anomaly detection, behavior pattern matching, identity-based profiling) can also participate in the designs offered in this guide.

Network-Based Intrusion Detection (NIDS)
NIDS capabilities are provided in two ways:
1. The intrusion detection capabilities inherently available within the Layer 3 (router/firewall) devices in the design.
2. The use of external NIDS sensors, which promiscuously monitor IP traffic at discrete points within the design.

The primary responsibility of firewalls and routers is to forward IP traffic, not to store or buffer it for signature analysis. Therefore, the NIDS capability offered within the PIX firewall and IOS Firewall Feature Set is limited to approximately 60 exploit signatures, which can be applied to streaming data (attacks that can be detected in one to a few consecutive packets). Also, the use of these mechanisms can impact the overall performance of these units. However, this embedded capability may be useful in smaller Internet edge designs, where a full-scale IDS is not cost-justified.

The use of external IDS sensors assumes the ability to attach the sensor to the network in a promiscuous fashion. There are often philosophical discussions among network security experts as to which side of the firewall the sensor should be placed on (with the ultimate conclusion that you should have a sensor on each firewall interface). That aside, it is important to understand the limitations of external sensors:
• External sensors work best on shared media (hubs), because their half-duplex operation mitigates sensor overrun issues.
• When using a switched port analyzer (SPAN) port to monitor a switch port, establishing a full-duplex SPAN can result in up to 50% packet loss on the NIDS sensor as the monitored port reaches full load. This is because bi-directional traffic is being offered to a SPAN port that can only uni-directionally forward traffic to the sensor.
• When monitoring multiple ports (as with a VLAN) or bi-directional traffic on a single port, buffer overruns on the SPAN port or sensor interface can result in lost packets.
• Although sensor tuning can help reduce false positive indications, it does not help with packet loss on external NIDS sensors. The exception is the use of VACLs on a Catalyst 6000/6500 series switch, which can filter which packets are copied (not SPANned) to a NIDS sensor port (usually an IDSM that resides within the switch itself).

If NIDS is to be deployed within Internet edge designs, hubs, SPAN-capable switches, or Catalyst 6000/6500 switches need to be included to allow NIDS sensor connectivity.


Host-Based Intrusion Detection (HIDS)
A general security policy recommended for Internet edge design is that no Internet-originated traffic should pass directly to the interior of an enterprise network. Therefore, hosts that must be accessible from the Internet should be placed in neutral zones, or DMZs. Because DMZs are part of Internet edge design, provisions should be made to include HIDS on Internet-accessible hosts. This provides a means of detecting application-level and operating system malicious activity on such hosts. As HIDS is deployed on edge hosts, the HIDS selection depends on the operating systems running on the protected hosts. For example, the Cisco Host Intrusion Detection System (powered by Entercept) provides support for Microsoft Windows 2000/XP and Sun Solaris based operating systems, while there are numerous open-source solutions for the Linux environment.

HIDS generally differs from personal firewalls. HIDS systems harden the behavior of the underlying OS and applications, while personal firewalls focus on the behavior of the host's IP network layer. HIDS is highly effective on bounded hosts such as servers, where the host is expected to receive IP traffic in support of network applications and requires protection at the application layer. Personal firewalls are most effectively deployed on unbounded user hosts, which should not receive unsolicited IP traffic. Laptops and other user systems that use the Internet as transport benefit from the deployment of personal firewalls.

Variance-Based Capture Systems (Honey Pots)
Variance-based capture systems, or honey pots, deploy decoy systems on otherwise unused enterprise IP addresses that are accessible from the Internet. Honey pots provide a number of advantages. A honey pot can obscure more valuable hosts by providing additional targets detectable by ping sweeps and other scanning mechanisms. Honey-pot systems most often respond to such activity, offering the attacker easy vantage points in the hope of drawing them away from true application hosts. Honey pots also provide the opportunity to monitor and analyze specific attacks; this information can then be used to strengthen existing defenses. Finally, honey pots are useful in collecting evidence for law-enforcement efforts.

Caution

A word of warning on the use of honey pots: as their nickname suggests, they are effective in attracting potential attackers toward your network. With regard to obscuring meaningful hosts, experienced attackers know to avoid obvious targets in favor of those that do not respond to basic scans. Honey pots should only be deployed if the enterprise is committed to full-time expert analysis of their data.

IDS Implementation And Performance Considerations
There are five major factors to consider when determining IDS implementation requirements:
1. Optimal sensor placement
2. Sensor performance characteristics
3. Limiting false positive indications
4. Frequency of signature updates, and the ability to develop custom signatures
5. IDS event analysis and reporting


HIDS placement is dependent on the placement of the protected host; optimal sensor placement really concerns the deployment of NIDS sensors. There are three areas of NIDS sensor placement to consider:

• Perimeter Exterior—Placing the NIDS sensor on the segment immediately outside the firewall. The principal benefit of this placement is that the sensor has a full view of all attacks against the enterprise. The principal disadvantage is that this placement does not filter out those attacks that would be blocked by the network perimeter. A growing issue with this placement is the use of broadband and/or VPN encapsulation of traffic terminating on the firewall; encapsulation may shield attacks from detection.
• Perimeter Interior—Placing the NIDS sensor on the segment immediately inside of a firewall (on either an inside or a DMZ interface). The principal benefit of this placement is the focus placed on attacks that have passed through perimeter defenses. The disadvantage is that this placement is not effective in detecting scans and DoS attacks, which are stopped by the firewall.
• Enterprise Interior—NIDS deployment within the interior of enterprise networks is useful in protecting critical network assets. The problem is that the performance of LAN backbones often exceeds the performance capabilities of NIDS sensors, resulting in sensor overrun. Thus, internal sensors must be properly tuned to focus on analyzing those protocols of importance to the protected systems.

Sensor performance is limited by the throughput rate supported by the sensing, or promiscuous, interface. For example, if a NIDS sensor has a Fast Ethernet interface, it can theoretically receive and process 100 Mbps of IP traffic. However, if the source of the traffic being analyzed is a switch SPAN port monitoring both the send and receive sides of another Fast Ethernet port, sensor overrun can result, with potentially 200 Mbps of traffic offered to a device capable of receiving only 100 Mbps. If the SPAN port is monitoring multiple ports, for example a VLAN, sensor overrun is due to buffer overruns on the SPAN port, in addition to the bandwidth mismatch.

Proper sensor placement is a critical factor in preventing sensor overrun. For example, while the core switches of an enterprise network appear to be an excellent placement for a NIDS sensor, because they are central to the IP traffic flowing across the enterprise, the bandwidth supported across the core switches would almost certainly exceed the capabilities of available NIDS sensors. However, placement of NIDS sensors on the edge switches supporting mission-critical server connectivity may not only offer a more scalable placement, but also allow upstream routers to filter the traffic destined for the servers down to only that pertinent to their operation, which aids in reducing the number of false positive indications.

You can also use sensor tuning to reduce the number of false positive indications provided by IDS sensors. Tuning IDS sensors generally includes the following tasks:
• Based on your supported application definitions, prioritize those signatures with direct relevance to protecting those applications and supporting network assets.
• Filter out network management traffic sourced from legitimate sources that would otherwise falsely trigger attack signatures.
• Analyze IDS alarms and investigate their causes. If the traffic is legitimate and predictable behavior, create filters to eliminate future false positive indications.
• Consider filtering traffic that may be sourced from actual attacks but is known to be blocked by your network defenses. Such attacks can be used to 'dazzle' a sensor with large numbers of false positives, which, due to the resulting sensor-overrun condition, blind the sensor to the actual attack.
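The sensor-overrun arithmetic described in this section is easy to verify. The sketch below models the bandwidth mismatch for a full-duplex SPAN of a fully loaded Fast Ethernet port feeding a 100 Mbps sensor interface; the function itself is illustrative, not part of any SPAN implementation:

```python
def span_loss_fraction(monitored_tx_mbps: float,
                       monitored_rx_mbps: float,
                       sensor_capacity_mbps: float) -> float:
    """Fraction of mirrored traffic the sensor cannot receive when a
    full-duplex SPAN exceeds the sensor interface's capacity."""
    offered = monitored_tx_mbps + monitored_rx_mbps  # both directions mirrored
    if offered <= sensor_capacity_mbps:
        return 0.0
    return (offered - sensor_capacity_mbps) / offered

# A fully loaded Fast Ethernet port (100 Mbps each direction) mirrored
# to a 100 Mbps sensor: 200 Mbps offered, 100 Mbps deliverable -> 50% loss.
assert span_loss_fraction(100, 100, 100) == 0.5
```

The same arithmetic explains why monitoring a whole VLAN is worse still: the offered load is the sum across all monitored ports, so loss sets in well before any single port is saturated.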

An effectively deployed IDS has the ability to be updated with new and revised signatures from the IDS vendor, as well as with custom signatures tailored to the enterprise's specific requirements. The ability to develop custom signatures allows administrators to respond to newly learned threats more rapidly than vendor signature updates alone would allow.


The deployment of IDS within enterprise networks results in a significant amount of raw sensor data, possibly from multiple sensors (both HIDS and NIDS). This requires the inclusion of event collection, analysis, and reporting tools as part of the IDS implementation.

Performance Considerations
In the Internet edge, performance is determined by the following:
• Scalability Requirements
• Asymmetry Concerns

Scalability Requirements
Generally, the scalability and performance of the Internet edge is based on the following factors:

• Bandwidth—The maximum available throughput supported by the Internet edge. Bandwidth measurements are broken down into two significant values:
  – Session Bandwidth—Defined by the maximum available throughput supported by individual elements. Available session bandwidth is based on the limiting value that a single session's packets incur while traversing the Internet edge.
  – Aggregate Bandwidth—Defined by the maximum available throughput supported by multiple elements. While a single session may be limited by the throughput value across a single path, multiple active paths allow a higher aggregate throughput across multiple sessions.
• Connection Rate—A rate measured in connections per second. This value is a measure of the scalability an Internet edge has in supporting user populations. Connection rate measurements are broken down into two significant values:
  – Steady-State Connection Rate—The number of connections per second supported across the Internet edge on a continual basis.
  – Maximum Connection Rate—The maximum number of connections per second supported across the Internet edge, but possibly not on a continual basis.
• Maximum Number of Connections—The total number of connections supported across the Internet edge at any given instant in time.

Bandwidth
By design, the amount of aggregate bandwidth supported across the Internet edge should be at least equal to the bandwidth of its associated Internet connection. In addition, consideration must be given to the number of users (hosts) that can be supported and the average bandwidth available to each user session across the Internet edge. A principal cause of the development of backdoors to the Internet within enterprise designs is a failure of designers to meet a qualitative level of user bandwidth requirements. For example, if the average user feels that a dial-up connection to an ISP reaches the Internet faster than the enterprise connection does, then some users may act upon that conclusion, undermining the overall security of the network for the sake of productivity.


One aspect of bandwidth support is the physical interfaces supported by the Internet edge. For example, while all components within the Internet edge may support 10/100BaseTX, device interface throughput may become a limiting factor when dealing with OC-3 or above, ATM, or POS connectivity to the Internet.

Session bandwidth is associated with the theoretical throughput a single session sustains across the Internet edge. This is a non-aggregate value that provides a basis for scaling the Internet edge design to meet aggregate session throughput requirements. It represents the maximum sustainable throughput across a single forwarding path within the Internet edge. As the number of simultaneous sessions increases, the average bandwidth per session decreases. Once average session bandwidth falls below a qualitative threshold, clients begin to sense excessive delay in their applications, resulting in substandard network performance. This is important, as it is at this point that Internet edge designs with single active paths must either be upgraded to support higher bandwidth or modified to support multiple active paths.
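The relationship between link bandwidth, simultaneous sessions, and the qualitative per-session threshold can be sketched numerically. The figures below are illustrative assumptions, not sizing recommendations from this guide:

```python
def avg_session_bandwidth_kbps(link_mbps: float, sessions: int) -> float:
    """Average bandwidth available per session on a single active path."""
    return (link_mbps * 1000) / sessions

# Hypothetical example: a 45 Mbps (DS-3) Internet connection.
# At 300 simultaneous sessions each averages 150 kbps; at 1,500 sessions
# the average drops to 30 kbps, which may fall below the qualitative
# threshold at which users perceive poor performance.
assert avg_session_bandwidth_kbps(45, 300) == 150.0
assert avg_session_bandwidth_kbps(45, 1500) == 30.0
```

When the projected session count pushes this average below the chosen threshold, the design must be upgraded to a higher-bandwidth link or extended with additional active paths, as described above.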

Note

Aggregate bandwidth represents the total throughput supported by multiple active data paths across the Internet edge. Adding additional active paths results in an increase in the number of supportable sessions. However, an individual session (supported across a single active path), is still limited by the maximum session bandwidth value.

Connection Rate
There is a direct relationship between the connection rate supported by the Internet edge and the number of users supportable at a given time, which is independent of any relationship between bandwidth size and number of users. If the Internet edge cannot keep up with the number of user session requests per unit time, the resulting lost sessions significantly impact qualitative network performance. Note that for TCP-based sessions, the TCP backoff algorithm further aggravates a connection request overload situation.

TCP-based sessions have a high degree of variance with regard to connection lifetime, as the majority of the traffic is expected to be HTTP-based and, therefore, short-lived. However, this can result in the generation of a very high number of connection requests, particularly during periods of peak network usage. Another factor to consider is that HTTP applications usually launch multiple sessions simultaneously for performance reasons.

The steady-state connection rate represents the ability of the Internet edge to support continual operations. This value varies, depending on the traffic characteristics of the individual network. It is defined as the point at which the connection request rate does not exceed the average session teardown rate, but it may be capped by a device's performance limitation. For example, if the average session teardown rate for a network is 1,000 sessions per second, then the Internet edge is expected to sustain a similar steady-state connection rate. However, if the firewall or load-balancing platform can only support a maximum of 750 connections per second, that becomes the steady-state (and maximum) connection rate limit for that path across the Internet edge.

The maximum connection rate represents the ability of the Internet edge to meet the session launching requirements during a network's busy (burst) period.
This value is based solely on the best connection rate performance available via the Internet edge devices. However, the assumption is that at this rate, the connection request rate exceeds the session teardown rate. Therefore, if the maximum connection rate is sustained, the total number of connections quickly reaches a limiting value. Connection rate capacity is aggregated across multiple paths through the use of load-balancing or routing mechanisms. This is the principal reason for including load-balanced firewalls in Internet edge designs.
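The steady-state example in this section (a 1,000 session-per-second teardown rate constrained by a device limited to 750 connections per second) can be expressed directly. The functions below are an illustrative model of the reasoning, not vendor sizing formulas:

```python
def steady_state_rate(teardown_rate_cps: float, device_limit_cps: float) -> float:
    """Sustainable connection rate for one path: the request rate cannot
    exceed the lesser of the average session teardown rate and the
    device's own connections-per-second limit."""
    return min(teardown_rate_cps, device_limit_cps)

def aggregate_rate(path_rates_cps) -> float:
    """Connection rates aggregate across load-balanced paths."""
    return sum(path_rates_cps)

# The example from the text: 1,000 teardowns/s, but a 750 conn/s device.
assert steady_state_rate(1000, 750) == 750
# Two such load-balanced paths double the aggregate connection rate.
assert aggregate_rate([750, 750]) == 1500
```

This is why load-balanced firewalls appear in the designs that follow: adding paths raises the aggregate connection rate even though each individual path remains bound by its own device limit.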


Chapter 2: Internet Edge Security Design Principles

Performance Considerations

Total Connections
Stateful devices within the Internet edge support a maximum number of total simultaneous connections. This maximum represents a volumetric limit on the number of users or sessions that a given design can support. Once this limit is reached, additional requests for new connections are dropped. Over a period of time (for example, during the course of a business day), the number of sessions may creep up toward the maximum total connections value. Methods to increase this value include upgrading devices, as well as aggregating across multiple paths. In the latter case, keep in mind that the failure of a single path may overload the remaining ones if the design does not account for it.

Asymmetry Concerns
Routing or forwarding asymmetry represents the greatest difficulty in developing Internet edge designs with multiple active paths. Asymmetry occurs when a session is split across two forwarding paths. For pure packet forwarding, asymmetry may be merely an inconvenience, but even without stateful devices it complicates event collection and security analysis. For example, if a session uses two forwarding paths, the NIDS on each path sees only one direction of the session. If a composite attack occurs within that session, the attack may elude signature detection across the NIDS. When stateful devices, particularly firewalls, are inserted into the Internet edge, asymmetry causes serious functional degradation. Asymmetry results in session failures across multiple active firewalls, because the firewall on the return path does not have the stateful connection entry, which was created on the firewall of another path. Generally, the following methods are used to combat asymmetry:

• Provide a means of sharing state information across all active parallel firewalls. Such mechanisms must be inherent to the firewall's functionality and are most probably limited to firewalls that are not geographically dispersed, due to latency reasons.
• Provide mechanisms to eliminate asymmetry by ensuring traffic returns across the same path used to create the initial session state. Unlike the previous method, solutions in this area support geographically dispersed firewalls.

The basic design mechanisms used to eliminate asymmetry fall into the following categories:
• Forwarding
• Translation

Forwarding
A principal method of eliminating asymmetry is through explicit or dynamic control of the forwarding information. A number of methods can be deployed using basic router capabilities, including:

• Routing protocol configuration, which is used to adjust route propagation and table information in ways that prevent asymmetric paths from forming. Of specific note is the use of BGP mechanisms on Internet-connected routers, particularly in cases where the multiple paths to the Internet are geographically dispersed.
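As a sketch of this method, the following IOS fragment (AS numbers, neighbor address, and route-map names are hypothetical) biases BGP so that one Internet router is preferred in both directions, reducing the chance of an asymmetric return path; the exact policy depends on the provider relationship:

```
router bgp 64512
 neighbor 192.0.2.1 remote-as 65000
 ! Prefer this router for outbound traffic by raising local preference
 neighbor 192.0.2.1 route-map PREFER-IN in
 ! Bias inbound traffic toward this router by advertising a lower MED
 neighbor 192.0.2.1 route-map PREFER-OUT out
!
route-map PREFER-IN permit 10
 set local-preference 200
!
route-map PREFER-OUT permit 10
 set metric 50
```

Local preference influences which exit the enterprise chooses; MED is only a suggestion to the neighboring AS, so inbound symmetry may additionally require prefix advertisement adjustments.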


• Policy routing, which is used to segment traffic across multiple paths, based on Layer 3+ information. For example, the routers upstream and downstream of a set of parallel firewalls can be configured to receive inbound HTTP traffic across one firewall, send outbound HTTP traffic across another, and send and receive VPN traffic across a third.
• Resiliency mechanisms, which are used in combination with routing protocol configuration and policy routing.
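As an illustration of the policy routing approach (interface name and next-hop addresses are hypothetical), an upstream router could steer outbound HTTP traffic to one firewall and all remaining traffic to another:

```
access-list 110 permit tcp any any eq www
!
route-map SPLIT-PATHS permit 10
 match ip address 110
 ! HTTP traffic takes the outside interface of firewall A
 set ip next-hop 203.0.113.1
!
route-map SPLIT-PATHS permit 20
 ! All other traffic takes firewall B
 set ip next-hop 203.0.113.5
!
interface FastEthernet0/0
 ip policy route-map SPLIT-PATHS
```

A matching policy must be applied on the downstream side so that return traffic crosses the same firewall that holds the session state.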

Asymmetry mitigation is also achieved through the use of dynamic load-balancing devices, which externally maintain a shared session state table across multiple active firewalls. In the absence of address translation, a "load-balancing sandwich" method is used, in which the firewalls are placed between a pair of load balancers so that the load-balancing process itself prevents the formation of asymmetric paths. The performance characteristics (bandwidth, connection rate, total connections) of the load-balancing mechanisms must be taken into consideration. If the elimination of asymmetry results in performance values below, or not significantly improved over, a single (but possibly resilient) path design, then the multi-path design is difficult to justify.

Translation
Using NAT at the Internet edge is a highly effective means of eliminating asymmetry. The decision to use IP address translation depends on a number of factors, including:

• The deployment of private IP addressing (RFC 1918), thereby requiring the Internet edge to translate private IP addresses into Internet-routable addresses.
• The need to avoid asymmetric routing concerns associated with operating multiple Internet edges or active firewalls in parallel.
• The ability of applications and their associated protocols to function across address translation boundaries. Applications that use fixed source TCP/UDP port numbers or non-TCP/UDP IP protocols (for example, IPSec uses IP protocol 50 for ESP, which has no concept of port numbers), or that carry the original IP address untranslated within the payload, generally do not function across a source port address translation (PAT) boundary. Untranslated IP address information carried within the data payload of an IP packet results in application failure even across static 1:1 NAT boundaries.
• The desirability of NAT methods to overcome complex routing obstacles. For example, a translation boundary across two different autonomous systems (AS) may be easier to administer than setting up inter-AS route peering.

In all cases, the translating device is responsible for providing proxy-ARP services for the host IP addresses being translated. This is necessary to ensure proper Layer 2 forwarding of packets to upstream devices. If the next upstream device uses static ARP tables, it must be understood that hosts behind a firewall or other translating device appear with the same MAC address, and possibly the same IP address (in the case of many:1 source PAT to the interface IP address of the device). Table 2-1 describes the types of NAT mechanisms available when designing an Internet edge and provides guidance for their use. The following terms are used in the table and are often used when describing IP address translation mechanisms:
• Local Address—A local IP address is the address assigned directly to the IP host being translated.
• Global Address—A global IP address represents a single IP address or pool of IP addresses available to the translating device to use for IP address translation. Relative to the host being translated, the global IP address is the address that this host advertises on the other side of the translating device.


• Inside Interface—The interface of the translating device accessible to the local IP addresses of the hosts to be translated. Relative to the host being translated, this interface is the one facing towards the host's subnet.
• Outside Interface—The interface of the translating device providing global IP addresses for hosts to be translated. Relative to the host being translated, the translating device must forward (and therefore translate the source address of) IP packets passing to/through this interface.

Table 2-1 NAT Mechanisms

1:1 Static NAT
Description: The translating device translates a discretely defined local IP address on the inside interface into a single, specific global address on the outside interface.
Guidelines for use: Useful for servers and other hosts that must be directly accessible from the Internet. The external DNS entry for these hosts should properly reflect the translated (global) address.

1:1 Dynamic NAT
Description: The translating device has a defined pool of available global addresses on the outside interface. Local hosts are dynamically assigned an IP address from the pool, as required. When no longer used, the dynamically assigned IP address is returned to the pool for reuse by another client.
Guidelines for use: Useful for end-user hosts communicating with the Internet. Because each host gets exclusive use of the dynamically assigned address, 1:1 dynamic NAT offers the widest application support. The size of the available address pool can be used to limit the number of hosts simultaneously accessing the Internet at a given time.

Many:1 Dynamic PAT (address overload)
Description: The translating device has a single global address in its defined pool of available IP addresses and can translate many local addresses into that single global address. Source host discrimination is accomplished by maintaining the state of the local host's IP address and TCP/UDP source port, and translating the TCP/UDP source port number to a unique value for each active session.
Guidelines for use: Also known as address overload, dynamic PAT has the advantage of allowing a single global address to represent a large number of local hosts. This is useful when the number of concurrently active local hosts exceeds the available number of global IP addresses. Although using PAT is considered an effective means of obscuring local hosts from the Internet, PAT is subject to application limitations due to its reliance on TCP/UDP source port information. A single PAT global address places an upper limit of approximately 64,000 active TCP or UDP sessions, which is significantly lower than the maximum number of concurrent sessions supported by many Cisco firewalls.

Many:1 Dynamic PAT to outside IP address (Interface PAT)
Description: Similar to dynamic PAT, but the translating device uses the IP address of its own outside interface as the global address available for use by local hosts.
Guidelines for use: Also known as interface PAT, dynamic PAT to an outside address is often used in broadband and other environments where only a single IP address is available for use. In many cases, the translating device's IP address may be dynamically assigned via DHCP. Although interface PAT is highly versatile, increasing use of this method results in greater security concerns for network administrators, because any host on the network could be masking a number of devices behind it.

Bi-directional NAT
Description: Translating devices that support bi-directional NAT translate source IP addresses in both directions. The result is that, from the local host's point of view, external destination hosts appear to be local themselves. The actual translation mechanisms used can be a combination of those listed above. However, the use of PAT in both directions is highly discouraged, because randomizing the TCP/UDP port of reply traffic can lead to highly unpredictable application responses.
Guidelines for use: Bi-directional NAT is useful when operating multiple active firewalls in parallel. This method ensures consistency in translation for both the source and destination of a given session. Another effective use of bi-directional NAT is where two hosts communicate across a defined translation boundary, each believing the other is a "local" host. This is useful in network management, to provide presence for a network management tool that would otherwise be on an isolated network management subnet or out-of-band network.
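On a PIX firewall, the translation methods in Table 2-1 map to the static, nat, and global commands. The following fragment is a sketch only; all addresses are illustrative placeholders:

```
! 1:1 static NAT: web server 10.0.0.20 is always presented as 192.0.2.20
static (inside,outside) 192.0.2.20 10.0.0.20 netmask 255.255.255.255
! 1:1 dynamic NAT: inside hosts draw addresses from a global pool
global (outside) 1 192.0.2.100-192.0.2.150
! many:1 dynamic PAT: once the pool is exhausted, hosts share this address
global (outside) 1 192.0.2.151
nat (inside) 1 10.0.0.0 255.255.255.0
```

Defining both a pool and a single address under the same global ID lets the PIX fall back to PAT when the 1:1 pool is depleted.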

Security Considerations
In the Internet edge, security generally falls into one of the following categories:
• Element Security
• Identity Services
• Common Internet Edge Security Policies

Element Security
Due to the location of the Internet edge (on the fringes of the enterprise), its components are subject to potential attacks, both internal and from the Internet. To prevent compromise and reduce the potential for secondary attacks, the elements that make up the Internet edge must be well shielded from such attacks. The best way to approach element security is in three steps:
Step 1

Disable all management functions on Internet edge elements, with the exception of the direct console ports.


Step 2

Enable AAA functions, to provide strict controls on device access. If external AAA servers are used, it is highly desirable to protect and authenticate the communication between the Internet edge device and the external AAA server. Verify the operation of AAA functions by enabling their use on the console port.

Step 3

Configure, protect, and enable management functions to the minimum extent necessary. For example, if SSL access is desired, disable unencrypted HTTP. Use encrypted protocols whenever possible. (Only use encrypted protocols when allowing remote administration via the Internet.) Limit management protocols to and from specified management hosts or via specific interfaces.
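On a PIX firewall, Steps 2 and 3 might look like the following sketch; the AAA server address, key, and management host are placeholders:

```
! Step 2: authenticate device access through an external TACACS+ server
aaa-server TACACS+ protocol tacacs+
aaa-server TACACS+ (inside) host 10.0.0.10 SecretKey
aaa authentication serial console TACACS+
aaa authentication ssh console TACACS+
! Step 3: enable only encrypted management, limited to the management host
ssh 10.0.0.10 255.255.255.255 inside
http server enable
http 10.0.0.10 255.255.255.255 inside
```

On the PIX, the http commands serve PDM over SSL rather than cleartext HTTP, so both management channels shown here are encrypted.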
Identity Services
As previously mentioned, the concept of identity is fundamental to network security. In a simplified view, identity at the Internet edge is concerned with two issues:
• Is the internal user or host authorized to access the Internet via the applications/protocols requested?
• Is an Internet user authorized to access the enterprise hosts via the applications/protocols requested?

Identity at the Internet edge is based on information carried within the IP headers of the packets. The authority to forward the packet is granted based on the following criteria:
• The IP header information must match an explicit rule, permitting the packet to be forwarded.
• Although not explicitly permitted by a static rule, the IP header information matches a dynamically stored state of an existing (or embryonic) connection.
• The IP packet to be forwarded was received on the proper interface.
• The Internet edge device is capable of forwarding the IP packet.
• The IP packet is properly formed. For example, if a firewall requires the packet to be decrypted prior to forwarding (meaning that received IP packets must be encrypted), then non-encrypted packets and those failing decryption are not forwarded.

It is possible to create mechanisms that provide external authentication of users and hosts. These authentication mechanisms augment stateful packet inspection by establishing the conditions for forwarding packets based on that authentication. For example, proxy-authentication on a PIX firewall (via Telnet, FTP, or HTTP) is used to establish the state for a given session, or to dynamically modify ACLs to temporarily permit user traffic based upon a successful authentication. In another example, a shared-secret or digital certificate exchange between an external host and a VPN-capable device within the Internet edge provides the basis for establishing an IPSec connection between those two hosts.
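As a sketch of PIX proxy-authentication (the server group name, server address, and key are hypothetical), outbound HTTP users could be forced to authenticate before their sessions are permitted:

```
aaa-server AUTHSRV protocol tacacs+
aaa-server AUTHSRV (inside) host 10.0.0.10 SecretKey
! Require cut-through proxy authentication for all outbound HTTP sessions
aaa authentication include http inside 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 AUTHSRV
```

After a successful login, the PIX caches the authenticated state (subject to the uauth timeout) so subsequent sessions from that user are not re-challenged.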

Common Internet Edge Security Policies
Although enterprises vary significantly in their security policies, the following security policies should be included in virtually all designs:
• Avoid policies that allow external hosts to initiate traffic directly to internal hosts; this aids in mitigating internal threats via secondary exploitation.
• Hosts to which Internet users can initiate sessions should be placed into DMZs, to allow monitoring of such traffic in a centralized, controlled environment that is separate from the internal enterprise network.


• Strictly limit ICMP and UDP traffic flowing to and from the Internet, with DNS being the obvious exception.
• Prevent encapsulation and port redirection mechanisms between internal networks and the Internet, or strictly define their use.
• Strictly apply anti-spoofing mechanisms (RFC 2827 and RFC 1918 filters) at the outer Internet connection.
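As a sketch of the anti-spoofing policy, the outer Internet router could apply an inbound ACL like the following, where 192.0.2.0/24 stands in for the enterprise's public address block and the interface name is a placeholder:

```
! RFC 2827: drop inbound packets claiming our own source addresses
access-list 101 deny ip 192.0.2.0 0.0.0.255 any
! RFC 1918: drop inbound packets with private source addresses
access-list 101 deny ip 10.0.0.0 0.255.255.255 any
access-list 101 deny ip 172.16.0.0 0.15.255.255 any
access-list 101 deny ip 192.168.0.0 0.0.255.255 any
access-list 101 permit ip any any
!
interface Serial0/0
 ip access-group 101 in
```

A mirror-image ACL applied outbound enforces the corresponding RFC 2827 egress policy.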


Chapter 3

Internet Edge Security Implementation

This chapter presents four basic Internet edge security designs. Each design has the following characteristics:
This chapter presents four basic Internet edge security designs. Each design has the following characteristics:
• They provide a stateful inspection firewall as the principal filtering mechanism to protect the enterprise network.
• Each provides a single forwarding path model.
• A DMZ is provided to support Internet servers (website, e-mail, DNS, etc.).
• Each provides both site-to-site and remote access VPN services.

This chapter provides a basic topology and commented configurations for each of the designs. In addition, the following areas are discussed:
• Basic Forwarding — Describes the basic packet forwarding model, including firewall routing requirements.
• Security Policy Functional Deployment — Discusses how and where each design meets basic security requirements.
• Network Address Translation (NAT) Issues — If address translation is part of the design, this section describes how and where each design meets these requirements.
• DMZ Design — Describes how each design meets DMZ requirements.
• Intrusion Detection Capabilities — Although the implementation of IDS is outside the scope of this paper, this section defines where you deploy IDS capabilities and the limitations of such a deployment.
• Network Management — Describes how you can deploy remote management of the various elements within the design.

Note

Each design assumes that the PIX firewalls are running PIX OS v6.2(x). Although not required for these designs, the implementation of firewall functions is compatible with the conventions and logic used by PIX Device Manager (PDM) 2.0.

Basic Security Policy Functions
Table 3-1 lists the security functions (in order of priority) required to meet the basic security policies outlined in Chapter 2, “Internet Edge Security Design Principles.”


Table 3-1 Security Policy Functions

Management Traffic Rules (Element Security)
The various elements of Internet edge designs require the ability to pass management traffic to and from remote network management workstations. Meeting these requirements often requires a combination of enabling the management functions, use of ACLs or specific management filters, and enabling authentication mechanisms.

RFC 2827 In
Packets originating from outside the enterprise must not be sourced from IP address spaces contained within the enterprise; such a packet indicates that the sender is attempting to masquerade as an internal user. Accomplish this function through the use of either ACLs or an IP reverse path verification mechanism, which filters packets that fail this security policy.

RFC 2827 Out
Packets originating from inside the enterprise must not be destined to IP address spaces contained within the enterprise. This indicates a routing or forwarding failure, and is potentially harmful to the network. Accomplish this function through the use of either ACLs or an IP reverse path verification mechanism, which filters packets that fail this security policy.

RFC 1918 In
Packets originating from outside the enterprise must not be sourced from or destined to private IP addresses as defined by RFC 1918. Such packets indicate highly suspicious activity and are potentially harmful to the network. Accomplish this function through the use of ACLs to filter packets that fail this policy.

RFC 1918 Out
Packets originating from inside the enterprise must not be destined to private IP addresses as defined by RFC 1918. Such packets indicate highly suspicious activity and are potentially harmful to the network. Accomplish this function through the use of ACLs to filter packets that fail this policy.

Basic Filtering In
The enterprise can have a set of IP addresses and ranges, IP protocols, TCP/UDP ports and ranges, and ICMP message types that it considers a general threat from traffic originating outside the enterprise. Accomplish this function through the use of deny ACLs or firewall rules to filter packets that fail these policies.


Basic Filtering Out
The enterprise may have a set of IP addresses and ranges, IP protocols, TCP/UDP ports and ranges, and ICMP message types that it considers a general threat from traffic originating from inside the enterprise. Accomplish this function through the use of deny ACLs or firewall rules to filter packets that fail these policies.

Stateful Inspection Rules
The enterprise should have a set of IP addresses and ranges, IP protocols, TCP/UDP ports and ranges, and ICMP message types that are required to pass through the Internet edge for legitimate business requirements. Accomplish this function through the use of permit firewall rules to statefully inspect (or otherwise monitor, for non-TCP traffic) packets to and from the enterprise. It is important to note that the PIX firewall has a basic state of allowing all traffic originating from a higher security-level interface to a lower one, and allowing replies to return by stateful monitoring of sessions.

Broadband Design
The first basic Internet edge design is a single PIX 501 firewall, although you can use a PIX 506 for metro Ethernet environments. The PIX 501 firewall is intended for small enterprises that use available broadband ISP services to connect to the Internet; it supports no more than 50 users and, in many cases, serves a single user.
Figure 3-1 Broadband Design

[Figure: the enterprise network connects through the PIX firewall to the Internet; an outsourced DMZ is shown.]

As shown in Figure 3-1, this design uses the firewall as the demarcation point between the enterprise network and the Internet. This design assumes that the broadband provider supplied the equipment (cable, DSL, or satellite modem) required to support the outside Ethernet interface.

Note

The PIX 501 does support PPP over Ethernet (PPPoE).



The four-port switch in the PIX 501 provides user connectivity to the Internet edge. You can add additional infrastructure to this switch to provide support for additional users. In many cases, you can use an IEEE 802.11b access point to provide wireless networking. In any case, this design assumes that the enterprise infrastructure is a single Layer 2 subnet, without additional routing requirements imposed on the firewall.

Configuration
The configuration of the firewall is as follows:
Building configuration...
: Saved
:
PIX Version 6.2(1)

The following lines define the interface names and security levels associated with the PIX firewall.
nameif ethernet0 outside security0
nameif ethernet1 inside security100

The following lines provide the Telnet/SSH and enable passwords associated with the PIX firewall.
enable password 2KFQnbNIdI.2KYOU encrypted
passwd 2KFQnbNIdI.2KYOU encrypted

The following lines define the hostname and domain for the PIX firewall. The PIX Firewall setup wizard provides this information and provides the basis for the SSH and SSL certificates used by the firewall.
hostname pix
domain-name cisco.com

The following lines define the clock timezone characteristics used by the PIX firewall clock.
clock timezone EST -5
clock summer-time EDT recurring

The following lines define the fixup protocols currently active within the PIX firewall. Fixup protocols assist the PIX firewall in defining the stateful behavior of complex protocols so that ports can be dynamically opened for the operation of these applications, without exposing the enterprise environment to needless open ports.
fixup protocol ftp 21
fixup protocol http 80
fixup protocol h323 h225 1720
fixup protocol h323 ras 1718-1719
fixup protocol ils 389
fixup protocol rsh 514
fixup protocol rtsp 554
fixup protocol smtp 25
fixup protocol sqlnet 1521
fixup protocol sip 5060
fixup protocol skinny 2000

The following lines define the names and object groups used within PIX firewall rules. In this design, the only objects defined are the network management host and a definition of the RFC 1918 private IP address ranges.

Note

Due to the various classes of RFC 1918 address spaces, each RFC 1918 object has been added to an RFC 1918 object group to simplify their expression within firewall rules.

Data Center Networking: Internet Edge Design Architectures

3-4

956484

Chapter 3

Internet Edge Security Implementation Broadband Design

names
name 10.0.0.10 netmgmt
name 172.16.0.0 RFC1918-B
name 10.0.0.0 RFC1918-A
name 192.168.0.0 RFC1918-C
object-group network RFC-1918
  description Private IP address ranges as defined by RFC-1918
  network-object RFC1918-A 255.0.0.0
  network-object RFC1918-B 255.240.0.0
  network-object RFC1918-C 255.255.0.0

The following lines provide the RFC 1918 Out security functionality. Due to the implicit deny ip any any command at the end of all ACLs, the second permit ip any any command is required to allow outbound Internet traffic from internal users.
access-list inside_access_in deny ip any object-group RFC-1918
access-list inside_access_in permit ip any any

The following lines configure terminal screen paging, enable logging (with timestamps) to the network management host, and define the Ethernet and MTU characteristics of the PIX firewall interfaces.
pager lines 24
logging on
logging timestamp
logging host inside 10.0.0.10
interface ethernet0 10baset
interface ethernet1 10full
mtu outside 1492
mtu inside 1500

The following lines define the IP addresses for each of the PIX firewall interfaces.

Note

The outside IP address is configured to be set dynamically via PPPoE, including the dynamic setting of a default route.
ip address outside pppoe setroute
ip address inside 10.0.0.1 255.255.255.0

The following lines provide RFC 2827 anti-spoofing functionality.
ip verify reverse-path interface outside
ip verify reverse-path interface inside

The following lines are related to the basic IDS capabilities of the PIX firewall. Based on these lines, IDS information and signature violations will result in an alarm posted to the syslog server.
ip audit name Info info action alarm
ip audit name Attack attack action alarm
ip audit interface outside Info
ip audit interface outside Attack
ip audit interface inside Info
ip audit interface inside Attack
ip audit info action alarm
ip audit attack action alarm

The following lines are related to the functionality of PIX Device Manager (PDM). Most importantly, PDM requires network objects and groups to be defined relative to their interface.
pdm location netmgmt 255.255.255.255 inside
pdm location RFC1918-A 255.0.0.0 outside
pdm location RFC1918-B 255.240.0.0 outside


pdm location RFC1918-C 255.255.0.0 outside
pdm group RFC-1918 outside
pdm logging warnings 100
pdm history enable

The following line defines the ARP timeout characteristic for the PIX. By default, ARP entries remain in cache for 4 hours.
arp timeout 14400

The following lines define the address translation characteristics of the PIX firewall. The global command indicates that the IP address of the outside interface of the PIX firewall is to be used for port address translation (PAT, many:1 dynamic NAT). The next line indicates that all inside traffic is to be translated when forwarded to the outside interface.
global (outside) 1 interface
nat (inside) 1 0.0.0.0 0.0.0.0 0 0

The following line applies the firewall rule generated ACL to the inside interface.
access-group inside_access_in in interface inside

The following lines provide the default timeout characteristics for stateful connections passing through the PIX firewall.
timeout xlate 3:00:00
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h323 0:05:00 sip 0:30:00 sip_media 0:02:00
timeout uauth 0:05:00 absolute

The following lines provide the default definition of AAA servers to be used in operating the PIX.
aaa-server TACACS+ protocol tacacs+
aaa-server RADIUS protocol radius
aaa-server LOCAL protocol local

The following line defines the NTP server used to maintain the PIX firewall clock.

Note

In this example, 192.43.244.18 represents the current IP address for NTP server time.nist.gov. It is also configured as a non-authenticated source, which is most likely the case for small environments.
ntp server 192.43.244.18 source outside

The following lines allow PDM to be run from the network management workstation.
http server enable
http netmgmt 255.255.255.255 inside

The following lines define the SNMP management configuration, which is disabled by default. This is often the case for small environments.
no snmp-server location
no snmp-server contact
snmp-server community public
no snmp-server enable traps

The following lines are related to inherent security features offered by the PIX firewall and are shown in their default state for the PIX 501. The first line enables the PIX floodguard feature, which allows the PIX firewall to reclaim incomplete connection resources (such as embryonic connections) in the event of an insufficient resource condition, even if the timers have not expired. The second line specifies that


when an incoming packet causes a route lookup, the incoming interface should not be used to determine the interface to which the packet should go and which is the next hop. For a two interface PIX firewall, the no sysopt route dnat command provides a performance advantage.
floodguard enable
no sysopt route dnat

The following lines provide SSH configuration information, allowing the network management station to use an SSH client to access the PIX remotely.
ssh netmgmt 255.255.255.255 inside
ssh timeout 5

The following lines provide the PPPoE dialer information used in connecting to the broadband service provider.
vpdn group pppoe_group request dialout pppoe
vpdn group pppoe_group localname foo
vpdn username foo password *********

The following lines enable a DHCP server for IP address configuration of enterprise hosts.
dhcpd address 10.0.0.129-10.0.0.159 inside
dhcpd lease 3600
dhcpd ping_timeout 750
dhcpd auto_config outside
dhcpd enable inside

The following line sets the Telnet, console, or SSH terminal width to 80 characters, which is the default.
terminal width 80

The following lines end the configuration. All PIX firewall configurations end with a crypto checksum, which allows administrators to detect whether changes have been made to the configuration.
Cryptochecksum:d2dce5ccadd485c4d705f3d35af07711
: end
[OK]

Basic Forwarding
This design assumes that the broadband provider is offering the enterprise one IP address, which requires an address translation mechanism to support the enterprise's internal users. It is therefore assumed that no hosts within the enterprise are directly accessed by external hosts on the Internet. This design has a simple routing and forwarding configuration. The outside interface obtains its IP address dynamically (via DHCP or PPPoE's IPCP mechanism), which creates a learned default gateway that the firewall uses to forward packets to the Internet. If the outside interface has a manually configured IP address, you must include a static default route (according to the broadband provider's requirements) to properly forward packets to the Internet. Because the design assumes that the internal environment is a single Layer 2 subnet, it requires no additional routing information (because the enterprise hosts are directly connected). These internal hosts should use the IP address of the internal interface of the firewall as their default gateway. The PIX 501 provides a DHCP server to aid in configuring the IP parameters of internal hosts. Do not define a default gateway on internal servers with static IP addresses, such as network printers, that do not require Internet access.
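If the outside address were statically assigned rather than learned, the equivalent PIX configuration would be a manual address plus a static default route; the addresses below are placeholders that would be supplied by the provider:

```
ip address outside 198.51.100.2 255.255.255.252
route outside 0.0.0.0 0.0.0.0 198.51.100.1 1
```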

Data Center Networking: Internet Edge Design Architectures 956484

3-7

Chapter 3 Broadband Design

Internet Edge Security Implementation

Security Policy Functional Deployment
All of the defined security policy functions are deployed via the configuration of the PIX 501, as shown in Table 3-2.
Table 3-2 Security Policy Function—Broadband Design

Management Traffic Rules (Element Security)
Deployment: Various, based on element security features within the PIX device configuration.
Comment: The general recommendation is to make use of SSH for console connectivity, and PIX Device Manager (PDM), which uses SSL. This provides encryption of interactive management streams. As the PIX 501 is a two-interface firewall, management traffic is expected to flow to/from the inside interface.

RFC 2827 In
Deployment: PIX ip verify reverse-path.
Comment: Based on the assumption of having a single IP address from the broadband ISP, use the ip verify reverse-path interface outside command to provide anti-spoofing functionality, even with a dynamically assigned IP address on the outside interface.

RFC 2827 Out
Deployment: N/A (PIX ip verify reverse-path).
Comment: This design assumes that network address translation takes place, to make use of a single IP address. That assumption extends to include the probable use of RFC 1918 private IP addressing within the enterprise space. Therefore, the "RFC 1918 Out" requirement covers this function. You can use the ip verify reverse-path interface inside command to provide anti-spoofing functionality, but it is redundant in conjunction with RFC 1918 outbound filtering in this case.

RFC 1918 In
Deployment: N/A.
Comment: In this case, there are no static or 1:1 dynamic address translations occurring across the firewall. The basic firewall state, in combination with "RFC 1918 Out" functionality, provides the needed protection.


RFC 1918 Out
Deployment: PIX firewall rules on the inside interface.
Comment: Implement a firewall rule to ensure that no outbound traffic is destined for RFC 1918 addresses. Accomplish this in multiple steps:
1. Create network objects that identify RFC 1918 addresses as being outside the firewall.
2. Group the RFC 1918 network objects together, so as to allow RFC 1918 Out filtering within a single firewall rule.
3. Create a firewall rule denying traffic destined for any RFC 1918 address, sourced from the inside interface.

Basic Filtering In
Deployment: PIX firewall rules on the outside interface.
Comment: Create firewall rules to explicitly deny IP traffic identified as harmful to the enterprise when sourced from the Internet.

Basic Filtering Out
Deployment: PIX firewall rules on the inside interface.
Comment: Create firewall rules to explicitly deny IP traffic sourced from within the enterprise that is not permitted or otherwise explicitly filtered, based on security policy.

Stateful Inspection Rules
Deployment: PIX firewall rules on the inside interface.
Comment: The purpose of these firewall rules is to permit IP traffic to flow statefully, as required to support business applications. These rules are generally applied as exceptions to the implied traffic blocking that occurs for traffic originating from a lower security level interface.
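The match logic behind the RFC 1918 Out steps can be sketched in Python. This is an illustrative model only (the `is_rfc1918` helper is not a PIX feature); the network list mirrors the three private blocks defined by RFC 1918.

```python
import ipaddress

# RFC 1918 private address blocks, matching the network objects the
# steps above group together for the outbound deny rule.
RFC1918_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """Return True if addr falls inside any RFC 1918 private block."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918_NETS)

# The deny rule would match the first destination but not the second.
print(is_rfc1918("10.1.2.3"))        # True
print(is_rfc1918("200.200.200.65"))  # False
```

Grouping the three networks into one object keeps the deny logic to a single rule, exactly as steps 2 and 3 describe.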

NAT Issues
The small design assumes that all traffic sourced from the enterprise side is translated to the IP address of the outside interface of the PIX firewall as it passes to and from the Internet. Although this has the advantage of allowing multiple hosts to share a single IP address, applications that either rely on a specific source TCP/UDP port or that carry the original IP address within the payload may not function. An example of such an application is Microsoft's NetMeeting, which allows hosts across the Internet to exchange voice and video.
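The many:1 translation described above can be modeled with a toy port address translation (PAT) table. This is a conceptual sketch, not PIX internals; the class and method names are illustrative.

```python
import itertools

class PatTable:
    """Toy model of many:1 port address translation (PAT).

    All inside (host, port) pairs share one global IP address; each
    flow is assigned a unique translated source port.
    """
    def __init__(self, global_ip: str):
        self.global_ip = global_ip
        self._ports = itertools.count(1024)  # next free translated port
        self.xlate = {}  # (inside_ip, inside_port) -> (global_ip, port)

    def translate(self, inside_ip: str, inside_port: int):
        key = (inside_ip, inside_port)
        if key not in self.xlate:
            self.xlate[key] = (self.global_ip, next(self._ports))
        return self.xlate[key]

pat = PatTable("200.200.200.1")
a = pat.translate("10.0.0.130", 40000)
b = pat.translate("10.0.0.131", 40000)
# Two inside hosts share one global IP but get distinct source ports. An
# application that embeds its original IP/port in the payload therefore
# advertises an address the far end cannot reach, unless the firewall
# also rewrites the payload (which is what fixup protocols do).
print(a, b)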


DMZ Design
The small Internet edge design assumes that the service provider offers traditional DMZ functionality on an outsourced basis, including:
• Web hosting
• DNS (domain registration and hosting)
• E-mail services (sendmail, support for POP3/IMAP clients, HTTP/SSL-based mail front ends)

Therefore, the enterprise network contains no servers that connect directly to, or are accessed directly from, the Internet.

Intrusion Detection Capabilities
The PIX 501 provides basic IDS capabilities using the 51 basic signatures included with the PIX OS. If you desire a standalone NIDS sensor on the enterprise side, you can connect it to one of the four switchports on the PIX 501. These switchports are not true bridge ports, but behave more like a buffered repeater, which NIDS can use. The use of a hub between the outside interface of the PIX 501 and the broadband service provider's equipment offers the ability to inspect traffic outside of the Internet edge perimeter. In such a case, NIDS management traffic should be handled out-of-band to the enterprise or NIDS management workstation; otherwise, the firewall rules must account for NIDS management requirements. Keep in mind that the NIDS may need to compensate for PPPoE encapsulation, if used by the service provider.

Network Management
In a small design with a single firewall providing Internet edge services, management of the design is relatively simple. Use a single host with the following tools for basic management:
• Web browser with SSL support, for use in accessing PIX Device Manager
• An SSH client (Telnet is not recommended for remote console access)
• A syslog server

Basic Design
This design represents a basic Internet edge design with additional interfaces for DMZ support. The following assumptions are made regarding this design:
• A dedicated, single Internet connection, up to a DS-3 in bandwidth.
• The firewall platform is a PIX 515, although you can use a PIX 525 if you require Gigabit Ethernet interface support.
• If the PIX terminates VPN traffic, configure the PIX 515 to support three interfaces plus a VPN Accelerator Card (VAC).
• If a separate device terminates VPN traffic, such as a VPN 3000 series concentrator, at least four interfaces are required to provide a separate DMZ for VPN traffic. Provide six interfaces (using a four-port Fast Ethernet card) to cover future growth.

This design handles traffic from 100 to 3,500 users, with reasonable session performance. These limits assume reasonable session bandwidth, based on a T1 at the low end and a T3 at the high end.


Figure 3-2    Basic Design

[Figure: the PIX firewall at the center of the design, with connections to the Enterprise (inside), DMZ services, and the Internet (outside).]

Figure 3-2 illustrates the firewall-centric nature of this small-to-medium design. Use a separate interface to segregate DMZ assets. Layer 2 switches provide host connectivity and NIDS support on each of the firewall interfaces.

Note

Cisco recommends that you use separate switches rather than VLANs on a common switch to mitigate any potential Layer 2 configuration and security issues. If you use a separate device to terminate VPN traffic, the outside (public) interfaces of both the VPN device and firewall share a common subnet. The inside (private) interface of the VPN device feeds towards a separate DMZ interface on the firewall, which provides you with the ability to enforce filtering and stateful inspection rules relative to VPN traffic.

Configuration
The following is the configuration for the PIX firewall shown in this design. The next lines define the interface names and security levels associated with the PIX.
nameif ethernet0 outside security0
nameif ethernet1 inside security100
nameif ethernet2 vpn security60
nameif ethernet3 dmz security15
nameif ethernet4 intf4 security20
nameif ethernet5 intf5 security25

The next lines display the Telnet/SSH and enable passwords associated with this PIX
enable password 2KFQnbNIdI.2KYOU encrypted
passwd 2KFQnbNIdI.2KYOU encrypted

The next two lines define the hostname and domain name for this PIX. The PIX setup wizard collects this information, which also provides the basis for the SSH and SSL certificates used by the PIX.
hostname pix


domain-name cisco.com

The next two lines define the clock and time zone characteristics used by the PIX.
clock timezone EST -5
clock summer-time EDT recurring

This next section of the configuration defines the fixup protocols currently active on the PIX. Fixup protocols help the PIX define the stateful behavior of complex protocols, so that ports are properly dynamically opened for the operation of these applications, without exposing the enterprise environment to needless numbers of open ports.
fixup protocol ftp 21
fixup protocol http 80
fixup protocol h323 h225 1720
fixup protocol h323 ras 1718-1719
fixup protocol ils 389
fixup protocol rsh 514
fixup protocol rtsp 554
fixup protocol smtp 25
fixup protocol sqlnet 1521
fixup protocol sip 5060
fixup protocol skinny 2000

This section defines the names and network object groups used in PIX firewall rules. These names and object groups simplify the identities of hosts within firewall rule definitions.
names
name 192.168.1.65 webserver
name 10.0.1.2 vpn3000-private
name 192.168.1.67 dnsserver
name 10.0.1.128 vpnusers
name 192.168.1.66 mailserver
name 10.1.0.0 internalsvrs
name 10.10.0.0 mgmtservers
name 200.200.200.2 outerrouter
name 10.10.0.67 console
name 10.10.0.66 SMNP
name 10.10.0.65 syslog
name 10.10.0.68 tftpsvr
name 10.10.0.69 aaa
name 10.10.0.70 ntpsvr
name 10.1.0.65 intmailsvr
object-group network dmzservers
  description Servers in DMZ
  network-object webserver 255.255.255.255
  network-object mailserver 255.255.255.255
  network-object dnsserver 255.255.255.255

In addition to using object groups to group hosts, you can also use object groups to bundle TCP/UDP services in a common definition. This helps reduce the number of line entries in the firewall's ACLs. This configuration provides two examples. First, the three services offered by a web server located within the DMZ (HTTP, HTTPS, FTP) are bundled together to simplify the number of lines required to properly firewall these services. Second, as an alternative to defining individual network management servers that perform certain functions, management protocols are grouped into four areas: TCP mgmt servers, TCP mgmt clients, UDP mgmt servers, and UDP mgmt clients.

Note

While creating these management groups greatly simplifies management protocol traffic flowing from the Internet edge, it is assumed that an upstream internal router to the management subnet has ACLs which determine which specific protocols go to specific management hosts.


object-group service webservices tcp
  description Services provided by the webserver
  port-object eq ftp
  port-object eq https
  port-object eq www
object-group service tcp-mgmt-svcs tcp
  description TCP-based network management services
  port-object eq tacacs
object-group service tcp-mgmt-clients tcp
  description TCP-based network management clients
  port-object eq ssh
  port-object eq https
object-group service udp-mgmt-svcs udp
  description UDP-based network management services
  port-object eq syslog
  port-object eq tftp
  port-object eq ntp
  port-object eq snmptrap
  port-object eq tacacs
object-group service udp-mgmt-clients udp
  description UDP-based network management clients
  port-object eq snmp
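The line-count savings from service object groups can be illustrated with a small sketch: one group-based rule stands in for one access-control entry per port in the group. The helper below is illustrative only, not PIX behavior.

```python
# Illustrative expansion of a service object-group into individual
# access-control entries (ACEs). Bundling FTP/HTTPS/WWW under one
# "webservices" group lets a single configured rule stand for three
# effective entries.
WEBSERVICES = ["ftp", "https", "www"]  # ports bundled in the object-group

def expand_rule(action: str, proto: str, dest: str,
                service_group: list[str]) -> list[str]:
    """Expand one group-based rule into the equivalent per-port ACEs."""
    return [f"{action} {proto} any host {dest} eq {svc}"
            for svc in service_group]

aces = expand_rule("permit", "tcp", "200.200.200.65", WEBSERVICES)
for ace in aces:
    print(ace)
# One configured line stands in for len(WEBSERVICES) entries.
```

The same multiplication applies to the management groups: a single rule referencing udp-mgmt-svcs covers all five UDP management ports at once.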

While you should not permit traffic to flow from outside of a firewall directly to hosts on the internal network, a special case exists for event data (syslogs, authentication, SNMP traps) coming from the outer router to the network management subnet. The first two lines of the outside interface's ACL permit this traffic.
access-list outside_access_in permit udp host outerrouter mgmtservers 255.255.0.0 object-group udp-mgmt-svcs
access-list outside_access_in permit tcp host outerrouter mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs

The remainder of the outside interface's ACL is straightforward. It provides external access to hosts located within the DMZ. This follows the basic security policy recommendation that all externally originated traffic must flow to a DMZ, and not directly to internal subnets.
access-list outside_access_in permit tcp any object-group webservices host 200.200.200.65
access-list outside_access_in permit tcp any host 200.200.200.66 eq smtp
access-list outside_access_in permit udp any host 200.200.200.67 eq domain

The NAT rules established further down in this configuration require that all internal hosts are translated into global pool 1 on the other, lower level interfaces. However, since both the VPN and DMZ interfaces use private IP addressing, it is undesirable to translate the private IP addresses of the internal network for permitted traffic flowing to these interfaces. This outbound nat0 ACL provides no NAT translation for these cases.
access-list inside_outbound_nat0_acl permit ip internalsvrs 255.255.0.0 vpnusers 255.255.255.128
access-list inside_outbound_nat0_acl permit ip mgmtservers 255.255.0.0 host vpn3000-private
access-list inside_outbound_nat0_acl permit ip any object-group dmzservers
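The exemption logic amounts to: if a flow matches a (source, destination) pair in the nat 0 ACL, bypass translation; otherwise fall through to global pool 1. A sketch of that decision, with the pair list mirroring the ACL above (the helper name is illustrative, not a PIX feature):

```python
import ipaddress

# (source network, destination network) pairs left untranslated,
# mirroring inside_outbound_nat0_acl above.
EXEMPT = [
    ("10.1.0.0/16", "10.0.1.128/25"),   # internalsvrs -> vpnusers
    ("10.10.0.0/16", "10.0.1.2/32"),    # mgmtservers -> vpn3000-private
    ("0.0.0.0/0", "192.168.1.65/32"),   # any -> webserver (dmzservers)
    ("0.0.0.0/0", "192.168.1.66/32"),   # any -> mailserver (dmzservers)
    ("0.0.0.0/0", "192.168.1.67/32"),   # any -> dnsserver (dmzservers)
]

def needs_nat(src: str, dst: str) -> bool:
    """Decide whether a flow is translated (True) or exempt (False)."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for src_net, dst_net in EXEMPT:
        if s in ipaddress.ip_network(src_net) and d in ipaddress.ip_network(dst_net):
            return False  # matched the nat 0 ACL: no translation
    return True           # falls through to nat ... 1 (global pool 1)
```

For example, an internal server reaching a VPN user keeps its private address, while the same server reaching an Internet host is translated.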

The following ACL applied to the VPN interface defines the allowed management traffic from the VPN concentrator to the network management subnet, and also allows VPN users to access the internal network, which is on a higher level interface. Note that if there are any restrictions on areas of the internal network accessible by VPN users, this ACL can be modified to suit. Traffic from VPN users is permitted by default to the DMZ and outside interfaces.
access-list vpn_access_in permit udp host vpn3000-private mgmtservers 255.255.0.0 object-group udp-mgmt-svcs


access-list vpn_access_in permit tcp host vpn3000-private mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs
access-list vpn_access_in deny ip host vpn3000-private any
access-list vpn_access_in permit ip vpnusers 255.255.255.128 any

The NAT rules established further down in this configuration require that all VPN hosts are translated into global pool 1 on the other, lower level interfaces. However, since hosts on the DMZ interface use private IP addressing, it is undesirable to translate the private IP addresses of the VPN network for permitted traffic flowing to the DMZ interface. This outbound nat 0 ACL provides no NAT translation for these cases.
access-list vpn_outbound_nat0_acl permit ip vpnusers 255.255.255.128 object-group dmzservers

Permitted traffic from the internal network is defined by the internal interface's ACL. First you define the network management traffic allowed to flow from the internal management subnet to the outer router outside the firewall, the VPN concentrator, and servers within the DMZ. Note the use of the previously defined object groups to minimize the number of protocol definitions required. Otherwise, no other traffic is permitted to flow to/from the management subnet.
access-list inside_access_in permit udp mgmtservers 255.255.0.0 object-group udp-mgmt-svcs host outerrouter
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs host outerrouter
access-list inside_access_in permit udp mgmtservers 255.255.0.0 host outerrouter object-group udp-mgmt-clients
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 host outerrouter object-group tcp-mgmt-clients
access-list inside_access_in permit udp mgmtservers 255.255.0.0 object-group udp-mgmt-svcs object-group dmzservers
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs object-group dmzservers
access-list inside_access_in permit udp mgmtservers 255.255.0.0 object-group dmzservers object-group udp-mgmt-clients
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 object-group dmzservers object-group tcp-mgmt-clients
access-list inside_access_in permit udp mgmtservers 255.255.0.0 object-group udp-mgmt-svcs host vpn3000-private
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs host vpn3000-private
access-list inside_access_in permit udp mgmtservers 255.255.0.0 host vpn3000-private object-group udp-mgmt-clients
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 host vpn3000-private object-group tcp-mgmt-clients
access-list inside_access_in deny ip mgmtservers 255.255.0.0 any

The internal mail server must be able to communicate with the external mail server using SMTP, as well as with the DNS server. These next two lines of the inside interface's ACL permit this traffic.
access-list inside_access_in permit tcp host intmailsvr host mailserver eq smtp
access-list inside_access_in permit udp host intmailsvr any eq domain

Since VPN users are permitted to access internal servers, this traffic is explicitly permitted. Otherwise traffic from internal servers is blocked from traversing the Internet edge. The next two lines of the inside interface’s ACL accomplish this.
access-list inside_access_in permit ip internalsvrs 255.255.0.0 vpnusers 255.255.255.128
access-list inside_access_in deny ip internalsvrs 255.255.0.0 any


Since an ACL is applied to the inside interface, its default posture of permitting traffic to flow to lower level interfaces is overridden by the implied 'deny ip any any' at the bottom of all ACLs. Therefore, an explicit 'permit ip any any' statement is required to allow traffic to flow from internal hosts to the Internet, the DMZ, and VPN users.
access-list inside_access_in permit ip any any

Since the DMZ interface is at a lower security level than the internal network, you must allow event management data to flow from the DMZ servers to the management subnet. The first two lines of the DMZ interface's ACL accomplish this.
access-list dmz_access_in permit udp object-group dmzservers mgmtservers 255.255.0.0 object-group udp-mgmt-svcs
access-list dmz_access_in permit tcp object-group dmzservers mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs

In the case of the DMZ, servers play very specific roles, and can be bound to those roles using the DMZ interface's ACL. For example, the web server is only permitted to respond to requests for its defined services (in an object group), and is not allowed to forward any other web-server-initiated traffic (due to the implied 'deny ip any any' at the bottom of the ACL).
access-list dmz_access_in permit tcp host webserver object-group webservices any
access-list dmz_access_in permit tcp host mailserver host intmailsvr eq smtp
access-list dmz_access_in permit tcp host mailserver any eq smtp
access-list dmz_access_in permit udp host dnsserver eq domain any
access-list dmz_access_in permit udp host dnsserver any eq domain

The following set of lines provides basic configuration of Telnet/SSH screen paging, enables logging (with timestamps) to the network management host, and defines the Ethernet characteristics of the PIX interfaces.
pager lines 24
logging on
logging timestamp
logging trap notifications
logging history warnings
logging host inside 10.1.0.65
interface ethernet0 auto
interface ethernet1 auto
interface ethernet2 auto
interface ethernet3 auto
interface ethernet4 auto shutdown
interface ethernet5 auto shutdown
mtu outside 1500
mtu inside 1500
mtu vpn 1500
mtu dmz 1500
mtu intf4 1500
mtu intf5 1500

The following section defines the IP addresses for each of the PIX interfaces.
ip address outside 200.200.200.1 255.255.255.0
ip address inside 10.0.0.1 255.255.255.0
ip address vpn 10.0.1.1 255.255.255.0
ip address dmz 192.168.1.1 255.255.255.0
ip address intf4 127.0.0.1 255.255.255.255
ip address intf5 127.0.0.1 255.255.255.255

The following lines provide RFC 2827 anti-spoofing functionality.
ip verify reverse-path interface outside
ip verify reverse-path interface inside


ip verify reverse-path interface vpn
ip verify reverse-path interface dmz

The following lines relate to the PIX's basic IDS capabilities, and specify that matches against the info and attack signature sets result in an alarm posted to the syslog server.
ip audit name Attack attack action alarm
ip audit name Info info action alarm
ip audit interface outside Info
ip audit interface outside Attack
ip audit interface inside Info
ip audit interface inside Attack
ip audit interface vpn Info
ip audit interface vpn Attack
ip audit interface dmz Info
ip audit interface dmz Attack
ip audit info action alarm
ip audit attack action alarm

Multi-interface PIX platforms (515E, 525, 535, and the FWSM for the Catalyst 6000/Cisco 7600) are capable of operating with a secondary unit in an active/standby failover configuration. For this basic design, use a single firewall and disable failover.
no failover
failover timeout 0:00:00
failover poll 15
failover ip address outside 0.0.0.0
failover ip address inside 0.0.0.0
failover ip address vpn 0.0.0.0
failover ip address dmz 0.0.0.0
failover ip address intf4 0.0.0.0
failover ip address intf5 0.0.0.0

The following section of commands relates to the functionality of PIX Device Manager (PDM). Most importantly, PDM requires that you define network objects and groups relative to their interface.
pdm location vpn3000-private 255.255.255.255 vpn
pdm location vpnusers 255.255.255.128 vpn
pdm location webserver 255.255.255.255 dmz
pdm location mailserver 255.255.255.255 dmz
pdm location dnsserver 255.255.255.255 dmz
pdm location internalsvrs 255.255.0.0 inside
pdm location mgmtservers 255.255.0.0 inside
pdm location outerrouter 255.255.255.255 outside
pdm location syslog 255.255.255.255 inside
pdm location SMNP 255.255.255.255 inside
pdm location console 255.255.255.255 inside
pdm location tftpsvr 255.255.255.255 inside
pdm location aaa 255.255.255.255 inside
pdm location ntpsvr 255.255.255.255 inside
pdm location intmailsvr 255.255.255.255 inside
pdm group dmzservers dmz
pdm logging warnings 100
pdm history enable

The following line defines the ARP timeout characteristic for the PIX. By default, ARP entries remain in cache for 4 hours (14,400 seconds).
arp timeout 14400


The following lines define the address translation characteristics of the PIX. The global lines indicate that the IP addresses of the defined PIX interfaces are used for port address translation (PAT, many:1 dynamic NAT).
global (outside) 1 interface
global (vpn) 1 interface
global (dmz) 1 interface

Users on the inside and VPN interfaces are generally translated as they access hosts on other interfaces. However, as previously noted, there are exceptions in accessing hosts on the inside, DMZ, and VPN interfaces, due to the common use of RFC 1918 private IP addresses. Apply the previously defined outbound nat 0 ACLs at the start of each interface's NAT definition to address this issue.
nat (inside) 0 access-list inside_outbound_nat0_acl
nat (inside) 1 0.0.0.0 0.0.0.0 0 0
nat (vpn) 0 access-list vpn_outbound_nat0_acl
nat (vpn) 1 vpn3000-private 255.255.255.255 0 0
nat (vpn) 1 vpnusers 255.255.255.128 0 0

Statically translate the DMZ servers and internal network management hosts on the outside interface to allow external hosts and the outer router (respectively) to access these hosts using publicly accessible IP addresses.
static (dmz,outside) 200.200.200.65 webserver netmask 255.255.255.255 0 0
static (dmz,outside) 200.200.200.66 mailserver netmask 255.255.255.255 0 0
static (dmz,outside) 200.200.200.67 dnsserver netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.100 laptop netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.129 syslog netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.130 SMNP netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.131 console netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.132 tftpsvr netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.133 aaa netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.134 ntpsvr netmask 255.255.255.255 0 0

The following lines apply the firewall ACLs to their associated interfaces.
access-group outside_access_in in interface outside
access-group inside_access_in in interface inside
access-group vpn_access_in in interface vpn
access-group dmz_access_in in interface dmz

The following three lines enable authenticated RIPv2 routing protocol support on the PIX firewall. The first line provides the ability for the firewall to receive routes from the outer router, including a default route. As an alternative, a statically defined default route may also be applied, using the PIX ‘route’ command.
rip outside passive version 2 authentication md5 cisco 1

This is the second of three lines defining authenticated RIPv2 support on the PIX, this time on the inside interface. This allows the PIX to learn routes to internal subnets via RIP from adjacent interior routers, without statically defining them.
rip inside passive version 2 authentication md5 cisco 1

Thus far, the PIX has participated passively in the authenticated RIPv2 routing protocol, accepting routing updates from its router neighbors on the inside and outside interfaces. This third RIPv2 configuration line configures the PIX to actively send a default route to its neighbors on the inside interface. This allows the internal routers to correctly point and propagate a default route towards the PIX.
rip inside default version 2 authentication md5 cisco 1


The following lines provide the (default) timeout characteristics for stateful connections passing through the PIX.
timeout xlate 3:00:00
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h323 0:05:00 sip 0:30:00 sip_media 0:02:00
timeout uauth 0:05:00 absolute

The following lines define the (default) definition of AAA servers used in operating the PIX.
aaa-server TACACS+ protocol tacacs+
aaa-server TACACS+ (inside) host aaa cisco timeout 5
aaa-server RADIUS protocol radius
aaa-server LOCAL protocol local

The following lines provide the NTP server definition used to maintain the PIX clock. Note that in this example, ntpsvr represents an internal NTP server, and MD5 authentication is used to verify NTP updates.
ntp authentication-key 1 md5 ********
ntp authenticate
ntp trusted-key 1
ntp server ntpsvr key 1 source inside prefer

The following two lines enable PDM to be run from the network management workstation.
http server enable
http console 255.255.255.255 inside

The following lines deal with the SNMP management configuration, which uses the host named 'SMNP' (defined in the names section) on the network management subnet.
snmp-server host inside SMNP
snmp-server location Rack A3
snmp-server contact Joe User
snmp-server community cisco
snmp-server enable traps

In this design, the PIX firewall is configured to write its configuration to an external TFTP server whenever changes are made and a 'write network' command is issued.
tftp-server inside tftpsvr /pix.cfg

The following two lines relate to inherent security features offered by the PIX. The first line enables the PIX floodguard feature, which allows the PIX to reclaim incomplete connection resources (such as embryonic connections) in the event of an insufficient resource condition, even if the timers have not expired. The second line specifies that when an incoming packet does a route lookup, the incoming interface is not used to determine which interface the packet should go to, or which is the next hop. For a multi-interface PIX, enabling 'sysopt route dnat' provides a performance advantage.
floodguard enable
sysopt route dnat

The following lines provide SSH configuration information, allowing the network management station to use an SSH client to access the PIX remotely.
telnet timeout 5
ssh console 255.255.255.255 inside
ssh timeout 5

The telnet/console/SSH terminal width is set to 80 characters by default.
terminal width 80


All PIX configurations end with a crypto checksum, which allows an administrator to detect whether changes have been made to the configuration.
Cryptochecksum:08e3727f3c5158b69de91cf550345d22

Basic Forwarding
For firewall-centric designs, the number of flow relationships is expressed by the equation N(N-1), where N is the number of firewall interfaces. For a four-interface PIX firewall, as shown in Figure 3-2, this means there are twelve separate flow relationships, as shown in Table 3-3:
Table 3-3 Flow Relationships—Basic Design

Inside to Outside: Represents traffic flows originating from internal hosts toward the Internet. This traffic is predominantly inspected by implied stateful inspection, as well as outbound anti-spoofing and RFC 1918 rules. Internal servers do not directly access the Internet; therefore, filters must be put in place to explicitly exclude traffic from these hosts.

Inside to DMZ: Represents traffic flows originating from internal hosts to DMZ servers. Much of this is traffic related to DNS and e-mail from internal hosts accessing the Internet and collecting e-mail. However, this also includes DMZ management and e-commerce application traffic. Although stateful inspection is in place, explicit filters should be used to limit this traffic to defined permissible DMZ traffic.

Inside to VPN: Generally represents the return path for VPN traffic accessing internal servers. This traffic is predominantly inspected by implied stateful inspection. Traffic to VPN-connected servers should be limited by explicit filtering rules.

Outside to Inside: Represents traffic flows originating from Internet hosts to internal hosts. Block this traffic, with the exception of reply traffic statefully matching sessions originating from internal hosts.

Outside to DMZ: Generally represents traffic required to support e-commerce applications (HTTP/HTTPS, DNS, SMTP, etc.). This traffic is explicitly allowed only via firewall rules.

Outside to VPN: Represents traffic that flows from Internet hosts to VPN-connected hosts. It is treated similarly to Outside-to-Inside traffic.


DMZ to Inside: Represents any traffic originating from the DMZ toward internal hosts, primarily reply traffic, with the exception of server event management traffic (for example, syslog). Block this traffic, with the exception of reply traffic statefully matching sessions originating from internal hosts. Put explicit filters in place to allow event management traffic.

DMZ to Outside: Represents any traffic originating from the DMZ toward Internet hosts, primarily reply traffic, with the exception of DNS requests originating from the DNS server. Put explicit filters on replies in place, based on the behavior of DMZ-based applications.

DMZ to VPN: Represents any traffic originating from the DMZ toward VPN-connected users, primarily reply traffic from DMZ servers. Block this traffic, with the exception of reply traffic statefully matching sessions originating from internal hosts. There may be cases where DMZ servers need to communicate with VPN-connected servers for e-commerce, DNS, and SMTP purposes. This traffic must be explicitly permitted by filtering rules.

VPN to Inside: Unlike DMZ-originated traffic, which is predominantly reply traffic, VPN traffic toward internal hosts generally falls into two categories:
1. Replies from VPN-connected remote servers to originating internal hosts. Block this traffic, with the exception of reply traffic statefully matching sessions originating from internal hosts. Add explicit filters to allow event management traffic.
2. Originating requests from VPN-connected remote users towards internal servers. This traffic is explicitly allowed only via firewall rules.

VPN-Outside: This traffic is unusual in scope, as it assumes that VPN-connected host users require access to the Internet while connected to the enterprise. This is often the case, as many remote-access VPN implementations may not allow split-tunneling at the far end. This traffic is treated similarly to Inside-Outside traffic, in that it is predominantly inspected by implied stateful inspection, as well as outbound anti-spoofing and RFC 1918 rules. VPN-connected remote servers are not expected to require Internet access. Therefore, put filters in place to explicitly exclude traffic from these hosts.

VPN-DMZ: Again, there are two separate cases for VPN traffic toward the DMZ:

1. VPN-connected servers should not need to communicate with DMZ hosts, except for e-commerce requirements (backend applications, replication, etc.), DNS, and SMTP transfers. Explicitly limit this traffic by filtering rules, in addition to the implicit stateful inspection that exists.

2. Traffic from VPN-connected users is similar to Inside-DMZ traffic. Although stateful inspection is in place, use explicit filters to limit this traffic to defined permissible DMZ traffic.

In the previous small Internet edge design, the firewall has two interfaces (inside, outside) with fixed security levels. Therefore, that design did not consider the hierarchy of firewall interfaces. In this design, you must consider the security level values of each interface, as this impacts the basic forwarding operation to the firewall. Consider the following guidelines:
• The basic forwarding policy of the PIX firewall permits traffic originating from a higher-level interface to a lower-level interface and allows stateful replies in reverse.

• The basic forwarding policy of the PIX firewall drops traffic originating from a lower-level interface to a higher-level interface.

• The inside interface always has a security level of 100 and, therefore, implicitly forwards traffic to all other interfaces.

• The outside interface always has a security level of 0; therefore, all traffic entering this interface must match an existing session state or be explicitly permitted by a rule.

• Assign other interfaces a value between 1 and 99.

• If two interfaces are set to the same security level, no communication is allowed between those interfaces.

• This design assumes that VPN traffic has a higher degree of trust than DMZ traffic. Therefore, the security level of the VPN interface should be higher than that of the DMZ. This results in the following basic traffic flows implicitly forwarded by the stateful inspection engine (with stateful replies allowed in reverse):
– Inside-VPN
– Inside-DMZ
– Inside-Outside
– VPN-DMZ
– VPN-Outside
– DMZ-Outside

Routing is generally straightforward, with the exception of the VPN interface. The following routing guidelines apply:

• The outer router has a default route to the ISP and is often required to participate in a routing protocol (which requires the redistribution of static routes, depending on how you implemented NAT).

• The firewall, and any separate VPN device, has a default route toward the IP address of the inside interface of the outer router. This is generally a static route.

• The firewall needs routes to networks connected to the inside, VPN, and DMZ interfaces.
– In smaller implementations, in which these interfaces connect to a single subnet, no additional effort is necessary because all known subnets are directly connected.
– However, if routing, content switching, or other Layer 3+ forwarding mechanisms exist behind a firewall interface, it is necessary to populate the firewall's routing table to establish reachability to networks accessible through these mechanisms. This is accomplished via static routes, or via RIP routing updates (preferably RIP v2, which supports routing protocol authentication).

• The following rules apply to RIP support on the PIX firewall:
– The PIX firewall passively listens for RIP updates on RIP-enabled interfaces.
– If configured to do so, the PIX firewall advertises a default route (but not specific learned routes) via RIP to other routers connected to the interface.
– The PIX firewall supports both RIP v1 and v2. RIP v2 is preferred, because it supports variable-length subnet masks and routing protocol authentication.

• Routing between the firewall and a separate VPN device is fairly easy for remote access VPNs, because the IPSec Mode Config addressing mechanism results in VPN-connected remote hosts appearing to exist on the same subnet as the firewall's VPN interface.

• Routing between the firewall and a separate VPN device may be complex in the case of site-to-site VPN connectivity, if the network behind the VPN tunnel is complex. This is due to RIP routing limitations on the PIX firewall.
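As a sketch of these routing options, a static route on the PIX can provide reachability to a subnet behind a Layer 3 device on the DMZ. The 192.168.2.0/24 subnet and 192.168.1.10 next hop below are illustrative assumptions, not part of the reference design:

```
route dmz 192.168.2.0 255.255.255.0 192.168.1.10 1
```

Alternatively, authenticated RIP v2 allows the PIX to learn such routes dynamically (the key is a placeholder):

```
rip inside passive version 2 authentication md5 <key> 1
```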

Security Policy Functional Deployment
The defined security policy functions deployed in this design are shown in Table 3-4.


Table 3-4

Security Policy Function—Basic Design

Management Traffic Rules (Element Security)
Deployment: Various, based on element security features of the elements within the design
Comment: The general recommendation is to use SSH for console connectivity, SSL for device manager connectivity (PDM and the VPN 3000 device manager use SSL), or IPSec to protect naturally non-encrypted traffic such as Telnet, SNMP, and HTTP. This provides encryption of interactive management streams.

RFC 2827 In
Deployment: ip verify reverse-path on firewall or outer router
Comment: The ip verify reverse-path interface outside command on the PIX firewall provides this anti-spoofing functionality. When implemented on the outer IOS router, enable this feature with the ip verify unicast reverse-path command applied to the router's Internet-connected interface.

RFC 2827 Out
Deployment: ip verify reverse-path on firewall
Comment: The ip verify reverse-path interface inside command on the PIX firewall provides this anti-spoofing functionality.

RFC 1918 In
Deployment: ACL on outer router
Comment: Because NAT occurs at the firewall level within this design, no traffic passing inward through the outer router should be destined for an RFC 1918 addressed host. An ACL applied inbound on the Internet-facing interface of the outer router provides sufficient protection.

RFC 1918 Out
Deployment: ACL on outer router
Comment: Implement an ACL to ensure that no outbound traffic is destined for RFC 1918 addresses.

Basic Filtering In
Deployment: ACL in on the outer router outside interface
Comment: Craft the ACL to explicitly deny IP traffic that has been identified as harmful to the enterprise (based on security policy) when sourced from the Internet.

Basic Filtering Out
Deployment: PIX firewall rules on inside, VPN, and DMZ interfaces
Comment: Create firewall rules to explicitly deny IP traffic sourced from within the enterprise which is not permitted, or which is otherwise to be explicitly filtered, based on security policy.

Stateful Inspection Rules
Deployment: PIX firewall rules on inside interface
Comment: The purpose of these firewall rules is to permit IP traffic to flow statefully, as required to support business applications. You generally apply these rules as exceptions to the implied traffic blocking which occurs for traffic originating from a lower security level interface.
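The outer-router entries in Table 3-4 can be sketched as follows on an IOS router. The interface name and ACL number are illustrative assumptions, and the ACL would be extended with the harmful-traffic denies called for by the security policy (note that ip verify unicast reverse-path requires CEF to be enabled):

```
interface Serial0/0
 ip verify unicast reverse-path
 ip access-group 101 in
!
access-list 101 deny ip 10.0.0.0 0.255.255.255 any
access-list 101 deny ip 172.16.0.0 0.15.255.255 any
access-list 101 deny ip 192.168.0.0 0.0.255.255 any
access-list 101 permit ip any any
```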

NAT Issues
The following factors determine the number of real (Internet routable) IP addresses required to support this design:

• In general, host IP addresses from higher-level interfaces are translated to addresses in lower-level interfaces.

• Individual servers (or aggregation devices, such as load balancers) in the DMZ require unique real IP addresses, either applied directly to the device (meaning no address translation) or via 1:1 static NAT.

• Host applications have differing levels of support for the translation process. Hosts may require public IP addresses or specific methods of address translation to resolve incompatible application issues. Note that:
– PAT generally supports TCP/UDP based applications. Other IP protocols (such as those used natively by IPSec VPNs) are generally not supported by PAT.
– Applications that carry IP address information within the packet payload may not function correctly, as this information is not translated.
– Applications that require the use of specific source TCP/UDP ports are generally not supported by PAT, as the PAT process randomizes this information. However, these applications may be supported by static or dynamic 1:1 NAT.
– Applications in which endpoints separately checksum IP header information, such as IPSec Authentication Header mode, do not function with any form of address translation.

Address translation is usually performed on the firewall, as this is the Internet edge device that interconnects all the areas (outside, inside, DMZs) of the design. The firewall configuration provided illustrates this point. An alternative is to perform address translation functions on devices behind the firewall. This provides the benefit of making internal traffic appear to be adjacent to the firewall, which simplifies firewall traffic forwarding because summarization of internal network routes, either via static routes or via routing protocol support on the firewall, is not required. Examples include:

• Performing address translation on the inner routers for internal networks

• Use of load-balancer and content network engines within the DMZ

• Use of mode-config (for remote access VPNs) or other NAT mechanisms (for site-to-site VPNs) on VPN devices connected externally to the firewall
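To illustrate the PAT versus 1:1 static NAT distinction described above, using the PIX commands that appear later in this chapter (the 10.1.0.100 host and its 200.200.200.101 global address are hypothetical):

```
global (outside) 1 interface
nat (inside) 1 0.0.0.0 0.0.0.0 0 0
static (inside,outside) 200.200.200.101 10.1.0.100 netmask 255.255.255.255 0 0
```

The first two lines translate all inside hosts to the outside interface address via PAT; the static command gives a single host a dedicated 1:1 translation, which preserves source ports and supports non-TCP/UDP protocols.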

DMZ Design
This Internet edge design makes use of DMZs that are adjacent to the inside and outside networks. This supports the use of a single firewall engine to define the forwarding relationships between separate network interfaces. The purpose of this DMZ design is to allow direct n-way associations between the inside, outside, and DMZ networks, without requiring traffic between any two networks to pass through an intermediate network. This provides the benefits of:
• Establishing a unique set of firewall rules between any two firewall interfaces (see Basic Forwarding) based on security policy

• Overall higher performance because the firewall interfaces are dedicated to the networks they support

• Improved security because hosts connected to each interface only see traffic relevant to that area of the network (no transitional data paths)


The switch in the DMZ provides connectivity for that DMZ's hosts, as well as a means to monitor traffic flowing on that DMZ via NIDS. This switch should have strict Layer 2 port security, which limits the maximum number of MAC addresses per port (to prevent flooding attacks) and statically defines the MAC address of the host assigned to that port.

For Cisco Catalyst switches, you can establish a private VLAN to group DMZ hosts that need to communicate with each other and, more importantly, to prevent communication between DMZ hosts that do not need to communicate with each other. This allows you to place dissimilar servers on a common DMZ, as opposed to administering separate DMZs for each set of dissimilar hosts.

This switch provides basic connectivity for DMZ hosts but does not exclude the use of more advanced networking infrastructures. However, if multiple subnets exist within the DMZ, you must ensure that the firewall has the required routing information to forward traffic to subnets not directly connected within the DMZ design. You can accomplish this via static routes, routing protocols, or address translation behind the firewall's DMZ interface.
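A minimal port security sketch for a Catalyst IOS switch port serving a DMZ host, on switches that support the switchport port-security syntax (the port number and MAC address are hypothetical):

```
interface FastEthernet0/2
 switchport mode access
 switchport port-security
 switchport port-security maximum 1
 switchport port-security mac-address 0000.0c11.2233
 switchport port-security violation shutdown
```

This limits the port to one statically defined MAC address and shuts the port down on a violation.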

Intrusion Detection Capabilities
The use of SPAN ports on the switches connected to each firewall interface supports the connectivity of NIDS sensors to monitor traffic flowing across that interface. Connect the management ports of these sensors either to an out-of-band network or to the inside switch to protect sensor communication, as well as to provide the ability to shun attackers by adjusting firewall or outer router ACLs. Servers placed in the DMZ should have HIDS installed to monitor potential intrusions and aberrant server behavior. HIDS event management traffic requires specific firewall rules.
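A SPAN session sketch for a Catalyst IOS switch; the interface assignments are hypothetical, with FastEthernet0/1 as the firewall-facing port being monitored and FastEthernet0/24 connecting the NIDS sensor's monitoring interface:

```
monitor session 1 source interface FastEthernet0/1 both
monitor session 1 destination interface FastEthernet0/24
```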

Network Management
In the basic design, with a single firewall providing Internet edge services, management of the design is relatively simple. A single host with the following tools is sufficient for basic management:
• A web browser with SSL support, for use in accessing PIX Device Manager

• An SSH client (Telnet is not recommended for remote console access.)

• A syslog server

You should also manage the routers and switches via SSH. Configure SNMP for read-only support. Cisco does not recommend the use of Telnet, HTTP, and SNMP read-write network management tools for devices on the Internet edge.
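A sketch of this management posture on an IOS router or switch; the community string and management-subnet ACL are assumptions, and the domain name follows the PIX configuration in this chapter:

```
ip domain-name cisco.com
crypto key generate rsa
!
access-list 10 permit 10.10.0.0 0.0.255.255
snmp-server community <read-only-string> RO 10
!
line vty 0 4
 transport input ssh
```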

Partially Resilient Design
The partially resilient Internet edge design (shown in Figure 3-3) combines the single forwarding path of the basic design with Layer 3 functional resiliency for firewalls (via failover) and routers (via HSRP or VRRP). To simplify forwarding (without spanning-tree protocol support requirements) and allow the use of fewer NIDS sensors, Layer 2 resiliency is not provided.

Figure 3-3 Partially Resilient Design (figure shows the Enterprise, DMZ, and Internet areas)

Configuration
The following is the configuration for the PIX firewall shown within this design. The next lines define the PIX interface names and security levels.
nameif gb-ethernet0 outside security0
nameif gb-ethernet1 inside security100
nameif ethernet0 dmz security50
nameif ethernet1 vpn security100

The next lines provide the PIX Telnet/SSH and enable passwords.
enable password 2KFQnbNIdI.2KYOU encrypted
passwd 2KFQnbNIdI.2KYOU encrypted

The next two lines define the hostname and domain for this PIX. The PIX setup wizard supplies this information, which also forms the basis for the SSH and SSL certificates used by the PIX.
hostname pix
domain-name cisco.com

The next two lines define the PIX clock timezone characteristics.
clock timezone EST -5
clock summer-time EDT recurring

This next section defines the fixup protocols currently active within the PIX. Fixup protocols define the stateful behavior of complex protocols, so that the required ports are dynamically opened for these applications to operate without exposing the enterprise environment to a needless number of open ports.
fixup protocol ftp 21
fixup protocol http 80
fixup protocol h323 h225 1720
fixup protocol h323 ras 1718-1719
fixup protocol ils 389
fixup protocol rsh 514
fixup protocol rtsp 554
fixup protocol smtp 25
fixup protocol sqlnet 1521
fixup protocol sip 5060
fixup protocol skinny 2000


This section provides the PIX firewall name and network object group definitions. These names and object groups simplify the identification of hosts within firewall rule definitions.
names
name 192.168.1.65 webserver
name 10.0.1.3 vpn3000-private
name 192.168.1.67 dnsserver
name 10.0.1.128 vpnusers
name 192.168.1.66 mailserver
name 10.1.0.0 internalsvrs
name 10.10.0.0 mgmtservers
name 200.200.200.3 outerrouter1
name 200.200.200.4 outerrouter2
name 10.10.0.67 console
name 10.10.0.66 SMNP
name 10.10.0.65 syslog
name 10.10.0.68 tftpsvr
name 10.10.0.69 aaa
name 10.10.0.70 ntpsvr
name 10.1.0.65 intmailsvr
object-group network dmzservers
  description Servers in DMZ
  network-object webserver 255.255.255.255
  network-object mailserver 255.255.255.255
  network-object dnsserver 255.255.255.255
object-group network outerrouters
  description Outer Routers outside the firewalls
  network-object outerrouter1 255.255.255.255
  network-object outerrouter2 255.255.255.255

In addition to using object groups to group hosts, you can also use object groups to bundle TCP/UDP services into a common definition. This helps reduce the number of line entries in the firewall's ACLs. This configuration provides two examples. First, the three services offered by a web server located within the DMZ (HTTP, HTTPS, and FTP) are bundled together to reduce the number of lines required to properly firewall these services. Second, as an alternative to defining individual network management servers that perform certain functions, management protocols are grouped into four areas: TCP mgmt servers, TCP mgmt clients, UDP mgmt servers, and UDP mgmt clients.

Note

While creating these management groups greatly simplifies management protocol traffic flowing from the Internet edge, it is assumed that an upstream internal router to the management subnet has ACLs which determine which specific protocols go to specific management hosts.

object-group service webservices tcp
  description Services provided by the webserver
  port-object eq ftp
  port-object eq https
  port-object eq www
object-group service tcp-mgmt-svcs tcp
  description TCP-based network management services
  port-object eq tacacs
object-group service tcp-mgmt-clients tcp
  description TCP-based network management clients
  port-object eq ssh
  port-object eq https
object-group service udp-mgmt-svcs udp
  description UDP-based network management services
  port-object eq syslog
  port-object eq tftp
  port-object eq ntp


  port-object eq snmptrap
  port-object eq tacacs
object-group service udp-mgmt-clients udp
  description UDP-based network management clients
  port-object eq snmp
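The Note above assumes that an upstream internal router steers each management protocol to its specific host. A sketch of such an ACL, using the management host addresses defined by the name commands in this configuration (the ACL number and exact protocol-to-host mapping are illustrative assumptions):

```
access-list 120 permit udp any host 10.10.0.65 eq syslog
access-list 120 permit udp any host 10.10.0.68 eq tftp
access-list 120 permit udp any host 10.10.0.70 eq ntp
access-list 120 permit tcp any host 10.10.0.69 eq tacacs
access-list 120 deny ip any any log
```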

Although you should not permit traffic to flow from outside the firewall directly to hosts on the internal network, a special case must be made for event data (syslogs, authentication, SNMP traps) coming from the outer router to the network management subnet. This traffic is permitted by the first two lines of the outside interface's ACL.
access-list outside_access_in permit udp object-group outerrouters mgmtservers 255.255.0.0 object-group udp-mgmt-svcs
access-list outside_access_in permit tcp object-group outerrouters mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs

The remaining outside interface’s ACL is straightforward. It provides external access to hosts located within the DMZ. This follows the basic security policy recommendation that all externally originated traffic must flow into a DMZ, and not directly to internal subnets.
access-list outside_access_in permit tcp any host 200.200.200.65 object-group webservices
access-list outside_access_in permit tcp any host 200.200.200.66 eq smtp
access-list outside_access_in permit udp any host 200.200.200.67 eq domain

The NAT rules established later in this configuration require that all internal hosts be translated into global pool 1 on the other, lower-level interfaces. However, since both the VPN and DMZ interfaces use private IP addressing, it is undesirable to translate the private IP addresses of the internal network for permitted traffic flowing to these interfaces. This outbound nat0 ACL exempts these cases from NAT translation.
access-list inside_outbound_nat0_acl permit ip internalsvrs 255.255.0.0 vpnusers 255.255.255.128
access-list inside_outbound_nat0_acl permit ip mgmtservers 255.255.0.0 host vpn3000-private
access-list inside_outbound_nat0_acl permit ip any object-group dmzservers

The following ACL, applied to the VPN interface, defines the allowed management traffic from the VPN concentrator to the network management subnet, and allows VPN users to access the internal network, which is on a higher-level interface. Note that if there are any restrictions on the areas of the internal network accessible by VPN users, you can modify this ACL to suit. Traffic from VPN users is permitted by default to the DMZ and outside interfaces.
access-list vpn_access_in permit udp host vpn3000-private mgmtservers 255.255.0.0 object-group udp-mgmt-svcs
access-list vpn_access_in permit tcp host vpn3000-private mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs
access-list vpn_access_in deny ip host vpn3000-private any
access-list vpn_access_in permit ip vpnusers 255.255.255.128 any

The NAT rules established later in this configuration require that all VPN hosts are translated into global pool 1 on the other, lower level interfaces. However, since hosts on the DMZ interface use private IP addressing, it is undesirable to translate the private IP addresses of the VPN network for permitted traffic flowing to the DMZ interface. This outbound nat0 ACL provides no NAT translation for these cases.
access-list vpn_outbound_nat0_acl permit ip vpnusers 255.255.255.128 object-group dmzservers


Permitted traffic from the internal network is defined by the inside interface's ACL. First, define the network management traffic allowed to flow from the internal management subnet to the outer router outside the firewall, the VPN concentrator, and servers within the DMZ. Note the use of previously defined object groups to minimize the number of protocol definitions required. Otherwise, no other traffic is permitted to flow to or from the management subnet.
access-list inside_access_in permit udp mgmtservers 255.255.0.0 object-group udp-mgmt-svcs object-group outerrouters
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs object-group outerrouters
access-list inside_access_in permit udp mgmtservers 255.255.0.0 object-group outerrouters object-group udp-mgmt-clients
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 object-group outerrouters object-group tcp-mgmt-clients
access-list inside_access_in permit udp mgmtservers 255.255.0.0 object-group udp-mgmt-svcs object-group dmzservers
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs object-group dmzservers
access-list inside_access_in permit udp mgmtservers 255.255.0.0 object-group dmzservers object-group udp-mgmt-clients
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 object-group dmzservers object-group tcp-mgmt-clients
access-list inside_access_in permit udp mgmtservers 255.255.0.0 object-group udp-mgmt-svcs host vpn3000-private
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs host vpn3000-private
access-list inside_access_in permit udp mgmtservers 255.255.0.0 host vpn3000-private object-group udp-mgmt-clients
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 host vpn3000-private object-group tcp-mgmt-clients
access-list inside_access_in deny ip mgmtservers 255.255.0.0 any

The internal mail server must be able to communicate using SMTP to the external mail server, as well as to the DNS server. These next two lines of the inside interface’s ACL permit this traffic.
access-list inside_access_in permit tcp host intmailsvr host mailserver eq smtp
access-list inside_access_in permit udp host intmailsvr any eq domain

Since VPN users are permitted to access internal servers, this traffic is explicitly permitted. Otherwise, traffic from internal servers is blocked from traversing the Internet edge. The next two lines of the inside interface's ACL accomplish this.
access-list inside_access_in permit ip internalsvrs 255.255.0.0 vpnusers 255.255.255.128
access-list inside_access_in deny ip internalsvrs 255.255.0.0 any

Since an ACL is applied to the inside interface, its default posture of permitting traffic to flow to lower-level interfaces is overridden by the implied 'deny ip any any' at the bottom of all ACLs. Therefore, an explicit 'permit ip any any' statement is required to allow traffic to flow from internal hosts to the Internet, the DMZ, and VPN users.
access-list inside_access_in permit ip any any

Since the DMZ interface is at a lower level than the internal network, you must explicitly allow event management data to flow from the DMZ servers to the management subnet. This is accomplished by the first two lines of the DMZ interface's ACL.
access-list dmz_access_in permit udp object-group dmzservers mgmtservers 255.255.0.0 object-group udp-mgmt-svcs
access-list dmz_access_in permit tcp object-group dmzservers mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs


In the case of the DMZ, servers play very specific roles and can be bound to those roles by the DMZ interface's ACL. For example, the web server is permitted to respond only to requests for its defined services (in an object group) and is not allowed to forward any other web-server-initiated traffic (due to the implied 'deny ip any any' at the bottom of the ACL).
access-list dmz_access_in permit tcp host webserver object-group webservices any
access-list dmz_access_in permit tcp host mailserver host intmailsvr eq smtp
access-list dmz_access_in permit tcp host mailserver any eq smtp
access-list dmz_access_in permit udp host dnsserver eq domain any
access-list dmz_access_in permit udp host dnsserver any eq domain

The following lines configure screen paging for Telnet/SSH sessions, enable logging (with timestamps) to the network management host, and define the Ethernet characteristics for the PIX interfaces.
pager lines 24
logging on
logging timestamp
logging trap notifications
logging history warnings
logging host inside 10.1.0.65
interface gb-ethernet0 1000sxfull
interface gb-ethernet1 1000sxfull
interface ethernet0 100full
interface ethernet1 100full
mtu outside 1500
mtu inside 1500
mtu vpn 1500
mtu dmz 1500

The following section defines the IP addresses for each of the PIX interfaces.
ip address outside 200.200.200.1 255.255.255.0
ip address inside 10.0.0.1 255.255.255.0
ip address vpn 10.0.1.1 255.255.255.0
ip address dmz 192.168.1.1 255.255.255.0

The following lines provide RFC-2827 anti-spoofing functionality.
ip verify reverse-path interface outside
ip verify reverse-path interface inside
ip verify reverse-path interface vpn
ip verify reverse-path interface dmz

The following lines relate to the PIX's basic IDS capabilities and specify that IDS informational and attack signature violations result in an alarm posted to the syslog server.
ip audit name Attack attack action alarm
ip audit name Info info action alarm
ip audit interface outside Info
ip audit interface outside Attack
ip audit interface inside Info
ip audit interface inside Attack
ip audit interface vpn Info
ip audit interface vpn Attack
ip audit interface dmz Info
ip audit interface dmz Attack
ip audit info action alarm
ip audit attack action alarm

Multi-interface PIXs (515E, 525, 535, and FWSM for the Catalyst 6000/Cisco 7600) are capable of operating with a secondary unit in an active/standby failover configuration. For resilient designs, enable both LAN-based failover and stateful failover.


Note

The secondary unit has the ‘failover lan unit secondary’ command as opposed to the primary designation for LAN failover.
failover timeout 0:00:00
failover poll 15
failover replication http
failover ip address outside 200.200.200.2
failover ip address inside 10.0.0.2
failover ip address vpn 10.0.1.2
failover ip address dmz 192.168.1.2
failover ip address intf4 0.0.0.0
failover ip address intf5 0.0.0.0
failover link vpn
failover lan unit primary
failover lan interface vpn
failover lan key ********
failover lan enable

The following section of commands relate to the functionality of PIX Device Manager (PDM). Most importantly, PDM requires that you define network objects and groups relative to their interface.
pdm location vpn3000-private 255.255.255.255 vpn
pdm location vpnusers 255.255.255.128 vpn
pdm location webserver 255.255.255.255 dmz
pdm location mailserver 255.255.255.255 dmz
pdm location dnsserver 255.255.255.255 dmz
pdm location internalsvrs 255.255.0.0 inside
pdm location mgmtservers 255.255.0.0 inside
pdm location outerrouter 255.255.255.255 outside
pdm location syslog 255.255.255.255 inside
pdm location SMNP 255.255.255.255 inside
pdm location console 255.255.255.255 inside
pdm location tftpsvr 255.255.255.255 inside
pdm location aaa 255.255.255.255 inside
pdm location ntpsvr 255.255.255.255 inside
pdm location intmailsvr 255.255.255.255 inside
pdm group dmzservers dmz
pdm logging warnings 100
pdm history enable

The following line defines the ARP timeout characteristic for the PIX. By default, ARP entries remain in cache for 4 hours.
arp timeout 14400

The following lines define the address translation characteristics of the PIX. The global lines indicate that the IP addresses of the defined PIX interfaces are used for port address translation (PAT, many:1 dynamic NAT).
global (outside) 1 interface
global (vpn) 1 interface
global (dmz) 1 interface

Users on the inside and VPN interfaces are generally translated as they access hosts on other interfaces. However, as previously noted, there are exceptions in accessing hosts on the inside, DMZ, and VPN interfaces, due to the common use of RFC 1918 private IP addresses. The previously defined outbound nat0 ACLs, applied at the start of each interface's NAT definition, alleviate this issue.
nat (inside) 0 access-list inside_outbound_nat0_acl
nat (inside) 1 0.0.0.0 0.0.0.0 0 0
nat (vpn) 0 access-list vpn_outbound_nat0_acl


nat (vpn) 1 vpn3000-private 255.255.255.255 0 0
nat (vpn) 1 vpnusers 255.255.255.128 0 0

The DMZ servers and internal network management hosts are statically translated on the outside interface to allow external hosts and the outer router (respectively) to access these hosts using publicly accessible IP addresses.
static (dmz,outside) 200.200.200.65 webserver netmask 255.255.255.255 0 0
static (dmz,outside) 200.200.200.66 mailserver netmask 255.255.255.255 0 0
static (dmz,outside) 200.200.200.67 dnsserver netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.100 laptop netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.129 syslog netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.130 SMNP netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.131 console netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.132 tftpsvr netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.133 aaa netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.134 ntpsvr netmask 255.255.255.255 0 0

The following lines apply the firewall ACLs to their associated interfaces.
access-group outside_access_in in interface outside
access-group inside_access_in in interface inside
access-group vpn_access_in in interface vpn
access-group dmz_access_in in interface dmz

The following three lines enable authenticated RIPv2 routing protocol support on the PIX firewall. The first line provides the ability for the firewall to receive routes from the outer router, including a default route. As an alternative, a statically defined default route may also be applied, using the PIX ‘route’ command.
rip outside passive version 2 authentication md5 cisco 1
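As a sketch of the static alternative mentioned above, a default route can be defined with the route command. The next-hop address 200.200.200.1 is a placeholder for the shared HSRP/VRRP address of the outer routers, not a value from this design:

```
route outside 0.0.0.0 0.0.0.0 200.200.200.1 1
```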

This is the second of three lines defining authenticated RIPv2 support on the PIX, this time on the inside interface. This allows the PIX to learn routes to internal subnets via RIP from adjacent interior routers, without statically defining them.
rip inside passive version 2 authentication md5 cisco 1

Thus far, the PIX has interoperated passively in the authenticated RIPv2 routing protocol, accepting routing updates from its router neighbors on the inside and outside interface. This third RIPv2 configuration line configures the PIX to actively send a default route to its neighbors on the inside interface. This allows the internal routers to correctly point and propagate a default route towards the PIX.
rip inside default version 2 authentication md5 cisco 1

The following lines provide the (default) timeout characteristics for stateful connections passing through the PIX.
timeout xlate 3:00:00
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h323 0:05:00 sip 0:30:00 sip_media 0:02:00
timeout uauth 0:05:00 absolute

The following lines define the (default) definition of AAA servers used in operating the PIX.
aaa-server TACACS+ protocol tacacs+
aaa-server TACACS+ (inside) host aaa cisco timeout 5
aaa-server RADIUS protocol radius
aaa-server LOCAL protocol local


The following lines provide the NTP server definition used to maintain the PIX clock. Note that in this example, ntpsvr represents an internal NTP server, and MD5 authentication is used to verify NTP updates.
ntp authentication-key 1 md5 ********
ntp authenticate
ntp trusted-key 1
ntp server ntpsvr key 1 source inside prefer

The following two lines enable PDM to run from the network management workstation.
http server enable
http console 255.255.255.255 inside

The following lines deal with SNMP management configuration, which was configured to use a host called ‘SNMP’ on the network management subnet.
snmp-server host inside SMNP
snmp-server location Rack A3
snmp-server contact Joe User
snmp-server community cisco
snmp-server enable traps

In this design, the PIX firewall was configured to write its configuration to an external TFTP server, whenever changes are made and a ‘write network’ command is issued.
tftp-server inside tftpsvr /pix.cfg

The following two lines relate to inherent security features offered by the PIX. The first line enables the PIX floodguard feature, which allows the PIX to reclaim incomplete connection resources (such as embryonic connections) in the event of an insufficient resource condition, even if the timers have not expired. The second line specifies that when an incoming packet does a route lookup, the incoming interface is not used to determine the egress interface and next hop. For a multi-interface PIX, enabling 'sysopt route dnat' provides a performance advantage.
floodguard enable
sysopt route dnat

The following lines provide SSH configuration information, allowing the network management station to use an SSH client to access the PIX remotely.
telnet timeout 5
ssh console 255.255.255.255 inside
ssh timeout 5

The telnet/console/SSH terminal width is set to 80 characters by default.
terminal width 80

All PIX configurations end with a crypto checksum, which allows administrators to detect changes made to the configuration.
Cryptochecksum:08e3727f3c5158b69de91cf550345d22B


Basic Forwarding
As stated before, for firewall-centric designs, the number of flow relationships is expressed by the equation N(N-1), where N is the number of firewall interfaces. For a five-interface PIX firewall, as shown in Figure 3-3, this yields twenty separate flow relationships. However, because one interface on each PIX firewall is dedicated to failover functions, there are actually the same twelve relationships as in the Basic design.
Table 3-5 Flow Relationships—Partially Resilient Design

Each row lists a source interface to destination interface relationship, followed by its description.

Inside to Outside:
Represents traffic flows originating from internal hosts toward the Internet. This traffic is predominantly inspected by implied stateful inspection, as well as outbound anti-spoofing and RFC 1918 rules. Internal servers do not directly access the Internet; therefore, filters should be put in place to explicitly exclude traffic from these hosts.

Inside to DMZ:
Represents traffic flows originating from internal hosts to DMZ servers. Much of this is traffic related to DNS and e-mail from internal hosts accessing the Internet and collecting e-mail. However, this also includes DMZ management and e-commerce application traffic. Although stateful inspection is in place, use explicit filters to limit this traffic to defined permissible DMZ traffic.

Inside to VPN:
Generally represents the return path for VPN traffic accessing internal servers. This traffic is predominantly inspected by implied stateful inspection. Note: limit traffic to VPN-connected servers by explicit filtering rules.

Outside to Inside:
Represents traffic flows originating from Internet hosts to internal hosts. Block this traffic, with the exception of reply traffic statefully matching sessions originating from internal hosts.

Outside to DMZ:
Generally represents traffic required to support e-commerce applications (HTTP/HTTPS, DNS, SMTP, etc.). This traffic is explicitly allowed via firewall rules only.

Outside to VPN:
Represents traffic that flows from Internet hosts to VPN-connected hosts. It is treated similarly to Outside-Inside traffic.

DMZ to Inside:
Represents any traffic originating from the DMZ toward internal hosts—primarily reply traffic, with the exception of server event management traffic (for example, syslog). Block this traffic, with the exception of reply traffic statefully matching sessions originating from internal hosts. Put explicit filters in place to allow event management traffic.

DMZ to Outside:
Represents any traffic originating from the DMZ toward Internet hosts—primarily reply traffic, with the exception of DNS requests originating from the DNS server. Put explicit filters on replies in place, based on the behavior of DMZ-based applications.

DMZ to VPN:
Represents any traffic originating from the DMZ toward VPN-connected users—primarily reply traffic from DMZ servers. Block this traffic, with the exception of reply traffic statefully matching sessions originating from internal hosts. There may be cases where DMZ servers need to communicate with VPN-connected servers for e-commerce, DNS, and SMTP purposes. You must explicitly permit this traffic with filtering rules.

VPN to Inside:
Unlike DMZ-originated traffic, which is predominantly reply traffic, VPN traffic toward internal hosts generally falls into two categories:
1. Replies from VPN-connected remote servers to originating internal hosts. Block this traffic, with the exception of reply traffic statefully matching sessions originating from internal hosts. You can add explicit filters to allow event management traffic.
2. Originating requests from VPN-connected remote users towards internal servers. This traffic is explicitly allowed only via firewall rules.

VPN to Outside:
This traffic is unusual in scope, as it assumes that VPN-connected users require access to the Internet while connected to the enterprise. This is often the case, as many remote-access VPN implementations may not allow split-tunneling at the far end. This traffic is treated similarly to Inside-Outside traffic, in that it is predominantly inspected by implied stateful inspection, as well as outbound anti-spoofing and RFC 1918 rules. VPN-connected remote servers are not expected to require Internet access; therefore, put filters in place to explicitly exclude traffic from these hosts.

VPN to DMZ:
Again, there are two separate cases for VPN traffic toward the DMZ:
1. VPN-connected servers should not need to communicate with DMZ hosts, except for e-commerce requirements (backend applications, replication, etc.), DNS, and SMTP transfers. This traffic should be explicitly limited by filtering rules, in addition to the implicit stateful inspection that exists.
2. Traffic from VPN-connected users is similar to Inside-DMZ traffic. Although stateful inspection is in place, explicit filters should be used to limit this traffic to defined permissible DMZ traffic.

As in the previous design, you must consider the security level value of each interface, as this impacts the basic forwarding operation of the firewall. Consider the following guidelines:
• The basic forwarding policy of the PIX firewall is that traffic originating from a higher-level interface to a lower-level one is implicitly permitted, and stateful replies are allowed in reverse.
• Traffic originating from a lower-level interface to a higher-level one is implicitly dropped.
• The inside interface always has a security level of 100 and can, therefore, implicitly forward traffic to all other interfaces.
• The outside interface always has a security level of 0; therefore, all traffic entering this interface must match an existing session state or be explicitly permitted by a rule.
• Other interfaces are assigned a value between 1-99.
• If two interfaces are set to the same security level, no communication is allowed between those interfaces.
• It is assumed that VPN traffic has a higher degree of trust than DMZ traffic. Therefore, the security level of the VPN interface should be higher than that of the DMZ. This results in the following basic traffic flows being implicitly forwarded by the stateful inspection engine (with stateful replies allowed in reverse):
– Inside-VPN
– Inside-DMZ
– Inside-Outside
– VPN-DMZ
– VPN-Outside
– DMZ-Outside
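These guidelines might translate into interface definitions such as the following sketch. The vpn level of 75 is an illustrative value, chosen only to sit above the DMZ level and below the inside level:

```
nameif gb-ethernet0 outside security0
nameif gb-ethernet1 inside security100
nameif ethernet0 dmz security50
nameif ethernet1 vpn security75
```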

Routing differs significantly from the Basic design, due to the existence of Layer 2 resiliency mechanisms. The following routing guidelines apply:
• The outer routers have a default route to their respective ISP connections and may be required to participate in separate routing protocol processes for each ISP connection. This may require the redistribution of static routes, depending on the NAT implementation. This design supports dual-ISP, co-located, and multi-egress ISP designs.
• An interface on each outer router supports routing updates between the routers, which ensures reachability to and from both ISPs. Use a separate routing process (such as an IBGP instance) to redistribute routes between outer routers without extending the ISP routing processes across both routers, and to provide control over inter-ISP routing information.
• The outer routers also have the necessary internal routes to the active IP address of the PIX firewall. HSRP or VRRP must be established across the inside interfaces of the outer routers.
• The firewall, and any separate VPN device, has a default route toward the shared HSRP/VRRP IP address of the inside interfaces of the outer routers. This is generally a static route.
• The inner routers use a static or RIP-provided default route to the active IP address of the inside interface of the PIX firewalls. HSRP or VRRP must be established across the outside interfaces of the inner routers.
• The firewall, and any separate VPN device, has a statically defined or RIP-learned route to internal networks toward the shared HSRP/VRRP IP address of the outside interfaces of the inner routers.
• The following rules apply to RIP support on the PIX firewall:
– The PIX firewall passively listens for RIP updates on the RIP-enabled interfaces.
– If configured to do so, the PIX advertises a default route (but not specific learned routes) via RIP to other routers connected to the interface.
– RIP v1 and v2 are supported. RIP v2 is preferred because it supports variable-length subnet masks and routing protocol authentication.
• Routing between the firewall and a separate VPN device is fairly easy for remote-access VPNs because the IPSec Mode Config addressing mechanism results in VPN-connected remote hosts appearing to exist on the same subnet as the active firewall's VPN interface.
• Routing between the firewall and a separate VPN device may be complex in the case of site-to-site VPN connectivity, if the network behind the VPN tunnel is complex. This is due to RIP routing limitations on the PIX firewall.
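As an illustration of the HSRP guideline above, the inside interface of the first outer router might be sketched as follows on IOS. The interface name, standby group number, and shared address 200.200.200.1 are placeholders; only the outerrouter1 address (200.200.200.3) comes from this design's configuration.

```
interface FastEthernet0/1
 description Inside interface toward the PIX firewalls
 ip address 200.200.200.3 255.255.255.0
 standby 1 ip 200.200.200.1
 standby 1 priority 110
 standby 1 preempt
```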

Security Policy Functional Deployment
Table 3-6 defines the security policy functions deployed in this design.


Table 3-6 Security Policy Function—Partially Resilient Design

Management Traffic Rules (Element Security)
Deployment: Various, based on element security features of the elements within the design.
Comment: The general recommendation is to use SSH for console connectivity, SSL for device manager connectivity (PDM and the VPN-3000 device manager both use SSL), or IPSec to protect naturally non-encrypted traffic such as Telnet, SNMP, and HTTP. This provides encryption of interactive management streams.

RFC 2827 In
Deployment: ip verify reverse-path on firewall or outer router.
Comment: The ip verify reverse-path interface outside command on the PIX firewall provides this anti-spoofing functionality. When implemented on the outer IOS router, enable this feature with the ip verify reverse-path command applied to the router's Internet-connected interface.

RFC 2827 Out
Deployment: ip verify reverse-path on firewall.
Comment: The ip verify reverse-path interface inside command on the PIX firewall provides this anti-spoofing functionality.

RFC 1918 In
Deployment: ACL on outer router.
Comment: Because network address translation occurs at the firewall level within this design, no traffic passing inward through the outer router should be destined for an RFC 1918 addressed host. Apply an ACL to the Internet-facing interface of the outer router to provide sufficient protection.

RFC 1918 Out
Deployment: ACL on outer router.
Comment: Implement an ACL to ensure that no outbound traffic is destined for RFC 1918 addresses.

Basic Filtering In
Deployment: ACL in on the outer router outside interface.
Comment: Craft the ACL to explicitly deny IP traffic that has been identified as harmful to the enterprise (based on security policy) when sourced from the Internet.

Basic Filtering Out
Deployment: PIX firewall rules on inside, VPN, and DMZ interfaces.
Comment: Create firewall rules to explicitly deny IP traffic sourced from within the enterprise that is not permitted, or that is otherwise to be explicitly filtered, based on security policy.

Stateful Inspection Rules
Deployment: PIX firewall rules on inside interface.
Comment: The purpose of these firewall rules is to permit IP traffic to flow statefully, as required to support business applications. These rules are generally applied as exceptions to the implied traffic blocking that occurs for traffic originating from a lower security level interface.
NAT Issues
The number of real (Internet routable) IP addresses required in support of this design is based on the following factors:
• In general, host IP addresses from higher-level interfaces are translated to addresses in lower-level interfaces.
• Individual servers (or aggregation devices, such as load balancers) in the DMZ require unique real IP addresses, either applied directly to the device (meaning no address translation) or via 1:1 static NAT.
• Host applications have differing levels of support for the translation process. Hosts may require public IP addresses or specific methods of address translation to resolve incompatible application issues. Note that:
– PAT generally supports TCP/UDP based applications. Other IP protocols (such as those used natively by IPSec VPNs) are generally not supported by PAT.
– Applications that carry IP address information within the packet payload may not function correctly, as this information is not translated.
– Applications that require the use of specific source TCP/UDP ports are generally not supported by PAT because the PAT process randomizes this information. However, these applications may be supported by static or dynamic 1:1 NAT.
– Applications in which endpoints separately checksum IP header information, such as IPSec Authentication Header mode, do not function with any form of address translation.

Address translation is usually performed on the firewall, as this is the Internet edge device that interconnects all the areas (outside, inside, DMZs) of the design. This is reflected in the firewall configuration provided. An alternative is to perform address translation functions on devices behind the firewall. This provides the benefit of making internal traffic appear to be adjacent to the firewall, which simplifies firewall traffic forwarding because summarization of internal network routes, either via static routes or via routing protocol support on the firewall, is not required. Examples include:
• Performing address translation on the inner routers for internal networks. Note, however, that each inner router must have unique and non-overlapping address translation definitions to function properly.
• Use of load-balancer and content network engines within the DMZ.
• Use of mode-config (for remote access VPNs) or other NAT mechanisms (for site-to-site VPNs) on VPN devices connected externally to the firewall.

DMZ Design
This Internet edge design makes use of DMZs that are adjacent to the inside and outside networks. This allows the use of a single firewall engine upon which to define the forwarding relationships between separate network interfaces. The purpose of this DMZ design is to allow direct n-way associations between the inside, outside, and DMZ networks, without requiring traffic between any two networks to pass through an intermediate network. This provides the following benefits:
• Establishing a unique set of firewall rules between any two firewall interfaces (see Basic Forwarding), based on security policy
• Overall higher performance, because the firewall interfaces are dedicated to the networks they support
• Improved security, because hosts connected to each interface only see traffic relevant to that area of the network (no transitional data paths)


The switch in the DMZ provides connectivity for that DMZ's hosts, as well as a means to monitor traffic flowing on that DMZ via NIDS. This switch should have strict Layer 2 port security, which limits the maximum number of MAC addresses per port (to prevent flooding attacks) and statically defines the MAC address of the host assigned to that port.

For Cisco Catalyst switches, establish a private VLAN on this switch to group together DMZ hosts that need to communicate with each other and, more importantly, to prevent communication between DMZ hosts that do not need to communicate with each other. This allows customers to place dissimilar servers in a common DMZ, as opposed to administering separate DMZs for each set of dissimilar hosts.

This switch provides basic connectivity for DMZ hosts, but does not exclude the use of more advanced networking infrastructures. However, if multiple subnets exist within the DMZ, it is necessary to ensure that the firewall has the required routing information to forward traffic to subnets not directly connected in the DMZ design. Accomplish this via static routes, routing protocols, or address translation behind the firewall's DMZ interface.
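On a Catalyst switch running IOS, the port hardening described above might be sketched as follows for a single DMZ server port. The interface and MAC address are placeholders:

```
interface FastEthernet0/2
 description DMZ web server port
 switchport mode access
 switchport port-security
 switchport port-security maximum 1
 switchport port-security mac-address 0000.1111.2222
 switchport port-security violation shutdown
```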

Intrusion Detection Capabilities
The use of SPAN ports on the switches connected to each firewall interface allows NIDS sensors to monitor traffic flowing across that interface. The management ports of these sensors should be connected either out-of-band or to the inside switch, both to protect sensor communication and to provide the ability to shun attackers by adjusting firewall or outer router ACLs.

Servers placed in the DMZ should have HIDS installed to monitor for potential intrusions and aberrant server behavior. Specific firewall rules are required to support HIDS event management traffic.
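A SPAN session feeding a NIDS sensor might be sketched as follows on a Catalyst IOS switch. The port assignments are illustrative (Fa0/1 as the firewall-facing port, Fa0/24 as the sensor's monitoring port):

```
monitor session 1 source interface FastEthernet0/1 both
monitor session 1 destination interface FastEthernet0/24
```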

Network Management
Network management of the Partially Resilient design is similar to that of the Basic network design. However, it is important to remember the following guidelines, which are associated with the use of failover and HSRP/VRRP:

• Do not manage the standby PIX firewall directly, because this causes the PIX configurations to fall out of sync. Always manage the active PIX firewall and allow it to sync configurations with the standby PIX firewall.
• If a failover occurs, SSH reports a bad crypto fingerprint error. This is because the two PIX firewalls have separate crypto identities, and is normal.
• If a failover occurs while you are managing the PIX firewall via PDM, you will receive a bad key error. Close PDM and restart the session.
• To avoid SSH crypto errors, manage routers via their real IP addresses, not their HSRP/VRRP shared addresses.

A single host with the following tools is sufficient for basic management:
• A web browser with SSL support, for use in accessing PIX Device Manager
• An SSH client (Telnet is not recommended for remote console access)
• A syslog server

Manage routers and switches via SSH. Configure SNMP for read-only support. The use of Telnet, HTTP, and SNMP read-write network management tools is not recommended for devices on the Internet edge.
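The active/standby behavior referenced above assumes a failover pair; a minimal sketch on the primary PIX might look like the following. The standby addresses shown are placeholders and must be drawn from each interface's actual subnet:

```
failover
failover ip address outside 200.200.200.6
failover ip address inside 10.0.0.6
failover ip address dmz 192.168.1.6
failover ip address vpn 10.0.1.6
```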


Fully Resilient Design
This Internet edge design (shown in Figure 3-4) is very similar to the Partially Resilient design, except that it provides Layer 2 resiliency. This results in a single forwarding path design (due to the failover mechanism between PIX firewalls), but one that is not impacted by a single device failure.
Figure 3-4 Fully Resilient Design


Configurations
The following is the configuration for the PIX firewall shown within this design.

The next lines define the PIX interface names and security levels.
nameif gb-ethernet0 outside security0
nameif gb-ethernet1 inside security100
nameif ethernet0 dmz security50
nameif ethernet1 vpn security100

The next lines provide the PIX Telnet/SSH and enable passwords.
enable password 2KFQnbNIdI.2KYOU encrypted
passwd 2KFQnbNIdI.2KYOU encrypted

The next two lines define the hostname and domain for this PIX. The PIX setup wizard provides this information and provides the basis for the SSH and SSL certificates used by the PIX.
hostname pix
domain-name cisco.com

The next two lines define the PIX clock timezone characteristics.
clock timezone EST -5
clock summer-time EDT recurring


This next section defines the fixup protocols currently active within the PIX. Fixup protocols define the stateful behavior of complex protocols, so that ports are properly and dynamically opened for the operation of these applications, without exposing the enterprise environment to a needless number of open ports.
fixup protocol ftp 21
fixup protocol http 80
fixup protocol h323 h225 1720
fixup protocol h323 ras 1718-1719
fixup protocol ils 389
fixup protocol rsh 514
fixup protocol rtsp 554
fixup protocol smtp 25
fixup protocol sqlnet 1521
fixup protocol sip 5060
fixup protocol skinny 2000

This section provides the PIX name and network object-group definitions. These names and object groups simplify the identification of hosts within firewall rule definitions.
names
name 192.168.1.65 webserver
name 10.0.1.3 vpn3000-private
name 192.168.1.67 dnsserver
name 10.0.1.128 vpnusers
name 192.168.1.66 mailserver
name 10.1.0.0 internalsvrs
name 10.10.0.0 mgmtservers
name 200.200.200.3 outerrouter1
name 200.200.200.4 outerrouter2
name 10.10.0.67 console
name 10.10.0.66 SMNP
name 10.10.0.65 syslog
name 10.10.0.68 tftpsvr
name 10.10.0.69 aaa
name 10.10.0.70 ntpsvr
name 10.1.0.65 intmailsvr
object-group network dmzservers
  description Servers in DMZ
  network-object webserver 255.255.255.255
  network-object mailserver 255.255.255.255
  network-object dnsserver 255.255.255.255
object-group network outerrouters
  description Outer Routers outside the firewalls
  network-object outerrouter1 255.255.255.255
  network-object outerrouter2 255.255.255.255

In addition to using object groups to group hosts, you can also use object groups to bundle TCP/UDP services into a common definition. This helps reduce the number of line entries in the firewall's ACLs. This configuration provides two examples. First, the three services offered by a web server located within the DMZ (HTTP, HTTPS, and FTP) are bundled together to reduce the number of lines required to properly firewall these services. Second, as an alternative to defining the individual network management servers that perform certain functions, management protocols are grouped into four areas: TCP management servers, TCP management clients, UDP management servers, and UDP management clients.

Note

While creating these management groups greatly simplifies management protocol traffic flowing from the Internet edge, it is assumed that an upstream internal router to the management subnet has ACLs which determine which specific protocols go to specific management hosts.

object-group service webservices tcp
  description Services provide by the webserver
  port-object eq ftp
  port-object eq https
  port-object eq www
object-group service tcp-mgmt-svcs tcp
  description TCP-based network management services
  port-object eq tacacs
object-group service tcp-mgmt-clients tcp
  description TCP-based network management clients
  port-object eq ssh
  port-object eq https
object-group service udp-mgmt-svcs udp
  description UDP-based network management services
  port-object eq syslog
  port-object eq tftp
  port-object eq ntp
  port-object eq snmptrap
  port-object eq tacacs
object-group service udp-mgmt-clients udp
  description UDP-based network management clients
  port-object eq snmp

Although traffic should not be permitted to flow from outside the firewall directly to hosts on the internal network, a special case must be made for event data (syslogs, authentication, SNMP traps) coming from the outer routers to the network management subnet. This traffic is permitted by the first two lines of the outside interface's ACL.
access-list outside_access_in permit udp object-group outerrouters mgmtservers 255.255.0.0 object-group udp-mgmt-svcs
access-list outside_access_in permit tcp object-group outerrouters mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs

The remainder of the outside interface's ACL is straightforward: it provides external access to hosts located within the DMZ. This follows the basic security policy recommendation that all externally originated traffic must flow into a DMZ and not directly to internal subnets.
access-list outside_access_in permit tcp any host 200.200.200.65 object-group webservices
access-list outside_access_in permit tcp any host 200.200.200.66 eq smtp
access-list outside_access_in permit udp any host 200.200.200.67 eq domain

The NAT rules established later in this configuration require that all internal hosts be translated into global pool 1 on the other, lower-level interfaces. However, since both the VPN and DMZ interfaces use private IP addressing, it is undesirable to translate the private IP addresses of the internal network for permitted traffic flowing to these interfaces. This outbound nat 0 ACL exempts these cases from translation.
access-list inside_outbound_nat0_acl permit ip internalsvrs 255.255.0.0 vpnusers 255.255.255.128
access-list inside_outbound_nat0_acl permit ip mgmtservers 255.255.0.0 host vpn3000-private
access-list inside_outbound_nat0_acl permit ip any object-group dmzservers

The following ACL applied to the VPN interface defines the allowed management traffic from the VPN concentrator to the network management subnet, as well as allows VPN users to access the internal network, which is on a higher level interface.

Note

If there are any restrictions on areas of the internal network accessible by VPN users, you can modify this ACL to suit. Traffic from VPN users is permitted by default to the DMZ and outside interfaces.


access-list vpn_access_in permit udp host vpn3000-private mgmtservers 255.255.0.0 object-group udp-mgmt-svcs
access-list vpn_access_in permit tcp host vpn3000-private mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs
access-list vpn_access_in deny ip host vpn3000-private any
access-list vpn_access_in permit ip vpnusers 255.255.255.128 any

The NAT rules established later in this configuration require that all VPN hosts are translated into global pool 1 on the other, lower level interfaces. However, since hosts on the DMZ interface use private IP addressing, it is undesirable to translate the private IP addresses of the VPN network for permitted traffic flowing to the DMZ interface. This outbound nat0 ACL provides no NAT translation for these cases.
access-list vpn_outbound_nat0_acl permit ip vpnusers 255.255.255.128 object-group dmzservers

Permitted traffic from the internal network is defined by the inside interface's ACL. First, define the network management traffic allowed to flow from the internal management subnet to the outer routers outside the firewall, the VPN concentrator, and servers within the DMZ. Note the use of previously defined object groups to minimize the number of protocol definitions required. Otherwise, no other traffic is permitted to flow to or from the management subnet.
access-list inside_access_in permit udp mgmtservers 255.255.0.0 object-group udp-mgmt-svcs object-group outerrouters
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs object-group outerrouters
access-list inside_access_in permit udp mgmtservers 255.255.0.0 object-group outerrouters object-group udp-mgmt-clients
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 object-group outerrouters object-group tcp-mgmt-clients
access-list inside_access_in permit udp mgmtservers 255.255.0.0 object-group udp-mgmt-svcs object-group dmzservers
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs object-group dmzservers
access-list inside_access_in permit udp mgmtservers 255.255.0.0 object-group dmzservers object-group udp-mgmt-clients
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 object-group dmzservers object-group tcp-mgmt-clients
access-list inside_access_in permit udp mgmtservers 255.255.0.0 object-group udp-mgmt-svcs host vpn3000-private
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs host vpn3000-private
access-list inside_access_in permit udp mgmtservers 255.255.0.0 host vpn3000-private object-group udp-mgmt-clients
access-list inside_access_in permit tcp mgmtservers 255.255.0.0 host vpn3000-private object-group tcp-mgmt-clients
access-list inside_access_in deny ip mgmtservers 255.255.0.0 any

The internal mail server must be able to communicate with the external mail server using SMTP, and with the DNS server for name resolution. These next two lines of the inside interface's ACL permit this traffic.
access-list inside_access_in permit tcp host intmailsvr host mailserver eq smtp
access-list inside_access_in permit udp host intmailsvr any eq domain

Since VPN users are permitted to access internal servers, this traffic is explicitly permitted. Otherwise, traffic from internal servers is blocked from traversing the Internet edge. The next two lines of the inside interface's ACL accomplish this.
access-list inside_access_in permit ip internalsvrs 255.255.0.0 vpnusers 255.255.255.128
access-list inside_access_in deny ip internalsvrs 255.255.0.0 any

Data Center Networking: Internet Edge Design Architectures

3-44

956484

Chapter 3

Internet Edge Security Implementation Fully Resilient Design

Since an ACL is being applied to the inside interface, its default posture of permitting traffic to flow to lower-level interfaces is overridden by the implied 'deny ip any any' at the bottom of all ACLs. Therefore, an explicit 'permit ip any any' statement is required to allow traffic to flow from internal hosts to the Internet, the DMZ, and VPN users.
access-list inside_access_in permit ip any any

Since the DMZ interface is a lower level than the internal network, you must allow event management data to flow from the DMZ servers to the management subnet. Accomplish this using the first two lines of the DMZ interface’s ACL.
access-list dmz_access_in permit udp object-group dmzservers mgmtservers 255.255.0.0 object-group udp-mgmt-svcs
access-list dmz_access_in permit tcp object-group dmzservers mgmtservers 255.255.0.0 object-group tcp-mgmt-svcs

In the case of the DMZ, servers play very specific roles, and can be bounded to those roles by using the DMZ interface's ACL. For example, the web server is only permitted to respond to requests from its defined services (in an object group), and is not allowed to forward any other web-server-initiated traffic (due to the implied 'deny ip any any' at the bottom of the ACL).
access-list dmz_access_in permit tcp host webserver object-group webservices any
access-list dmz_access_in permit tcp host mailserver host intmailsvr eq smtp
access-list dmz_access_in permit tcp host mailserver any eq smtp
access-list dmz_access_in permit udp host dnsserver eq domain any
access-list dmz_access_in permit udp host dnsserver any eq domain

The following set of lines provides basic screen paging for Telnet/SSH sessions, enables logging (with timestamps) to the network management host, and defines the Ethernet characteristics of the PIX interfaces.
pager lines 24
logging on
logging timestamp
logging trap notifications
logging history warnings
logging host inside 10.1.0.65
interface gb-ethernet0 1000sxfull
interface gb-ethernet1 1000sxfull
interface ethernet0 100full
interface ethernet1 100full
mtu outside 1500
mtu inside 1500
mtu vpn 1500
mtu dmz 1500

The following section defines the IP addresses for each of the PIX interfaces.
ip address outside 200.200.200.1 255.255.255.0
ip address inside 10.0.0.1 255.255.255.0
ip address vpn 10.0.1.1 255.255.255.0
ip address dmz 192.168.1.1 255.255.255.0

The following lines provide RFC-2827 anti-spoofing functionality.
ip verify reverse-path interface outside
ip verify reverse-path interface inside
ip verify reverse-path interface vpn
ip verify reverse-path interface dmz

The following lines relate to the PIX’s basic IDS capabilities and define that IDS info & attack signature violations result in an alarm posted to the syslog server.
ip audit name Attack attack action alarm


ip audit name Info info action alarm
ip audit interface outside Info
ip audit interface outside Attack
ip audit interface inside Info
ip audit interface inside Attack
ip audit interface vpn Info
ip audit interface vpn Attack
ip audit interface dmz Info
ip audit interface dmz Attack
ip audit info action alarm
ip audit attack action alarm

Multi-interface PIX firewalls (the 515E, 525, 535, and the FWSM for the Catalyst 6000/Cisco 7600) can operate with a secondary unit in an active/standby failover configuration. For resilient designs, both LAN-based failover and stateful failover are enabled.

Note

The secondary unit has the ‘failover lan unit secondary’ command as opposed to the primary designation for LAN failover.
failover timeout 0:00:00
failover poll 15
failover replication http
failover ip address outside 200.200.200.2
failover ip address inside 10.0.0.2
failover ip address vpn 10.0.1.2
failover ip address dmz 192.168.1.2
failover ip address intf4 0.0.0.0
failover ip address intf5 0.0.0.0
failover link vpn
failover lan unit primary
failover lan interface vpn
failover lan key ********
failover lan enable

The following section of commands relate to the functionality of PIX Device Manager (PDM). Most importantly, PDM requires network objects and groups to be defined relative to their interface.
pdm location vpn3000-private 255.255.255.255 vpn
pdm location vpnusers 255.255.255.128 vpn
pdm location webserver 255.255.255.255 dmz
pdm location mailserver 255.255.255.255 dmz
pdm location dnsserver 255.255.255.255 dmz
pdm location internalsvrs 255.255.0.0 inside
pdm location mgmtservers 255.255.0.0 inside
pdm location outerrouter 255.255.255.255 outside
pdm location syslog 255.255.255.255 inside
pdm location SMNP 255.255.255.255 inside
pdm location console 255.255.255.255 inside
pdm location tftpsvr 255.255.255.255 inside
pdm location aaa 255.255.255.255 inside
pdm location ntpsvr 255.255.255.255 inside
pdm location intmailsvr 255.255.255.255 inside
pdm group dmzservers dmz
pdm logging warnings 100
pdm history enable

The following line defines the ARP timeout characteristic of the PIX. By default, ARP entries remain in the cache for 4 hours (14,400 seconds).
arp timeout 14400


The following lines define the address translation characteristics of the PIX. The global lines indicate that the IP addresses of the defined PIX interfaces are to be used for port address translation (PAT, many-to-one dynamic NAT).
global (outside) 1 interface
global (vpn) 1 interface
global (dmz) 1 interface

Users on the inside and VPN interfaces are generally translated as they access hosts on other interfaces. However, as previously noted, there are exceptions in accessing hosts on the inside, DMZ, and VPN interfaces, due to a common use of RFC-1918 private IP addresses. The previously defined outbound NAT 0 ACLs are applied at the start of each interface's NAT definition to alleviate this issue.
nat (inside) 0 access-list inside_outbound_nat0_acl
nat (inside) 1 0.0.0.0 0.0.0.0 0 0
nat (vpn) 0 access-list vpn_outbound_nat0_acl
nat (vpn) 1 vpn3000-private 255.255.255.255 0 0
nat (vpn) 1 vpnusers 255.255.255.128 0 0

The DMZ servers and internal network management hosts are statically translated on the outside interface to allow external hosts and the outer router (respectively) to access these hosts using publicly accessible IP addresses.
static (dmz,outside) 200.200.200.65 webserver netmask 255.255.255.255 0 0
static (dmz,outside) 200.200.200.66 mailserver netmask 255.255.255.255 0 0
static (dmz,outside) 200.200.200.67 dnsserver netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.100 laptop netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.129 syslog netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.130 SMNP netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.131 console netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.132 tftpsvr netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.133 aaa netmask 255.255.255.255 0 0
static (inside,outside) 200.200.200.134 ntpsvr netmask 255.255.255.255 0 0

The following lines apply the firewall ACLs to their associated interfaces.
access-group outside_access_in in interface outside
access-group inside_access_in in interface inside
access-group vpn_access_in in interface vpn
access-group dmz_access_in in interface dmz

The following three lines enable authenticated RIPv2 routing protocol support on the PIX firewall. The first line provides the ability for the firewall to receive routes from the outer router, including a default route. As an alternative, a statically defined default route may also be applied, using the PIX ‘route’ command.
rip outside passive version 2 authentication md5 cisco 1

This is the second of three lines defining authenticated RIPv2 support on the PIX, this time on the inside interface. This allows the PIX to learn routes to internal subnets via RIP from adjacent interior routers, without statically defining them.
rip inside passive version 2 authentication md5 cisco 1

Thus far, the PIX has participated passively in the authenticated RIPv2 routing protocol, accepting routing updates from its router neighbors on the inside and outside interfaces. This third RIPv2 configuration line configures the PIX to actively send a default route to its neighbors on the inside interface. This allows the internal routers to correctly point and propagate a default route towards the PIX.
rip inside default version 2 authentication md5 cisco 1


The following lines provide the (default) timeout characteristics for stateful connections passing through the PIX.
timeout xlate 3:00:00
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h323 0:05:00 sip 0:30:00 sip_media 0:02:00
timeout uauth 0:05:00 absolute

The following lines define the (default) definition of AAA servers to be used in operating the PIX.
aaa-server TACACS+ protocol tacacs+
aaa-server TACACS+ (inside) host aaa cisco timeout 5
aaa-server RADIUS protocol radius
aaa-server LOCAL protocol local

The following lines provide the NTP server definition used to maintain the PIX clock. Note that in this example, ntpsvr represents an internal NTP server, and MD5 authentication is used to verify NTP updates.
ntp authentication-key 1 md5 ********
ntp authenticate
ntp trusted-key 1
ntp server ntpsvr key 1 source inside prefer

The following two lines enable PDM to be run from the network management workstation.
http server enable
http console 255.255.255.255 inside

The following lines deal with SNMP management configuration, which has been configured to use a host named 'SMNP' (as defined in the configuration) on the network management subnet.
snmp-server host inside SMNP
snmp-server location Rack A3
snmp-server contact Joe User
snmp-server community cisco
snmp-server enable traps

In this design, the PIX firewall is configured to write its configuration to an external TFTP server, whenever changes are made and a ‘write network’ command is issued.
tftp-server inside tftpsvr /pix.cfg

The following two lines relate to the inherent security features offered by the PIX. The first line enables the PIX floodguard feature, which allows the PIX to reclaim incomplete connection resources (such as embryonic connections) in the event of an insufficient resource condition, even if the timers have not expired. The second specifies that when an incoming packet does a route lookup, the incoming interface is not used to determine which interface the packet should go to and which is the next hop. For a multi-interface PIX, enabling 'sysopt route dnat' provides a performance advantage.
floodguard enable
sysopt route dnat

The following lines provide SSH configuration information, allowing the network management station to use an SSH client to access the PIX remotely.
telnet timeout 5
ssh console 255.255.255.255 inside
ssh timeout 5

The telnet/console/SSH terminal width is set to 80 characters by default.
terminal width 80


All PIX configurations end with a crypto checksum, which allows administrators to detect whether changes have been made to the configuration.
Cryptochecksum:08e3727f3c5158b69de91cf550345d22B

Basic Forwarding
From a Layer 2 standpoint, a pair of Layer 2 switches is used on the inside, outside, and (aggregate) DMZ areas of the Internet edge design. As placed in the design, each subnet is designed to have no Layer 2 forwarding loops. Spanning tree should be disabled to prevent topology changes, and the associated reconvergence time, from impacting or inadvertently triggering the Layer 3 resiliency mechanisms. Although networks of significantly disparate security function, such as inside and outside, should not co-exist on the same Layer 2 switch, the DMZs and failover networks can be aggregated across a common pair of switches using VLAN separation and a VLAN trunk between the switches. If a customer's security policy requires the use of separate switches for each DMZ, this can also be supported.

As mentioned before, for firewall-centric designs, the number of flow relationships can be expressed by the equation N(N-1), where N is the number of firewall interfaces. For a five-interface PIX firewall, as shown in Figure 3-4, this means there would be twenty separate flow relationships. However, because one interface on each PIX is dedicated for failover purposes, there are actually the same twelve relationships as in the basic design.
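The N(N-1) arithmetic can be sketched in a few lines of Python (illustrative only; the function name is not from the source):

```python
def flow_relationships(n: int) -> int:
    """Directed flow relationships among n firewall interfaces: N(N-1)."""
    return n * (n - 1)

# A five-interface PIX yields 20 possible relationships, but with one
# interface dedicated to failover, only four interfaces carry traffic,
# giving the same 12 relationships as the basic design.
print(flow_relationships(5))  # 20
print(flow_relationships(4))  # 12
```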
Table 3-7 Flow Relationships—Fully Resilient Design

Inside -> Outside: Represents traffic flows originating from internal hosts toward the Internet. This traffic is predominantly inspected by implied stateful inspection, as well as outbound anti-spoofing and RFC 1918 rules. Internal servers do not directly access the Internet; therefore, put filters in place to explicitly exclude traffic from these hosts.

Inside -> DMZ: Represents traffic flows originating from internal hosts to DMZ servers. Much of this is traffic related to DNS and e-mail from internal hosts accessing the Internet and collecting e-mail. However, this also includes DMZ management and e-commerce application traffic. Although stateful inspection is in place, use explicit filters to limit this traffic to defined permissible DMZ traffic.

Inside -> VPN: Generally represents the return path for VPN traffic accessing internal servers. This traffic is predominantly inspected by implied stateful inspection. Note: limit traffic to VPN-connected servers by explicit filtering rules.

Outside -> Inside: Represents traffic flows originating from Internet hosts to internal hosts. Block this traffic, with the exception of reply traffic statefully matching sessions originating from internal hosts.

Outside -> DMZ: Generally represents traffic required to support e-commerce applications (HTTP/HTTPS, DNS, SMTP, etc.). This traffic is explicitly allowed via firewall rules only.

Outside -> VPN: Represents traffic that flows from Internet hosts to VPN-connected hosts. It is treated similarly to Outside-Inside traffic.

DMZ -> Inside: Represents any traffic originating from the DMZ toward internal hosts, primarily reply traffic, with the exception of server event management traffic (e.g., syslog). Block this traffic, with the exception of reply traffic statefully matching sessions originating from internal hosts. Put explicit filters in place to allow event management traffic.

DMZ -> Outside: Represents any traffic originating from the DMZ toward Internet hosts, primarily reply traffic, with the exception of DNS requests originating from the DNS server. Put explicit filters on replies in place, based on the behavior of DMZ-based applications.

DMZ -> VPN: Represents any traffic originating from the DMZ toward VPN-connected users, primarily reply traffic from DMZ servers. Block this traffic, with the exception of reply traffic statefully matching sessions originating from internal hosts. There may be cases where DMZ servers need to communicate with VPN-connected servers for e-commerce, DNS, and SMTP purposes. Explicitly permit this traffic by filtering rules.

VPN -> Inside: Unlike DMZ-originated traffic, which is predominantly reply traffic, VPN traffic toward internal hosts generally falls into two categories:
1. Replies from VPN-connected remote servers to originating internal hosts. Block this traffic, with the exception of reply traffic statefully matching sessions originating from internal hosts. Add explicit filters to allow event management traffic.
2. Originating requests from VPN-connected remote users towards internal servers. This traffic is explicitly allowed only via firewall rules.

VPN -> Outside: This traffic is unusual in scope, as it assumes that VPN-connected users require access to the Internet while connected to the enterprise. This is often the case, as many remote-access VPN implementations may not allow split-tunneling at the far end. This traffic is treated similarly to Inside-Outside traffic, in that it is predominantly inspected by implied stateful inspection, as well as outbound anti-spoofing and RFC 1918 rules. VPN-connected remote servers are not expected to require Internet access; therefore, filters should be put in place to explicitly exclude traffic from these hosts.

VPN -> DMZ: Again, there are two separate cases for VPN traffic toward the DMZ:
1. VPN-connected servers should not need to communicate with DMZ hosts, except for e-commerce requirements (backend applications, replication, etc.), DNS, and SMTP transfers. This traffic should be explicitly limited by filtering rules, in addition to the implicit stateful inspection that exists.
2. Traffic from VPN-connected users is similar to Inside-DMZ traffic. Although stateful inspection is in place, explicit filters should be used to limit this traffic to defined permissible DMZ traffic.

As with the previous designs, you must consider the security level value of each interface, because this impacts the basic forwarding operation of the firewall. Consider the following guidelines:
• The basic forwarding policy of the PIX firewall is that traffic originating from a higher-level interface to a lower-level one is implicitly permitted, and stateful replies are allowed in reverse.
• The basic forwarding policy of the PIX firewall is that traffic originating from a lower-level interface to a higher-level one is implicitly dropped.
• The inside interface always has a security level of 100 and can, therefore, implicitly forward traffic to all other interfaces.
• The outside interface always has a security level of 0; therefore, all traffic entering this interface must match an existing session state or be explicitly permitted by a rule.
• Other interfaces are assigned a value between 1-99.
• If two interfaces are set to the same security level, then no communication is allowed between those interfaces.
• It is assumed that VPN traffic has a higher degree of trust than DMZ traffic. Therefore, the security level of the VPN interface should be higher than that of the DMZ. This results in the following basic traffic flows being implicitly forwarded by the stateful inspection engine (with stateful replies allowed in reverse):
– Inside-VPN
– Inside-DMZ
– Inside-Outside
– VPN-DMZ
– VPN-Outside
– DMZ-Outside
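As an illustration of these guidelines, a hypothetical PIX 6.x interface assignment might look like the following. The mapping of physical interfaces and the specific levels 60 and 50 are assumptions, not from the configuration above; any values with the VPN level above the DMZ level satisfy the ordering.

```
nameif gb-ethernet0 outside security0
nameif gb-ethernet1 inside security100
nameif ethernet0 vpn security60
nameif ethernet1 dmz security50
```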

Due to the existence of Layer 2 resiliency mechanisms, routing differs significantly from that of the Basic design. The following routing guidelines apply:
• The outer routers have a default route to their respective ISP connections and may be required to participate in separate routing protocol processes to each ISP connection. This may require the redistribution of static routes, depending on how NAT is implemented. This design supports dual-ISP, co-located, and multi-egress ISP designs.
• Routing updates are allowed between the outer routers, which ensures reachability to and from both ISPs. Use a separate routing process (such as an IBGP instance) to redistribute routes between the outer routers; this avoids extending the ISP routing processes across both routers and provides control over inter-ISP routing information.
• The outer routers also have the necessary internal routes to the active IP address of the PIX firewall. HSRP or VRRP must be established across the inside interfaces of the outer routers.
• The firewall, and any separate VPN device, has a default route towards the shared HSRP/VRRP IP address of the inside interfaces of the outer routers. This is generally a static route.
• The inner routers use a statically defined or RIP-provided default route to the active IP address of the inside interface of the PIX firewalls. HSRP or VRRP must be established across the outside interfaces of the inner routers.
• The firewall, and any separate VPN device, has statically defined or RIP-learned routes to internal networks towards the shared HSRP/VRRP IP address of the outside interfaces of the inner routers.
• The following rules apply to RIP support on the PIX firewall:
– The PIX firewall passively listens for RIP updates on the RIP-enabled interfaces.
– If configured to do so, the PIX advertises a default route via RIP to other routers connected to the interface, but not specific learned routes.
– RIP v1 and v2 are supported. RIP v2 is preferred because it supports variable-length subnet masks and routing protocol authentication.
• Routing between the firewall and a separate VPN device is fairly easy for remote access VPNs because the IPSec Mode Config addressing mechanism results in VPN-connected remote hosts appearing to exist on the same subnet as the active firewall's VPN interface.
• Routing between the firewall and a separate VPN device may be complex in the case of site-to-site VPN connectivity, if the network behind the VPN tunnel is complex. This is due to RIP routing limitations on the PIX firewall.
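To make the default-route guideline concrete, the following sketch assumes a hypothetical shared HSRP address of 200.200.200.254 on the outer routers' inside interfaces; the addresses and standby group number are illustrative, not taken from the design above.

```
! On each outer router's inside (firewall-facing) interface (IOS):
interface FastEthernet0/0
 ip address 200.200.200.253 255.255.255.0
 standby 1 ip 200.200.200.254
 standby 1 preempt

! On the PIX, a static default route toward the shared HSRP address:
route outside 0.0.0.0 0.0.0.0 200.200.200.254 1
```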

Security Policy Functional Deployment
The defined security policy functions deployed in this design are shown in Table 3-8.


Table 3-8 Security Policy Function—Fully Resilient Design

Management Traffic Rules (Element Security)
Deployment: Various, based on the security features of the elements within the design.
Comment: The general recommendation is to use SSH for console connectivity, SSL for device manager connectivity (PDM and the VPN-3000 device manager both use SSL), or IPSec to protect naturally non-encrypted traffic such as Telnet, SNMP, and HTTP. This provides encryption of interactive management streams.

RFC 2827 In
Deployment: ip verify reverse-path on the firewall or outer router.
Comment: The ip verify reverse-path interface outside command on the PIX firewall provides this anti-spoofing functionality. When implemented on the outer IOS router, this feature is enabled with the ip verify unicast reverse-path command applied to the router's Internet-connected interface.

RFC 2827 Out
Deployment: ip verify reverse-path on the firewall.
Comment: The ip verify reverse-path interface inside command on the PIX firewall provides this anti-spoofing functionality.

RFC 1918 In
Deployment: ACL on the outer router.
Comment: Because network address translation occurs at the firewall level in this design, no traffic passing inward through the outer router should be destined for an RFC 1918 addressed host. An ACL applied inbound to the Internet-facing interface of the outer router provides sufficient protection.

RFC 1918 Out
Deployment: ACL on the outer router.
Comment: Implement an ACL to ensure that no outbound traffic is destined for RFC 1918 addresses.

Basic Filtering In
Deployment: Inbound ACL on the outer router's outside interface.
Comment: Craft the ACL to explicitly deny IP traffic that has been identified as harmful to the enterprise (based on security policy) when sourced from the Internet.

Basic Filtering Out
Deployment: PIX firewall rules on the inside interface.
Comment: Create firewall rules to explicitly deny IP traffic sourced from inside the enterprise which is not permitted, or which should be explicitly filtered, based on security policy.

Stateful Inspection Rules
Deployment: PIX firewall rules on the inside, VPN, and DMZ interfaces.
Comment: The purpose of these firewall rules is to permit IP traffic to flow statefully, as required, to support business applications. These rules are generally applied as exceptions to the implied traffic blocking that occurs for traffic originating from a lower security level interface.
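As one possible realization of the RFC 1918 inbound filter on the outer router, an IOS ACL applied to the Internet-facing interface might look like the following sketch; the interface and ACL names are hypothetical.

```
ip access-list extended RFC1918-IN
 deny   ip 10.0.0.0 0.255.255.255 any
 deny   ip 172.16.0.0 0.15.255.255 any
 deny   ip 192.168.0.0 0.0.255.255 any
 permit ip any any
!
interface Serial0/0
 ip access-group RFC1918-IN in
```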


NAT Issues
The number of real (Internet-routable) IP addresses required in support of this design is based on the following factors:
• In general, host IP addresses from higher-level interfaces are translated to addresses in lower-level interfaces.
• Individual servers (or aggregation devices, such as load balancers) in the DMZ require unique real IP addresses, either applied directly to the device (i.e., no address translation) or via 1:1 static NAT.
• Host applications have differing levels of support for the translation process. Hosts may require public IP addresses or specific methods of address translation to resolve incompatible application issues. Note that:
– PAT generally supports TCP/UDP-based applications. Generally, PAT does not support other IP protocols, such as those used natively by IPSec VPNs.
– Applications that carry IP address information within the packet payload may not function correctly, as this information is not translated.
– Generally, PAT does not support applications that require the use of specific source TCP/UDP ports, because the PAT process randomizes this information. However, static or dynamic 1:1 NAT may support these applications.
– Applications in which endpoints separately checksum IP header information, such as IPSec Authentication Header mode, will not function with any form of address translation.

Address translation is usually performed on the firewall, as this is the Internet edge device that interconnects all the areas (outside, inside, DMZs) of the design. This is reflected in the firewall configuration provided. An alternative is to perform address translation functions on devices behind the firewall. This gives you the benefit of making internal traffic appear to be adjacent to the firewall. This aids in simplifying firewall traffic forwarding because summarization of the internal network routes, either via static routes or via routing protocol support on the firewall, is not required. Examples include:
• Performing address translation on the inner routers for internal networks. Note, however, that each inner router must have unique and non-overlapping address translation definitions to function properly.
• Use of load balancers and content network engines within the DMZ.
• Use of mode-config (for remote access VPNs) or other NAT mechanisms (for site-to-site VPNs) on VPN devices connected externally to the firewall.

DMZ Design
This Internet edge design makes use of DMZs that are adjacent to the inside and outside networks. This allows the use of a single firewall engine upon which to define the forwarding relationships between separate network interfaces. The purpose of this DMZ design is to allow direct n-way associations between the inside, outside, and DMZ networks, without requiring traffic between any two networks to pass through an intermediate network. This provides the benefits of:

• Establishing a unique set of firewall rules between any two firewall interfaces (see Basic Forwarding), based on security policy
• Overall higher performance, because firewall interfaces are dedicated to the networks they support
• Improved security, because hosts connected to each interface only see traffic relevant to that area of the network (no transitional data paths)

The switch in the DMZ provides connectivity for that DMZ's hosts, as well as a means to monitor traffic flowing on that DMZ via NIDS. This switch should have strict Layer 2 port security, which limits the maximum number of MAC addresses per port (to prevent flooding attacks) and statically defines the MAC address of the host assigned to that port.

For Cisco Catalyst switches, you can establish a private VLAN to group together DMZ hosts that need to communicate with each other and, more importantly, to prevent communication between DMZ hosts that do not need to communicate with each other. This allows customers to place dissimilar servers in a common DMZ, as opposed to administering separate DMZs for each set of dissimilar hosts.

This switch provides basic connectivity for DMZ hosts, but does not exclude the use of more advanced networking infrastructures. However, if multiple subnets exist within the DMZ, it is necessary to ensure that the firewall has the required routing information to forward traffic to subnets not directly connected in the DMZ design. Accomplish this via static routes, routing protocols, or address translation behind the firewall's DMZ interface.

Because two switches are used to support connectivity for DMZ hosts, hosts must be connected to only one of the switches, unless the hosts can support multiple interfaces on the same IP subnet.
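On an IOS-based Catalyst access port, the strict Layer 2 port security described above might be sketched as follows; the interface number and MAC address are hypothetical.

```
interface FastEthernet0/1
 switchport mode access
 switchport port-security
 switchport port-security maximum 1
 switchport port-security mac-address 0000.0c12.3456
 switchport port-security violation shutdown
```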

Intrusion Detection Capabilities
The use of SPAN ports on the switches connected to each firewall interface allows NIDS sensors to monitor traffic flowing across that interface. Connect the management ports of these sensors either out-of-band or to the inside switch to protect sensor communication, as well as to provide the ability to shun attackers by adjusting firewall or outer router ACLs. Unfortunately, the use of Layer 2 resilience requires additional NIDS sensors for complete coverage. Servers placed in the DMZ should have HIDS installed to monitor for potential intrusions and aberrant server behavior. Specific firewall rules are required to support HIDS event management traffic.
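A NIDS sensor's monitoring port can be fed by a SPAN session; on an IOS-based Catalyst switch this might look like the following sketch, in which the interface numbers are hypothetical.

```
! Mirror traffic from the port facing the firewall interface
! to the port where the NIDS sensor's monitoring interface connects.
monitor session 1 source interface FastEthernet0/2 both
monitor session 1 destination interface FastEthernet0/24
```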

Network Management
Network management of the Fully Resilient design is similar to that of the Basic network design. However, it is important to remember the following guidelines, which are associated with the use of failover and HSRP/VRRP:

• Do not manage the standby PIX firewall directly because this causes the PIX configurations to be out of sync. Always manage the active PIX firewall and allow it to sync configurations with the standby PIX firewall.
• If a failover occurs, SSH reports a bad crypto fingerprint error. This is because the two PIX firewalls have separate crypto identities, and is normal.
• If a failover occurs while you are managing the PIX firewall via PDM, you will receive a bad key error. Close PDM and restart the session.
• To avoid SSH crypto errors, manage routers via their real IP addresses, not their HSRP/VRRP shared addresses.

A single host with the following tools is sufficient for basic management:

Web browser w/ SSL support, for use in accessing PIX device manager

Data Center Networking: Internet Edge Design Architectures 956484

3-55

Chapter 3 Fully Resilient Design

Internet Edge Security Implementation

• An SSH client (Telnet is not recommended for remote console access.)

• A syslog server

Use SSH to manage routers and switches. Configure SNMP for read-only support. The use of Telnet, HTTP, and SNMP read-write network management tools is not recommended for devices on the Internet edge.
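A minimal IOS sketch of the management posture described above; the hostname, domain name, username, and management-host address are assumptions:

```
! Hypothetical settings enabling SSH and read-only SNMP on an edge device
hostname edge-rtr
ip domain-name example.com
! Generate RSA keys from privileged EXEC before SSH will accept sessions:
!   crypto key generate rsa
username admin secret <password>
line vty 0 4
 ! Accept SSH only; Telnet is disabled
 transport input ssh
 login local
! Restrict read-only SNMP polling to the management host
access-list 10 permit host 172.16.100.1
snmp-server community <ro-string> RO 10
```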


Chapter 4  Single Site Multi Homing
This chapter identifies typical single-site Internet edge designs. It encompasses the core design principles associated with all network infrastructure designs, along with the unique requirements that are relevant to Internet edge topologies. Like any infrastructure design, these solutions must be highly scalable while maintaining the key aspects of redundancy and security. Last but not least, the solution as a whole must not be too complex to manage. The key redundancy function associated with this type of design is the resiliency of having ISP connections to two or more providers, depending on the bandwidth requirements of the server farm architecture or any other Internet services. A connection to two or more Internet providers is referred to as multi-homing.

Internet Edge Design Guidance
As mentioned above, Internet Edge solutions touch many different types of enterprise networks and therefore may potentially have many different topologies. They can range from any remote office connection to a major ISP peering point. Therefore, maintaining common design principles allows you to apply these recommendations to almost all Internet Edge topologies.

High Availability
In the single ISP topology, redundancy at the edge provides little value: if the primary edge router fails, the Internet connection goes down, so defining redundancy at the edge of the network has no beneficial effect. However, when the provider supplies two terrestrial circuits, as depicted below, you can take advantage of the redundancy offered by multi-homing. Figure 4-1 displays a multi-homed topology.



Figure 4-1  Single Site Multi-Homing

[Figure: two service providers (SP 1 and SP 2) reached through the Internet, feeding layered functions for edge connectivity, edge routing, edge security, and server farm architectures.]

Internet edge topologies consist of multiple layers. There must be no single point of failure within the network architecture. Therefore, complete device redundancy in this architecture is a necessity. These redundant devices, coupled with specific Layer 2 and Layer 3 technologies, help achieve redundancy. To meet this requirement, the Internet edge topologies use some of the key functions of the IOS software. The Layer 2 features used include:
• PortFast

• Bridge Protocol Data Unit (BPDU) Guard and Root Guard

• Broadcast Suppression

• UplinkFast

• EtherChannel

• Unidirectional Link Detection (UDLD)


The above technologies improve convergence times and lower operational downtime. They also offer basic security functions to protect against rogue devices on the network that could become malicious in the event of a network attack. The Layer 3 features used for high availability offer redundant default gateways for networked hosts and provide a predictable traffic flow both in normal operating conditions and under the adverse conditions surrounding a network link or device failure. The Layer 3 features include:
• Hot Standby Router Protocol (HSRP)

• Multi-group Hot Standby Router Protocol (MHSRP)

• Dynamic routing protocol metric tuning (EIGRP and OSPF)

HSRP and Multigroup HSRP offer Layer 3 gateway redundancy while the dynamic routing protocols offer a look into network availability from a higher level.

Scalability
The network architecture must be scalable to accommodate increasing user support, as well as unforeseen bursts in network traffic. While feature availability and the processing power of network devices are important design considerations, physical capacity attributes, like port density, can limit architecture scalability. Within the border layer of this topology, the termination of circuits can become a burden on device scalability. Improper memory provisioning on a device can degrade performance, causing the device to process traffic at a slower rate. These principles apply equally to the firewall layer and the Layer 3 switching capacities. Port density scalability is important at the Layer 3 switching layer because it provides additional connections for host devices, in this case, servers.

Intelligent Network Services
In all network topologies, the intelligent network services present within IOS software, such as QoS, and high availability technologies, such as HSRP, are used to ensure network availability. For instance, with QoS, the IP precedence bits within a packet can be adjusted to give that packet a higher priority on the network than other packets.

HSRP
HSRP enables a set of routers to work together to present the appearance of a single virtual router, or default gateway, to the hosts on a LAN. HSRP is particularly useful in fault-tolerant network environments running critical applications. By sharing an IP and a MAC address, two or more routers acting as one virtual router are able to transparently assume the routing responsibility in the event of a defined outage or an unexpected failure. This allows hosts on a LAN to continue to forward IP packets to a consistent IP and MAC address, enabling the transparent changeover of routing devices during a failure. HSRP allows administrators to configure Hot Standby Groups that share responsibility for an IP address. Administrators give each router a priority, which determines how routers are ranked for active router selection. One router in each group is the active forwarder and one is the standby; this determination is made according to the routers' configured priorities. The router with the highest priority wins and, in the event of a priority tie, the higher configured IP address breaks the tie. Other routers in the group monitor the active and standby routers' status to enable further fault tolerance. All HSRP routers participating in a standby group watch for hello packets from the active and the standby routers. All routers in the group learn the hello and dead timers from the active router, as well as the IP address of the standby router, if these parameters are not explicitly configured on each individual router. Although this process is dynamic, it is recommended that the network administrator define the HSRP dead timers. If the active router becomes unavailable due to scheduled maintenance, power failure, or other reasons, the standby router transparently assumes the role of the active router within a few seconds. This changeover occurs when the dead timer expires, that is, when three successive hello packets are missed. The standby router promptly takes over the virtual address and identity during a failure of the active router. When the standby interface assumes mastership, the new master sends a gratuitous ARP, which updates the CAM (Content Addressable Memory) table on the Layer 2 switch. This then becomes the primary route for the devices accessing this gateway. Configure these HSRP timers on a per-HSRP-instance basis.
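The gateway redundancy described above can be sketched with a minimal IOS configuration; the interface, addresses, group number, and timer values are assumptions based loosely on the addressing in Figure 4-3:

```
! Higher priority wins the active role; preempt reclaims it after recovery
interface FastEthernet0/0
 ip address 172.16.20.2 255.255.255.0
 standby 1 ip 172.16.20.1
 standby 1 priority 110
 standby 1 preempt
 ! Explicitly defined hello (1 s) and dead (3 s) timers, per HSRP instance
 standby 1 timers 1 3
```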

Internal Routing
Before discussing the basic ways you can connect autonomous systems (ASs) to ISPs, some basic routing terminology and concepts must be introduced. There are three basic routing approaches: static routing, default routing, and dynamic routing.

Static routing refers to route destinations manually configured in the router. Network reachability in this case is not dependent on the existence and state of the network itself. Whether a destination is up or down, the static routes remain in the routing table, and traffic is still sent toward that destination. Default routing refers to a “last resort” outlet. Traffic to destinations that are unknown to the router is sent to that default outlet. Default routing is the easiest form of routing for a domain connected to a single exit point. Dynamic routing refers to routes learned via an internal or external routing protocol. Network reachability is dependent on the existence and state of the network. If a destination is down, the route disappears from the routing table and traffic is not sent toward that destination.
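The three approaches can be sketched in IOS as follows; the prefixes, next-hop addresses, and EIGRP AS number are placeholders:

```
! Static route: a fixed destination reachable via a manually defined next hop
ip route 192.168.50.0 255.255.255.0 172.16.20.254
! Default route: the "last resort" outlet for destinations unknown to the router
ip route 0.0.0.0 0.0.0.0 172.16.20.254
! Dynamic routing: routes learned and withdrawn automatically via an IGP
router eigrp 100
 network 172.16.0.0
```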

These three routing approaches are possibilities for all the AS configurations considered in upcoming sections, but there is an optimal approach for each. Thus, in illustrating different ASs, this document considers whether static, dynamic, default, or some combination of these routing methods is optimal. This document also considers whether interior or exterior routing protocols are appropriate. You can use Interior Gateway Protocols (IGPs) to advertise your network internally. Use an IGP between your network and your ISP's network to redistribute routes internally. This has all the benefits of dynamic routing, where network information and changes are dynamically sent to the ISP. The IGP also distributes the network routes upstream to the BGP process.

Edge Routing
BGP performs interdomain routing in TCP/IP networks. BGP is an exterior gateway protocol (EGP), which means that it performs routing between multiple ASs or domains and exchanges routing and reachability information with other BGP systems. BGP replaces its predecessor, the now obsolete Exterior Gateway Protocol (EGP), as the standard exterior gateway routing protocol used in the global Internet. It solves serious problems found in EGP and scales to Internet growth more efficiently. As with any routing protocol, BGP maintains routing tables, transmits routing updates, and bases routing decisions on routing metrics. The primary function of a BGP system is to exchange network-reachability information, including information about the list of AS paths, with other BGP systems. This information is used to construct a graph of AS connectivity from which routing loops can be pruned and AS-level policy decisions can be enforced. Each BGP router maintains a routing table that lists all feasible paths to a particular network. The router does not periodically refresh the routing table; instead, routing information received from peer routers is retained until an incremental update is received.


BGP devices exchange routing information upon initial data exchange and during incremental updates. When a router first connects to the network, BGP routers exchange their entire BGP routing tables. However, when the routing table changes, routers send only the changed portion of their routing table. BGP routers do not send regularly scheduled routing updates and BGP routing updates advertise only the optimal path to a network. BGP uses a single routing metric to determine the best path to a given network. This metric consists of an arbitrary unit number that specifies the degree of preference of a particular link. The BGP metric is typically assigned to each link by the network administrator. The value assigned to a link is based on any number of criteria, including the number of ASs through which the path passes, stability, speed, delay, or cost. BGP performs three types of routing:
• Interautonomous system routing

• Intra-autonomous system routing

• Pass-through autonomous system routing

Interautonomous system routing occurs between two or more BGP routers in different ASs. Peer routers in these systems use BGP to maintain a consistent view of the internetwork topology. BGP neighbors communicating between ASs must reside on the same physical network. The Internet serves as an example of an entity that uses this type of routing because it contains ASs, or administrative domains. Many of these domains represent the various institutions, corporations, and entities that make up the Internet. BGP is frequently used to provide the path determination that creates optimal routing within the Internet. Intra-autonomous system routing occurs between two or more BGP routers located within the same AS. Peer routers within the same AS use BGP to maintain a consistent view of the system topology. BGP is also used to determine which router serves as the connection point for specific external ASs. Once again, the Internet provides an example, this time of intra-autonomous system routing. An organization, such as a university, can make use of BGP to provide optimal routing within its own administrative domain or AS. The BGP protocol thus provides both inter- and intra-autonomous system routing services. Pass-through autonomous system routing occurs between two or more BGP peer routers that exchange traffic across an AS that does not run BGP. In a pass-through AS environment, the BGP traffic does not originate within the AS in question and is not destined for a node in that AS. BGP must interact with whatever intra-autonomous system routing protocol is available to successfully transport BGP traffic through that AS.


Figure 4-2  E-BGP and I-BGP

[Figure: SP 1 and SP 2 reached through the Internet; E-BGP instances run between the border routers and each provider, and an I-BGP instance runs between the two border routers.]

Design Caveats
When implementing an Internet edge topology, you cannot take certain common design principles for granted. For example, the addressing of an Internet edge topology requires careful consideration. More specifically, if you have not received a registered address space for your entire network infrastructure from the American Registry for Internet Numbers (ARIN), then you must get your addresses from the upstream providers. This assumes that each provider gives you a contiguous block within that ISP's address range, which makes it impractical for you to advertise each of these blocks to the other upstream ISP's routers. If you are peering with multiple ISPs and assuming the addresses of one of the two networks, it is difficult for the other ISP to advertise the routes of your address space. This is because the network address is most likely summarized at a different peering point within the ISP network. Therefore, the addressing remains limited to the ISP block supplied by the respective ISP. If you were to advertise these address ranges, you run the risk of becoming a transit network in the Internet backbone, which means that some of the peers on one ISP backbone could perceive your network topology as a closer route to the other ISP backbone. This issue also appears when you use the same network addressing as the I-BGP instance and advertise yourself as a more attractive route to each of the ISPs, respectively. Another issue associated with this type of design is DNS (Domain Name Service) resolution for the associated address schemes. For instance, if you were to address the server farm with the address block from ISP A and advertise this address via DNS, that A record might not be reachable for many users on the Internet, because the advertisement is tied to a specific ISP route. In the event of a failure in which the primary ISP that holds that address range is no longer reachable, you would blackhole the entire web site. Therefore, the workaround is to have multiple DNS A records associated with the same Virtual IP Address (VIP). The DNS server returns two different A records for the same server farm, using an address from each of the two address blocks from the upstream ISPs. Build this redundancy into your DNS implementation by defining a DNS round robin between the two A records associated with this site.

Design Recommendations
Internet Edge Design Fundamentals
As mentioned above, Internet edge topologies are present in every Internet-facing network; however, the scale of these topologies may differ. These topologies are increasingly important to business functions, and their scalability must not be overlooked. Below are the details of the functional layers of Internet edge topologies and how they interact with one another. Complete redundancy is imperative in this type of architecture.


Figure 4-3  Physical Layer Topology

[Figure: border routers R1 and R2 in BGP AS 100 each connect over serial links (172.16.10.x and 172.16.11.x) to upstream providers in BGP AS 1 and BGP AS 2. Behind the border routers, a Layer 2 switching layer (172.16.20.x, with HSRP) fronts a firewall security layer, which connects to a Layer 3 switching layer (VLAN 6, with HSRP; CE1 interfaces at 172.16.25.5/24 and 172.16.25.6/24) and a management VLAN 10 (172.16.100.x, laptop 172.16.100.1, default gateway 172.16.100.254).]

Border Routers
The border routers, typically deployed in pairs, are the edge-facing devices of the network. The quantity of border routers is a provisioning decision based on memory requirements and physical circuit termination. The border routers are the point at which ISP termination and initial security parameters are provisioned. The border router layer serves as the gateway of the network and utilizes an externally facing Layer 3 routing protocol like BGP integrated with an internally facing routing protocol, such as EIGRP or OSPF, to intelligently route traffic throughout the external and internal networks, respectively. The internet edge in an enterprise environment may provide internet connectivity to an ISP through the use of single-homed core routers, or to several ISPs using multi-homed core routers.


Layer 2 Switching Layer
Beneath the border layer is the Layer 2 switching layer. This layer functions as a security gateway by offering physical separation between the border routers, firewalls and internal Layer 3 switching platforms. This layer also offers HA (high availability) services such as HSRP and stateful firewall failover. You must consider the aggregate throughput of the external links when engineering this platform.

Firewall Layer
The firewall layer is a security layer that supports stateful packet inspection of traffic into the network infrastructure and to the services and applications offered in the server farm and database layers. This layer acts as the network address translation (NAT) device in most design topologies. NAT at the Internet edge is common because of the ever-depleting IPv4 address pool; many ISPs provide only a limited address range, requiring you to define NAT pools at the egress point of the topology.
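A minimal sketch of such a NAT pool on a PIX firewall, assuming a hypothetical pool carved from an ISP-assigned range:

```
! Translate all inside hosts (0.0.0.0 0.0.0.0 matches any source) using pool 1
nat (inside) 1 0.0.0.0 0.0.0.0 0 0
! Pool 1: a hypothetical ISP-assigned range on the outside interface
global (outside) 1 172.16.10.100-172.16.10.200 netmask 255.255.255.0
```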

Layer 3 Switching Layer
The Layer 3 switching layer is the final layer in the Internet edge topology and is also a functional layer of the server farm design. The Layer 3 switching layer may act as either a core layer or an aggregation layer in some design topologies. Yet its primary function, from the standpoint of the Internet edge design topology, is to advertise the IGP routing protocol internally to the infrastructure, as well as the static routes defined upstream to the firewall layer. This layer is the termination point for the IGP internal to the infrastructure. This is a necessity because, in the Internet edge design, the PIX layer is reached via a default route from the internal network. This route is also redistributed internally as the gateway of last resort (the 0.0.0.0 route).
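The default routing behavior described above might be sketched as follows on the Layer 3 switch; the next-hop (firewall) address and OSPF process number are assumptions:

```
! Point the gateway of last resort at the firewall layer
ip route 0.0.0.0 0.0.0.0 172.16.20.253
! Redistribute that default route internally via the IGP
router ospf 100
 default-information originate
```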

Implementation Details
Single Site Multi-Homing Topology
Below are the configuration details associated with the single site multi-homing design. In this section, the router configurations were taken from the primary router, R1, as depicted in Figure 4-4.


Figure 4-4  Internet Edge Test Topology

[Figure: the test topology mirrors Figure 4-3: border routers R1 and R2 in BGP AS 100 connect via 172.16.10.x and 172.16.11.x serial links to BGP AS 1 and BGP AS 2, with a Layer 2 switching layer (172.16.20.x, HSRP), a firewall layer, a Layer 3 switching layer (VLAN 6; CE1 at 172.16.25.5/24 and 172.16.25.6/24), and management VLAN 10 (172.16.100.x, laptop 172.16.100.1, default gateway 172.16.100.254).]

Internet Cloud Router BGP
router bgp 1
 no synchronization
 bgp log-neighbor-changes
 network 1.0.0.0
 network 2.0.0.0
 network 3.0.0.0
 network 4.0.0.0
 network 5.0.0.0
 network 6.0.0.0
 network 7.0.0.0


 network 8.0.0.0
 network 9.0.0.0
 network 100.0.0.0
 redistribute connected
 neighbor 172.16.10.254 remote-as 100
 neighbor 172.16.11.254 remote-as 100
 no auto-summary
!
router bgp 2
 no synchronization
 bgp log-neighbor-changes
 network 1.0.0.0
 network 2.0.0.0
 network 3.0.0.0
 network 4.0.0.0
 network 5.0.0.0
 network 6.0.0.0
 network 7.0.0.0
 network 8.0.0.0
 network 9.0.0.0
 network 100.0.0.0
 redistribute connected
 neighbor 172.16.10.254 remote-as 100
 neighbor 172.16.11.254 remote-as 100
 no auto-summary

Primary Customer Configurations
router bgp 100
 bgp log-neighbor-changes
 network 172.16.10.0
 network 172.16.21.0
 redistribute connected
 neighbor 172.16.10.1 remote-as 1
 neighbor 172.16.21.254 remote-as 100
 neighbor 172.16.21.254 next-hop-self

Secondary Customer Configurations
router bgp 100
 bgp log-neighbor-changes
 network 172.16.11.0
 network 172.16.20.0
 redistribute connected
 neighbor 172.16.11.1 remote-as 2
 neighbor 172.16.21.1 remote-as 100
 neighbor 172.16.21.1 next-hop-self

BGP Attributes
BGP attributes control both inbound and outbound network routes. These attributes can be adjusted to control the decision making process of BGP itself. The BGP attributes are a set of parameters that describe the characteristics of a prefix (route). The BGP decision process uses these attributes to select the best routes. The next few sections cover these attributes and how they can be manipulated to affect the routing behavior.


Controlling Outbound Routes
Weight Attribute
The weight attribute is a proprietary Cisco attribute used for path selection when there are multiple routes to the same destination. This occurs when you want to use both outbound links in conjunction. The weight attribute is local to the router on which it is assigned and is not propagated in routing updates. By default, the weight attribute is 32768 for paths that the router originates and zero for other paths. Routes with a higher weight are preferred when there are multiple routes to the same destination. Below are sample configurations for the weight attribute. Define a weight statement as follows to control route updates from a specific ISP AS on the primary router:
Router R1

router bgp 100
 neighbor 172.16.10.1 remote-as 1
 neighbor 172.16.10.1 filter-list 5 weight 2000
 neighbor 172.16.21.254 remote-as 100
 neighbor 172.16.21.254 next-hop-self
 neighbor 172.16.21.254 filter-list 6 weight 1000
!
ip as-path access-list 5 permit ^1$
ip as-path access-list 6 permit ^100$

In the above example, a weight of 2000 is assigned to updates from the neighbor router at IP address 172.16.10.1 that are permitted by access list 5. Access list 5 permits updates whose AS_path attribute starts with 1 (as specified by ^) and ends with 1 (as specified by $).
Note

The ^ and $ symbols are used to form regular expressions. For a complete explanation of regular expressions, see the appendix on regular expressions in the Cisco Internetwork Operating System (Cisco IOS) software configuration guides and command references. This example also assigns 1000 to the weight attribute of updates from the neighbor at IP address 172.16.21.254 that are permitted by access list 6. Access list 6 permits updates whose AS_path attribute starts with 100 and ends with 100. In effect, this configuration assigns 2000 to the weight attribute of all route updates received from AS 1 and assigns 1000 to the weight attribute of all route updates from AS 100. This implies that the majority of the traffic is routed to the upstream E-BGP instance rather than the I-BGP instance, assuming that the ISP connection associated with each border router is the primary route for this router. This type of design topology is most beneficial when the I-BGP routes can be redistributed internally to the network topology. Configure the weight attribute on R2 as well; in this design, the primary route for R2 is the ISP link terminated on that border router itself.
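The R2 weight configuration is not shown in this document; a mirrored sketch, assuming AS 2 as R2's upstream and a hypothetical filter-list number (7) for its AS path, might look like:

```
! Router R2 (assumed): prefer the local E-BGP link to AS 2 over I-BGP routes
router bgp 100
 neighbor 172.16.11.1 remote-as 2
 neighbor 172.16.11.1 filter-list 7 weight 2000
 neighbor 172.16.21.1 remote-as 100
 neighbor 172.16.21.1 next-hop-self
 neighbor 172.16.21.1 filter-list 6 weight 1000
!
ip as-path access-list 7 permit ^2$
ip as-path access-list 6 permit ^100$
```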

Local Preference
Setting the local preference also affects the BGP decision process. If multiple paths for the same prefix are available, the path with the larger local preference is preferred. Local preference is at the highest level of the BGP decision process (it comes after the Cisco-proprietary weight parameter) and is considered before the path length. A longer path with a higher local preference is preferred over a shorter path with a lower local preference. The following configuration depicts the configuration commands needed to set up local preference routing. The configurations below are from the primary border router, R1. This is the routing configuration to define the BGP parameter:
router bgp 100
 no synchronization
 network 172.16.10.0 mask 255.255.255.0
 network 172.16.20.0 mask 255.255.255.0
 neighbor 172.16.21.254 remote-as 100
 neighbor 172.16.21.254 next-hop-self
 neighbor 172.16.10.1 remote-as 1
 neighbor 172.16.10.1 filter-list 10 out
 neighbor 172.16.10.1 route-map SETLOCAL in
 no auto-summary

The configurations below are defined to associate a route map with the incoming routes. Apply this access list to the router that you want to define as the primary router.
ip as-path access-list 10 permit ^$
route-map SETLOCAL permit 10
 set local-preference 150

The route-map SETLOCAL assigns a local preference of 150 for all routes coming from the upstream router in the ISP cloud (note the keyword in). With this configuration, the local preference attribute of any update coming from AS 1 is set to 150. Also, define a local preference on the secondary border router with the following configuration.
router bgp 100
 no synchronization
 network 172.16.10.0 mask 255.255.255.0
 network 172.16.20.0 mask 255.255.255.0
 neighbor 172.16.21.1 remote-as 100
 neighbor 172.16.21.1 next-hop-self
 neighbor 172.16.11.1 remote-as 2
 neighbor 172.16.11.1 filter-list 10 out
 neighbor 172.16.11.1 route-map SETLOCAL in
 no auto-summary

This configuration defines the route map configured for the ISP AS 2.
ip as-path access-list 10 permit ^$
route-map SETLOCAL permit 10
 set local-preference 200

The route-map SETLOCAL assigns a local preference of 200 to all routes coming from the upstream router in the ISP cloud (note the keyword in). With this configuration, the local preference attribute of any update coming from AS 2 is set to 200.

Controlling Inbound Routes
In Internet edge topologies, controlling outbound routes is first and foremost: it determines how your network topology is seen by the world. Controlling the outbound route advertisements also defines, by default, how traffic returns to your site, allowing you to manipulate the amount of traffic that comes in from various ISPs. More specifically, if you want all traffic to leave your topology via one ISP link while all traffic destined for the topology comes inbound on another ISP link, implement AS prepending. This is the most common deployment for instances where a network administrator does not want to leave a link idle.


AS Path Attribute
Whenever an update passes through an AS, BGP prepends its AS number to the update. The AS_path attribute is the list of AS numbers that an update has traversed in order to reach a destination. An AS-SET is a set of all the ASs that have been traversed. This becomes relevant when a network administrator wants to prepend multiple AS path entries to an update sent to upstream providers for the purpose of making that route less attractive to the upstream ISP routers. Since routing distance in BGP is measured by AS hop count, the longer the AS path associated with a specific link, the less attractive that link is to upstream routers in the topology.

AS Prepend Configuration
The following configuration was taken from router R1 as depicted above in Figure 4-4. R1 was previously defined as the local preference router. Therefore, to control the inbound routes of the topology, define the prepend configuration on R1 as well. This configuration makes R2 a more attractive route to the advertised address space and effectively distributes the load of both the ingress and egress routes across both routers.
router bgp 100
 network 172.16.10.0 mask 255.255.255.0
 network 172.16.20.0 mask 255.255.255.0
 neighbor 172.16.21.254 remote-as 100
 neighbor 172.16.21.254 next-hop-self
 neighbor 172.16.10.1 remote-as 1
 neighbor 172.16.10.1 route-map AddASnumbers out
 no auto-summary
!
route-map AddASnumbers permit 10
 set as-path prepend 100 100

In the above configuration, the route map states that for outbound advertisements to ISP AS 1, additional AS hops (100 100) are prepended to the advertisement.

Security Considerations
Security is a necessity in all network architectures today, regardless of your Internet connectivity. You must ensure that the network architecture and the network devices are securely provisioned and managed. Internet Edge security is discussed in Chapter 2, “Internet Edge Security Design Principles” and Chapter 3, “Internet Edge Security Implementation.” This section provides a brief summary from that guide of the security functions supported within Internet Edge designs. These functions include:
• Element Security – The secure configuration and management of the devices that collectively define the Internet Edge.

• Identity Services – The inspection of IP traffic across the Internet Edge requires the ability to identify the communicating endpoints. Although this can be accomplished with explicit user/host session authentication mechanisms, usually IP identity across the Internet Edge is based on header information carried within the IP packet itself. Therefore, IP addressing schemas, address translation mechanisms, and application definition (IP protocol/port identity) play key roles in identity services.

• IP Anti-Spoofing – This includes support for the requirements of RFC-2827, which requires enterprises to protect their assigned public IP address space, and RFC-1918, which allows the use of private IP address spaces within enterprise networks.


• Demilitarized Zones (DMZ) – A basic security policy for enterprise networks is that internal network hosts must not be directly accessible from hosts on the Internet (as opposed to replies from Internet hosts for internally initiated sessions, which are statefully permitted). For those hosts, such as web servers, mail servers, VPN devices, etc., which are required to be directly accessible from the Internet, it is necessary to establish quasi-trusted network areas between, or adjacent to both, the Internet and the internal enterprise network. Such DMZs allow internal hosts and Internet hosts to communicate with DMZ hosts, but the separate security policies between each area prevent direct communication originating from Internet hosts from reaching internal hosts.

• Basic Filtering and Application Definition – Derived from enterprise security policies, implement ACLs to explicitly permit and/or deny the IP traffic that may traverse between areas (Inside, Outside, DMZ, etc.) defined to exist within the Internet Edge.

• Stateful Inspection – Provides the ability to establish and monitor session states of traffic permitted to flow across the Internet Edge, and deny traffic that fails to match the expected state of an existing or allowed session.

• Intrusion Detection – The ability to promiscuously monitor network traffic at a discrete point within the Internet Edge, and alarm and/or take action upon detecting suspect behavior that may threaten the enterprise network.

Refer to the above-mentioned chapters for detailed insight into the security parameters and measures applied within Internet edge topologies.


Chapter 5
Scaling the Internet Edge: Firewall Load Balancing
Many different architectures require firewalling for the network infrastructure, server farms, and clients. As these requirements become more prevalent, traffic demands on some networks may exceed the capabilities of the existing infrastructure, and the firewall devices in such topologies can become a network bottleneck. Most firewall deployments are restricted by either the connections per second (CPS) supported on the device or the packets per second (PPS) throughput of the device, which is what motivates a firewall load balancing (FWLB) solution. The load-balancing device must therefore exceed the aggregate CPS and concurrent connection (CC) numbers of the firewall devices when used with such high-performing firewalls as the Cisco PIX 535 Firewall. When examining the wide variety of firewall technologies, it becomes apparent that a FWLB solution must remain agnostic to any single deployment scenario and must interoperate flexibly with Cisco products as well as other manufacturers' equipment. The firewall technologies that should be supported in the FWLB solution include:
• Application firewalls, such as Checkpoint Firewall-1 software
• Appliance firewalls, such as the Cisco Secure PIX
• Stealth Layer 2 firewalls, such as the Netscreen firewalls
• Proxy application firewalls, such as Axent's Raptor product

These various technologies support the deployment of this topology in most network architectures without relying on, or interfering with, other vendors' software or hardware. Yet in some cases FWLB may not be the best solution. If a single firewall device with higher throughput solves the problem without adding configuration and deployment complexity, it should be considered first. In most environments, however, the network support organization and the security group are not one and the same, which often complicates the decision-making process. This chapter offers design guidance and perspective on firewall load balancing, and provides insight into the configuration of FWLB when using the Content Switching Module (CSM).
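The sizing argument above (a load balancer only pays off if it outperforms the aggregate of the firewall farm) can be sketched with back-of-the-envelope arithmetic. The figures below are illustrative examples, not vendor benchmarks:

```python
# Hypothetical sizing check: the load balancer must exceed the aggregate
# connections/sec (CPS) and concurrent connections (CC) of the firewall
# farm, or it becomes the new bottleneck. All numbers are made up.
def farm_capacity(fw_cps: int, fw_cc: int, n_firewalls: int):
    """Aggregate CPS and CC that n identical firewalls can absorb."""
    return fw_cps * n_firewalls, fw_cc * n_firewalls

def lb_is_bottleneck(lb_cps: int, lb_cc: int, farm_cps: int, farm_cc: int) -> bool:
    """True if the load balancer caps the farm's aggregate capacity."""
    return lb_cps < farm_cps or lb_cc < farm_cc

farm_cps, farm_cc = farm_capacity(fw_cps=10_000, fw_cc=500_000, n_firewalls=4)
print(lb_is_bottleneck(30_000, 1_000_000, farm_cps, farm_cc))  # True: 30k CPS < 40k CPS
```

With four such firewalls, a load balancer rated at 30,000 CPS would cap the farm at 75 percent of its aggregate capacity, which is exactly the situation FWLB sizing is meant to avoid.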


Network Topology
FWLB solutions must support multiple topologies. These architectures may also have some security holes or weaknesses. It is in your best interest to work with the security team to define and comply with your corporate internal security policies. This requires the network group to harden the infrastructure with security best practices. This section discusses two topologies:
• One Arm Topology
• Sandwich Topology

One Arm Topology
Figure 5-1 displays the tested topology, the One Arm Topology. In this topology, the firewall devices are physically and logically adjacent to the switch itself. The load-balancing device processes both the inbound and outbound flows to and from the firewall cluster. Therefore, both the insecure and secure interfaces of the firewall are on the same switch. This type of deployment relies solely on the policies put in place to secure the switch itself. If someone were to gain access to the switch and had the ability to change the configuration parameters, then this infrastructure would be compromised.
Figure 5-1 FWLB One Arm Topology



Sandwich Topology
The most common deployment is the Sandwich topology, shown in Figure 5-2. This topology requires a load-balancing device before and after the firewall cluster. This type of design ensures the highest level of security due to physical separation of the firewall interfaces across multiple switches. In FWLB technologies, there are two hash predictors:

• Bi-directional hash – Requires both load-balancing devices to share a common hash value that ultimately produces the same route. Bi-directional hashing is accomplished by hashing the source and destination IP addresses along with the destination port of the given flow.
• Uni-directional hash – Produces the route in the same fashion as a bi-directional hash, but also creates a TCP connection table with the reverse flow path defined. This allows return-path traffic to be matched against the connection table rather than being hashed.
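The bi-directional property can be illustrated with a small sketch. This is not the CSM's actual hash algorithm, only a demonstration of the idea: combining the addresses with a commutative operation makes the forward and return flows of a session select the same firewall. The farm addresses reuse the addressing from the configuration examples in this chapter.

```python
import socket
import struct

# Hypothetical firewall farm (outside interface addresses).
FIREWALLS = ["100.0.0.3", "100.0.0.4"]

def ip_to_int(addr: str) -> int:
    return struct.unpack("!I", socket.inet_aton(addr))[0]

def pick_firewall(src_ip: str, dst_ip: str, port: int) -> str:
    # XOR is commutative, so swapping src and dst (the return flow)
    # yields the same hash value, and therefore the same firewall.
    h = ip_to_int(src_ip) ^ ip_to_int(dst_ip) ^ port
    return FIREWALLS[h % len(FIREWALLS)]

outbound = pick_firewall("10.1.1.5", "200.0.0.100", 80)
inbound = pick_firewall("200.0.0.100", "10.1.1.5", 80)
print(outbound == inbound)  # True: both directions traverse the same firewall
```

This symmetry is what allows two independent load balancers in a Sandwich topology to agree on the route without exchanging per-flow state.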
Figure 5-2   FWLB Sandwich Topology

Some environments define DMZ segments, which house the DNS servers, FTP servers, and SMTP relay servers. Some of these segments may also house HTTP servers for the front end of a three-tier architecture. The design principles remain the same for implementing the DMZ portion of the FWLB architecture shown in Figure 5-3.


Figure 5-3   FWLB DMZ Topology Details

(Figure 5-3 shows the Internet at the top, client VLAN 100 and server VLAN 101 on the insecure side, client VLAN 200 and server VLAN 201 on the secure side, DMZ VLANs 300/301, and the server farm at the bottom.)

System Components
This FWLB topology was tested using the CSM with PIX 535 firewalls.

Hardware Requirements and Software Requirements
This section lists the minimum software release requirements.
Table 5-1   Hardware and Software Requirements

Product Number                               Product Description                          Minimum SW Version   Recommended SW Version
WS-X6066-SLB-APC with Supervisor Engine 1A   Content Switching Module                     1.1(1)               2.1(1) or higher (Cisco IOS 12.1(8a)EX)
WS-X6066-SLB-APC with Supervisor Engine 2    Content Switching Module                     1.2(1)               2.1(1) or higher (Cisco IOS 12.1(8a)EX)
72-876-01                                    Console Cable                                N/A                  N/A
800-05097-01                                 Accessory kit (contains the Console Cable)   N/A                  N/A


The CSM release requirements in conjunction with the Cisco IOS release are:
• Catalyst 6000 family CSM software release 1.1(1) requires Cisco IOS Release 12.1(6)E or 12.1(7)E.
• Catalyst 6000 family CSM software release 1.2(1) requires Cisco IOS Release 12.1(8a)E or later only.
• Catalyst 6000 family CSM software release 2.1(1) requires Cisco IOS Release 12.1(8a)EX or later only.

Caution: You cannot use the CSM in a Catalyst 6000 family switch running the Catalyst operating system.

Features
Although most of these designs are deployed in an active-standby configuration, they can also be deployed with all of the firewall devices in an active state. Examples of active-active deployments are those in which all active firewalls share state information in real time across all devices, or in which the devices are configured in an all-active stateless topology. In the stateless case, either the devices do not have failover capabilities or stateful failover has been disabled. This type of design is acceptable in environments that do not require stateful failover; next-click failover is acceptable and TCP or HTTP state is not required to be preserved. The common PIX deployment is also supported. In this case, the active firewalls are adjacent to the primary Catalyst 6000 switch and CSM, and the standby unit is adjacent to the standby Catalyst 6000 switch and standby CSM. In this environment, a firewall failover does not cause the CSM to fail, but it does flush the content addressable memory (CAM) table on the primary switch, which results in the new CAM entries appearing on the standby switch. Firewall redundancy is at Layer 2 for this topology.

Configuration Tasks
The configuration tasks in this section apply to the One Arm topology configuration shown in Figure 5-4. These configuration examples are a guideline for the configuration of this topology.


Figure 5-4   One-Arm Topology

(Figure 5-4 shows client VLAN 100 and server VLAN 101 on the insecure side, and client VLAN 200 and server VLAN 201 on the secure side, between the Internet and the server farm.)

Configuring FWLB with the CSM
The first step of the FWLB design with the CSM is to define your VLANs. This is necessary because the CSM uses a concept of client and server VLANs to load balance traffic to and from the firewall farm. As shown in Figure 5-4, the client VLANs handle the inbound and outbound traffic from clients and servers, respectively. The server VLANs handle the traffic destined to either the internal or external interfaces of the firewalls. Therefore, each side of a FWLB topology requires both VLANs.

On the insecure side (or outside interface) of the firewall cluster, you must configure client VLAN 100. This defines the origination point of the ingress traffic that needs to be load balanced. Configure server VLAN 101 to define the firewall's interfaces, which are configured by defining a server farm that hosts the outside interface addresses to load balance across.

After this is complete, configure the secure side VLANs. Configure the secure client side VLAN 200 for outbound, internally originated traffic. This also defines the originating VLAN for the traffic that needs to be load balanced. Finally, server VLAN 201, which hosts the inside interfaces of the firewall cluster, must also be configured. Next, complete the same configurations for the DMZ segments. The following sections provide step-by-step examples and explanations.


VLAN Configuration on the CSM
When beginning the FWLB configuration, use the VLAN database on the switch to define and configure the VLANs that the CSM uses. Once you complete the initial VLAN configuration on the switch, you can begin configuring the CSM. The first step is to map the VLANs created on the switch to their respective client and server VLANs on the CSM.

Note: In the one-arm topology, both the insecure and secure VLANs are configured on the same CSM. In the Sandwich topology, the secure VLAN and DMZ VLAN configuration is implemented on the secure side switch instead.

To configure the VLANs on the CSM, do the following:

Step 1   Configure the insecure/outside VLANs on CSM A.
Switch-A(config)# module csm 4
Switch-A(config-module-csm)# vlan 100 client
Switch-A(config-slb-vlan-client)# ip address 100.0.0.21 255.255.255.0
Switch-A(config-slb-vlan-client)# gateway 100.0.0.1
Switch-A(config-slb-vlan-client)# exit
Switch-A(config-module-csm)# vlan 101 server
Switch-A(config-slb-vlan-server)# ip address 100.0.0.21 255.255.255.0
Switch-A(config-slb-vlan-server)# alias 100.0.0.20 255.255.255.0

Step 2   Configure the secure/inside VLANs on CSM A.
Switch-A(config)# module csm 4
Switch-A(config-module-csm)# vlan 200 client
Switch-A(config-slb-vlan-client)# ip address 200.0.0.21 255.255.255.0
Switch-A(config-slb-vlan-client)# gateway 200.0.0.1
Switch-A(config-slb-vlan-client)# exit
Switch-A(config-module-csm)# vlan 201 server
Switch-A(config-slb-vlan-server)# ip address 200.0.0.21 255.255.255.0
Switch-A(config-slb-vlan-server)# alias 200.0.0.1 255.255.255.0

Step 3   Configure the DMZ VLANs for any DMZ interfaces that may be present in your topology.
Switch-A(config)# module csm 4
Switch-A(config-module-csm)# vlan 300 client
Switch-A(config-slb-vlan-client)# ip address 201.0.0.21 255.255.255.0
Switch-A(config-slb-vlan-client)# gateway 201.0.0.1
Switch-A(config-slb-vlan-client)# exit
Switch-A(config-module-csm)# vlan 301 server
Switch-A(config-slb-vlan-server)# ip address 201.0.0.21 255.255.255.0
Switch-A(config-slb-vlan-server)# alias 201.0.0.1 255.255.255.0

Server Farm Configuration
After the CSM VLAN configurations are complete, the server farms (SF) need to be established. Again, for each side of the firewall farm, a client and server farm must be established. These configurations define the physical addressing of the firewall interfaces as well as the predictor algorithm.


Note: In the one-arm topology, all server farms and virtual services are created on the same switch. In the Sandwich topology, the Secure Server Farm (SEC-SF) and Generic Server Farm (GENERIC-SF) are implemented on the secure side switch.

Step 1   Configure the INSEC-SF on CSM A.
Switch-A(config)# module csm 4
Switch-A(config-module-csm)# serverfarm INSEC-SF
Switch-A(config-slb-sfarm)# no nat server
Switch-A(config-slb-sfarm)# predictor hash address source 255.255.255.255
Switch-A(config-slb-sfarm)# real 100.0.0.3
Switch-A(config-slb-real)# inservice
Switch-A(config-slb-real)# exit
Switch-A(config-slb-sfarm)# real 100.0.0.4
Switch-A(config-slb-real)# inservice

Step 2   Configure the SEC-SF on CSM A.
Switch-A(config)# module csm 4
Switch-A(config-module-csm)# serverfarm SEC-SF
Switch-A(config-slb-sfarm)# no nat server
Switch-A(config-slb-sfarm)# predictor hash address destination 255.255.255.255
Switch-A(config-slb-sfarm)# real 200.0.0.3
Switch-A(config-slb-real)# inservice
Switch-A(config-slb-real)# exit
Switch-A(config-slb-sfarm)# real 200.0.0.4
Switch-A(config-slb-real)# inservice

Step 3   Configure the DMZ-SF on CSM A.
Switch-A(config)# module csm 4
Switch-A(config-module-csm)# serverfarm DMZ-SF
Switch-A(config-slb-sfarm)# no nat server
Switch-A(config-slb-sfarm)# predictor hash address destination 255.255.255.255
Switch-A(config-slb-sfarm)# real 201.0.0.3
Switch-A(config-slb-real)# inservice
Switch-A(config-slb-real)# exit
Switch-A(config-slb-sfarm)# real 201.0.0.4
Switch-A(config-slb-real)# inservice

Step 4   Configure the GENERIC-SF on CSM A.
Switch-A(config)# module csm 4
Switch-A(config-module-csm)# serverfarm GENERIC-SF
Switch-A(config-slb-sfarm)# real 200.1.0.101
Switch-A(config-slb-real)# inservice
Switch-A(config-slb-real)# exit
Switch-A(config-slb-sfarm)# real 200.1.0.102
Switch-A(config-slb-real)# inservice

Virtual Service Configuration
After creating the client and server VLANs and the server farms, the virtual services (VS) must be defined. A virtual service definition associates a server farm with an incoming traffic destination. As shown in the configurations below, the virtual service for the Insecure Server Farm (INSEC-SF) states that any traffic destined for the 200.0.0.0 network should be load balanced across the INSEC-SF. The default load-balancing algorithm is round robin.


Step 1   Configure the INSEC-VS on CSM A.
Switch-A(config)# module csm 4
Switch-A(config-module-csm)# vserver INSEC-VS
Switch-A(config-slb-vserver)# virtual 200.0.0.0 255.255.255.0 any
Switch-A(config-slb-vserver)# vlan 100
Switch-A(config-slb-vserver)# serverfarm INSEC-SF
Switch-A(config-slb-vserver)# replicate csrp connection
Switch-A(config-slb-vserver)# inservice

Step 2   Configure the SEC-VS on CSM A.
Switch-A(config)# module csm 4
Switch-A(config-module-csm)# vserver SEC-VS
Switch-A(config-slb-vserver)# virtual 0.0.0.0 0.0.0.0 any
Switch-A(config-slb-vserver)# vlan 200
Switch-A(config-slb-vserver)# serverfarm SEC-SF
Switch-A(config-slb-vserver)# replicate csrp connection
Switch-A(config-slb-vserver)# inservice

Step 3   Configure the DMZ-VS on CSM A.
Switch-A(config)# module csm 4
Switch-A(config-module-csm)# vserver DMZ-VS
Switch-A(config-slb-vserver)# virtual 0.0.0.0 0.0.0.0 any
Switch-A(config-slb-vserver)# vlan 300
Switch-A(config-slb-vserver)# serverfarm DMZ-SF
Switch-A(config-slb-vserver)# replicate csrp connection
Switch-A(config-slb-vserver)# inservice

Step 4   Configure the GENERIC-VS on CSM A.
Switch-A(config)# module csm 4
Switch-A(config-module-csm)# vserver GENERIC-VS
Switch-A(config-slb-vserver)# virtual 200.0.0.100 255.255.255.255 tcp 0
Switch-A(config-slb-vserver)# vlan 200
Switch-A(config-slb-vserver)# serverfarm GENERIC-SF
Switch-A(config-slb-vserver)# replicate csrp connection
Switch-A(config-slb-vserver)# inservice

Probe Definitions
Health-check probes that traverse the firewall cluster must be configured to ensure that the firewall routes defined in each server farm and virtual service are active. The probes must first be referenced in the server farm configuration; then the probe itself must be configured. The probe defines an ICMP health-check (ping) instance to an address other than the firewall interfaces. In this case, the probes defined in the secure side server farms ping the alias IP address of the insecure VLAN, which is located on the opposite side of the firewall cluster. Pings are sent out every active route defined in the server farm configuration.
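Conceptually, the probe logic amounts to the following sketch (illustrative only; the CSM performs this internally): ping an address on the far side of the firewall cluster through each real, and flag the reals whose path fails.

```python
# Conceptual model of FWLB ICMP probing: the probe target sits on the far
# side of the firewall cluster, so a successful ping through a given
# firewall ("real") proves the whole path is up. Illustrative only.
def update_farm(reals, probe):
    """probe(real) -> bool: True if the ICMP probe through `real` succeeded."""
    return {real: ("inservice" if probe(real) else "failed") for real in reals}

# Simulated results: the path through firewall 200.0.0.4 is down.
status = update_farm(["200.0.0.3", "200.0.0.4"], probe=lambda r: r != "200.0.0.4")
print(status)  # {'200.0.0.3': 'inservice', '200.0.0.4': 'failed'}
```

A failed real is taken out of the hash rotation until its probe succeeds again, which is why the probe target must be reachable only through the firewall path being tested.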

Note: Although the CSM supports multiple probe types, only ICMP is supported with FWLB topologies.

Step 1   Configure the probes in the server farm.
Switch-A(config)# module csm 4
Switch-A(config-module-csm)# serverfarm SEC-SF
Switch-A(config-slb-sfarm)# real 200.0.0.3
Switch-A(config-slb-sfarm)# real 200.0.0.4
Switch-A(config-slb-sfarm)# probe sec-fw-probe

Step 2   Configure the probe.
Switch-A(config-module-csm)# probe sec-fw-probe icmp
Switch-A(config-slb-probe-icmp)# address 100.0.0.20

High Availability
Future testing of this solution will include performance benchmarks; the results will be included in future revisions of this document.

Configuring CSM Failover
The CSM is also capable of stateful failover configurations. Cisco recommends that you evaluate how much traffic traverses the fault-tolerant VLAN and ensure that the port-channel over which state information travels is not oversubscribed. This is imperative for the stability of the failover architecture. If you are deploying this topology in a high-throughput architecture, it is appropriate to deploy a dedicated fault-tolerant (FT) VLAN. This section contains configuration examples that define the FT VLAN configuration on the CSM.

FT VLAN Configuration
Implement the following configuration on both Catalyst switches that house a CSM.
Step 1   Configure the FT group.
Switch-x(config-module-csm)# ft group 1 vlan 302
Switch-x(config-slb-ft)# priority 10

Note: The above configuration does not define preemption. When a CSM device fails, the state information is carried across to the secondary; upon preemption, however, the connections already established would be dropped.

Step 2   Define an associated port-channel that the FT traffic will traverse.
Switch-x(config)# interface Port-channel1
Switch-x(config-if)# no ip address
Switch-x(config-if)# switchport
Switch-x(config-if)# switchport trunk encapsulation dot1q

Step 3   Assign the channel-group to a specific interface.
Switch-x(config)# interface GigabitEthernet1/2
Switch-x(config-if)# no ip address
Switch-x(config-if)# switchport
Switch-x(config-if)# switchport trunk encapsulation dot1q
Switch-x(config-if)# channel-group 1 mode on

To verify the configuration, enter the show mod csm 4 ft detail command.
FT group 1, vlan 302
This box is active
priority 30, heartbeat 1, failover 3, preemption is off
total buffer count 6213, illegal state transitions 0
receive buffers not committed 0, send buffers not committed 0
updates: sent 4, received 0, committed 0
coup msgs: sent 1, received 0
election msgs: sent 138, received 6
heartbeat msgs: sent 1118313, received 886
relinquish msgs: sent 0, received 1
conn replicate msgs: sent 0, received 0
conn refresh msgs: sent 0, received 0
conn reset msgs: sent 0, received 0
conn redundancy errors: msgs lost 0, msgs rejected 0
packets: total received 0, total dropped 0, duplicates 0
checksum failed 0, dumped 4, buffer unavailable 0
number of state updates in last 10 transfers: 0 0 0 0 0 0 0 0 0 0

Convergence Results
After defining the failover configuration, the environment was tested by opening a Telnet session and testing connectivity to the backend servers. Failover of the overall device from a CSM perspective occurred at 4 seconds. This is determined by the default failover timers in the FT configuration; the heartbeat for the failover timers is 1 second, as shown below:

priority 30, heartbeat 1, failover 3, preemption is off

This means that after the failover maximum of 3 missed heartbeats, the CSM fails over to the standby device; in our test environment, this took 4 seconds. In a convergence scenario, the firewall devices can also be implemented in an active-standby fashion, assuming the firewalls have Layer 2 adjacency. In this environment, the dead timers and associated values are configurable. For example, the heartbeat in the PIX is coupled with a keep-alive value that polls the standby unit as well as link utilization. If the polling interval is set to 3 seconds with a heartbeat value of 3, it takes 9 seconds for the firewall device to fail over to the standby unit.
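The failover arithmetic above can be expressed as a small sketch. The formulas are inferred from the observed behavior described in this section, not taken from product documentation:

```python
# Inferred timing model: the CSM declared failover after 3 missed 1-second
# heartbeats, with takeover completing one heartbeat interval later
# (4 seconds observed). The PIX figure is simply poll interval x retries.
def csm_failover_seconds(heartbeat_s: int, failover_retries: int) -> int:
    return heartbeat_s * (failover_retries + 1)

def pix_failover_seconds(poll_s: int, heartbeat_count: int) -> int:
    return poll_s * heartbeat_count

print(csm_failover_seconds(1, 3))  # 4, matching the tested CSM convergence
print(pix_failover_seconds(3, 3))  # 9, matching the PIX example
```

Lengthening either timer trades slower convergence for fewer spurious failovers on a congested FT VLAN.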

Performance Benchmarks
This section will define the performance capabilities of the overall solution. These numbers will be included in future revisions of the document.

Complete Configurations
This section contains example complete configurations for the CSM (Example 5-1) and the PIX (Example 5-2).
Example 5-1 Catalyst with CSM Configuration

FWLB-6K1# wr t
Building configuration...
Current configuration : 6844 bytes
!


version 12.1
service timestamps debug uptime
service timestamps log uptime
service password-encryption
!
hostname FWLB-6K1
!
boot buffersize 522200
boot system slot0:c6sup12-jsvdbg-mz.121-99.HERSCHEL_IOS_UBLDIT68
boot bootldr bootflash:c6msfc2-boot-mz.121-5c.E8
enable secret 5 $1$1z2H$9yx7dmWgWPht4GaKONkCJ0
!
redundancy
 main-cpu
  auto-sync standard
ip subnet-zero
!
no ip domain-lookup
!
no mls ip multicast non-rpf cef
mls qos statistics-export interval 300
mls qos statistics-export delimiter |
module ContentSwitchingModule 4
 vlan 100 client
  ip address 100.0.0.21 255.255.255.0
 !
 vlan 101 server
  ip address 100.0.0.21 255.255.255.0
 !
 vlan 200 client
  ip address 200.0.0.21 255.255.255.0
 !
 vlan 201 server
  ip address 200.0.0.21 255.255.255.0
 !
 vlan 300 client
  ip address 201.0.0.21 255.255.255.0
 !
 vlan 301 server
  ip address 201.0.0.21 255.255.255.0
 !
 serverfarm DMZ-SF
  no nat server
  no nat client
  real 201.0.0.3
   inservice
  real 201.0.0.4
   inservice
 !
 serverfarm GENERIC
  no nat server
  no nat client
  real 200.0.0.101
   no inservice
  real 200.0.0.102
   inservice
 !
 serverfarm INSEC-SF
  no nat server
  no nat client
  real 100.0.0.4
   inservice
  real 100.0.0.3
   inservice
  real 100.0.0.5
   no inservice
 !
 serverfarm SEC-SF
  no nat server
  no nat client
  real 200.0.0.4
   inservice
  real 200.0.0.3
   inservice
  real 200.0.0.5
   no inservice
 !
 vserver DMZ-VS
  virtual 0.0.0.0 0.0.0.0 any
  vlan 300
  serverfarm DMZ-SF
  inservice
 !
 vserver GENERIC-VS
  virtual 200.0.0.100 tcp 0
  vlan 200
  inservice
 !
 vserver INSEC-VS
  virtual 200.0.0.0 255.255.255.0 any
  vlan 100
  serverfarm INSEC-SF
  inservice
 !
 vserver SEC-VS
  virtual 0.0.0.0 0.0.0.0 any
  vlan 200
  serverfarm SEC-SF
  inservice
 !
 ft group 1 vlan 302
  priority 10

Example 5-2   PIX Configuration

Note: Each PIX configuration has completely unique IP addressing per interface.
FWLB-PIX3# wr t
Building configuration...
: Saved
:
PIX Version 5.3(2)
nameif gb-ethernet0 outside security0
nameif gb-ethernet1 inside security100
nameif ethernet0 intf2 security10
nameif ethernet1 intf3 security15
enable password 2KFQnbNIdI.2KYOU encrypted
passwd 2KFQnbNIdI.2KYOU encrypted
hostname FWLB-PIX1
domain-name cisco.com
fixup protocol ftp 21
fixup protocol http 80
fixup protocol h323 1720
fixup protocol rsh 514
fixup protocol rtsp 554
fixup protocol smtp 25
fixup protocol sqlnet 1521
fixup protocol sip 5060
names
pager lines 24
logging on
no logging timestamp
no logging standby
no logging console
no logging monitor
no logging buffered
no logging trap
no logging history
logging facility 20
logging queue 512
interface gb-ethernet0 1000auto
interface gb-ethernet1 1000auto
interface ethernet0 auto shutdown
interface ethernet1 auto shutdown
mtu outside 1500
mtu inside 1500
mtu intf2 1500
mtu intf3 1500
ip address outside 100.0.0.3 255.255.255.0
ip address inside 200.0.0.3 255.255.255.0
ip address intf2 250.0.0.3 255.255.255.0
ip address intf3 127.0.0.1 255.255.255.255
ip audit info action alarm
ip audit attack action alarm
no failover
failover timeout 0:00:00
failover poll 15
failover ip address outside 0.0.0.0
failover ip address inside 0.0.0.0
failover ip address intf2 0.0.0.0
failover ip address intf3 0.0.0.0
arp timeout 14400
nat (inside) 0 100.0.0.102 255.255.255.255 0 0
nat (inside) 0 200.0.0.102 255.255.255.255 0 0
static (inside,outside) 200.0.0.102 200.0.0.102 netmask 255.255.255.255 0 0
static (inside,outside) 100.0.0.102 100.0.0.102 netmask 255.255.255.255 0 0
conduit permit tcp any any
route outside 0.0.0.0 0.0.0.0 100.0.0.21 1
route intf2 201.0.0.0 255.255.255.0 201.0.0.21
timeout xlate 3:00:00
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h323 0:05:00 sip 0:30:00 sip_media 0:02:00
timeout uauth 0:05:00 absolute
aaa-server TACACS+ protocol tacacs+
aaa-server RADIUS protocol radius
no snmp-server location
no snmp-server contact
snmp-server community public
no snmp-server enable traps
floodguard enable
no sysopt route dnat
isakmp identity hostname
telnet timeout 5
ssh timeout 5
terminal width 80
Cryptochecksum:25fc99017802445b1d8d22743f0d7a35
: end


[OK]


Chapter 6
Multi Site Multi Homing
Internet connectivity is nearly universal in enterprise infrastructures today, although the topology designs may be unique. This chapter introduces a reference topology for Internet edge network deployments, covering basic design principles as well as common deployment barriers related to Internet edge topologies.

Overview
This chapter identifies and clarifies multi-site Internet edge designs. Multi-site Internet edge design refers to having more than one data center connected to the Internet backbone; each respective data center may be multi-homed or have a single connection. This architecture includes the core design principles associated with all network infrastructure designs while paying special attention to the unique requirements of Internet edge multi-site topologies. Like any infrastructure design, these designs must be highly scalable while maintaining the key aspects of security and redundancy. The key security functions include:
• Element security
• Identity services
• IP anti-spoofing
• Demilitarized zones (DMZ)
• Basic filtering and application definition
• Intrusion detection

The key redundancy functions associated with multi-site topologies are:
• Multiple data centers that act as Internet gateways for internal users.
• Distributed data centers that provide Internet/intranet server farm resiliency.

This chapter also discusses multi-homing. Multi-homing provides ISP resiliency by connecting each data center to two or more ISPs, depending on the bandwidth requirements, the server farm architecture, or other Internet services. These Internet connections can be a transit point for traffic both inbound to the architecture and outbound to the Internet backbone for the Internet/intranet server farms, as depicted in Figure 6-1. This chapter also describes some common deployment problems encountered when introducing distributed data centers into a network topology. Deploying distributed data centers introduces additional complexities for network administrators who want to fully utilize both Internet gateway locations. These challenges include:
• Application distribution
• DNS propagation
• Replication timeouts

These design issues are covered in the Data Center Networking: Distributed Data Center SRND located at http://www.cisco.com/en/US/netsol/ns110/ns53/ns224/ns304/networking_solutions_design_guidances_list.html.
Figure 6-1 Data Center Topology

(Figure 6-1 depicts the Internet gateway (SP1/SP2), VPN, Internet edge, and DMZ connecting the Internet, PSTN remote offices, and partner WANs to the corporate infrastructure, which comprises the Internet server farm, campus core, intranet data center, extranet data center, and private WAN.)


Multi-Site Multi-Homing Design Principles
As mentioned above, Internet edge solutions touch many different types of enterprise networks and therefore may take many different topologies, ranging from a remote office connection to a major ISP peering point. Using the common design principles associated with all network architectures allows you to carry these recommendations into almost all Internet edge topologies, from a single-site ISP connection to a multi-site multi-homing environment.

High Availability
With topologies that connect an enterprise to a single ISP, the differences in redundancy reside at the ISP peering point. If you have a single ISP connection at the edge of your network topology, redundancy is moot because you have only a single exit point: if the primary edge router fails, the Internet connection itself is down. Therefore, defining redundancy at the edge of the network has no beneficial effect unless the provider supplies two terrestrial circuits, as depicted below. Multi-homing implementations offer redundancy in these instances, as well as where there are multiple data centers for a single enterprise. You can leverage each respective data center for redundancy and scalability if you partition applications across multiple data centers. For more information on distributed data center topologies, refer to the Data Center Networking: Distributed Data Center SRND located at http://www.cisco.com/en/US/netsol/ns110/ns53/ns224/ns304/networking_solutions_design_guidances_list.html.


Figure 6-2    Multi-Site Multi-Homing Design

[Figure: Internet (SP 1, SP 2, SP 3) connected to the West Coast and East Coast data centers, each serving its regional remote offices over the corporate WAN]

Multi-site Internet edge topologies are also composed of multiple layers. There must be no single point of failure within the network architecture; therefore, complete Internet edge device redundancy is a necessity. The infrastructure devices, such as routers and switches, coupled with specific Layer 2 and Layer 3 technologies, help achieve this device redundancy. To meet this redundancy requirement, Internet edge topologies use some of the key functions of the IOS software. The Layer 3 features used for high availability offer redundant default gateways for networked hosts and provide a predictable traffic flow both in normal operating conditions and under the adverse conditions surrounding a network link or device failure. These Layer 3 features include:
• Hot Standby Router Protocol (HSRP)
• Multigroup Hot Standby Router Protocol (MHSRP)
• Routing protocol metric tuning (EIGRP and OSPF)

These Layer 3 functions also apply to redundancy by offering multiple default gateways in the network topologies. HSRP and Multigroup HSRP offer Layer 3 gateway redundancy, whereas the dynamic routing protocols offer a look into network availability from a higher level.


For instance, you could deploy HSRP between the edge routers to propagate a single default gateway instance to the internal networks. In this case, if the primary router fails, the HSRP address is still active on the secondary router, so the defined static route is still valid.

Scalability
The network architecture must be scalable to accommodate increasing user support as well as unforeseen bursts in network traffic. While feature availability and the processing power of network devices are important design considerations, physical capacity attributes, such as port density, can limit architecture scalability. The termination of circuits can become a burden on device scalability within the border layer of this topology. The same burden applies to the firewall provisioning layer and the Layer 3 switching layer. Port density is also important at the Layer 3 switching layer because it provides additional connections for host devices, in this case, servers.

Intelligent Network Services
In all network topologies, the intelligent network services present in the IOS software, such as QoS functions and high availability technologies like HSRP, are used to ensure network availability. HSRP is documented in detail below for typical deployment scenarios.

HSRP
HSRP enables a set of routers to work together, presenting the appearance of a single virtual router or default gateway to the hosts on a LAN. HSRP is particularly useful in environments where critical applications are running and fault-tolerant networks have been designed. By sharing an IP address and a MAC address, two or more routers acting as a single virtual router can seamlessly assume the routing responsibility in a defined event or an unexpected failure. Hosts on the LAN continue to forward IP packets to a consistent IP and MAC address, enabling a transparent changeover of routing devices.

HSRP allows you to configure hot standby groups that share responsibility for an IP address. You can give each router a priority, which weights the active router selection. One router in each group is elected the active forwarder and one is elected the standby router, according to the routers' configured priorities. The router with the highest priority wins and, in the event of a tie in priority, the higher configured IP address breaks the tie. Other routers in the group monitor the status of the active and standby routers to enable further fault tolerance.

All HSRP routers participating in a standby group watch for hello packets from the active and standby routers. They learn the hello and dead timers, as well as the shared standby IP address, from the active router in the group if these parameters are not explicitly configured on each individual router. Although this is a dynamic process, Cisco recommends that you define the HSRP dead timers in the topology.

If the active router becomes unavailable due to scheduled maintenance, power failure, or other reasons, the standby router assumes its functionality transparently within a few seconds. Failover occurs when three successive hello packets are missed and the dead timer is reached. The standby router promptly takes over the virtual addresses, identity, and responsibility. When the standby interface assumes mastership, the new master sends a gratuitous ARP, which updates the Layer 2 switch's content addressable memory (CAM), and it becomes the primary gateway for the devices using this address. These HSRP timers can be configured per HSRP instance.
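As a concrete sketch of these mechanics, a minimal HSRP configuration on one of a pair of routers might look like the following. The interface, addresses, group number, and priority values here are hypothetical illustrations, not taken from the testbed:

interface FastEthernet0/0
 ip address 10.1.1.2 255.255.255.0
 ! Shared virtual gateway address presented to the hosts
 standby 1 ip 10.1.1.1
 ! Higher priority wins the active election
 standby 1 priority 110
 standby 1 preempt
 ! Hello every 1 second, dead timer of 3 seconds
 standby 1 timers 1 3

The peer router would carry the same standby 1 ip statement with a lower priority (for example, 100); hosts point their default gateway at 10.1.1.1, and the standby takes over that address after three missed hellos.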


Routing Protocol Technologies
Before introducing and examining the basic ways in which autonomous systems can be connected to ISPs, some basic routing terminology and concepts must be established. There are three basic routing approaches:

• Static
• Default
• Dynamic

Static routing refers to routes to destinations manually listed in the router. Network reachability, in this case, is not dependent on the existence and state of the network itself. Whether a destination is up or down, the static routes remain in the routing table, and traffic is still sent toward that destination.

Default routing refers to a "last resort" outlet. Traffic to destinations that are unknown to the router is sent to the default outlet. Default routes are also manually listed in the router. Default routing is the easiest form of routing for a domain connected to a single exit point.

Dynamic routing refers to the router learning routes via an interior or exterior routing protocol. Network reachability is dependent on the existence and state of the network. If a destination is down, the route disappears from the routing table and traffic is no longer sent toward that destination.

These three routing approaches are possibilities for all the configurations considered in forthcoming sections, but usually there is an optimal approach. Thus, in illustrating different autonomous systems, this chapter considers whether static, dynamic, default, or some combination of these routing approaches is optimal. This chapter also considers whether interior or exterior routing protocols are appropriate. Interior gateway protocols (IGPs) can be used between the enterprise and provider for the enterprise to advertise its routes. This has all the benefits of dynamic routing because network information and changes are dynamically sent to the provider. The IGP also distributes the network routes upstream to the BGP process.
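To make the distinction concrete, the first two approaches are simply entries listed manually in the configuration. The destination network and next-hop addresses below are hypothetical:

! Static route: reachability for one specific destination network
ip route 192.168.50.0 255.255.255.0 10.1.1.1
! Default route: the "last resort" outlet for all unknown destinations
ip route 0.0.0.0 0.0.0.0 10.1.1.1

Dynamic routing, by contrast, requires no per-destination entries; routes are learned and withdrawn by the routing protocol itself.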

Edge Routing - BGP
Border Gateway Protocol (BGP) performs interdomain routing in TCP/IP networks. BGP is an exterior gateway protocol (EGP), which means that it performs routing between multiple autonomous systems or domains and exchanges routing and reachability information with other BGP systems.

BGP devices exchange routing information upon initial connection and during incremental updates. When a router first connects to the network, BGP routers exchange their entire BGP routing tables; when the routing table changes, those same routers send only the changed portion of their routing tables. BGP routers do not send regularly scheduled routing updates, and BGP routing updates advertise only the optimal path to a network.

BGP uses a single routing metric to determine the best path to a given network. This metric consists of an arbitrary unit number that specifies the degree of preference of a particular link. The BGP metric typically is assigned to each link by the network administrator. The value assigned to a link can be based on any number of criteria, including the number of autonomous systems through which the path passes, stability, speed, delay, or cost.

BGP performs three types of routing: interautonomous system routing, intra-autonomous system routing, and pass-through autonomous system routing.

Interautonomous system routing occurs between two or more BGP routers in different autonomous systems. Peer routers in these systems use BGP to maintain a consistent view of the internetwork topology. BGP neighbors communicating between autonomous systems must reside on the same physical network. The Internet serves as an example of an entity that uses this type of routing because it is comprised of autonomous systems or administrative domains. Many of these domains represent the various institutions, corporations, and entities that make up the Internet. BGP is frequently used for path determination to provide optimal routing within the Internet.

Intra-autonomous system routing occurs between two or more BGP routers located within the same autonomous system. Peer routers within the same autonomous system use BGP to maintain a consistent view of the system topology. BGP is also used to determine which router serves as the connection point for specific external autonomous systems. Once again, the Internet provides an example: an organization, such as a university, could make use of BGP to provide optimal routing within its own administrative domain or autonomous system. The BGP protocol can provide both inter- and intra-autonomous system routing services.

Pass-through autonomous system routing occurs between two or more BGP peer routers that exchange traffic across an autonomous system that does not run BGP. In a pass-through autonomous system environment, the BGP traffic does not originate within the autonomous system in question and is not destined for a node in that autonomous system. BGP must interact with whatever intra-autonomous system routing protocol is being used to successfully transport BGP traffic through that autonomous system.

BGP Attributes
BGP attributes support the control of both inbound and outbound network routes. These attributes can be adjusted to control the decision-making process of BGP itself. The BGP attributes are a set of parameters that describe the characteristics of a prefix (route). The BGP decision process uses these attributes to select its best routes. Specific attributes associated with larger topologies like this one, most notably the MED attribute and the use of route reflectors, are addressed later in this chapter. Figure 6-3 displays a multi-site multi-homed topology.
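As a brief illustration of attribute tuning, an outbound route map could set the MED on prefixes advertised to a provider, suggesting which entry point the neighboring autonomous system should prefer. The AS numbers, neighbor address, and metric value below are hypothetical:

router bgp 100
 neighbor 10.0.0.1 remote-as 1
 ! Apply the MED-setting route map to outbound advertisements
 neighbor 10.0.0.1 route-map SET_MED out
!
route-map SET_MED permit 10
 ! A lower MED indicates a more preferred entry point
 set metric 50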


Figure 6-3    Multi-Site Internet Edge Topology

[Figure: Internet (SP 1, SP 2, SP 3) connected to the West Coast and East Coast data centers, each serving its regional remote offices over the corporate WAN]

Design Caveats
In certain multi-site deployments, device placement becomes a caveat to the overall design. In particular, the placement of the firewall and how it is introduced into the architecture from a routing standpoint are of major concern. There are two main caveats to be concerned with when designing your network:
• Inability to terminate the IGP on the firewall device
• Lack of visibility into upstream route health or interface uptime

In a design where the PIX Firewall is placed at the edge of the network between the Internet border routers and the Internet data center core switches, the PIX can become a black-hole route for the end users that are geographically adjacent to that data center. In the most common deployment, the PIX Firewall is configured with static routes upstream to the Internet border routers and with a static route downstream to the internal Layer 3 switching platform. Since static routing is the configuration of choice, you can assume that the firewall cannot participate in the IGP routing protocol. If the external routes from the Internet border routers disappear from the routing table, the internal routing process has no idea that the path to the Internet is no longer valid. Since the PIX is not participating in an IGP routing protocol, the firewall has no knowledge of the routing updates that take place above the firewall layer. Therefore, the device still accepts packets destined for the Internet. This is usually the case because the Layer 3 switching layer below the PIX device propagates or redistributes a static route of 0.0.0.0 into the IGP downstream.

Workarounds
The aforementioned problem is common when deploying distributed data centers and has the following three workarounds:

• Use BGP to inform the Layer 3 switching platform of the route change by tunneling the I-BGP traffic through the firewall to the peer on the inside interface, or to the Layer 3 switching platform that houses the IGP routing process. This design is documented in Chapter 7, "High Availability via BGP Tunneling."
• With a future release of HSRP, you could use HSRP tracking to track the HSRP interface of the Internet border routers. This assumes that the border routers also implement a tracking instance of the upstream ISP interfaces. This has not been tested or documented.
• Use the Firewall Services Module (FWSM) in the edge Layer 3 switching platform. This deployment allows OSPF routes internal to the IGP to be processed by having the firewall device participate in the OSPF process. This deployment has been tested and is documented in this chapter.
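As noted above, the HSRP tracking workaround has not been tested or documented; conceptually, though, the border router configuration might resemble the following sketch, in which the HSRP priority is decremented when the ISP-facing interface goes down. The interface names, address, and decrement value are hypothetical:

interface FastEthernet0/0
 ! LAN-facing HSRP group shared with the peer border router
 standby 1 ip 172.16.1.1
 standby 1 priority 110
 standby 1 preempt
 ! Decrement priority by 20 if the upstream ISP link fails,
 ! allowing the peer (priority 100) to take over the gateway
 standby 1 track FastEthernet3/0 20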

Multi-Site Multi-Homing Design Recommendations
As mentioned above, multi-site Internet edge topologies differ from single-site topologies in various ways, and the scale of these topologies may also differ. Because these topologies are increasingly important to enterprise business functions, their scalability cannot be overlooked. It is also imperative that this type of architecture have complete redundancy. The functional layers of the Internet edge topologies, and how they interact with one another, are detailed below.

When deploying a distributed data center environment, you must adhere to certain characteristics. For example, these topologies still use a similar ISP multi-homing relationship, but the attributes are slightly different. Also, since this architecture is distributed, it becomes a network that has multiple Internet gateways in different data centers. This network is usually partitioned in such a way that locally adjacent users traverse their respective local data centers. This type of design assumes that you have configured the internal IGP to route locally adjacent end users through their respective data centers while still offering redundancy to the other data center in the event of failure. This distributed data center design topology is deployed using OSPF not-so-stubby areas (NSSAs), which allows you to define multiple Internet data center topologies without changing the integrity of the core infrastructure. Each of the geographically dispersed areas and autonomous systems is represented below in Figure 6-4.


Figure 6-4    Internet Edge AS/Area Topology

[Figure: The East Coast and West Coast edges peer via BGP with ISP AS 1 and AS 2 in the Internet ISP cloud. The enterprise runs BGP AS 100, with OSPF NSSA 251 (East Coast) and OSPF NSSA 252 (West Coast) connecting to OSPF Area 0 and the corporate LAN.]

Border Router Layer
Border routers, typically deployed in pairs, are the edge-facing devices of the network. The number of border routers deployed is a provisioning decision based on memory requirements and physical circuit termination. The border router layer is where you provision ISP termination and initial security parameters. This layer serves as the gateway of the network and uses an externally facing Layer 3 routing protocol, like BGP, integrated with an internally facing OSPF to intelligently route traffic throughout the external and internal networks, respectively. This layer starts the OSPF process internally into the network. The Internet edge in an enterprise environment may provide Internet connectivity to an ISP through the use of multi-homed Internet border routers. This layer also injects the gateway-of-last-resort route into the IGP through specific BGP parameters defined below.

Internet Data Center Core Switching Layer
The Layer 3 switching layer is the layer in the multi-site Internet edge topology that serves as the gateway to the core of the network. This is also a functional layer of the Internet server farm design. This layer may act as either a core layer or an aggregation layer in some design topologies. Yet the primary function, from the Internet edge design standpoint, is to advertise the IGP routing protocol internally to the infrastructure. The OSPF process for each data center interfaces with Area 0 at this layer, as shown in Figure 6-4.


Firewall Layer
The firewall layer is a security layer that allows stateful packet inspection into the network infrastructure and to the services and applications offered in the server farms and database layers. In this topology, the firewall layer is represented by the FWSM in the Catalyst 6500 series switching platform. This layer also acts as the network address translation (NAT) device in most design topologies. NAT at the Internet edge is common because of the ever-depleting IPv4 address pool associated with ISPs. Many ISPs can give only a limited address range, which, in turn, requires NAT pools at the egress point of the topology.
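As a sketch of this NAT function in PIX/FWSM syntax, inside hosts could be translated to a small pool of provider-assigned addresses as follows. The pool range shown is hypothetical:

! Translate all inside hosts...
nat (inside) 1 0.0.0.0 0.0.0.0
! ...to a limited pool of registered addresses on the outside
global (outside) 1 192.0.2.10-192.0.2.50 netmask 255.255.255.0

If the pool is exhausted, a single PAT address could be added to the same global pool to overload further connections.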

Data Center Core Switching Layer
The data center core layer in this topology is the transport layer between data centers. This assumes that the layers are represented in the same Area 0 in the OSPF routing process. This layer is also the termination point for both the geographically adjacent WAN routers and the geographically adjacent LANs in the architecture. This layer allows you to control the administrative distances or actual costs associated with the gigabit links to the upstream edge Layer 3 switches. Figure 6-5 displays how the network topology is partitioned into two different geographic areas.
Figure 6-5    Physical Layer Topology

[Figure: The Internet ISP cloud (AS 1/2) advertises networks 1.x.x.x through 5.x.x.x toward the East Coast and 6.x.x.x through 12.x.x.x toward the West Coast. The East Coast edge uses network 172.16.11.x upstream, 172.16.253.x on the FWSM outside, and 172.16.251.x on the FWSM inside; the West Coast edge uses 172.16.10.x upstream, 172.16.254.x on the FWSM outside, and 172.16.252.x on the FWSM inside. The East and West Coast cores connect over 172.16.250.x toward the corporate LAN.]


Implementation Details
Below are the implementation details associated with defining and configuring the multi-site Internet edge topology, along with the specific configurations at each layer that allow for the route control and failover described above.

Multi-Site Multi-Homing Topology
In this section, the router configurations were taken from each of the East Coast routers depicted in Figure 6-5. These configurations were defined solely for this testbed and are not representative of normal ISP-recommended configurations.

Internet Cloud Router Configurations
The Internet cloud routers were configured with loopback interfaces for testing purposes. These interfaces allow ping traffic to traverse the internal network outbound to the Internet backbone. Each configuration below was defined with its respective network segments, which also made it easier to determine the routes locally adjacent to each of the Internet gateway routers.

Internet Cloud Router ISP AS1
hostname InternetCloud1
!
interface Loopback0
 ip address 2.0.0.1 255.255.255.0 secondary
 ip address 3.0.0.1 255.255.255.0 secondary
 ip address 4.0.0.1 255.255.255.0 secondary
 ip address 5.0.0.1 255.255.255.0 secondary
 ip address 1.0.0.1 255.255.255.0

When looking into the BGP process below, you can see that only specific subnets were defined for redistribution. Since this solution is not performance focused, the decision was made to only propagate those routes. This allows for BGP redistribution to the lower layers to ensure that the internal OSPF redistribution is working correctly.
router bgp 1
 network 1.0.0.0
 network 2.0.0.0
 network 3.0.0.0
 network 4.0.0.0
 network 5.0.0.0
 network 20.10.5.0
 network 172.16.11.0
 redistribute connected
 neighbor 20.10.5.254 remote-as 2
 neighbor 172.16.11.254 remote-as 100

Internet Cloud Router ISP AS2
The configuration of the second Internet cloud router is the same as the first except that different IP addresses were used.
hostname InternetCloud2

interface Loopback0
 ip address 7.0.0.1 255.255.255.0 secondary
 ip address 8.0.0.1 255.255.255.0 secondary
 ip address 9.0.0.1 255.255.255.0 secondary
 ip address 11.0.0.1 255.255.255.0 secondary
 ip address 12.0.0.1 255.255.255.0 secondary
 ip address 6.0.0.1 255.255.255.0
 no ip directed-broadcast
!

router bgp 2
 network 6.0.0.0
 network 7.0.0.0
 network 8.0.0.0
 network 9.0.0.0
 network 11.0.0.0
 network 12.0.0.0
 network 20.10.5.0
 network 172.16.10.0
 redistribute connected
 neighbor 20.10.5.1 remote-as 1
 neighbor 172.16.10.254 remote-as 100

Internet Edge Configurations
The first layer in the topology is the Internet border router layer. At this layer, the BGP peering relationship to the ISP routers takes place, and the first instance of the OSPF process begins. The BGP process propagates a default route into the OSPF routing instance. Below are the Internet edge router configurations.

East Coast Internet Edge Configurations
EdgeRouter1# wr t
Building configuration...
!
hostname EdgeRouter1
!

The interface configuration below represents the downstream link to the outside interface or segment of the FWSM link:

Note

The OSPF hello and dead-interval timers must be the same across all links and interfaces:

!
interface FastEthernet0/0
 ip address 172.16.253.254 255.255.255.0
 no ip route-cache
 ip ospf hello-interval 1
 ip ospf dead-interval 3
 no ip mroute-cache
 duplex full
!

The following configuration examples are associated with the upstream links to the ISP clouds:
interface FastEthernet3/0
 ip address 172.16.11.254 255.255.255.0
 no ip redirects
 no ip route-cache
 no ip mroute-cache
 duplex half

The following OSPF and BGP edge configurations allow the edge to redistribute BGP routes to the internal network. The redistribute bgp command within the OSPF process causes this redistribution. This assumes that the router can propagate those routes internally to the other network segments. Injecting full BGP routes into an IGP is not recommended; doing so adds excessive routing overhead to any IGP. Interior routing protocols were never designed to handle more than the networks inside your autonomous system, plus some exterior routes from other IGPs. This does not mean that BGP routes should never be injected into IGPs. Depending on the number of BGP routes and how critical the need for them in the IGP, injecting partial BGP routes may well be appropriate. Below are the OSPF and BGP configurations, respectively.

Note

Router OSPF is defined as a not-so-stubby area (NSSA). This is needed to redistribute the external routes from the upstream routing instance. For the sake of the testbed topology, and to verify that routes are updated properly, specific BGP routes were redistributed into the architecture:

router ospf 500
 log-adjacency-changes
 area 251 nssa
 redistribute bgp 100
 network 172.16.251.0 0.0.0.255 area 251
 network 172.16.253.0 0.0.0.255 area 251

Note

In typical Internet edge deployments, the edge routing instance does not redistribute the BGP process into the OSPF process, but rather uses the default-information originate command to originate a default route from the edge routing instance. That default route is then redistributed via the OSPF process to the internal network, but only if the edge routing instance itself has a default route:

router ospf 500
 log-adjacency-changes
 area 251 nssa
 network 172.16.251.0 0.0.0.255 area 251
 network 172.16.253.0 0.0.0.255 area 251
 default-information originate route-map SEND_DEFAULT_IF

The ACLs below state that if the router has an entry in its routing table for the default route learned from the next-hop ISP router, then it sends the default route internally to the network. This configuration must be deployed on both edge routing devices.

access-list 1 permit 0.0.0.0
access-list 2 permit 172.16.11.1
route-map SEND_DEFAULT_IF permit 10
 match ip address 1
 match ip next-hop 2


Note

The route map SEND_DEFAULT_IF is associated with the default-information originate command. This route map matches on the condition that the 0/0 default route (access-list 1) has a next hop of 172.16.11.1 (access-list 2). This satisfies the condition that the 0/0 is learned via EBGP rather than I-BGP. Below is the BGP routing instance that defines the upstream BGP neighbor necessary for the above route map to work:

router bgp 100
 no synchronization
 bgp log-neighbor-changes
 network 172.16.11.0
 redistribute connected
 neighbor 172.16.11.1 remote-as 1
 no auto-summary

Below are the routes available to the East Coast edge routers:

Note

Setting the route propagation via OSPF on the FWSM requires defining route-maps that only allow specific traffic to the edge layer. Therefore, the only internal route propagated is the 172.16.251.x. This can be controlled by supernetting the segment to allow only specific addresses.
EdgeRouter1# sho ip route
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
       * - candidate default, U - per-user static route, o - ODR
       P - periodic downloaded static route

Gateway of last resort is not set

B    1.0.0.0/8 [20/0] via 172.16.11.1, 00:09:13
B    2.0.0.0/8 [20/0] via 172.16.11.1, 00:09:13
B    3.0.0.0/8 [20/0] via 172.16.11.1, 00:09:13
B    4.0.0.0/8 [20/0] via 172.16.11.1, 00:09:13
B    20.0.0.0/8 [20/0] via 172.16.11.1, 00:09:13
B    5.0.0.0/8 [20/0] via 172.16.11.1, 00:09:13
B    6.0.0.0/8 [20/0] via 172.16.11.1, 00:08:13
     172.16.0.0/24 is subnetted, 6 subnets
C       172.16.253.0 is directly connected, FastEthernet0/0
O IA    172.16.251.0 [110/11] via 172.16.253.1, 00:44:14, FastEthernet0/0
C       172.16.11.0 is directly connected, FastEthernet3/0
B    7.0.0.0/8 [20/0] via 172.16.11.1, 00:08:14
B    8.0.0.0/8 [20/0] via 172.16.11.1, 00:08:14
B    9.0.0.0/8 [20/0] via 172.16.11.1, 00:08:14
B    11.0.0.0/8 [20/0] via 172.16.11.1, 00:08:14
B    12.0.0.0/8 [20/0] via 172.16.11.1, 00:08:14

Edge Switching Layer Configurations
The edge switching layer configurations house the FWSM as well as the NSSA instance of the OSPF process. The OSPF instance that the upstream Internet edge routing layer binds to is on the outside interface of the FWSM. The inside instance of the FWSM is an OSPF neighbor to the process running locally on the MSFC on the switch itself. These switches run in the same NSSA area while running two different OSPF processes. This enables tuning of the protocol to define which routes can be propagated to the upstream network, and this configuration recommendation ensures that restricted routes are not externally advertised. With areas defined this way, you can tune the routes appropriately by defining route maps that allow a specific network segment outbound. For instance, if you only wanted to advertise the VIP addresses of the server farm outbound, you could create a route map that only allows those specific addresses outbound.

East Coast Edge Switching Layer Configurations
EASTEDGE1# wr t
Building configuration...

hostname EASTEDGE1
!

Below is the configuration that associates specific VLANs with the FWSM. VLAN 200 is the internal VLAN and VLAN 300 is the external VLAN:

firewall module 2 vlan-group 1
firewall vlan-group 1 200,300
!
vlan dot1q tag native

Gigabit 1/1 is the downstream link to the OSPF core Layer 3 switching layer. Note that it has been configured to be in VLAN 200:

!
interface GigabitEthernet1/1
 no ip address
 switchport
 switchport access vlan 200

FastEthernet 3/1 is the upstream link to the edge routing layer. Note that it has been configured to be in VLAN 300:

!
interface FastEthernet3/1
 no ip address
 duplex full
 speed 100
 switchport
 switchport access vlan 300
!

Note

Interface VLAN 200 OSPF configurations are the same across all OSPF interfaces from the timers perspective:

!
interface Vlan200
 ip address 172.16.251.2 255.255.255.0
 ip ospf hello-interval 1
 ip ospf dead-interval 3

Note

The OSPF routing process below is the internal OSPF neighbor to the core switching layer:
!
router ospf 500
 log-adjacency-changes
 area 251 nssa
 network 172.16.251.0 0.0.0.255 area 251
!
ip classless
no ip http server
!
arp 127.0.0.12 0000.2100.0000 ARPA
!
line con 0
line vty 0 4
 login
 transport input lat pad mop telnet rlogin udptn nasi
!
end

Below are the configurations associated with the FWSM. Notice the OSPF configuration in the FWSM itself and how it binds itself to the OSPF process.
EDGE1# sess slot 2 proc 1
The default escape character is Ctrl-^, then x.
You can also type 'exit' at the remote prompt to end the session
Trying 127.0.0.21 ... Open

FWSM passwd:
Welcome to the FWSM firewall
Type help or '?' for a list of available commands.
EASTFWSM> en
Password:
EASTFWSM# wr t
Building configuration...
: Saved
:
FWSM Version 1.1(1)
no gdb enable
nameif vlan200 inside security100
nameif vlan300 outside security0
enable password 8Ry2YjIyt7RRXU24 encrypted
passwd 2KFQnbNIdI.2KYOU encrypted
hostname EASTFWSM
fixup protocol ftp 21
fixup protocol h323 H225 1720
fixup protocol h323 ras 1718-1719
fixup protocol ils 389
fixup protocol rsh 514
fixup protocol smtp 25
fixup protocol sqlnet 1521
fixup protocol sip 5060
fixup protocol skinny 2000
fixup protocol http 80
names
access-list outside permit tcp any any
access-list outside permit udp any any
access-list outside permit icmp host 6.0.0.1 any echo-reply
access-list inside permit tcp any any
access-list inside permit udp any any
access-list inside permit icmp host 172.16.250.10 any echo


Access list 500 defines which addresses must be matched to permit the advertisement of OSPF routes:
access-list 500 permit 172.16.251.0 255.255.255.0
pager lines 24
icmp permit any inside
icmp permit any outside
mtu inside 1500
mtu outside 1500
ip address inside 172.16.251.100 255.255.255.0
ip address outside 172.16.253.1 255.255.255.0
no failover
failover lan unit secondary
failover timeout 0:00:00
failover poll 15
failover ip address inside 0.0.0.0
failover ip address outside 0.0.0.0
pdm history enable
arp timeout 14400
static (inside,outside) 172.16.250.10 172.16.250.10 netmask 255.255.255.255 0 0
access-group inside in interface inside
access-group outside in interface outside

The OSPF interface timer configurations below are common across the architecture:
interface inside
 ospf hello-interval 1
 ospf dead-interval 3
!
!
interface outside
 ospf hello-interval 1
 ospf dead-interval 3
!

The route map below states that advertised routes must match the access-list 500 addresses. This route map is then bound to the OSPF process that redistributes the routes, as seen below in router OSPF 100:
route-map 500 permit 10
 match ip address 500
!

The OSPF configurations below are representative of the recommended security configuration. Within the configuration, two different OSPF routing processes were defined to control inbound and outbound route propagation:
router ospf 500
 network 172.16.251.0 255.255.255.0 area 251
 area 251 nssa
 log-adj-changes
 redistribute ospf 100
router ospf 100
 network 172.16.253.0 255.255.255.0 area 251
 area 251 nssa
 log-adj-changes
 redistribute ospf 500 subnets route-map 500
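The filtering behavior of this redistribution can be sketched in a few lines. This is an illustrative model, not FWSM code; the function names and route list are our own, and only the 172.16.251.0/24 match from access list 500 comes from the configuration above.

```python
# Hedged sketch of the redistribution filter above: routes from OSPF
# process 500 are redistributed into process 100 only if they fall within
# access-list 500 (172.16.251.0/24). Names here are illustrative.
import ipaddress

ACL_500 = [ipaddress.ip_network("172.16.251.0/24")]

def route_map_500_permits(prefix: str) -> bool:
    """match ip address 500: permit only prefixes inside the ACL networks."""
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(acl) for acl in ACL_500)

def redistribute(ospf_500_routes):
    """Return the subset of OSPF 500 routes allowed into OSPF 100."""
    return [r for r in ospf_500_routes if route_map_500_permits(r)]

print(redistribute(["172.16.251.0/24", "172.16.250.0/24", "10.0.0.0/8"]))
# ['172.16.251.0/24']
```

The design point modeled here is that only the inside transit subnet leaks between the two OSPF processes, keeping the outside process from learning internal topology.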


!
timeout xlate 3:00:00
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h323 0:05:00 sip 0:30:00 sip_media 0:02:00
timeout uauth 0:05:00 absolute
aaa-server TACACS+ protocol tacacs+
aaa-server RADIUS protocol radius
aaa-server LOCAL protocol local
no snmp-server location
no snmp-server contact
snmp-server community public
no snmp-server enable traps
floodguard enable
no sysopt route dnat
telnet timeout 5
ssh timeout 5
terminal width 80
Cryptochecksum:03e78100e37fef97b96c15d54be90956
: end
[OK]

Showing the routes available to the FWSM ensures that the proper outside/inside routes were propagated:
EASTFWSM# sho route
C       127.0.0.0 255.255.255.0 is directly connected, eobc
O N2    1.0.0.0 255.0.0.0 [110/1] via 172.16.253.254, 0:46:35, outside
O N2    2.0.0.0 255.0.0.0 [110/1] via 172.16.253.254, 0:46:35, outside
O N2    3.0.0.0 255.0.0.0 [110/1] via 172.16.253.254, 0:46:35, outside
O N2    4.0.0.0 255.0.0.0 [110/1] via 172.16.253.254, 0:46:35, outside
O N2    20.0.0.0 255.0.0.0 [110/1] via 172.16.253.254, 0:46:35, outside
O N2    5.0.0.0 255.0.0.0 [110/1] via 172.16.253.254, 0:46:35, outside
O N2    6.0.0.0 255.0.0.0 [110/1] via 172.16.253.254, 0:45:35, outside
        172.16.0.0 255.255.255.0 is subnetted, 5 subnets
O IA    172.16.252.0 [110/12] via 172.16.251.1, 1:16:14, inside
C       172.16.253.0 is directly connected, outside
O IA    172.16.254.0 [110/22] via 172.16.251.1, 1:10:14, inside
O IA    172.16.250.0 [110/11] via 172.16.251.1, 1:21:35, inside
C       172.16.251.0 is directly connected, inside
O N2    7.0.0.0 255.0.0.0 [110/1] via 172.16.253.254, 0:45:36, outside
O N2    8.0.0.0 255.0.0.0 [110/1] via 172.16.253.254, 0:45:36, outside
O N2    9.0.0.0 255.0.0.0 [110/1] via 172.16.253.254, 0:45:36, outside
O N2    11.0.0.0 255.0.0.0 [110/1] via 172.16.253.254, 0:45:36, outside
O N2    12.0.0.0 255.0.0.0 [110/1] via 172.16.253.254, 0:45:36, outside
EASTFWSM# exit


Logoff

[Connection to 127.0.0.21 closed by foreign host]

Below are the routes associated with the edge switching layers. Notice that the edge layer has two routes to the Internet backbone: a primary route via the OSPF process running locally on the switch, and a redundant route through Area 0 to the secondary switch. The same applies, respectively, on each switch.
EASTEDGE1#sho ip route
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
       * - candidate default, U - per-user static route, o - ODR
       P - periodic downloaded static route

Gateway of last resort is not set

O N2 1.0.0.0/8 [110/1] via 172.16.251.100, 00:45:56, Vlan200
O N2 2.0.0.0/8 [110/1] via 172.16.251.100, 00:45:56, Vlan200
O N2 3.0.0.0/8 [110/1] via 172.16.251.100, 00:45:56, Vlan200
O N2 4.0.0.0/8 [110/1] via 172.16.251.100, 00:45:56, Vlan200
O N2 20.0.0.0/8 [110/1] via 172.16.251.100, 00:45:56, Vlan200
O N2 5.0.0.0/8 [110/1] via 172.16.251.100, 00:45:56, Vlan200
O N2 6.0.0.0/8 [110/1] via 172.16.251.100, 00:44:56, Vlan200
     172.16.0.0/24 is subnetted, 5 subnets
O IA    172.16.252.0 [110/3] via 172.16.251.1, 01:15:39, Vlan200
O N2    172.16.253.0 [110/11] via 172.16.251.100, 01:20:56, Vlan200
O IA    172.16.254.0 [110/13] via 172.16.251.1, 01:09:35, Vlan200
O IA    172.16.250.0 [110/2] via 172.16.251.1, 01:20:56, Vlan200
C       172.16.251.0 is directly connected, Vlan200
O N2 7.0.0.0/8 [110/1] via 172.16.251.100, 00:44:57, Vlan200
O N2 8.0.0.0/8 [110/1] via 172.16.251.100, 00:44:57, Vlan200
O N2 9.0.0.0/8 [110/1] via 172.16.251.100, 00:44:57, Vlan200
O N2 11.0.0.0/8 [110/1] via 172.16.251.100, 00:44:57, Vlan200
O N2 12.0.0.0/8 [110/1] via 172.16.251.100, 00:44:57, Vlan200

Core Switching Layer Configurations
The core switching layers house the OSPF Area 0 process. This layer becomes the transport for Internet-destined traffic for each of the respective data centers in the event of a failure. It is also the layer where the configurations are controlled to ensure that traffic is directed to the right geographical area. This is accomplished using the ip ospf cost configuration on interfaces where the OSPF neighbor areas are present.

East Coast Core Switching Layer Configurations
EASTCOASTCORE#wr t
Building configuration...
!
hostname EASTCOASTCORE
!
!
interface Port-channel1


 no ip address
 switchport
 switchport trunk encapsulation dot1q

The interface configurations below state that Gigabit 1/1 is the OSPF interface that neighbors to the East Coast edge layer. Here you can tune the OSPF cost so that any users locally adjacent to the East Coast core choose this upstream link for their Internet traffic. The same configuration is also tuned to ensure that any West Coast traffic traverses the West Coast routes:
!
interface GigabitEthernet1/1
 ip address 172.16.251.1 255.255.255.0
 ip ospf hello-interval 1
 ip ospf dead-interval 3
 ip ospf cost 5
!
interface GigabitEthernet1/2
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 channel-group 1 mode on
!
interface GigabitEthernet2/1
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 channel-group 1 mode on
!
interface Vlan1
 no ip address
 shutdown
!
interface Vlan200
 ip address 172.16.250.1 255.255.255.0
 ip ospf hello-interval 1
 ip ospf dead-interval 3
!
router ospf 500
 log-adjacency-changes
 area 251 nssa
 network 172.16.250.0 0.0.0.255 area 0
 network 172.16.251.0 0.0.0.255 area 251
!
ip classless
no ip http server
!
!
!
line con 0
line vty 0 4
 login
 transport input lat pad mop telnet rlogin udptn nasi
!
end
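The cost tuning above can be illustrated with a minimal path-selection sketch. This is not an OSPF implementation; the path names and per-hop costs below are invented for illustration. The one fact it models is that OSPF prefers the path with the lowest total interface cost, so the tuned cost 5 on the local uplink still beats a longer detour across Area 0.

```python
# Hedged sketch: OSPF picks the path with the lowest sum of outgoing
# interface costs, so East Coast users stay on their local uplink.
def best_path(paths):
    """paths: {name: [interface costs along the path]} -> cheapest name."""
    return min(paths, key=lambda name: sum(paths[name]))

east_user_paths = {
    # local edge uplink: Gi1/1 tuned to cost 5, plus one edge hop (total 6)
    "east-edge": [5, 1],
    # hypothetical detour across Area 0 to the West Coast edge (total 8)
    "west-edge-via-area0": [1, 5, 1, 1],
}
print(best_path(east_user_paths))  # east-edge
```

Raising the cost on the remote path (or lowering it locally) is the lever the design uses to keep each coast's traffic on its own geographic uplink.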

The following displays the routes associated with the East Coast core. Note that the preferred path is through the respective edge switching layer.
EASTCOASTCORE#sho ip route
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP


       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
       * - candidate default, U - per-user static route, o - ODR
       P - periodic downloaded static route

Gateway of last resort is not set

O N2 1.0.0.0/8 [110/1] via 172.16.251.100, 00:06:46, GigabitEthernet1/1
O N2 2.0.0.0/8 [110/1] via 172.16.251.100, 00:06:46, GigabitEthernet1/1
O N2 3.0.0.0/8 [110/1] via 172.16.251.100, 00:06:46, GigabitEthernet1/1
O N2 4.0.0.0/8 [110/1] via 172.16.251.100, 00:06:46, GigabitEthernet1/1
O N2 20.0.0.0/8 [110/1] via 172.16.251.100, 00:06:46, GigabitEthernet1/1
O N2 5.0.0.0/8 [110/1] via 172.16.251.100, 00:06:46, GigabitEthernet1/1
O N2 6.0.0.0/8 [110/1] via 172.16.251.100, 00:05:53, GigabitEthernet1/1
     172.16.0.0/24 is subnetted, 5 subnets
O IA    172.16.252.0 [110/2] via 172.16.250.254, 00:36:37, Vlan200
O N2    172.16.253.0 [110/11] via 172.16.251.100, 00:41:53, GigabitEthernet1/1
O IA    172.16.254.0 [110/12] via 172.16.250.254, 00:30:32, Vlan200
C       172.16.250.0 is directly connected, Vlan200
C       172.16.251.0 is directly connected, GigabitEthernet1/1
O N2 7.0.0.0/8 [110/1] via 172.16.251.100, 00:05:54, GigabitEthernet1/1
O N2 8.0.0.0/8 [110/1] via 172.16.251.100, 00:05:54, GigabitEthernet1/1
C    127.0.0.0/8 is directly connected, EOBC0/0
O N2 9.0.0.0/8 [110/1] via 172.16.251.100, 00:05:54, GigabitEthernet1/1
O N2 11.0.0.0/8 [110/1] via 172.16.251.100, 00:05:54, GigabitEthernet1/1
O N2 12.0.0.0/8 [110/1] via 172.16.251.100, 00:05:54, GigabitEthernet1/1

West Coast Core Switching Layer Configurations

The West Coast configurations parallel the East Coast IP routing configurations. The West Coast core prefers the West Coast edge for its primary routes:

WESTCOASTCORE#wr t
Building configuration...
!
hostname WESTCOASTCORE
!
!
!
!
interface Port-channel1
 no ip address
 switchport
 switchport trunk encapsulation dot1q
!
interface GigabitEthernet1/1
 ip address 172.16.252.1 255.255.255.0
 ip ospf hello-interval 1
 ip ospf dead-interval 3
 ip ospf cost 5
!
interface GigabitEthernet1/2
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 channel-group 1 mode on
!
interface GigabitEthernet2/1
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 channel-group 1 mode on
!


!
interface Vlan1
 no ip address
 shutdown
!
interface Vlan200
 ip address 172.16.250.254 255.255.255.0
 ip ospf hello-interval 1
 ip ospf dead-interval 3
!
router ospf 500
 log-adjacency-changes
 area 252 nssa
 network 172.16.250.0 0.0.0.255 area 0
 network 172.16.252.0 0.0.0.255 area 252
!

The following displays the routes associated with the West Coast core. Note that the preferred path is through the West Coast edge switching layer.

WESTCOASTCORE#sho ip route
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
       * - candidate default, U - per-user static route, o - ODR
       P - periodic downloaded static route

Gateway of last resort is not set

O N2 1.0.0.0/8 [110/1] via 172.16.252.100, 00:02:37, GigabitEthernet1/1
O N2 2.0.0.0/8 [110/1] via 172.16.252.100, 00:02:37, GigabitEthernet1/1
O N2 3.0.0.0/8 [110/1] via 172.16.252.100, 00:02:37, GigabitEthernet1/1
O N2 4.0.0.0/8 [110/1] via 172.16.252.100, 00:02:37, GigabitEthernet1/1
O N2 20.0.0.0/8 [110/1] via 172.16.252.100, 00:02:37, GigabitEthernet1/1
O N2 5.0.0.0/8 [110/1] via 172.16.252.100, 00:02:37, GigabitEthernet1/1
O N2 6.0.0.0/8 [110/1] via 172.16.252.100, 00:01:44, GigabitEthernet1/1
     172.16.0.0/24 is subnetted, 5 subnets
C       172.16.252.0 is directly connected, GigabitEthernet1/1
O IA    172.16.253.0 [110/12] via 172.16.250.1, 00:26:19, Vlan200
O N2    172.16.254.0 [110/11] via 172.16.252.100, 00:26:19, GigabitEthernet1/1
C       172.16.250.0 is directly connected, Vlan200
O IA    172.16.251.0 [110/2] via 172.16.250.1, 00:26:19, Vlan200
O N2 7.0.0.0/8 [110/1] via 172.16.252.100, 00:01:45, GigabitEthernet1/1
O N2 8.0.0.0/8 [110/1] via 172.16.252.100, 00:01:45, GigabitEthernet1/1
C    127.0.0.0/8 is directly connected, EOBC0/0
O N2 9.0.0.0/8 [110/1] via 172.16.252.100, 00:01:45, GigabitEthernet1/1
O N2 11.0.0.0/8 [110/1] via 172.16.252.100, 00:01:45, GigabitEthernet1/1
O N2 12.0.0.0/8 [110/1] via 172.16.252.100, 00:01:45, GigabitEthernet1/1

BGP Attribute Tuning
In Internet edge topologies, controlling the outbound routes is first and foremost: it determines how your network topology is seen by the world, which in turn determines how, by default, traffic returns to your site. Controlling the outbound traffic of the topology allows you to manipulate the amount of traffic that comes in from one ISP or another. In detail, if you want to define that all traffic leaves your


topology from one ISP link, yet all traffic destined to the topology comes inbound on another ISP link, you must implement autonomous system (AS) path prepending. This is most commonly deployed when you do not want to leave a link idle. For more information regarding route control via BGP, refer to Chapter 4, "Single Site Multi Homing."
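The effect of AS path prepending can be sketched with a minimal best-path comparison. This is an illustrative model only (the link names and AS numbers are invented); the one fact it relies on is that, with other attributes equal, BGP prefers the route with the shortest AS path, so prepending your own AS on one advertisement steers inbound traffic toward the other link.

```python
# Hedged sketch: among otherwise-equal BGP routes to the same prefix,
# the shortest AS path wins, so prepending discourages use of a link.
def bgp_prefer(routes):
    """routes: {link_name: as_path list}; pick the shortest AS path."""
    return min(routes, key=lambda link: len(routes[link]))

inbound = {
    "isp_a": [100],             # advertised normally
    "isp_b": [100, 100, 100],   # own AS prepended twice
}
print(bgp_prefer(inbound))  # isp_a
```

In practice other BGP attributes (local preference, MED, origin) are compared first, so prepending is an influence on remote networks' decisions rather than a guarantee.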

Security Considerations
Security is a necessity in all network architectures today, regardless of your Internet connectivity. Proper measures must be taken to ensure that the network architecture and the network devices are securely provisioned and managed. Internet edge security is discussed in Chapter 2, "Internet Edge Security Design Principles," and Chapter 3, "Internet Edge Security Implementation." This section provides a brief summary from those guides of the security functions supported within Internet edge designs. These functions include:
• Element Security - The secure configuration and management of the devices that collectively define the Internet edge.

• Identity Services - The inspection of IP traffic across the Internet edge requires the ability to identify the communicating endpoints. Although this can be accomplished with explicit user/host session authentication mechanisms, usually IP identity across the Internet edge is based on header information carried within the IP packet itself. Therefore, IP addressing schemas, address translation mechanisms, and application definition (IP protocol/port identity) play key roles in identity services.

• IP Anti-Spoofing - This includes support for the requirements of RFC 2827, which requires enterprises to protect their assigned public IP address space, and RFC 1918, which allows the use of private IP address spaces within enterprise networks.

• Demilitarized Zones (DMZ) - A basic security policy for enterprise networks is that internal network hosts should not be directly accessible from hosts on the Internet (as opposed to replies from Internet hosts for internally initiated sessions, which are statefully permitted). For those hosts, such as web servers, mail servers, and VPN devices, which are required to be directly accessible from the Internet, it is necessary to establish quasi-trusted network areas between, or adjacent to both, the Internet and the internal enterprise network. Such DMZs allow internal hosts and Internet hosts to communicate with DMZ hosts, but the separate security policies between each area prevent direct communication originating from Internet hosts from reaching internal hosts.

• Basic Filtering and Application Definition - Derived from enterprise security policies, ACLs are implemented to explicitly permit and/or deny IP traffic that may traverse between areas (Inside, Outside, DMZ, etc.) defined to exist within the Internet edge.

• Stateful Inspection - Provides the ability to establish and monitor session states of traffic permitted to flow across the Internet edge, and to deny traffic that fails to match the expected state of an existing or allowed session.

• Intrusion Detection - The ability to promiscuously monitor network traffic at discrete points within the Internet edge, and to alarm and/or take action upon detecting suspect behavior that may threaten the enterprise network.


Chapter 7

High Availability via BGP Tunneling
In a distributed data center environment, awareness of ISP failure is extremely important, as is the necessity of firewalling the network infrastructure, server farms, and clients. The Internet edge routing environment requires that the architecture is resilient to failures and highly available. With these requirements becoming more prevalent and the need to distribute these architectures, the firewall must also be aware of any failures in the network to ensure failover to secondary infrastructure devices. The firewall devices must also have enough routing intelligence to fail over accurately in the event of a catastrophic network event. The introduction of the Firewall Service Module (FWSM) in the Catalyst 6500, along with support for OSPF in PIX version 6.3, solves the problem caused when the firewall black-holes user traffic based on static routing when the deployed IGP is OSPF. However, where the implemented IGP is something other than OSPF, such as EIGRP, you must still make the internal network segment aware of upstream BGP routing updates or failures. Given the lack of support for IGPs other than OSPF on the firewall, tunneling the BGP routing protocol through the firewall is the most viable option. This offers the resilience of running a dynamic routing protocol across the firewall and the redistribution of routing updates without requiring the firewall to support the protocol itself.

Overview
The BGP tunneling network topology is very similar to the existing Internet edge topology as seen in Figure 7-1. The differences are that instead of defining a static route downstream to the Layer 3 switching layer from the firewall, the BGP routing protocol is tunneled through the firewall layer; and that BGP is terminated on the Layer 3 switching layer. This is assuming that the firewall layer is in redundancy failover mode.


Figure 7-1    BGP Tunnel Topology

(The figure shows two upstream ISPs, BGP AS 1 and BGP AS 2, peering with external BGP instances on the edge routers; the BGP session is tunneled through the PIX firewalls to the internal layer.)

In this topology, the upstream edge routing layers are defined as E-BGP neighbors with the internal Layer 3 switching layer. The BGP protocol is, in turn, tunneled through the firewall itself. Because the internal and external peers are two hops away from one another, you must use the ebgp-multihop command. Using this command reflects that the external routing instances are segregated from each other at the edge layer. In a typical Internet edge design, the edge routing instances are connected via an internal I-BGP link between the routers. In this design, the I-BGP instance is no longer needed. The redistribution of routes to the Internet is propagated internally to the infrastructure via the IGP routing protocol. Each Layer 3 switching layer sends a default route internal if the respective BGP peer is receiving routes from its upstream provider or, in this case, the BGP neighbor. This ensures that the integrity of the ISP connection is valid. If each upstream provider receives routes from the ISP, it can actually send a default route internal to the network infrastructure. This is accomplished by defining a conditional default-originate statement coupled with a specific route-map that either defines a specific prefix in the routing table or a wild card for any entry in the routing table.

Configuration Sequence and Tasks
The configuration sequence is similar to that of the Internet edge architecture, but differs with regard to the BGP peering relationships internal to the network. First, you must define the BGP peering relationships with the ISP routers via E-BGP. Once the BGP peering relationships have been established, you can then define and tunnel the BGP routing protocol to the interior network segments. This requires the edge routers to be peered via BGP through the PIX to the core Internet data center


Layer 3 switches. To tunnel the routing protocol through the firewall, you must define conduits on the firewall on a host-by-host basis; the hosts are the respective BGP interfaces on both sides of the firewall. After completing the BGP configurations, you must ensure that a default route is propagated to the internal network segment. Accomplish this by tuning metrics of either the IGP routing protocol present in the internal Layer 3 switching layer or the BGP instance present on the edge router layer, as shown in Figure 7-2.
Figure 7-2    BGP Tunneling Detailed Topology

(The figure shows edge routers R1 and R2 in BGP AS 100 peering upstream with BGP AS 1 and BGP AS 2 over the 172.16.10.x and 172.16.11.x serial links. The outside firewall segments use 172.16.20.x and 172.16.21.x with HSRP address .10; the inside segment, VLAN 6, uses 172.16.22.x with HSRP address .10 toward the internal Layer 3 switches.)

Again, the initial configuration steps require you to define the BGP peering relationships with the internal Layer 3 switches as well as the upstream ISP.


Edge Router BGP Configurations
The edge routing layer is the layer that peers the BGP routing protocol with the ISP layer, as well as with the internal BGP instance. Below, you can see the BGP configurations that define the autonomous system (AS) number as well as the networks to be propagated. In a normal Internet edge configuration, a distribution list could be defined to ensure that the I-BGP instance is not propagated to the ISPs, therefore preventing your network from becoming a transit network.
router bgp 100
 bgp log-neighbor-changes
 network 172.16.10.0
 network 172.16.21.0
 network 172.16.20.0
 network 172.16.22.0

Next, define the BGP neighbor configurations. First to the upstream provider as shown below:
neighbor 172.16.10.1 remote-as 1

Then define the BGP neighbor to the adjacent router. Define the I-BGP instance between the routers as follows:
neighbor 172.16.21.254 remote-as 100
neighbor 172.16.21.254 next-hop-self

Lastly, define the internal BGP peers to ensure that routing updates are propagated through the firewall layer. Also, configure a route map stating that if the qualifying default route is present in the routing table, a default is propagated to the internal BGP peers.
neighbor 172.16.22.1 remote-as 100
neighbor 172.16.22.1 ebgp-multihop 255
neighbor 172.16.22.2 remote-as 100
neighbor 172.16.22.2 ebgp-multihop 255
default-information originate route-map SEND_DEFAULT_IF
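Why ebgp-multihop is needed here can be sketched in a few lines. This is an illustrative model (the function is our own), resting on one fact: eBGP packets are sent with TTL 1 by default, so a peer that sits two Layer 3 hops away, through the firewall layer, is unreachable unless the configured TTL covers the hop count.

```python
# Hedged sketch: eBGP defaults to TTL 1; ebgp-multihop raises the TTL so
# the session survives the extra Layer 3 hop through the firewall layer.
def peer_reachable(ttl: int, hops: int) -> bool:
    """TTL is decremented once per Layer 3 hop; it must cover every hop."""
    return ttl >= hops

print(peer_reachable(ttl=1, hops=2))    # False: default eBGP TTL fails
print(peer_reachable(ttl=255, hops=2))  # True: ebgp-multihop 255
```

A value of 255 is the permissive extreme; a tighter value matching the real hop count would also work and limits how far away a spoofed peer could be.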

Note

In Internet edge tunneling deployments, the edge routing instance does not propagate the entire BGP routing table through the tunnel, but rather uses the default-information originate command to send a default route to the internal network. That default route is distributed via the BGP process to the internal network only if the edge routing instance has a default route itself. The ACLs below state that if the router has the default route in its routing table, learned from the next-hop ISP router, it sends the default route internal to the network. This configuration must be deployed on both edge routing devices.
access-list 1 permit 0.0.0.0
access-list 2 permit 172.16.11.1
route-map SEND_DEFAULT_IF permit 10
 match ip address 1
 match ip next-hop 2

Note

The route map SEND_DEFAULT_IF is associated with the default-information originate command. This route map matches on the condition that the 0/0 default (access-list 1) has a next hop of 172.16.11.1 (access-list 2). This satisfies the condition that the 0/0 is learned via EBGP rather than I-BGP.
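The conditional logic described in the note can be sketched as follows. This is an illustrative model of the route-map evaluation, not router code; the routing table is represented as simple (prefix, next hop) tuples, and the two matched values come from access lists 1 and 2 above.

```python
# Hedged sketch of SEND_DEFAULT_IF: originate 0/0 to the internal peers
# only when the table holds a default (access-list 1) whose next hop is
# the ISP router 172.16.11.1 (access-list 2).
def send_default(routing_table):
    """routing_table: list of (prefix, next_hop) tuples."""
    return any(prefix == "0.0.0.0" and next_hop == "172.16.11.1"
               for prefix, next_hop in routing_table)

print(send_default([("0.0.0.0", "172.16.11.1")]))    # True: ISP default held
print(send_default([("0.0.0.0", "172.16.21.254")]))  # False: learned via I-BGP
print(send_default([]))                              # False: ISP link down
```

The second case is the point of the next-hop match: a default learned from the adjacent edge router must not be re-originated, or a failed ISP link could still attract traffic.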


The decision to define a default-information originate statement is based on your requirements, and it can be tuned to be as granular as matching specific prefixes from the upstream provider. Again, this is dependent on your business requirements. After you complete the external BGP configurations, it is time to make sure the firewall allows the BGP protocol through.

Firewall Configurations
As seen above, the BGP peering relationships were configured. It is now up to the PIX to allow the protocol through to the internal routing layer. This is accomplished via conduits that explicitly allow the protocol through the device. This configuration is shown below.
nameif ethernet0 outside security0
nameif ethernet1 inside security100
enable password OnTrBUG1Tp0edmkr encrypted
passwd OnTrBUG1Tp0edmkr encrypted
hostname pixfirewall
fixup protocol ftp 21
fixup protocol http 80
fixup protocol smtp 25
fixup protocol h323 1720
fixup protocol rsh 514
fixup protocol sqlnet 1521
names
pager lines 24
no logging timestamp
no logging standby
logging console debugging
no logging monitor
no logging buffered
no logging trap
logging facility 20
logging queue 512
interface ethernet0 auto
interface ethernet1 auto
mtu outside 1500
mtu inside 1500
ip address outside 172.16.20.3 255.255.255.0
ip address inside 172.16.22.3 255.255.255.0
failover
failover timeout 0:00:00
failover ip address outside 172.16.20.4
failover ip address inside 172.16.22.4
failover link outside
arp timeout 14400
nat (inside) 1 0.0.0.0 0.0.0.0 0 0
static (inside,outside) 9.9.9.9 10.1.1.3 netmask 255.255.255.255 0 0
static (inside,outside) 6.6.6.6 10.1.1.2 netmask 255.255.255.255 0 0

Note

The conduits below state that the BGP routing protocol is allowed both ways to the hosts defined. In this case, the hosts are the BGP interfaces of either the edge routers or the internal Layer 3 switch.
conduit permit icmp any any
conduit permit tcp host 172.16.22.1 eq bgp host 172.16.20.1
conduit permit tcp host 172.16.22.2 eq bgp host 172.16.20.1
conduit permit tcp host 172.16.22.1 eq bgp host 172.16.20.2
conduit permit tcp host 172.16.22.2 eq bgp host 172.16.20.2


no rip outside passive
no rip outside default
no rip inside passive
no rip inside default
route outside 0.0.0.0 0.0.0.0 172.16.20.10 1
route inside 0.0.0.0 0.0.0.0 172.16.22.10 1
timeout xlate 3:00:00 conn 1:00:00 half-closed 0:10:00 udp 0:02:00
timeout rpc 0:10:00 h323 0:05:00
timeout uauth 0:05:00 absolute
aaa-server TACACS+ protocol tacacs+
aaa-server RADIUS protocol radius
no snmp-server location
no snmp-server contact
snmp-server community public
no snmp-server enable traps
telnet 10.1.1.200 255.255.255.255 inside
telnet timeout 5
terminal width 80
Cryptochecksum:1e6c9c0ed5b2f51d0a5fbc49c1f4e15a
: end
[OK]

Internal Routing Layer BGP Configurations
Like the edge configurations, the BGP instance at this layer peers the protocol with the upstream edge layer and receives its default route from the upstream Internet edge layer.
router bgp 100
 bgp log-neighbor-changes

The network statements below advertise only the adjacent networks:
network 172.16.20.0
network 172.16.22.0

The BGP neighbor configurations below state that the MSFC is peering with both of the upstream BGP routers:
neighbor 172.16.20.1 remote-as 100
neighbor 172.16.20.1 ebgp-multihop 255
neighbor 172.16.20.2 remote-as 100
neighbor 172.16.20.2 ebgp-multihop 255

Finally, after the BGP peers have been established, the IGP routing protocol of choice can propagate the default learned from the edge layer. This is accomplished in different ways depending on the internal routing protocol of choice.

IGP Router Configurations
OSPF Configurations
Cisco recommends using the default-information originate always command when using OSPF. Note the effect of the always keyword: without it, the 0/0 is advertised internally only as long as a default route is present in the routing table; with it, OSPF advertises the default unconditionally. Since you should be receiving a default from the upstream edge routers, the route should be propagated


internally. In the event of an Internet failure, you would assume that the default would disappear from the routing table and no longer be propagated internally. Below you can see the configuration examples with the OSPF IGP.
router ospf 500
 log-adjacency-changes
 network 172.16.251.0 0.0.0.255 area 0
 default-information originate always

EIGRP Configurations
BGP-learned defaults are injected into EIGRP via redistribution. The 0/0 metric must be converted into an EIGRP-compatible metric using the default-metric router command. The BGP instance on the MSFC injects its default with a deliberately high metric, so that routes learned via the internal IGP always carry a lower metric.
router eigrp 500
 redistribute bgp 100 route-map DEFAULT_ONLY
 network 172.16.0.0
 default-metric 5 100 250 100 1500
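The "high metric" this default-metric produces can be worked out explicitly. A hedged sketch, assuming the classic EIGRP composite metric with default K values (K1 = K3 = 1, K2 = K4 = K5 = 0), where default-metric takes bandwidth (kbps), delay (tens of microseconds), reliability, load, and MTU, and only the first two enter the formula:

```python
# Hedged sketch: classic EIGRP composite metric with default K values,
# metric = 256 * (10^7 / bandwidth_kbps + delay_tens_usec).
def eigrp_metric(bandwidth_kbps, delay_tens_usec):
    return 256 * (10_000_000 // bandwidth_kbps + delay_tens_usec)

# Values from "default-metric 5 100 250 100 1500" above (bw=5, delay=100)
metric = eigrp_metric(bandwidth_kbps=5, delay_tens_usec=100)
print(metric)  # 512025600
```

The 5 kbps bandwidth dominates, yielding a metric of roughly half a billion, which is why any internally learned route beats the redistributed default.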

This ACL states that only the default route can be propagated from the BGP instance:
access-list 5 permit 0.0.0.0

The route map then defines the default-only policy that is applied to the EIGRP routing process.
route-map DEFAULT_ONLY permit 10
 match ip address 5

Note

The internal routing layer uses a route map DEFAULT_ONLY to match on the default route 0/0. Any other updates are prevented from being redistributed into EIGRP. The MSFC also sets the metric by using the default-metric router command.

Convergence Results
Convergence testing will be updated after specific configuration requirements are met and routing metric tuning has been completed.


INDEX

A
AAA
ACL
aggregate bandwidth
AH
American Registry for Internet Numbers (ARIN)
Application Definition
application distribution
ARP spoofing
ARP tables
AS paths
Asymmetry Concerns
  Forwarding
  Translation
autonomous system
Axent's Raptor product

B
Basic Design
  Basic Forwarding Configuration
  DMZ Design
  Intrusion Detection Capabilities
  NAT Issues
  Network Management
  Security Policy Functional Deployment
basic filtering
Basic Filtering and Application Definition
Basic Filtering In
Basic Filtering Out
Basic Forwarding
BGP
BGP attributes
BGP attribute tuning
BGP peers
BGP routing table
BGP tunneling
bi-directional hash
border gateway protocol
border routers
BPDU
BPDU guard
bridge-level filtering mechanisms
bridge protocol data unit
Broadband Design
  Basic Forwarding Configuration
  DMZ Design
  Intrusion Detection Capabilities
  NAT Issues
  Network Management
  Security Policy Functional Deployment
broadcast suppression

C
CAM
Checkpoint Firewall-1 software
Cisco Secure PIX
Class C network space
classless routing
Configuring FWLB with the CSM
content addressable memory
controlling inbound routes
controlling outbound routes
CSM

D
default routing
DEFAULT_ONLY
Demilitarized Zones (DMZ)
deny ip any any
device redundancy
DHCP
DMZ
DMZ Design
DNS propagation
DNS request/reply
DNS resolution
DoS attacks
dynamic routing
dynamic routing protocol metric tuning

E
EBGP
Edge Connectivity
Edge Routing
Edge Security
EGP
EIGRP
Element Security
EtherChannel
Exterior Gateway Protocol

F
firewall
Firewall Layer
firewall load balancing
Firewall Service Module
Fully Resilient Design
  Basic Forwarding Configurations
  DMZ Design
  Intrusion Detection Capabilities
  NAT Issues
  Network Management
  Security Policy Functional Deployment
FWSM

G
Generic Server Farm (GENERIC-SF)
global address

H
HIDS
high availability
  Guidelines
honey pots
Host Addressing
host-based intrusion detection
hot standby router protocol
HSRP
HSRP dead timers
HTTP

I
I-BGP
ICMP
ICMP health-check
Identity Services
IDS
IDS alarms
IEEE 802.1D loop-detection
IGP
IKE
injecting partial BGP routes
Insecure Server Farm (INSEC-SF)
inside interface
interautonomous system routing
interior gateway protocols
Internal Routing
Internet Edge Design Fundamentals
Internet Edge Design Recommendations
Internet Key Exchange
Internet-level trust
intra-autonomous system routing
Intrusion Detection
  Host-Based
    Implementation and Performance Considerations
  Network-Based
    Enterprise Interior
    Perimeter Exterior
    Implementation and Performance Considerations
  Variance-Based Capture Systems
Intrusion Detection Capabilities
intrusion detection system
IP anti-spoofing
IPSec
IPSec ESP
IPv4
ISP peering point
ISP resiliency

L
Layer 2 Switching Layer
Layer 3 features
Layer 3 Switching Layer
levels of trust
local address

M
MAC address
MAC flooding
MAC-independent Layer 2 forwarding rules
manageability
Management Traffic Rules
maximum connection rate
MHSRP
MSFC
multigroup hot standby router protocol
multi-homing
multi-interface PIXs
multi-site internet edge topologies
multi-site topologies

N
NAT
NAT Mechanisms
Netscreen Firewalls
network address translation
Network Address Translation Issues
network-based intrusion detection
Network Management
NIDS
not-so-stubby area (NSSA)

O
One Arm Topology
OSPF
OSPF cost
OSPF dead-interval timers
OSPF hello timers
OSPF interface timer configurations
outside interface

P
Partially Resilient Design
  Basic Forwarding Configuration
  DMZ Design
  Intrusion Detection Capabilities
  NAT Issues
  Network Management
  Security Policy Functional Deployment
pass-through autonomous system routing
permit ip any any
PIX
PIX 501
PIX 501 firewall
PIX 535
PIX firewall
policy routing
PortFast
port-level security mechanism

Q
QoS

R
redundancy
replication timeouts
resiliency mechanisms
RFC 1918
RFC 1918 In
RFC 1918 Out
RFC 2827 Out
root guard
route-map
routing protocol metric tuning
Routing Protocols Overview
  BGP
  Default Routing (Static)
  Dynamic Routing

S
SAFE
Sandwich Topology
scalability
Scalability Requirements
  Bandwidth
  Connection Rate
  Total Connections
Secure Server Farm (SEC-SF)
security
Security Considerations
  Element Security
  Identity Services
    Assessment
    Enforcement
    Identity
  Risk
  Trust
Security Design Requirements
  Common Internet Edge Security Policies
Security Policy Definition
Security Policy Functional Deployment
SEND_DEFAULT_IF
sensor placement
session bandwidth
signatures
single forwarding path model
SMTP
spanning-tree protocol
SSL
Stateful Inspection
stateful inspection firewall
Stateful Inspection Rules
static routing
steady-state connection rate

T
TCP handshake
Topology/Trust Model
trusted forwarding

U
UDLD
uni-directional hash
unidirectional link detection
UplinkFast

V
variance-based capture systems
VIP (virtual IP address)
VLAN database
VLAN tagging
VRRP

W
WS-X6066-SLB-APC with Supervisor Engine 1A
WS-X6066-SLB-APC with Supervisor Engine 2
