The Handbook of Application Delivery
A Guide to Decision Making
By Dr. Jim Metzler
Application Delivery Handbook | February 2008
Table of Contents

Executive Summary
Introduction
The Applications Environment
Planning
Network and Application Optimization
Managed Service Providers
Management
The Changing Network Management Function
Control
Conclusion
Bibliography
Interviewees
Appendix - Advertorials
IT Innovation Report
Published By Kubernan
www.Kubernan.com
Cofounders: Jim Metzler (firstname.lastname@example.org), Steven Taylor (email@example.com)
Design/Layout Artist: Debi Vozikis
Copyright © 2008 Kubernan
For Editorial and Sponsorship Information, Contact Jim Metzler or Steven Taylor
Kubernan is an analyst and consulting joint venture of Steven Taylor and Jim Metzler.
Professional Opinions Disclaimer All information presented and opinions expressed in this IT Innovation Report represent the current opinions of the author(s) based on professional judgment and best available information at the time of the presentation. Consequently, the information is subject to change, and no liability for advice presented is assumed. Ultimate responsibility for choice of appropriate solutions remains with the reader.
Executive Summary

We are just ending the first phase of a fundamental transformation of the IT organization. At the beginning of this transformation, virtually all IT organizations were comprised of myriad stovepiped functions; e.g., devices, networks, servers, storage, databases, security, operating systems. A major component of the transformation is that leading edge IT organizations are now creating an environment characterized by the realization that IT is comprised of just two functions, application development and application delivery, and that these functions must work in an integrated fashion in order for the IT organization to ensure acceptable application performance. This view of IT affects everything, including the organizational structure, the management metrics, and the requisite processes, technologies and tools. One of the primary goals of this handbook is to help IT organizations plan for that transformation.

As described in the handbook, the activities that comprise a successful application delivery function are planning, optimizing, managing and controlling application performance. Each of these activities is challenging today and will become more challenging over the next few years. As described in Chapters 2 and 3, part of the increased challenge will come from the deployment of new application development paradigms such as SOA (Service Oriented Architecture), Rich Internet Architecture and Web 2.0. Also adding to the difficulty of ensuring acceptable application performance are the increased management complexity associated with the burgeoning virtualization of IT resources (i.e., desktops, servers, storage, applications), the growing impact of wireless communications, the need to provide increasing levels of security, and emerging trends such as storage optimization.

Chapter 4 of this handbook discusses planning.
As that chapter points out, in most companies the focus of application development is on ensuring that applications are developed on time, on budget, and with few security vulnerabilities. That narrow focus, combined with the fact that application development has historically been done over a high-speed, low-latency LAN, means that the impact of the WAN on the performance of the application is generally not known until after the application is fully developed and deployed. In addition, most IT organizations do not know the impact that a major change, such as consolidating data centers, will have until after the initiative is fully implemented. As a result, IT organizations are left to react to application and infrastructure issues typically only after they have impacted the user. Chapter 4 discusses techniques such as WAN emulation, baselining and predeployment assessments that IT organizations can use to identify and eliminate issues prior to their impacting users, and identifies criteria that IT organizations can use to choose appropriate tools.

Chapter 5 discusses two classes of network and application optimization solutions. One class focuses on the negative effect of the WAN on application performance. This category is referred to alternatively as a WAN optimization controller (WOC) or a Branch Office Optimization Solution. Branch Office Optimization Solutions are often referred to as symmetric solutions because they typically require an appliance in both the data center and the branch office. Some vendors, however, have implemented solutions that call for an appliance in the data center but, instead of requiring an appliance in the branch office, require only software on the user's computer. This class of solution is often referred to as a software-only solution and is most appropriate for individual users or small offices. Chapter 5 contains an extensive set of criteria that IT organizations can use to choose a Branch Office Optimization Solution. The second class of solution discussed in Chapter 5 is often referred to as an Application Front End (AFE) or Application Delivery Controller (ADC). This solution is typically referred to as being an asymmetric solution because an appliance is only required in the data center and not the branch office. The primary role of the AFE is to offload computationally intensive tasks, such as the processing of SSL traffic, from a server farm. Chapter 5 also contains an extensive set of criteria that IT organizations can use to choose an AFE.

Today most IT organizations that have deployed a network and application optimization solution have done so in a do-it-yourself (DIY) fashion. Chapter 6 describes another alternative: the use of a managed service provider (MSP) for application delivery services. MSPs are not new. For example, in the early to mid 1990s, many IT organizations began to acquire managed frame relay services from an MSP as an alternative to building and managing a frame relay network themselves. In most cases, the IT organization was quite capable of building and managing the frame relay network, but chose not to do so in order to focus its attention on other activities or to reduce cost. Part of the appeal of using an MSP for application delivery is that in many instances MSPs have expertise across all of the components of application delivery (planning, optimization, management and control) that the IT organization does not possess. As is described in Chapter 6, there are two distinct classes of application delivery MSPs that differ primarily in terms of how they approach the optimization component of application delivery. One class of application delivery MSP provides site-based services that are similar to the current DIY approach used by most IT organizations. The other class of application delivery MSP adds intelligence to the Internet to allow it to support production applications. As a result, these MSPs can provide functionality that the IT organization on its own could not provide. Chapter 6 contains an extensive set of criteria that IT organizations can use to determine if one of these services would add value.

As part of the research that went into the creation of this handbook, the CIO of a government organization was interviewed. He stated that in his organization it is common to have the end user notice application degradation before the IT function does, and that this results in IT looking like "a bunch of bumbling idiots." Chapter 7 discusses this issue as well as some of the organizational issues that impact successful application delivery, including the lack of effective processes and the adversarial relationship that often exists between the application development organization and the network organization. Chapter 7 also discusses the fact that most IT organizations are blind to the growing number of applications that use port 80, and describes a number of management techniques that IT organizations can use to avoid the "bumbling idiot syndrome". This includes discovery, end-to-end visibility, network analytics and route analytics. The chapter identifies criteria that IT organizations can use to choose appropriate solutions and also includes some specific suggestions for how IT organizations can manage VoIP.

Chapter 8 examines the attempt on the part of many Network Operations Centers (NOCs) to improve their processes, and highlights the shift most NOCs are making from focusing almost exclusively on the availability of networks to beginning to also focus on the performance of networks and applications. Included in the chapter is a discussion of the factors that are driving the NOC to change as well as the factors that are inhibiting the NOC from being able to change. Chapter 8 details how the approach that most IT organizations take to reducing the mean time to repair has to be modified now that the NOC is gaining responsibility for application performance, and the chapter also examines the myriad techniques that IT organizations use to justify an investment in performance management. Chapter 8 concludes with the observation that, given where NOC personnel spend their time, the NOC should be renamed the Application Operations Center (AOC).

Chapter 9 examines the type of control functionality that IT organizations should implement in order to ensure acceptable application performance. This includes route control as a way to impact the path that traffic takes as it transits an IP network.
The chapter also describes a process for implementing QoS and summarizes the status of current QoS implementations. Chapter 9 makes the assertion that firewalls are typically placed at a point where all WAN access for a given site coalesces, and that this is the logical place for a policy and security control point for the WAN. Unfortunately, because traditional firewalls cannot provide the necessary security functionality, IT organizations have resorted to implementing myriad work-arounds. This approach, however, has serious limitations, including the fact that even after deploying the work-arounds the IT organization typically does not see all of the traffic, and the deployment of multiple security appliances significantly drives up operational costs and complexity. The chapter concludes by identifying criteria that IT organizations can use to choose a next generation firewall.

Introduction

Background and Goal

As recently as a few years ago, few IT organizations were concerned with application delivery. That has all changed. Application delivery is now a top of mind topic for virtually all IT organizations. That follows in part because, as explained in this handbook, there are many factors that complicate the task of ensuring acceptable application performance. This includes the lack of visibility into application performance, the centralization of IT resources, the decentralization of employees and the complexity associated with the current generation of n-tier applications. It also follows because of the increasing management complexity associated with the burgeoning virtualization of IT resources (i.e., desktops, servers, storage, applications), the growing impact of wireless communications, the need to provide increasing levels of security, and emerging trends such as storage optimization.

Some of the IT organizations that were interviewed for this handbook want to believe that the challenges associated with application delivery are going away. They want to believe that application developers will soon start to write more efficient applications and that bandwidth costs will decrease to the point where they can afford to throw bandwidth at performance problems. Instead of reaching a point where the challenges associated with application delivery are going away, the complexity associated with application delivery will increase over the next few years. As is described in this handbook, the deployment of new application development paradigms[1] such as SOA (Service Oriented Architecture), Rich Internet Architecture and Web 2.0 will dramatically increase the difficulty of ensuring acceptable application performance.

We are just ending the first phase of a fundamental transformation of the IT organization. At the beginning of this transformation, virtually all IT organizations were comprised of myriad stovepiped functions; e.g., desktops, networks, servers, storage, security, operating systems. By stovepiped is meant that these functions had few common goals, terminology, tools and processes. A major component of the transformation is that leading edge IT organizations are now creating an environment that is characterized by the realization that: If you work in IT, you either develop applications or you deliver applications. Put another way, leading edge companies are creating an IT organization that is comprised of two functions: application development and application delivery. Both of these functions must work holistically in order to ensure acceptable application performance. This view of IT affects everything, including the organizational structure, the management metrics, the requisite processes, technologies and tools. One of the goals of this handbook is to help IT organizations plan for that transformation, hence the subtitle: A guide to decision making.

While the transformation is indeed fundamental, it will not happen quickly. We have just spent the last few years coming to understand the importance and difficulty associated with application delivery and to deploy a first generation of tools, typically in a stand-alone, tactical fashion. As we enter the next phase of application delivery, leading edge IT organizations will develop plans for how they want to evolve from a stove-piped IT infrastructure function to an integrated application delivery function. This transformation will not be easy, in part because it crosses myriad organizational boundaries and involves rapidly changing technologies that have never before been developed by vendors, nor planned, designed, implemented and managed by IT organizations in a holistic fashion. Successful application delivery requires the integration of tools and processes. Senior IT management needs to ensure that their organization evolves to where it looks at application delivery holistically and not just as an increasing number of stove-piped functions.

[1] Kubernan asserts its belief that words such as paradigm and holistically have been out of favor so long that it is now acceptable to use them again.

Foreword to the 2008 Edition

This handbook builds on the 2007 edition of the application delivery handbook. This edition differs from the original version in several ways. First, information that was contained in the original version that is no longer relevant was deleted from this edition. Second, information was added to increase both the breadth and depth of this edition. For example, a significant amount of new market research is included. In addition, there are two new chapters in this edition. One of these new chapters, Chapter 8, details the evolving network management function. This includes a discussion of how the NOC, which once focused almost exclusively on the availability of networks, now often has an additional focus on the performance of networks and applications. Chapter 8 also examines how the NOC has to change in order to reduce the mean time to repair that is associated with application performance issues, and details the myriad ways that IT organizations justify an investment in performance management. The other new chapter, Chapter 6, discusses the use of the various types of managed service providers as a very viable option that IT organizations can use to better ensure acceptable application delivery. As is discussed in Chapter 6, one of the advantages of using a managed service provider is that they often have the skills and processes that are necessary to bridge the gap that typically exists within an IT organization between the application development groups and the rest of the IT function.

Other areas that were either added or expanded upon include the:

• Impact of Web services on security
• Development of a new generation of firewalls
• Use of WAN emulation to develop better applications and to plan for change
• Impact of Web 2.0 on application performance and management
• The criticality of looking deep into the packet for more effective management
• Status of QoS deployment
• Appropriate metrics for VoIP management
• Development of software-based WAN optimization solutions
• Factors that impact the transparency of WAN optimization solutions
• Issues associated with high-speed data replication
• Criteria to evaluate WAN optimization controllers (WOCs)
• Criteria to evaluate application front ends (AFEs)
• Issues associated with port hopping applications

Unfortunately, this is a lengthy handbook. It does not, however, require linear, cover-to-cover reading. A reader may start reading this handbook in the middle and use the references embedded in the text as forward and backward pointers to related information. Several techniques were employed to keep the handbook a reasonable length. For example, the handbook does not contain a detailed analysis of any technology. To compensate for that limitation, the handbook includes an extensive bibliography. Also, the handbook allocates more space to discussing new topics (such as the impact of Web 2.0) than it does to topics that are relatively well understood, such as the impact of consolidating servers out of branch offices and into centralized data centers.

Context

Over the last two years, Kubernan has conducted extensive market research into the challenges associated with application delivery. To allow IT organizations to compare their situation to those of other IT organizations, this handbook incorporates market research data that has been gathered over the last two years. One of the most significant results uncovered by that market research is the dramatic lack of success IT organizations have relative to managing application performance. In particular, Kubernan asked 345 IT professionals the following question: "If the performance of one of your company's key applications is beginning to degrade, who is the most likely to notice it first – the IT organization or the end user?" Seventy-three percent of the survey respondents indicated that it was the end user. In the vast majority of instances when a key business application is degrading, the end user, not the IT organization, first notices the degradation.

The fact that end users notice application degradation prior to it being noticed by the IT organization is an issue of significant importance to virtually all senior IT managers. The Government CIO stated that in his organization the fact that the IT organization does not know when an application has begun to degrade has led to the perception that IT is "a bunch of bumbling idiots." He further revealed that this situation has also fostered an environment in which individual departments have both felt the need and been allowed to establish their own shadow IT organizations.

The handbook also contains input gathered from interviewing roughly thirty IT professionals. Most IT professionals cannot be quoted by name or company in a document like this without their company heavily filtering their input. To compensate for this, Chapter 12 contains a brief listing of the people who were interviewed, along with the phrase that is used in the handbook to refer to them. The Appendix to the handbook contains material supplied by the majority of the leading application delivery vendors. The sponsors of the handbook provided input into the areas of this handbook that are related to their company's products and services. However, the body of the handbook does not discuss any vendor or any products or services. Both the sponsors and the IT professionals also provided input into the relationship between and among the various components of the application delivery framework. Given the breadth and extent of the input from both IT organizations and leading edge vendors, this handbook represents a broad consensus on a framework that IT organizations can use to improve application delivery.
In situations in which the end user is typically the first to notice application degradation, IT ends up looking like bumbling idiots. The current approach to managing application performance reduces the confidence that the company has in the IT organization.

In addition to performing market research, Kubernan also provides consulting services. For example, Jim Metzler was hired by an IT organization that was hosting an application on the east coast of the United States that users from all over the world accessed. Users of this application that were located in the Pac Rim were complaining about unacceptable application performance. The IT organization wanted Jim to identify what steps it could take to improve the performance of the application. Given that the IT organization had little information about the semantics of the application, the task of determining what it would take to improve the performance of the application was lengthy and served to further frustrate the users of the application. (Chapter 7 details what has to be done to reduce the mean time to repair application performance issues.) This handbook is being written with that IT organization and others like them in mind.

A goal of this handbook is to help IT organizations develop the ability to minimize the occurrence of application performance issues and to both identify and quickly resolve issues when they do occur. The preceding statement sounds simple. However, achieving the goal stated above requires a broader perspective on the factors that impact the ability of the IT organization to assure acceptable application performance. It is important to note that most times when the industry uses the phrase application delivery, this refers to just network and application optimization. Network and application optimization is important. However, application delivery is more complex than just network and application acceleration. Application delivery needs to have a top-down approach, with a focus on application performance.

With these factors in mind, this handbook will develop a framework for application delivery. The framework this handbook describes is comprised of four primary components: planning, network and application optimization, management, and control. Successful application delivery requires the integration of planning, network and application optimization, management, and control. This includes processes such as discovery (what applications are running on the network and how they are being used), baselining, visibility and reporting. Some overlap exists in the model, as a number of common IT processes are part of multiple components. No product or service in the marketplace provides a best in class solution for each component of the application delivery framework. As a result, companies have to carefully match their requirements to the functionality the alternative solutions provide.

The Applications Environment

This section of the handbook will discuss some of the primary dynamics of the applications environment that impact application delivery. Companies that want to be successful with application delivery must understand their current and emerging application environment. However, less than one-quarter of IT organizations claim they have that understanding. It is unlikely any IT organization will exhibit all of the dynamics described. It is also unlikely that an IT organization will not exhibit at least some of these dynamics.
The Application Development Process

In most situations the focus of application development is on ensuring that applications are developed on time, on budget, and with few security vulnerabilities. That narrow focus, combined with the fact that application development has historically been done over a high-speed, low-latency LAN, means that the impact of the WAN on the performance of the application is generally not known until after the application is fully developed and deployed. In particular, there is at most a moderate emphasis during the design and development of an application on how well that application will run over a WAN. This lack of emphasis often results in the deployment of chatty applications, as shown in Figure 3.1. A chatty application requires hundreds of application turns to complete a transaction.

Figure 3.1: Chatty Application

To exemplify the impact of a chatty protocol, assume that a given transaction requires 200 application turns. Further assume that the latency on the LAN on which the application was developed was 1 millisecond, but that the round trip delay of the WAN on which the application will be deployed is 100 milliseconds. For simplicity, the delay associated with the data transfer will be ignored and only the delay associated with the application turns will be calculated. In this case, the delay on the LAN is 200 milliseconds, which is not noticeable. However, the delay on the WAN is 20 seconds, which is very noticeable.

The preceding example demonstrates the relationship between network delay and application delay: a relatively small increase in network delay can result in a very significant increase in application delay. It also demonstrates the need to be cognizant of the impact of the WAN on application performance during the application development lifecycle. In particular, it is important during application development to identify and eliminate any factor that could have a negative impact on application performance. This approach is far more effective than trying to implement a work-around after an application has been fully developed and deployed. This concept will be expanded upon in Chapter 4.

Taxonomy of Applications

The typical enterprise has tens and often hundreds of applications that transit the WAN. One way that these applications can be categorized is:

1. Business Critical: A company typically runs the bulk of its key business functions utilizing a handful of applications. A company can develop these applications internally, buy them from a vendor such as Oracle or SAP, or acquire them from a software-as-a-service provider such as Salesforce.com.

2. Other Data Applications: This category contains the bulk of a company's data applications. While these applications do not merit the same attention as the enterprise's business critical applications, they are important to the successful operation of the enterprise.

3. Communicative and Collaborative: This includes delay sensitive applications such as Voice over IP and conferencing, as well as applications that are less delay sensitive such as email.
4. Recreational: This category includes a growing variety of applications such as Internet radio, streaming news and multimedia, and YouTube, as well as music downloading.

5. Malicious: This includes any application intended to harm the enterprise by introducing worms, viruses, spyware or other security vulnerabilities.

6. IT Infrastructure-Related Applications: This category contains applications such as DNS and DHCP that are not visible to the end user, but which are critical to the operation of the IT infrastructure.

Another way to classify applications is whether the application is real time, transactional or data transfer in orientation. For maximum benefit, this information must be combined with the business criticality of the application. For example, live Internet radio is real time, but in virtually all cases it is not critical to the organization's success. It is also important to realize that an application such as Citrix Presentation Server or SAP is comprised of multiple modules with varying characteristics. Since they make different demands on the network, it is not terribly meaningful to say that Citrix Presentation Server traffic is real time. What is important is the ability to recognize application traffic flows for what they are, for example a Citrix printing flow vs. editing a Word document, and whether they are real time, transactional or data transfer in orientation. Successful application delivery requires that IT organizations are able to identify the applications running on the network and are also able to ensure the acceptable performance of the applications relevant to the business while controlling or eliminating applications that are not relevant.

Traffic Flow Considerations

In many situations, the traffic flow on the data network naturally follows a simple hub-and-spoke design. An example of this is a bank's ATM network, where the traffic flows from an ATM to a data center and back again. This type of network is sometimes referred to as a one-to-many network. A number of factors, however, cause the traffic flow in a network to follow more of a mesh pattern. One factor is the widespread deployment of Voice over IP (VoIP)[2]. VoIP is an example of an application where traffic can flow between any two sites in the network. This type of network is often referred to as an any-to-any network. An important relationship exists between VoIP deployment and MPLS deployment. MPLS is an any-to-any network. Thus, companies that want to broadly deploy VoIP are likely to move away from a Frame Relay or an ATM network and to adopt an MPLS network. Analogously, companies that have already adopted MPLS will find it easier to justify deploying VoIP.

Another factor affecting traffic flow is that many organizations require that a remote office have access to multiple data centers. This type of requirement could exist to enable effective disaster recovery or because the remote office needs to access applications that disparate data centers host. This type of network is often referred to as a some-to-many network. Every component of an application delivery solution has to be able to support the company's traffic patterns, whether they are one-to-many, some-to-many, or many-to-many.

[2] 2005/2006 VoIP State of the Market Report, Steven Taylor, http://www.webtorials.com

Webification of Applications

The phrase Webification of Applications refers to the growing movement to implement Web-based user interfaces and to utilize chatty Web-specific protocols such as HTTP.
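The penalty imposed by a chatty protocol is straightforward to estimate: ignoring data-transfer time, the delay is simply the number of application turns multiplied by the round-trip time. The short Python sketch below is illustrative only and is not part of the handbook; the 200-turn, 1 ms LAN vs. 100 ms WAN figures are taken from the chatty-application example in The Application Development Process section.

```python
def turn_delay_seconds(app_turns: int, rtt_ms: float) -> float:
    """Delay contributed by application turns alone (data-transfer time ignored)."""
    return app_turns * rtt_ms / 1000.0

# A 200-turn transaction: barely noticeable on a 1 ms LAN,
# but 20 seconds on a 100 ms round-trip WAN.
lan_delay = turn_delay_seconds(200, 1)    # 0.2 s
wan_delay = turn_delay_seconds(200, 100)  # 20.0 s
print(f"LAN: {lan_delay:.1f} s, WAN: {wan_delay:.1f} s")
```

Because the turn count multiplies the round-trip time directly, a 100x increase in latency produces a 100x increase in transaction delay, which is why an application that feels instantaneous in the development lab can be unusable over the WAN.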
Similar to the definition of a chatty application, a protocol is referred to as being chatty if it requires tens if not hundreds of turns for a single transaction. The webification of applications introduces chatty protocols into the network. In addition, some of these protocols (e.g., XML) tend to greatly increase the amount of data that transits the network and is processed by the servers. XML is a dense protocol. That means communications based on XML consume more IT resources than communications that are not based on XML. As will be discussed in Chapter 9, the dense nature of XML also creates some security vulnerabilities.

Server Consolidation
Many companies either already have consolidated, or are in the process of consolidating, servers out of branch offices and into centralized data centers. This consolidation typically reduces cost and enables IT organizations to have better control over the company's data. While server consolidation produces many benefits, it can also produce some significant performance issues. In particular, server consolidation typically results in chatty protocols such as CIFS (Common Internet File System), Exchange and NFS (Network File System), which were designed to run over the LAN, running over the WAN. The way that CIFS works is that it decomposes all files into smaller blocks prior to transmitting them. Assume that a client was attempting to open up a 20 megabyte file on a remote server. CIFS would decompose that file into hundreds, or possibly thousands, of small data blocks. The server sends each of these data blocks to the client, where it is verified and an acknowledgement is sent back to the server. The server must wait for an acknowledgement prior to sending the next data block. As a result, it can take several seconds for the user to be able to open up the file.

Data Center Consolidation and Single Hosting
In addition to consolidating servers out of branch offices and into centralized data centers, many companies are also reducing the number of data centers they support worldwide. HP, for example, recently announced it was reducing the number of data centers it supports from 85 down to six.3 Many companies are also adopting a single-hosting model whereby users from all over the globe transit the WAN to access an application that the company hosts in just one of its data centers. One of the effects of data center consolidation and single hosting is that it results in additional WAN latency for remote users.

Changing Application Delivery Model
The 80/20 rule in place until a few years ago stated that 80% of a company's employees were in a headquarters facility and accessed an application over a high-speed, low latency LAN. The new 80/20 rule states that 80% of a company's employees access applications over a relatively low-speed, high latency WAN. Consolidation increases the distance between remote users and the applications they need to access, and in the vast majority of situations, when people access an application they are accessing it over the WAN.

Software as a Service
According to Wikipedia4, software as a service (SaaS) is a software application delivery model where a software vendor develops a web-native software application and hosts and operates (either independently or through a third party) the application for use by its customers over the Internet. Customers do not pay for owning the software itself but rather for using it.

3 Hewlett-Packard picks Austin for two data centers, http://www.statesman.com/business/content/business/stories/other/05/18hp.html
4 http://en.wikipedia.org/wiki/Software_as_a_Service
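Returning to the earlier CIFS example: the cost of CIFS's block-and-acknowledge behavior can be estimated with simple arithmetic. The sketch below is illustrative only; the block size and round-trip figures are assumptions for the example, not CIFS specifications.

```python
# Rough estimate of the time to fetch a file over a stop-and-wait,
# block-oriented protocol (CIFS-like), where the server waits for an
# acknowledgement of each block before sending the next one.

def transfer_time_seconds(file_bytes, block_bytes, rtt_seconds):
    """Each block costs one round trip; serialization delay is ignored."""
    blocks = -(-file_bytes // block_bytes)  # ceiling division
    return blocks * rtt_seconds

FILE_SIZE = 20 * 1024 * 1024   # the 20 MB file from the example
BLOCK_SIZE = 60 * 1024         # assumed ~60 KB read block (illustrative)
LAN_RTT = 0.001                # assumed 1 ms LAN round trip
WAN_RTT = 0.050                # assumed 50 ms WAN round trip

print(f"LAN: {transfer_time_seconds(FILE_SIZE, BLOCK_SIZE, LAN_RTT):.2f} s")
print(f"WAN: {transfer_time_seconds(FILE_SIZE, BLOCK_SIZE, WAN_RTT):.2f} s")
```

Under these assumptions the same file that opens in well under a second on the LAN takes tens of seconds over the WAN, and the difference comes almost entirely from protocol turns rather than from bandwidth.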
They use it through an API accessible over the Web and often written using Web Services. The term SaaS has become the industry-preferred term, generally replacing the earlier terms Application Service Provider (ASP) and On-Demand. There are many challenges associated with SaaS. For example, since the company that uses the software does not own the software, they cannot change the software in order to make it perform better. In addition, by definition of SaaS, the user accesses the application over the Internet and hence incurs all of the issues associated with the Internet. (See Chapter 6 for a discussion of the use of managed service providers as a way to mitigate some of the impact of the Internet.)

Fractured IT Organizations
The application delivery function consists of myriad sub-specialties such as devices (e.g., desktops, laptops, point of sale devices), networks, servers, storage, operating systems, security, etc. The planning and operations of these sub-specialties are typically not well coordinated within the application delivery function. For example, market research performed in 2006 indicates that typically little coordination exists between the application delivery function and the application development function. Only 14% of IT organizations claim to have aligned the application delivery function with the application development function. Twelve percent (12%) of IT organizations state that troubleshooting an IT operational issue occurs cooperatively across all IT disciplines, and eight percent (8%) of IT organizations state they plan and holistically fund IT initiatives across all of the IT disciplines.

The Industrial CIO described the current fractured, often defensive approach to application delivery. He has five IT disciplines that report directly to him. He stated that he is tired of having each of them explain to him that their component of IT is fine and yet the company struggles to provide customers an acceptable level of access to their Web site, book business and ship product. He also said that he and his peers do not care about the pieces that comprise IT; they care about the business results. The CYA approach to application delivery focuses on showing that it is not your fault that the application is performing badly. The goal, in contrast, should be to rapidly identify and fix the problem.

Dynamic IT Environments
The environment in which application delivery solutions are implemented is highly dynamic. For example, companies regularly deploy new applications and updates to existing applications. In addition, companies are continually changing their business processes and IT organizations are continually changing the network infrastructure. To be successful, application delivery solutions must function in this highly dynamic environment. This drives the need for both the dynamic setting of parameters and automation.

Application Complexity
Companies began deploying mainframe computers in the late 1960s and mainframes became the dominant style of computing in the 1970s. The applications that were written for the mainframe computers of that era were monolithic in nature. Monolithic means that the application performed all of the necessary functions, such as providing the user interface and the application logic, as well as access to data. Most companies have moved away from deploying monolithic applications and towards a form of distributed computing that is often referred to as n-tier applications. Since these tiers are implemented on separate systems, WAN performance impacts n-tier applications more than monolithic applications.
The movement to a Service-Oriented Architecture (SOA) based on the use of Web services-based applications represents the next step in the development of distributed computing, and WAN performance impacts Web services-based applications significantly more than it impacts n-tier applications. To understand why the movement to Web services-based applications will drastically complicate the task of ensuring acceptable application performance, consider the 3-tier application architecture that was previously discussed. The typical 3-tier application is comprised of a Web browser, an application server(s) and a database server(s). In a 3-tier application the application server(s) and the database server(s) typically reside in the same data center. The information flow in a 3-tier application is from the Web browser to the application server(s) and to the database, and then back again over the Internet using standard protocols such as HTTP or HTTPS. As a result, the impact of the WAN is constrained to a single traffic flow, that being the flow between the user's Web browser and the application server. In a Web services-based application, however, the Web services that comprise the application typically run on servers that are housed within multiple data centers. As a result, the WAN impacts multiple traffic flows and hence has a greater overall impact on the performance of a Web services-based application than it does on the performance of an n-tier application.

Web Services and Security
The expanding use of Web services creates some new security challenges. Part of this challenge stems from the fact that in most instances, communication with Web services is outlined in Web Services Description Language (WSDL) documents. These documents are intended to serve as the blueprint for, and a guide to, an IT organization's Web services. Unfortunately, they can also serve to guide security attacks against the organization. Assuming that a hacker has gained access to an organization's WSDL document, the hacker can learn a great deal about the underlying technology and can use this knowledge to further exploit the system. For example, by seeing how the system reacts to invalid data that the hacker has intentionally submitted, the hacker can begin to look for vulnerabilities in the system. If the goal of the hacker is to create a denial of service attack or degrade application performance, the hacker could exploit the verbose nature of both XML and SOAP 5. For example, the hacker could submit excessively large payloads that would consume an inordinate amount of system resources and hence degrade application performance. When a Web services message is received, the first step the system takes is to read through, or parse, the elements of the message. As part of parsing the message, parameters are extracted and content is inserted into databases. The amount of work required by XML parsing is directly affected by the size of the SOAP message.

5 Simple Object Access Protocol (SOAP) is the Web Services specification used for invoking methods on remote software components, using an XML vocabulary.

Chapter 9 will discuss some of the limitations of the current generation of firewalls. One of these limitations is that the current generation of firewalls is not capable of parsing XML. As a result, these firewalls are blind to XML traffic. As part of providing security for Web services, IT organizations need to be able to perform signature detection to detect the signature of known attacks. In addition, IT organizations need to be able to perform anomaly detection in order to distinguish valid messages from invalid messages. Because of this, IT organizations need to be able to inspect XML and SOAP messages and make intelligent decisions based on the content of these messages.
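A minimal sketch of this kind of content-based inspection, using only the Python standard library, is shown below. The size and depth limits are illustrative thresholds chosen for the example, not values taken from the handbook.

```python
# Sketch: reject XML/SOAP payloads that fail simple anomaly checks
# (oversized payloads and abnormally deep nesting) before they reach
# the parser-heavy back end.
import io
import xml.etree.ElementTree as ET

MAX_BYTES = 64 * 1024   # oversized payloads are a denial-of-service vector
MAX_DEPTH = 20          # deeply nested XML is a classic anomaly signal

def inspect_soap_message(payload: bytes) -> bool:
    """Return True only if the message passes the anomaly checks."""
    if len(payload) > MAX_BYTES:
        return False
    depth = 0
    try:
        for event, _ in ET.iterparse(io.BytesIO(payload), events=("start", "end")):
            if event == "start":
                depth += 1
                if depth > MAX_DEPTH:
                    return False
            else:
                depth -= 1
    except ET.ParseError:        # malformed XML fails the check
        return False
    return True

ok = b"<Envelope><Body><GetQuote><symbol>HPQ</symbol></GetQuote></Body></Envelope>"
print(inspect_soap_message(ok))                            # -> True
print(inspect_soap_message(b"<a>" * 100 + b"</a>" * 100))  # -> False (too deep)
```

A production XML firewall would go much further (schema validation, signature checks, rate limiting), but even these two structural tests illustrate how a device can make decisions based on message content rather than treating XML as opaque traffic.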
The Application Front Ends that are described in Chapter 5 were designed primarily to offload communications processing from servers.4% 37.2% 11. however. to name a few.7% 27.pdf A Guide to Decision Making 15 .0 applications including the use of mashups?” Their responses are shown in Table 3.4% 2.6% Their Figure 3. is intended to provide insight into the key factors that impact application response time. http://www. In addition to a services focus. Like all models. content that is tailored to preferences they indicate. consumers can be presented with geographic.2. Web 2.1: Current Use of New Application Architectures The same group of IT professionals were then asked to indicate how their company’s use of those application architectures would change over the next year. 11 Why Centralizing Microsoft Servers Hurts Performance.” Kubernan recently presented over 200 IT professionals with the following question: “Which of the following best describes your company’s approach to using new application architectures such as Services Oriented Architecture (SOA).7% 46. Response No change is expected We will reduce our use of these architectures We will increase our use of these architectures N/A or Don’t Know Other Table 3. user created.and demographic-specific content. sales promotions. They were not designed to offload any backend processing. Web 2. responses are shown in Table 3.2: Application Response Time Model The Branch Office Optimization Solutions that are described in Chapter 5 were designed primarily to deal with the size of the payload and the number of application turns. and compelling marketing offers. Emerging application architectures (SOA. Peter Sevcik and Rebecca Wetzel.0) have already begun to impact IT organizations and this impact will increase over the next year. the number of application turns (AppTurns). the number of simultaneous TCP sessions (concurrent requests). The model. 
the following is only an approximation and as a result it is not intended to provide results that are accurate to the millisecond level. rich and in many cases.8% 0. The following model is a variation of the application response time model created by Sevcik and Wetzel11. or Web 2.1.7% 24. and news feeds. Response Don’t use them Make modest use of them Make significant use of them N/A or Don’t Know Other Percentage of Respondents 24. A model is helpful to illustrate the potential performance bottlenecks in any application environment in general. further personalization. the network round trip time (RTT).2% Quantifying Application Response Time As noted. as well as in a Web 2.3% 1.APPlicATion Delivery HAnDbook | februAry 2008 interaction. and constantly updated content such as stock quotes. RIA. the WAN bandwidth. surveys and contests.0 environment in particular. the application response time (R) is impacted by amount of data being transmitted (Payload).net/solutions/ literature/ms_server_centralization. Table 3. the server side delay (Cs) and the client side delay (Cc). For instance.juniper. Rich Internet Applications (RIA).0 characteristics include featuring content that is dynamic.2: Increased Use of New Application Architectures Percentage of Respondents 23. Web 2. As shown below.0 has some unique characteristics.
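The factors listed above can be combined into a simple additive estimate. The arrangement below is one plausible form consistent with the Sevcik and Wetzel style of model; the handbook does not reproduce their exact equation, so treat the formula, and all of the input values, as illustrative assumptions.

```python
def response_time(payload_bits, bandwidth_bps, rtt_s, app_turns,
                  concurrent_requests, server_delay_s, client_delay_s):
    """Additive response-time estimate from the factors named in the text:
    transmission time, plus turn delays shared across parallel TCP
    sessions, plus server-side (Cs) and client-side (Cc) delay."""
    transmission = payload_bits / bandwidth_bps
    turns = (app_turns * rtt_s) / max(concurrent_requests, 1)
    return transmission + turns + server_delay_s + client_delay_s

# A hypothetical branch-office page load: 500 kb payload on a 1.5 Mbps
# link, 80 ms RTT, 40 application turns, 4 concurrent TCP sessions.
r = response_time(500_000, 1_500_000, 0.080, 40, 4, 0.4, 0.1)
print(f"Estimated R = {r:.2f} s")   # -> Estimated R = 1.63 s
```

Even this crude version makes the model's main lesson visible: on a WAN, the AppTurns-times-RTT term routinely dwarfs the raw transmission time, which is why chatty applications suffer most.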
APPlicATion Delivery HAnDbook | februAry 2008
The Web 2.0 Performance Issues
As noted, the existing network and application optimization solutions were designed to mitigate the performance impacts of large payloads and multiple application turns. Microprocessor vendors such as Intel and AMD continually deliver products that increase the computing power that is available on the desktop. As a result, these products minimize the delays that are associated with client processing (Cc). This leaves just one element of the preceding model that has to be more fully accounted for: server side delay. This is the critical performance bottleneck that has to be addressed in order for Web 2.0 applications to perform well. The existing generation of network and application optimization solutions does not deal with a key requirement of Web 2.0 applications: the need to massively scale server performance. The reason this is so critical is that unlike clients, servers suffer from scalability issues. In particular, servers have to support multiple users, and each concurrent user consumes some amount of server resources: CPU, memory and I/O. Chris Loosley12 highlighted the scalability issues associated with servers. Loosley pointed out that an activity such as catalog browsing is relatively fast and efficient and does not consume a lot of server resources. He contrasted that with an activity that requires the server to update something, such as clicking a button to add an item to a shopping cart. He pointed out that activities such as updating consume significant server resources, and so the number of concurrent transactions (server interactions that update a customer's stored information) plays a critical role in determining server performance. The Mobile Software CEO addressed the issue of scalability when he stated that there is no better application framework than ASP.NET, but that ASP.NET does make it very easy to develop applications that do not perform
well. As The Mobile Software CEO sees it, IT organizations need to answer the question of "How will we scale Web 2.0 applications that have a rich amount of information from a dynamic database?" He said that a big part of the issue is that because of the dynamic content that is associated with Web 2.0 applications, "caching is not caching – it is different for every single application that you work with". As a result, IT organizations need to answer questions such as: "When can I cache that data?" and "How do I keep that cache up to date?" He added that the best way to solve the Web 2.0 performance problems is to deploy intelligent tools. The Business Intelligence CTO pointed out that the most important server side issue associated with traditional applications was providing page views, while with Web 2.0 applications it is supporting API calls. He emphasized that "You cannot scale a Web site just by throwing servers at it. That buys you time, but it does not solve the problem." His recommendation was that IT organizations should make relatively modest investments in servers and make larger investments in tools to accelerate the performance of applications.

12 Rich Internet Applications: Design, Measurement and Management Challenges, Chris Loosley, http://www.keynote.com/docs/whitepapers/RichInternet_5.pdf

The classic novel Alice in Wonderland by the English mathematician Lewis Carroll illustrates part of the need for the planning component of the application delivery framework. In that novel Alice asked the Cheshire cat, "Which way should I go?" The cat replied, "Where do you want to get to?" Alice responded, "I don't know," to which the cat said, "Then it doesn't much matter which way you go." Relative to application performance, most IT organizations are somewhat vague on where they want to go. In particular, only 38% of IT organizations have established
well-understood performance objectives for their company’s business-critical applications. It is extremely difficult to make effective network and application design decisions if the IT organization does not have targets for application performance that are well understood and adhered to. One primary factor driving the planning component of application delivery is the need for risk mitigation. One manifestation of this factor is the situation in which a company’s application development function has spent millions of dollars to either develop or acquire a highly visible, business critical application. The application delivery function must take the proactive steps this section will describe in order to protect both the company’s investment in the application as well as the political capital of the application delivery function. Hope is not a strategy. Successful application delivery requires careful planning, coupled with extensive measurements and effective proactive and reactive processes.
Many planning functions are critical to the success of application delivery. They include the ability to:
• Profile an application prior to deploying it, including running it in conjunction with a WAN emulator to replicate the performance experienced in branch offices.
• Baseline the performance of the network.
• Perform a pre-deployment assessment of the IT infrastructure.
• Establish goals for the performance of the network and for at least some of the key applications that transit the network.
• Model the impact of deploying a new application.
• Identify the impact of a change to the network, the servers, or to an application.
• Create a network design that maximizes availability and minimizes latency.
• Create a data center architecture that maximizes the performance of all of the resources in the data center.
• Choose appropriate network technologies and services.
• Determine what functionality to perform internally and what functionality to acquire from a third party. This topic will be expanded upon in Chapter 6.

Chapter 3 outlined some of the factors that increase the difficulty of ensuring acceptable application performance. One of these factors is the fact that in the vast majority of situations, the application development process does not take into account how well the application will run over a WAN. One class of tool that can be used to test and profile
application performance throughout the application lifecycle is a WAN emulator. These tools are used during application development and quality assurance (QA) and serve to mimic the performance characteristics of the WAN, e.g., delay, jitter and packet loss. One of the primary benefits of these tools is that application developers and QA engineers can use them to quantify the impact of the WAN on the performance of the application under development, ideally while there is still time to modify the application. One of the secondary benefits of using WAN emulation tools is that over time the application development groups come to understand how to write applications that perform well over the WAN. Table 4.1, for example, depicts the results of a lab test that was done using a WAN emulator to quantify the effect
that WAN latency would have on an inquiry-response application that has a target response time of 5 seconds. Similar tests can be run to quantify the effect that jitter and packet loss have on an application.
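The reason such tests matter can be seen in a toy model of a chatty transaction, in which every application turn pays the round-trip time and occasional packet loss adds retransmission delay. All figures below are illustrative assumptions; a real WAN emulator measures this behavior empirically rather than assuming it.

```python
# Toy model (not a WAN emulator): expected time for a multi-turn
# transaction when each turn faces latency plus occasional
# loss-triggered retransmission timeouts.

def expected_turn_time(rtt_s, loss_rate, rto_s):
    """One request/response turn; a lost packet adds a retransmission timeout."""
    return rtt_s + loss_rate * rto_s

def transaction_time(turns, rtt_s, loss_rate, rto_s=1.0, base_s=0.5):
    """base_s approximates fixed server/client processing time."""
    return base_s + turns * expected_turn_time(rtt_s, loss_rate, rto_s)

# A 200-turn transaction: clean link vs. 1% loss at the same 50 ms RTT
print(f"{transaction_time(200, 0.050, 0.00):.1f} s")  # -> 10.5 s
print(f"{transaction_time(200, 0.050, 0.01):.1f} s")  # -> 12.5 s
```

Even a loss rate too small to notice in a file download adds seconds to a chatty transaction, which is why emulating loss and jitter, not just delay, belongs in pre-deployment testing.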
Network Latency    Measured Response Time
0 ms               2 seconds
25 ms              2 seconds
50 ms              2 seconds
75 ms              2 seconds
100 ms             4 seconds
125 ms             4 seconds
150 ms             12 seconds

Table 4.1: Impact of Latency on Application Performance

One obvious conclusion that can be drawn from Table 4.2 is that the vast majority of IT organizations see significant value in a tool that can be used to test application performance throughout the application lifecycle. The flawed application development process is just one of the factors that Chapter 3 identified as increasing the difficulty of ensuring acceptable application performance. Other factors include the consolidation of IT resources and the deployment of demanding applications such as VoIP. IT organizations will not be regarded as successful if they do not have the capability both to develop applications that run well over the WAN and to plan for changes such as data center consolidation and the deployment of VoIP. This follows because, as previously stated, hope is not a strategy. IT organizations need to be able to first anticipate the issues that will arise as a result of a major change and then take steps to mitigate the impact of those issues. Whenever an IT organization is considering implementing a tool of this type, it is important to realize that the ultimate goal of these tools is to provide insight, not an undue level of precision. In particular, IT environments are complex and dynamic. As a result, it can be extremely difficult and laborious to have the tool accurately represent every aspect of the IT environment. In addition, even if the tool could accurately represent every aspect of the IT environment at some point in time, that environment would change almost immediately and the representation would no longer be totally accurate.
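Results like those in Table 4.1 can be reduced to a simple "latency budget": the highest tested latency at which the application still meets its 5-second target. A short sketch using the table's figures (real assessments would interpolate between test points and add a safety margin):

```python
# Find the highest tested WAN latency at which the measured response
# time still meets the 5-second target (data from Table 4.1).
results = {0: 2, 25: 2, 50: 2, 75: 2, 100: 4, 125: 4, 150: 12}  # ms -> s
TARGET_S = 5

passing = [lat for lat, resp in sorted(results.items()) if resp <= TARGET_S]
print(f"Latency budget: about {max(passing)} ms")   # -> about 125 ms
```

A latency budget expressed this way can be compared directly against the measured round-trip times of the branch offices that will use the application.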
As Table 4.1 shows, if there is no WAN latency the application has a two-second response time. This two-second response time is well within the target response time and most likely represents the time spent in the application server or the database server. As network latency is increased up to 75 ms, it has little impact on the application's response time. If network latency is increased above 75 ms, the response time of the application increases rapidly and is quickly well above the target response time. Over 200 IT professionals were recently asked "Which of the following describes your company's interest in a tool that can be used to test application performance throughout the application lifecycle – from application design through ongoing management?" The survey respondents were allowed to indicate multiple answers. Their responses are depicted in Table 4.2.
Response                                                                 Percentage of Respondents
If the tool worked well it would make a significant improvement
to our ability to manage application performance                        71%
The output of tools like this is generally not that helpful             9%
Tools like this tend to be too difficult to use, particularly
during application development                                          13%
Our applications developers would be resistant to using such a tool     11%
Our operations groups lack the application specific skills to use
a tool like this                                                        17%
Table 4.2: Interest in an Application Lifecycle Management Tool

Given the complex and dynamic nature of the IT environment, a valid use of a WAN emulation tool is to provide insight into what happens if WAN delay increases from 70 ms to 100 ms. For example, would that increase the application delay by a second? By two seconds? By five seconds? It is reasonable to demand that the WAN emulation tool provide accurate insight. For example, it is reasonable to demand that if the tool indicates that a 30 ms increase in WAN delay results in a 2 second increase in application delay, that indeed that is correct. It is not reasonable, however, to expect that the tool would be able to determine whether a 30 ms increase in WAN delay would increase application delay by 4.85 seconds vs. 4.90 seconds.

One of the reasons why IT organizations should not expect an undue level of precision from a WAN emulation tool has already been discussed: the complex and dynamic nature of the IT environment. Another reason is the inherent nature of any modeling or simulation tool. One of the key characteristics of these tools is that they typically contain a slippery slope of complexity. By that is meant that when creating a simulation tool, a great deal of insight can be provided without having the tool be unduly complex. However, adding further insight requires the tool to become very complex and typically requires a level of granular input that either does not exist or is incredibly time consuming to create. The 80/20 rule applies here: 80% of the insight can be provided while only incurring 20% of the complexity. In the vast majority of cases, a tool that is unduly complex is of no use to an IT organization. The data in Table 4.2 indicates that IT professionals are well aware of the fact that many of these tools are unacceptably complex. In particular, while the survey respondents indicated a strong interest in these tools, thirty percent of the survey respondents indicated either that tools like this tend to be difficult to use or that their operations group would not have the skills necessary to use a tool like this.

In many cases, IT organizations profile an application in a reactive fashion. That means the organization profiled the application only after users complained about its performance. Alternatively, some IT organizations only profile an application shortly before they deploy it. Since companies perform these tests just before they put the application into production, this is usually too late to make any major change. The Automotive Network Engineer provided insight into the limitations of testing an application just prior to deployment. He stated that relative to testing applications just prior to putting them into production, "We are required to go through a lot of hoops." He went on to say that sometimes the testing was helpful, but that if the application development organization was under a lot of management pressure to get the application into production, the application development organization often took the approach of deploying the application and then dealing with the performance problems later.

The application delivery function needs to be involved early in the applications development cycle. The preceding discussion of using a WAN emulator either to develop more efficient applications or to quantify the impact of a change such as a data center initiative is a proactive use of the tool. The advantages of this approach are that it helps the IT organization:
• Identify minor changes that can be made to the application that will improve its performance.
• Determine if some form of optimization technology will improve the performance of the application.
• Identify the sensitivity of the application to parameters such as WAN latency and use this information to set effective thresholds.
• Gather information on the performance of the application that can be used to set the expectations of the users.
• Learn about the factors that influence how well an application will run over a WAN.
delay) of applications and various IT resources including servers. A Guide to Decision Making 20 .APPlicATion Delivery HAnDbook | februAry 2008 The Consulting Architect pointed out that his organization is creating an architecture function. One of these tools establishes trends relative to their traffic. They are: I. One goal of the architecture function is to strike a balance between application development and application delivery. The Team Leader has asked the two vendors to integrate the two tools so that he will know how much capacity he has left before the performance of a given application becomes unacceptable. such as WAN links. how well will the application run if an organization deploys these optimization solutions? What additional management and security issues do these solutions introduce? A primary way to balance the requirements and capabilities of the application development and the application-delivery functions is to create an effective architecture that integrates those two functions. In most cases. utilization. WAN links and routers. does the decision to use chatty protocols mean that additional optimization solutions would have to be deployed in the infrastructure? If so. They have. A large part of the motivation for the creation of this function is to remove the finger pointing that goes on between the network and the application-development organizations. such as performing a pre-assessment of the network prior to deploying an application or performing proactive alarming.. For example. One role of the architecture group is to identify the effect of that decision on the application-delivery function and to suggest solutions. The other tool baselines the end-to-end responsiveness of applications. II. Baselining allows an IT organization to understand the normal behavior of those applications and IT resources. It does this by quantifying the key characteristics (e. on a Friday. For example. widely deployed two tools that assist with baselining. 
Baselining

Introduction

Baselining provides a reference from which service quality and application delivery effectiveness can be measured. Baselining is an example of a task that one can regard as a building block of management functionality. That means baselining is a component of several key processes. An IT organization can approach baselining in multiple ways. Traditionally, baselining focuses on measuring the utilization of resources. However, application performance is only indirectly tied to the utilization of WAN links; it is tied directly to factors such as WAN delay. For example, there might be good business and technical factors that drive the application development function to develop an application using chatty protocols, and the performance of such an application depends far more on WAN delay than on link utilization. Since it is often easier to measure utilization than delay, many IT organizations set a limit on the maximum utilization of their WAN links, hoping that this will result in acceptable WAN latency. IT organizations need to modify their baselining activities to focus directly on delay.

The Key Steps

Four primary steps comprise baselining.

I. Identify the Key Resources
Most IT organizations do not have the ability to baseline all of their resources. These organizations must determine which resources are the most important and baseline them. One way to determine which resources are the most important is to identify the company's key business applications and then to identify the IT resources that support those applications. The Team Leader stated that his organization does not baseline the company's entire global network.

II. Quantify the Utilization of the Assets over a Sufficient Period of Time
Organizations must compute the baseline over a normal business cycle. For example, the activity and response times for a CRM application might be different at 8:00 a.m. on a Monday than at 8:00 p.m. In addition, the activity and response times for that CRM application are likely to differ greatly during a week in the middle of the quarter as compared with the last week of the quarter. Organizations should baseline by measuring 100% of the actual traffic from the real users. Sampling and synthetic approaches to baselining can leave a number of gaps in the data and have the potential to miss important behavior that is both infrequent and anomalous.

III. Determine How the Organization Uses Assets
This step involves determining how the assets are being consumed by answering questions such as: Which applications are the most heavily used? Who is using those applications? How has the usage of those applications changed? In addition to being a key component of baselining, this step also positions the application delivery function to provide the company's business and functional managers insight into how their organizations are changing based on how their use of key applications is changing.

IV. Utilize the Information
The information gained from baselining has many uses, including capacity planning, budget planning and chargeback. Another use for this information is to measure the performance of an application before and after a major change, such as a server upgrade, a network redesign or the implementation of a patch. For example, assume that a company is going to upgrade all of its Web servers. To ensure it gets all of the benefits it expects from that upgrade, the company should measure key parameters both before and after the upgrade. Those parameters include WAN and server delay as well as the end-to-end application response time as experienced by the users.

Selection Criteria

The following is a set of criteria that IT organizations can use to choose a baselining solution. For simplicity, the criteria are focused on baselining applications and not on other IT resources.

Application Monitoring
To what degree (complete, partial, none) can the solution identify:
• Well-known applications, e.g., e-mail, Microsoft Exchange
• Complex applications, e.g., SAP R/3, Oracle, PeopleSoft, Citrix Presentation Server, VoIP
• Web-based applications, including URL-by-URL tracking
• Custom applications
• Peer-to-peer applications
• Unknown applications

Application Profiling and Response Time Analysis
Can the solution:
• Provide response time metrics based on synthetic traffic generation?
• Provide response time metrics based on monitoring actual traffic?
• Relate application response time to network activity?
• Provide application baselines and trending?
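Returning to the baselining steps above: quantifying activity over a normal business cycle amounts to bucketing measurements by where they fall in that cycle. The sketch below groups invented response-time samples by hour of day, so that Monday-morning traffic is compared with other mornings rather than with evenings; the samples and the bucketing scheme are illustrative assumptions, not data from any of the organizations interviewed.

```python
# Baseline response times by hour so that 8:00 a.m. activity is compared
# with other mornings, not with evenings. Samples are invented:
# (hour_of_day, response_time_ms) pairs gathered over several weeks.
from collections import defaultdict
from statistics import mean

samples = [(8, 310), (8, 290), (8, 355), (20, 120), (20, 140), (20, 110)]

buckets = defaultdict(list)
for hour, rt in samples:
    buckets[hour].append(rt)

# For each bucket keep the average and peak response time, the two figures
# recommended above for before/after comparisons around a major change.
baseline = {hour: {"mean": mean(rts), "peak": max(rts)}
            for hour, rts in buckets.items()}

print(baseline[8], baseline[20])  # morning activity differs sharply from evening
```

A real deployment would bucket by hour of week (and by week of quarter) and feed the buckets from continuous measurement of actual traffic rather than a fixed list.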
Pre-Deployment Assessment

The goal of performing a pre-deployment assessment of the current environment is to identify any potential problems that might affect an IT organization's ability to deploy an application. Organizations should not look at the process of performing a pre-deployment network assessment in isolation. Rather, they should consider it part of an application lifecycle management process that includes a comprehensive assessment and analysis of the existing network; the development of a thorough rollout plan, including the profiling of the application and the identification of the impact of implementing the application; and the establishment of effective processes for ongoing fact-based data management.

The Team Leader stated that his organization determines whether to perform a network assessment prior to deploying a new application on a case-by-case basis. In particular, he pointed out that it tends to perform an assessment if it is a large deployment or if it has some concerns about whether the infrastructure can support the application. To assist with this function, his organization has recently acquired tools that can help it with tasks such as assessing the ability of the infrastructure to support a VoIP deployment as well as evaluating the design of its MPLS network.

One of the two key questions that an organization must answer during a pre-deployment assessment is: Can the network provide appropriate levels of security to protect against attacks? As part of a security assessment, it is important to review the network and the attached devices and to document the existing security functionality, such as IDS (Intrusion Detection System), IPS (Intrusion Prevention System) and NAC (Network Access Control). The next step is to analyze the configuration of the network elements to determine if any of them pose a security risk. It is also necessary to test the network to see how it responds to potential security threats.

The second key question that an organization must answer during a pre-deployment assessment is: Can the network provide the necessary levels of availability and performance? It is extremely difficult to answer this question if the IT organization does not have targets for application performance that are well understood and adhered to. It is also difficult to answer because, as Chapter 3 described, the typical application environment is both complex and dynamic.

The Engineering CIO said that his organization is deploying VoIP. As part of that deployment, it did an assessment of the ability of the infrastructure to support VoIP. The organization identified the network capacity at each office, the current utilization of that capacity and the added load that would come from deploying VoIP. The assessment was comprised of an analysis using an Excel spreadsheet. Based on this set of information, the organization determined where it needed to add capacity.

The key components of a pre-deployment network assessment are:

Create an inventory of the applications running on the network
This includes discovering the applications that are running on the network. In addition to identifying those applications, it is also important to categorize them using an approach similar to what Chapter 3 described. Part of the value of this activity is to identify recreational use of the network, e.g., on-line gaming and streaming radio or video. Blocking this recreational use can free up additional WAN bandwidth. Chapter 7 quantifies the extent to which corporate networks are carrying recreational traffic and discusses this task in greater detail.
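The kind of spreadsheet-based capacity analysis The Engineering CIO described can be sketched in a few lines. The office names, link capacities, loads and the 80% headroom threshold below are all invented for illustration; the logic is simply current utilization plus added VoIP load compared against a utilization ceiling.

```python
# For each office: link capacity, current peak utilization, and the added
# load expected from the VoIP rollout, all in Mbps (figures are invented).
offices = {
    "Boston": {"capacity": 10.0, "current": 5.0, "voip_load": 2.0},
    "Austin": {"capacity": 1.5,  "current": 0.9, "voip_load": 0.8},
    "Munich": {"capacity": 2.0,  "current": 0.7, "voip_load": 0.4},
}

HEADROOM = 0.80  # keep peak utilization below 80% of link capacity

def needs_upgrade(link: dict) -> bool:
    return link["current"] + link["voip_load"] > HEADROOM * link["capacity"]

upgrades = [name for name, link in offices.items() if needs_upgrade(link)]
print(upgrades)  # only Austin exceeds its headroom once the VoIP load is added
```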
Another part of the value of this activity is to identify business activities, such as downloads of server patches or security patches to desktops, that are being performed during peak times. Moving these activities to an off-peak time frees up additional bandwidth.

Evaluate bandwidth to ensure available capacity for new applications
This activity involves baselining the network as previously described. The goal is to use the information about how the utilization of the relevant network resources has been trending to identify whether any parts of the network need to be upgraded to support the new application. As previously noted, baselining typically refers to measuring the utilization of key IT resources, and the recommendation was made that companies should modify how they think about baselining to focus not on utilization but on delay. As part of performing a pre-deployment network assessment, IT organizations need to measure more than just delay. If a company is about to deploy VoIP, for example, then the pre-assessment baseline must also measure the current levels of jitter and packet loss, as VoIP quality is highly sensitive to those parameters.

Create response time baselines for key applications
This activity involves measuring the average and peak application response times for key applications both before and after the new application is deployed. This data will allow IT organizations to determine if deploying the new application causes an unacceptable impact on the company's other key applications.

IT organizations can typically rely on having access to management data from SNMP MIBs (Simple Network Management Protocol Management Information Bases) on network devices, such as switches and routers. This data source provides data link layer visibility across the entire enterprise network and captures parameters such as the number of packets sent and received, the number of packets that are discarded, as well as the overall link utilization.

NetFlow represents a more advanced source of management data than SNMP MIBs. NetFlow is a Cisco IOS software feature and also the name of a Cisco protocol for collecting IP traffic information as it flows through a router, switch or other networking device and reporting that information to network management and accounting systems. Within NetFlow, a network flow is defined as a unidirectional sequence of packets between a given source and destination. A branch office router, for example, outputs a flow record after it determines that the flow is finished. This record contains information such as timestamps for the flow start and finish times, the volume of traffic in the flow, and its source and destination IP addresses and source and destination port numbers. Whereas data from standard SNMP MIB monitoring can be used to quantify overall link utilization, this class of management data can be used to identify which network users or applications are consuming the bandwidth. The IETF is in the final stages of standardizing flow export: the new standard, referred to as IPFIX (IP Flow Information EXport), builds on the requirements laid out in RFC 3917 and is based on NetFlow Version 9.

An important consideration for IT organizations is whether they should deploy vendor-specific, packet inspection-based dedicated instrumentation. The advantage of deploying dedicated instrumentation is that it enables a more detailed view into application performance. The disadvantage of this approach is that it increases the cost of the solution.
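Flow records of the kind described above lend themselves to simple accounting of which users and applications are consuming the bandwidth. A minimal sketch in Python, with invented addresses and traffic volumes standing in for exported records:

```python
# Each simplified flow record: source IP, destination IP, destination port,
# and bytes carried in the flow (all values are invented for illustration).
from collections import Counter

flows = [
    ("10.1.1.5",  "10.2.0.9",  1433, 48_000_000),  # database traffic
    ("10.1.1.7",  "10.2.0.10", 80,   12_000_000),  # web
    ("10.1.1.5",  "10.2.0.9",  1433, 30_000_000),
    ("10.1.1.22", "10.2.0.44", 6881, 95_000_000),  # peer-to-peer
]

by_talker = Counter()
by_port = Counter()
for src, dst, dport, nbytes in flows:
    by_talker[src] += nbytes
    by_port[dport] += nbytes

print(by_talker.most_common(1))  # [('10.1.1.22', 95000000)]
print(by_port.most_common(1))    # [(6881, 95000000)]
```

Aggregating by destination port is a crude proxy for "which application"; real collectors combine port, protocol and deeper inspection to classify traffic.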
A compromise is to rely on data from SNMP MIBs and NetFlow in small sites and to augment this with dedicated instrumentation in larger, more strategic sites.

Another consideration is whether or not IT organizations should deploy software agents on end systems. One of the architectural advantages of this approach is that it monitors performance and events closer to the user's actual experience. A potential disadvantage is that there can be organizational barriers that limit the ability of the IT organization to put software on each end system. In addition, for an agent-based approach to be successful, it must not introduce any appreciable management overhead.

Whereas gaining access to management data is relatively easy, collecting and analyzing details on every application in the network is challenging. It is difficult, for example, to identify every IP application, host and conversation on the network, as well as applications that use protocols such as IPX or DECnet. It is also difficult to quantify application response time and to identify the individual sources of delay. One of the most challenging components of this activity is to unify this information so the organization can leverage it to support the myriad activities associated with managing application delivery.

Network and Application Optimization

Introduction

The phrase network and application optimization refers to an extensive set of techniques that organizations have deployed in an attempt to optimize the performance of networks and applications as part of assuring acceptable application performance. The primary roles that these techniques play are to:
• Reduce the amount of data that is sent over the WAN.
• Ensure that the WAN link is never idle if there is data to send.
• Reduce the number of round trips (a.k.a., transport layer or application turns) that are necessary for a given transaction.
• Mitigate the inefficiencies of older protocols.
• Offload computationally intensive tasks from client systems and servers.

There are two principal categories of network and application optimization products. One category focuses on the negative effect of the WAN on application performance. This category is often referred to as a WAN Optimization Controller (WOC) but will also be referred to in this handbook as Branch Office Optimization Solutions. Branch Office Optimization Solutions are often referred to as symmetric solutions because they typically require an appliance in the data center as well as in the branch office. Some vendors, however, have implemented solutions that call for an appliance in the data center but do not require an appliance in the branch office. This class of solution is often referred to as a software only solution.

The trade-off between a traditional symmetric solution based on appliances and a software only solution is straightforward. Because the traditional symmetric solution involves an appliance in each branch office, it has the dedicated hardware that allows it to service a large user base; for the same reason, it also tends to be more expensive. While a software only solution cannot typically match the performance of a symmetric solution, that does not mean it is less functional than a symmetric solution. As a result, the software only solution is most appropriate for individual users or small offices.
IT organizations that are looking for a software only solution should expect that the solution will provide a rich set of functionality, including Layer 3 and 4 visibility and shaping, Layer 7 visibility and shaping, packet marking based on DSCP (DiffServ code point), as well as sophisticated analysis and reporting. The typical software only solution is comprised of:
• Agents that sit on each PC and serve to monitor and shape WAN application and user traffic in accordance with assigned policy.
• A PC or server that has two functions. One function is to serve as a collector of network statistics. The other function is to store the policies that are accessed by the agents.
• A management console that is used for monitoring, policy development and management.

The second category of product that will be discussed in this chapter is often referred to as an Application Front End (AFE) or Application Delivery Controller (ADC). This solution is typically referred to as an asymmetric solution because an appliance is only required in the data center and not in the branch office. The genesis of this category of solution dates back to the IBM mainframe-computing model of the late 1960s and early 1970s. Part of that computing model was to have a Front End Processor (FEP) reside in front of the IBM mainframe. The primary role of the FEP was to free up processing power on the general purpose mainframe computer by performing communications processing tasks, such as terminating the 9600 baud multi-point private lines, in a device that was designed just for these tasks. The role of the AFE is somewhat similar to that of the FEP in that the AFE performs computationally intensive tasks, such as the processing of SSL (Secure Sockets Layer) traffic, and hence frees up server resources. Another role of the AFE is to function as a Server Load Balancer (SLB) and, as the name implies, balance traffic over multiple servers. While performing these functions accelerates the performance of Web-based applications, AFEs often do not accelerate the performance of standard Windows-based applications.
Companies deploy Branch Office Optimization Solutions and AFEs in different ways. The typical company, for example, has many more branch offices than data centers. Hence, the question of whether to deploy a solution in a limited tactical manner vs. a broader strategic manner applies more to Branch Office Optimization Solutions than it does to AFEs. Also, AFEs are based on open standards and as a result a company can deploy AFEs from different vendors and not be concerned about interoperability. In contrast, Branch Office Optimization Solutions are based on proprietary technologies and so a company would tend to choose a single vendor from which to acquire these solutions.
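The value of reducing round trips, one of the primary roles of these optimization techniques, can be approximated with a simple model: transaction time is roughly the number of application turns times the round-trip time, plus the serialization time of the payload. The protocol chattiness, link speed and RTT figures below are illustrative assumptions, not measurements from the handbook.

```python
# Approximate WAN transaction time: each application turn costs one round
# trip, on top of the time to serialize the payload onto the link.
def transaction_time_s(turns: int, rtt_s: float,
                       payload_bytes: int, bandwidth_bps: float) -> float:
    return turns * rtt_s + payload_bytes * 8 / bandwidth_bps

# A chatty protocol: 400 turns moving 1 MB over a T1 with an 80 ms RTT.
chatty = transaction_time_s(400, 0.080, 1_000_000, 1_544_000)
# The same transfer after an appliance reduces the exchange to 20 turns.
optimized = transaction_time_s(20, 0.080, 1_000_000, 1_544_000)

print(f"chatty: {chatty:.1f} s, optimized: {optimized:.1f} s")
```

The model makes the point in the text concrete: for a chatty protocol, most of the elapsed time is round trips, so adding bandwidth barely helps while cutting turns helps dramatically.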
Alice in Wonderland Revisited
Chapter 4 began with a reference to Alice in Wonderland and discussed the need for IT organizations to set a direction for things such as application performance. That same reference to Alice in Wonderland applies to the network and application optimization component of application delivery. In particular, no network and application optimization solution on the market solves all possible application performance issues. To deploy the appropriate network and application optimization solution, IT organizations need to understand the problem they are trying to solve. Chapter 3 of this handbook described some of the characteristics of a generic application environment and pointed out that to choose an appropriate solution, IT organizations need to understand their unique application environment. In the context of network and application optimization, if the company has already consolidated, or plans to consolidate, servers out of branch offices and into centralized data centers, then as described later in this section, a WAFS (Wide Area File Services) solution might be appropriate. If the company is implementing VoIP, then any Branch Office Optimization Solution that it implements
must be able to support traffic that is both real-time and meshed, and have strong QoS functionality. Analogously, if the company is making heavy use of SSL, it might make sense to implement an AFE to relieve the servers of the burden of processing the SSL traffic. In addition to high-level factors of the type the preceding paragraph mentioned, the company's actual traffic patterns also have a significant impact on how much value a network and application optimization solution will provide. To exemplify this, consider the types of advanced compression most solution providers offer. The effectiveness of advanced compression depends on two factors. One factor is the quality of the compression techniques that have been implemented in a solution. Since many compression techniques use the same fundamental and widely known mathematical and algorithmic foundations, the performance of many of the solutions available in the market will tend to be somewhat similar. The second factor that influences the effectiveness of advanced compression solutions is the amount of redundancy in the traffic. Applications that transfer data with a lot of redundancy, such as text and HTML on Web pages, will benefit significantly from advanced compression. Applications that transfer data that has already been compressed, such as the voice streams in VoIP or JPEG-formatted images, will see little improvement in performance from implementing advanced compression and could possibly see performance degradation. Because a network and application optimization solution will provide varying degrees of benefit to a company based on the unique characteristics of its environment, third party tests of these solutions are helpful, but not conclusive. In order to understand the performance gains of any network and application optimization solution, that solution must be tested in an environment that closely reflects the environment in which it will be deployed.
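The redundancy effect described above is easy to demonstrate with a stock LZ-based compressor; zlib here merely stands in for whatever compression scheme a given appliance implements, and the payloads are invented.

```python
import os
import zlib

# Highly redundant payload, similar to repetitive text/HTML.
html = b"<tr><td>widget</td><td>in stock</td></tr>" * 400
# Random bytes stand in for data that is already compressed (VoIP, JPEG).
precompressed = os.urandom(len(html))

html_ratio = len(zlib.compress(html)) / len(html)
random_ratio = len(zlib.compress(precompressed)) / len(precompressed)

print(f"redundant HTML:  {html_ratio:.3f}")   # a few percent of original size
print(f"pre-compressed:  {random_ratio:.3f}") # about 1.0: no gain, slight overhead
```

The second ratio slightly exceeding 1.0 is the "possible performance degradation" the text mentions: already-compressed data gains nothing and still pays the framing overhead.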
Branch Office Optimization Solutions
The goal of Branch Office Optimization Solutions is to improve the performance of applications delivered from the data center to the branch office or directly to the end user. Myriad techniques comprise branch office optimization solutions. Table 5.1 lists some of these techniques and indicates how organizations can use each of these techniques to overcome some characteristic of the WAN that impairs application performance.
Insufficient Bandwidth:
• Data Reduction: Data Compression; Differencing (a.k.a., de-duplication); Caching
• Protocol Acceleration: TCP, HTTP, CIFS, NFS, MAPI
• Mitigate Round-trip Time: Request Prediction; Response Spoofing
Packet Loss:
• Congestion Control
• Forward Error Correction (FEC)
Network Contention:
• Quality of Service (QoS)
Table 5.1: Techniques to Improve Application Performance
Below is a brief description of some of the principal WAN optimization techniques.
Caching

This refers to keeping a local copy of information with the goal of either avoiding or minimizing the number of times that information must be accessed from a remote site. As described below, there are multiple forms of caching.

Byte Caching
With byte caching, the sender and the receiver maintain large disk-based caches of byte strings previously sent and received over the WAN link. As data is queued for the WAN, it is scanned for byte
strings already in the cache. Any strings that result in cache hits are replaced with a short token that refers to its cache location, allowing the receiver to reconstruct the file from its copy of the cache. With byte caching, the data dictionary can span numerous TCP applications and information flows rather than being constrained to a single file or single application type. Object Caching Object caching stores copies of remote application objects in a local cache server, which is generally on the same LAN as the requesting system. With object caching, the cache server acts as a proxy for a remote application server. For example, in Web object caching, the client browsers are configured to connect to the proxy server rather than directly to the remote server. When the request for a remote object is made, the local cache is queried first. If the cache contains a current version of the object, the request can be satisfied locally at LAN speed and with minimal latency. Most of the latency involved in a cache hit results from the cache querying the remote source server to ensure that the cached object is up to date. If the local proxy does not contain a current version of the remote object, it must be fetched, cached, and then forwarded to the requester. Loading the remote object into the cache can potentially be facilitated by either data compression or byte caching. Compression The role of compression is to reduce the size of a file prior to transmitting that file over a WAN. As described below, there are various forms of compression. Static Data Compression Static data compression algorithms find redundancy in a data stream and use encoding techniques
to remove the redundancy, creating a smaller file. A number of familiar lossless compression tools for binary data are based on Lempel-Ziv (LZ) compression. This includes zip, PKZIP and gzip algorithms. LZ develops a codebook or dictionary as it processes the data stream and builds short codes corresponding to sequences of data. Repeated occurrences of the sequences of data are then replaced with the codes. The LZ codebook is optimized for each specific data stream and the decoding program extracts the codebook directly from the compressed data stream. LZ compression can often reduce text files by as much as 60-70%. However, for data with many possible data values LZ may prove to be quite ineffective because repeated sequences are fairly uncommon. Differential Compression; a.k.a., Differencing or De-duplication Differencing algorithms are used to update files by sending only the changes that need to be made to convert an older version of the file to the current version. Differencing algorithms partition a file into two classes of variable length byte strings: those strings that appear in both the new and old versions and those that are unique to the new version being encoded. The latter strings comprise a delta file, which is the minimum set of changes that the receiver needs in order to build the updated version of the file. While differential compression is constrained to those cases where the receiver has stored an earlier version of the file, the degree of compression is very high. As a result, differential compression can greatly reduce bandwidth requirements for functions such as software distribution, replication of distributed file systems, and file system backup and restore.
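As a minimal illustration of differencing, Python's difflib can build a delta between two versions of a file. The file contents below are invented, and production de-duplication schemes operate on large disk-based dictionaries of byte strings rather than this toy in-memory form; the shape of the delta, copy instructions plus unique new bytes, is the point.

```python
from difflib import SequenceMatcher

# Two versions of a file; only the changed bytes should cross the WAN.
old = b"config v1: timeout=30 retries=3 host=db1 pool=8"
new = b"config v2: timeout=60 retries=3 host=db1 pool=8 tls=on"

# Build the delta: "copy" instructions reference byte ranges the receiver
# already holds in its old version; "insert" instructions carry new bytes.
delta = []
for op, a1, a2, b1, b2 in SequenceMatcher(a=old, b=new).get_opcodes():
    if op == "equal":
        delta.append(("copy", a1, a2))
    elif op in ("replace", "insert"):
        delta.append(("insert", new[b1:b2]))
    # "delete": nothing to send; the receiver simply skips those old bytes

# Receiver side: rebuild the new version from the old copy plus the delta.
rebuilt = bytearray()
for instr in delta:
    if instr[0] == "copy":
        rebuilt += old[instr[1]:instr[2]]
    else:
        rebuilt += instr[1]

shipped = sum(len(i[1]) for i in delta if i[0] == "insert")
print(bytes(rebuilt) == new, f"{shipped} of {len(new)} bytes shipped")
```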
Real Time Dictionary Compression
The same basic LZ data compression algorithms discussed earlier can also be applied to individual blocks of data rather than entire files. Operating at the block level results in smaller dynamic dictionaries that can reside in memory rather than on disk. As a result, the processing required for compression and decompression introduces only a small amount of delay, allowing the technique to be applied to real-time, streaming data.

Congestion Control
The goal of congestion control is to ensure that the sending device does not transmit more data than the network can accommodate. To achieve this goal, the TCP congestion control mechanisms are based on a parameter referred to as the congestion window, and TCP has multiple mechanisms to determine the congestion window.

Forward Error Correction (FEC)
FEC is typically used at the physical layer (Layer 1) of the OSI stack. However, FEC can also be applied at the network layer (Layer 3), whereby an extra packet is transmitted for every n packets sent. This extra packet is used to recover from an error and hence avoid having to retransmit packets. A subsequent section of the handbook will discuss some of the technical challenges associated with data replication and will describe how FEC mitigates some of those challenges.

Protocol Acceleration
Protocol acceleration refers to a class of techniques that improves application performance by circumventing the shortcomings of various communication protocols. Protocol acceleration is typically based on per-session packet processing by appliances at each end of the WAN link, as shown in Figure 5.1. The appliances at each end of the link act as a local proxy for the remote system by providing local termination of the session. The end systems therefore communicate with the appliances using the native protocol, and the sessions are relayed between the appliances across the WAN using an accelerated version of the protocol or a special protocol designed to address the WAN performance issues of the native protocol. As described below, there are many forms of protocol acceleration.

Figure 5.1: Protocol Acceleration Appliances

TCP Acceleration
TCP can be accelerated between appliances with a variety of techniques that increase a session's ability to more fully utilize link bandwidth. Some of the available techniques are dynamic scaling of the window size, packet aggregation, selective acknowledgement, and TCP Fast Start. Increasing the window size for large transfers allows more packets to be simultaneously in transit, boosting bandwidth utilization. With packet aggregation, a number of smaller packets are aggregated into a single larger packet, reducing the overhead associated with numerous small packets. TCP selective acknowledgment (SACK) improves performance in the event that multiple packets are lost from one TCP window of data. With SACK, the receiver tells the sender which packets in the window were received, allowing the sender to retransmit only the missing data segments instead of all segments sent since the first lost packet. TCP slow start and congestion avoidance lower the data throughput drastically when loss is detected; TCP Fast Start remedies this by accelerating the growth of the TCP window size to quickly take advantage of link bandwidth.

CIFS and NFS Acceleration
As mentioned earlier, NFS and CIFS suffer from poor performance over the WAN because each small data block must be acknowledged before the next one is sent. In addition, CIFS and NFS use numerous Remote Procedure Calls (RPCs) for each file sharing operation. This results in the familiar ping-pong behavior that amplifies the effects of latency. CIFS and NFS file access can be greatly accelerated by using a WAFS transport protocol between the acceleration appliances. With the WAFS protocol, when a remote file is accessed, the entire file can be moved or pre-fetched from the remote server to the local appliance's cache. As a result, it can appear to the user that the file server is local rather than remote. This technique eliminates numerous round trips over the WAN. If a file is being updated, the appliances can use differential file compression to conserve WAN bandwidth, and CIFS and NFS acceleration can use differential compression and block level compression to further increase WAN efficiency.

HTTP Acceleration
Web pages are often composed of many separate objects, each of which must be requested and retrieved sequentially. Typically a browser will wait for a requested object to be returned before requesting the next one. This results in an inefficient ping-pong effect that amplifies the effect of WAN latency. HTTP can be accelerated by appliances that use pipelining to overlap fetches of Web objects rather than fetching them sequentially. In addition, the appliance can use object caching to maintain local storage of frequently accessed Web objects. Web accesses can be further accelerated if the appliance continually updates objects in the cache instead of waiting for the object to be requested by a local browser before checking for updates.

Microsoft Exchange Acceleration
Most of the storage and bandwidth requirements of email programs, such as Microsoft Exchange, are due to the attachment of large files to mail messages. Downloading email attachments from remote Microsoft Exchange Servers is slow and wasteful of WAN bandwidth because the same attachment may be downloaded by a large number of email clients on the same remote site LAN. Microsoft Exchange acceleration can be accomplished with a local appliance that caches email attachments as they are downloaded. This means that all subsequent downloads of the same attachment can be satisfied from the local appliance, including when an attachment is edited locally and then returned via the remote mail server.

Request Prediction
By understanding the semantics of specific protocols or applications, it is often possible to anticipate a request a user will make in the near future. Making this request in advance of it being needed eliminates virtually all of the delay when the user actually makes the request.
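The benefit of the window scaling discussed under TCP acceleration above follows from a simple bound: a TCP session's throughput cannot exceed its window size divided by the round-trip time. The window sizes and RTT below are illustrative assumptions.

```python
# Maximum TCP throughput is bounded by window_size / round_trip_time.
def max_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    return window_bytes * 8 / rtt_s

# A classic 64 KB window versus a scaled 1 MB window on a 100 ms WAN path.
default_window = max_throughput_bps(64 * 1024, 0.100)    # about 5.2 Mbps
scaled_window = max_throughput_bps(1024 * 1024, 0.100)   # about 84 Mbps

print(f"64 KB window: {default_window / 1e6:.1f} Mbps")
print(f"1 MB window:  {scaled_window / 1e6:.1f} Mbps")
```

On a high-latency link, the un-scaled window, not the link speed, is often the binding constraint, which is why appliances that grow the window can fill otherwise idle bandwidth.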
Many applications or application protocols have a wide range of request types that reflect different user actions or use cases. It is important to understand what a vendor means when it says it has a certain application level optimization. For example, in the CIFS (Windows file sharing) protocol, the simplest interactions that can be optimized involve drag and drop. But many other interactions are more complex. Not all vendors support the entire range of CIFS optimizations. Request Spoofing This refers to situations in which a client makes a request of a distant server, but the request is responded to locally.
Tactical vs. Strategic Solutions

To put the question of tactical vs. strategic in context, refer again to the IT organization that Chapter 2 of this handbook referenced. For that company to identify the problem that it is trying to solve, it must answer questions such as: Is the problem just the performance of this one application as used only by employees in the Pac Rim? If that is the problem statement, then the company is looking for a very tactical solution. However, the company might decide that the problem it wants to solve is how to guarantee the performance of all of its critical applications for all of its employees under as wide a range of circumstances as possible. In this case, the company needs a strategic solution.

Historically, Branch Office Optimization Solutions have been implemented in a tactical fashion; that is, companies have deployed the least amount of equipment possible to solve a specific problem. Kubernan recently asked several hundred IT professionals about the tactical vs. strategic nature of how they use these techniques. Their answers, which Figure 5.2 shows, indicate that the deployment of these techniques is becoming somewhat more strategic. The Electronics COO who noted that his company's initial deployment of network and application optimization techniques was to solve a particular problem supports that position. He also stated that his company is "absolutely becoming more proactive moving forward with deploying these techniques." Similarly, The Motion Picture Architect commented that his organization has been looking at these technologies for a number of years, but has only deployed products to solve some specific problems, such as moving extremely large files over long distances. He noted that his organization now wants to deploy products proactively to solve a broader range of issues relative to application performance. According to The Motion Picture Architect, "Even a well written application does not run well over long distances. In order to run well, the application needs to be very thin and it is very difficult to write a full featured application that is very thin."

IT organizations often start with a tactical deployment of WOCs and expand this deployment over time.

Current Deployments

Table 5.2 depicts the extent of the deployment of branch office optimization solutions.

No plans to deploy: 45%
Have not deployed, but plan to deploy: 24%
Deployed in test mode: 9%
Limited production deployment: 17%
Broadly deployed: 5%

Table 5.2: Deployment of Branch Office Optimization Solutions
One conclusion that can be drawn from the data in Table 5.2 is: The deployment of WAN Optimization Controllers will increase significantly.

The Engineering CIO stated that his organization originally deployed a WAFS solution to eliminate redundant file copies. He said he has been pleasantly surprised by the additional benefits of using the solution. In addition, his organization plans on doing more backup of files over the network, and he expects the WAFS solution they have already deployed will assist with this. The points The Engineering CIO raised go back to the previous discussion of a tactical vs. a strategic solution. In particular, most IT organizations that deploy a network and application optimization solution do so tactically and later expand the use of that solution to be more strategic.

When choosing a network and application optimization solution it is important to ensure that the solution can scale to provide additional functionality beyond what is initially required.

Figure 5.2: Approach to Network and Application Optimization (response categories: Driven by Particular Initiatives; Some Techniques, Proactively Deployed; Rich Set of Techniques, Proactively Deployed)

The recommended criteria for evaluating WAN Optimization Controllers are listed in Table 5.3. This list is intended as a fairly complete compilation of all possible criteria, so a given organization may apply only a subset of these criteria for a given purchase decision. In addition, individual organizations are expected to ascribe different weights to each of the criteria because of differences in WAN architecture, branch office network design, and application mix. As shown in the table, assigning weights to the criteria and relative scores for each solution provides a simple methodology for comparing competing solutions.

There are many techniques that IT organizations can use to complete Table 5.3 and then use its contents to compare solutions. For example, the weights can range from 10 points to 50 points, with 10 points meaning not important, 30 points meaning average importance, and 50 points meaning critically important. The score for each criterion can range from 1 to 5, with a 1 meaning fails to meet minimum needs, 3 meaning acceptable, and 5 meaning significantly exceeds requirements.

For the sake of example, consider solution A. For this solution, the weighted score for each criterion (WiAi) is found by multiplying the weight (Wi) of each criterion by its score (Ai). The weighted scores for the criteria are then summed (Σ WiAi) to get the total score for the solution. This process can then be repeated for additional solutions and the total scores of the solutions can be compared.

Table 5.3: Criteria for WAN Optimization Solutions
Columns: Criterion | Weight Wi | Score for Solution "A" Ai | Score for Solution "B" Bi
Criteria: Performance; Transparency; Solution Architecture; OSI Layer; Capability to Perform Application Monitoring; Scalability; Cost-Effectiveness; Application Sub-classification; Module vs. Application Optimization; Disk vs. RAM-based Compression; Protocol Support; Security; Ease of Deployment and Management; Change Management; Support for Meshed Traffic; Support for Real Time Traffic; Total Score
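The weighted-score arithmetic just described can be sketched in a few lines of Python. The criteria, weights, and scores below are illustrative placeholders, not recommendations:

```python
# Weighted-score comparison of two candidate solutions, following the
# Table 5.3 methodology: total score = sum of Wi * score over all criteria.
# Weights range from 10 (not important) to 50 (critically important);
# scores range from 1 (fails to meet minimum needs) to 5 (significantly
# exceeds requirements). All numbers here are made up for illustration.

criteria = {
    # criterion: (weight Wi, score Ai for solution "A", score Bi for "B")
    "Performance":        (50, 4, 3),
    "Transparency":       (40, 3, 5),
    "Scalability":        (30, 5, 3),
    "Security":           (30, 4, 4),
    "Cost-Effectiveness": (20, 2, 4),
}

def total_score(solution_index):
    """Sum Wi * score over all criteria (0 -> solution A, 1 -> solution B)."""
    return sum(row[0] * row[1 + solution_index] for row in criteria.values())

score_a = total_score(0)
score_b = total_score(1)
print(f"Solution A: {score_a}, Solution B: {score_b}")
```

With these illustrative numbers, solution B edges out solution A even though A wins on raw performance, which is exactly the kind of tradeoff the weighting is meant to surface.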
Each of the criteria is explained below.

Performance
Third party tests of a solution can be helpful. It is critical, however, to quantify the kind of performance gains that the solution will provide in the particular environment where it will be installed. For example, if the IT organization either already has consolidated, or is in the process of consolidating, servers out of branch offices and into centralized data centers, then it needs to test how well the WAN optimization solution supports CIFS. As part of this quantification, it is important to identify whether the performance degrades either as additional functionality within the solution is activated or as the solution is deployed more broadly across the organization.

Transparency
The first rule of networking is to not implement anything that causes the network to break. As a result, an important criterion when choosing a WOC is that it should be possible to deploy the solution without breaking things such as routing, security, or QoS. The solution should also be transparent relative to both the existing server configurations and the existing Authentication, Authorization and Accounting (AAA) systems, and should not make troubleshooting any more difficult. The transparency of these solutions has been a subject of much discussion recently. Because of that, a later section will elaborate on what to look for relative to transparency.

Solution Architecture
If the organization intends the solution to support additional optimization functionality over time, it is important to determine whether the hardware and software architecture can support new functionality without an unacceptable loss of performance.
OSI Layer
Organizations can apply many of the optimization techniques discussed in this handbook at various layers of the OSI model. They can apply compression, for example, at the packet layer. The advantage of applying compression at this layer is that it supports all transport protocols and all applications. The disadvantage is that it cannot directly address any issues that occur higher in the stack. Alternatively, having an understanding of the semantics of the application means that compression can also be applied at the application layer; e.g., to SAP or Oracle traffic. Applying compression, or other techniques such as request prediction, in this manner has the potential to be more effective, but is by definition application specific.

Capability to Perform Application Monitoring
Many network performance tools rely on network-based traffic statistics gathered from network infrastructure elements at specific points in the network to perform their reporting. By design, all WAN optimization devices apply various optimization techniques to the application packets and hence affect these network-based traffic statistics to varying degrees. One of the important factors determining the degree of these effects is the amount of the original TCP/IP header information retained in the optimized packets. This topic will be expanded in the subsequent section on transparency.

Scalability
One aspect of scalability is the size of the WAN link that can be terminated on the appliance. More important is how much throughput the box can actually support with the relevant and desired optimization functionality turned on. Other aspects of scalability include how many simultaneous TCP connections the appliance can support, as well as how many branches or users a vendor's complete solution can support.
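The tradeoff behind packet-layer compression can be illustrated with a short sketch. A packet-layer device sees only bytes, so its benefit depends entirely on how redundant the payload happens to be; the payloads below are invented stand-ins:

```python
import os
import zlib

# Packet-layer compression is application-agnostic: it works on any protocol,
# but it helps only when the bytes are redundant. Payloads that are already
# compressed or encrypted look random and gain essentially nothing.

redundant = b"GET /inventory/item?id=42 HTTP/1.1\r\nHost: example.com\r\n" * 50
random_like = os.urandom(len(redundant))  # stands in for encrypted/compressed data

for name, payload in [("redundant", redundant), ("random-like", random_like)]:
    compressed = zlib.compress(payload, 6)
    ratio = len(compressed) / len(payload)
    print(f"{name}: {len(payload)} -> {len(compressed)} bytes (ratio {ratio:.2f})")
```

The redundant payload shrinks dramatically while the random-like payload does not, which is why application-aware techniques that understand the semantics of, say, SAP or Oracle traffic can outperform a purely generic approach.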
Downward scalability is also important. Downward scalability refers to the ability of the vendor to offer cost-effective products for small branches or even individual laptops.

Cost-Effectiveness
This criterion is related to scalability. In particular, it is important to understand what the initial solution costs and how the cost of the solution changes as the scope and scale of the deployment increases. It is also important to consider what other systems will have to be modified in order to implement the WAN optimization solution.

Application Sub-classification
An application such as Citrix Presentation Server or SAP is comprised of multiple modules with varying characteristics. Some Branch Office Optimization Solutions can classify at the individual module level, while others can only classify at the application level.

Module vs. Application Optimization
In line with the previous criterion, some Branch Office Optimization Solutions treat each module of an application in the same fashion. For example, some solutions apply the same optimization techniques to all of SAP, while other solutions treat modules based on both the criticality and the characteristics of each module, and would apply different techniques to the individual SAP modules based on factors such as their business importance and latency sensitivity.

Disk vs. RAM-based Compression
Advanced compression solutions can be either disk or RAM-based. Disk-based systems typically can store as much as 1,000 times the volume of patterns in their dictionaries as compared with RAM-based systems, and those dictionaries can persist across power failures. While disks are more cost-effective than a RAM-based solution on a per-byte basis, given the size of these systems they do add to the overall cost and introduce additional points of failure to a solution. Standard techniques such as RAID can mitigate the risk associated with these points of failure. The data, however, is slower to access than it would be with the typical RAM-based implementations, although the performance gains of a disk-based system are likely to more than compensate for this extra delay.

Protocol Support
Some solutions are specifically designed to support a given protocol (e.g., TCP, UDP, HTTP, CIFS, MAPI, Microsoft Print Services) while other solutions support that protocol generically. In either case, the critical issue is how much of an improvement in the performance of that protocol the solution can cause in the type of environment in which the solution will be deployed. It is also important to understand whether the solution makes any modifications to the protocol that could cause unwanted side effects.

Security
The solution must be compatible with the current security environment. It must not, for instance, break firewall Access Control Lists (ACLs) by hiding TCP header information. In addition, the solution itself must not create any additional security vulnerabilities.

Ease of Deployment and Management
As part of deploying a WAN optimization solution, an appliance needs to be deployed in branch offices that will most likely not have any IT staff. As such, it is important that unskilled personnel can install the solution. Some solutions, especially cache-based or WAFS solutions, require that every file server be accessed during implementation. In addition, the greater the number of appliances deployed, the more important it is that they are easy to configure and manage.

Change Management
Since most networks experience periodic changes, such as the addition of new sites or new applications, it is important that the WAN optimization solution can adapt to these changes easily. It is preferable that the WAN optimization solution be able to adjust to these changes automatically.

Support for Meshed Traffic
A number of factors are causing a shift in the flow of WAN traffic away from a simple hub-and-spoke pattern to more of a meshed flow. In particular, many IT organizations are moving away from a hub-and-spoke network and are adopting WAN services such as MPLS and IP VPNs. If a company is making this transition, it is important that the WAN optimization solution it deploys can support meshed traffic flows along with a range of related features such as asymmetric routing.

Support for Real Time Traffic
Many companies have deployed real-time applications, and for these companies it is important that the WAN optimization solution can support real-time traffic. Traffic such as VoIP and live video typically can't be accelerated because it is real time and already highly compressed. Header compression might be helpful for VoIP traffic, and most real-time traffic will benefit from QoS.

Key WAN Characteristics: Loss and Out of Order Packets
As noted in Chapter 3, while there are significant advantages to MPLS and IP VPN services, there are drawbacks, one of which is high levels of packet loss and out of order packets, particularly on MPLS networks. This is due to routers being oversubscribed in a shared network, resulting in dropped or delayed packet delivery. The Manager stated that the packet loss on a good MPLS network typically ranges from 0.05% to 0.1%, but that it can reach 5% on some MPLS networks. He added that he sees packet loss of 0.5% on the typical IPSec VPN. The Consultant said that packet loss on an MPLS network is usually low, but that in 10% of the situations that he has been involved in, packet loss reached upwards of 1% on a continuous basis. He added that he often sees high levels of out of order packets, in part because some service providers have implemented queuing algorithms that give priority to small packets and hence cause packets to be received out of order.

The Data Replication Bottleneck
While packet loss and out of order packets are a nuisance for a network that supports typical data applications [13] like file transfer, email, web and VoIP, they are a very serious problem when performing data replication and backup across the WAN. Data applications can typically recover from lost or out of order packets by retransmitting the lost data. If packets are lost, TCP or other higher level protocols will cause a re-transmission of packets; performance might suffer, but the results are not catastrophic. If, however, too many packets (i.e., typically more than 3) are received out of order, it has the same effect on goodput [14] as does packet loss. Data replication applications do not have the same luxury. Both The Manager and The Consultant agreed that out of order packets are a major issue for data replication. The Consultant stated that since out of order packets cause re-transmissions, throughput can be decreased so significantly that the replication process cannot be completed in a reasonable timeframe, if at all.

[13] The phrase typical data application refers to applications that involve inquiries and responses where moderate amounts of information are transferred for brief periods of time. Examples include file transfer, email, web and VoIP. This is in contrast to a data replication application that transfers large amounts of information for a continuous period of time. The former involves thousands of short-lived sessions made up of a small number of packets typically sent over low bandwidth connections. The latter involves continuous sessions with many packets sent over high capacity WAN links.

[14] Goodput refers to the amount of data that is successfully transmitted. For example, if a thousand-bit packet is transmitted ten times in a second before it is successfully received, the throughput is 10,000 bits/second and the goodput is 1,000 bits/second.

The Impact of Loss and Out of Order Packets
The effect of packet loss on TCP has been widely analyzed [15]. Mathis, Semke, Mahdavi & Ott provide a simple formula that gives insight into the maximum TCP throughput on a single session when there is packet loss. That formula is:

Maximum throughput ≈ MSS / (RTT × √p)

where:
MSS: maximum segment size
RTT: round trip time
p: packet loss rate

The preceding equation shows that throughput decreases as either RTT or p increases. To exemplify the impact of packet loss, assume that MSS is 1,420 bytes, RTT is 100 ms, and p is 0.01%. Based on the formula, the maximum throughput is 1,420 Kbytes/second. If, however, the loss were to increase to 0.1%, the maximum throughput drops to 449 Kbytes/second. Figure 5.3 depicts the impact that packet loss has on the throughput of a single TCP stream with a maximum segment size of 1,420 bytes and varying values of RTT.

Figure 5.3: Impact of Packet Loss on Throughput (maximum single-stream TCP throughput in Mbps vs. packet loss probability, plotted for RTTs of 10 ms, 50 ms, and 100 ms)

One conclusion that can be drawn from Figure 5.3 is: Small amounts of packet loss can significantly reduce the maximum throughput of a single TCP session. More specifically, with a 1% packet loss and a round trip time of 50 ms or greater, the maximum throughput is roughly 3 megabits per second no matter how large the WAN link is. The Consultant stated that he thought Figure 5.3 overstated the TCP throughput and that in his experience, if you have a WAN link with an RTT of 100 ms and packet loss of 1%, "the throughput would be nil".

[15] The macroscopic behavior of the TCP congestion avoidance algorithm by Mathis, Semke, Mahdavi & Ott in Computer Communication Review, 27(3), July 1997.

Techniques for Coping with Loss and Out of Order Packets
The data in Figure 5.3 shows that while packet loss affects throughput for any TCP stream, it particularly affects throughput for high-speed streams, such as those associated with multi-media and data replication.
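Before turning to those techniques, the throughput arithmetic above can be reproduced with a few lines of Python. This uses the simplified form of the Mathis et al. formula with the constant factor (which is close to 1) omitted, so it is a back-of-the-envelope sketch rather than a precise model of TCP:

```python
import math

def max_tcp_throughput_bytes_per_sec(mss_bytes, rtt_sec, loss_rate):
    """Simplified Mathis et al. ceiling for one TCP stream:
    throughput <= MSS / (RTT * sqrt(p)), constant factor omitted."""
    return mss_bytes / (rtt_sec * math.sqrt(loss_rate))

# The handbook's example: MSS = 1,420 bytes, RTT = 100 ms.
# Loss of 0.01% and 0.1% reproduce the 1,420 and 449 KBytes/sec figures above.
for p in (0.0001, 0.001, 0.01):  # 0.01%, 0.1%, 1% loss
    bps = max_tcp_throughput_bytes_per_sec(1420, 0.100, p)
    print(f"loss {p:.2%}: about {bps / 1000:,.0f} KBytes/sec")
```

Note that link capacity never appears in the formula: once loss and latency are fixed, adding WAN bandwidth cannot raise the ceiling for a single TCP session.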
As a result, numerous techniques, such as Forward Error Correction (FEC) [16], have been developed to mitigate the impact of packet loss. FEC has long been used at the physical level to ensure error-free transmission with a minimum of re-transmissions. Recently many enterprises have begun to use FEC at the network layer to improve the performance of applications such as data replication, although the technique applies equally well to any application regardless of transport protocol.

The basic premise of FEC is that an additional error recovery packet is transmitted for every 'n' packets that are sent. The additional packet enables the network equipment at the receiving end to reconstitute one of the 'n' lost packets and hence negates the actual packet loss. The ability of the equipment at the receiving end to reconstitute the lost packets depends on how many packets were lost and how many extra packets were transmitted. In the case in which one extra packet is carried for every ten normal packets (1:10 FEC), a 1% packet loss can be reduced to less than 0.09%. If one extra packet is carried for every five normal packets (1:5 FEC), a 1% packet loss can be reduced to less than 0.04%.

To exemplify the impact of FEC, assume that the MSS is 1,420 bytes, RTT is 100 ms, and the packet loss is 0.1%. Transmitting a 10 Mbyte file without FEC would take a minimum of 22.1 seconds. Using a 1:10 FEC algorithm would reduce this to 2.4 seconds, and a 1:5 FEC algorithm would reduce it further still. The example demonstrates the value of FEC in a TCP environment.

FEC, however, introduces overhead which itself can reduce throughput. For example, if a WAN link is not experiencing packet loss, no extra packets should be transmitted. What is needed is a FEC algorithm that adapts to packet loss. When loss is detected, the algorithm should begin to carry extra packets and should increase the number of extra packets as the amount of loss increases.

[16] RFC 2354, Options for Repair of Streaming Media, http://www.rfc-archive.org/getrfc.php?rfc=2354

Transparency
Vendors and IT organizations often cite transparency as a key attribute of application accelerators. In the most general sense of the term, a completely transparent application accelerator solution is one that can be added to the existing network without requiring reconfiguration of any existing network elements or end systems, and without disrupting any services that are provided within the network between the pair of accelerators. As a result, an in-line accelerator deployment such as shown in Figure 5.4 is more generally transparent than an out-of-line deployment, where "optimizable" traffic is directed to the accelerator by configuring WCCP (the Web Cache Coordination Protocol) or PBR (policy-based routing) on the WAN router.

Figure 5.4: In-Line Deployment of Application Accelerators (Central Site: Switch - Accelerator Appliance - WAN Router; Intervening Network/WAN; Remote Site: WAN Router - Accelerator Appliance - Switch)
In most production networks, complete transparency is not achievable. Therefore, the question comes down to which application accelerator solution has the higher degree of transparency and therefore requires fewer workarounds and reconfiguration tasks. In general, if there is lack of transparency in the intervening network, the workaround is to move the interacting functionality upstream of the accelerator rather than downstream.

The key point here is that the degree of transparency depends on several factors:
• Deployment topology (in-line vs. out-of-line)
• Transparency-related attributes of the accelerator solution
• Nature of the intervening network
• Types of network monitoring and security devices
• Type of applications being optimized

Assuming that the deployment topology, security functionality and range of applications are determined by other considerations, the degree of transparency of an application accelerator solution will be determined primarily by the IP packet header content used in the traffic between the two appliances and how this interacts with the network elements, security components and configured services of the intervening network.

There are three approaches for addressing IP traffic between accelerator appliances that are being employed by various vendors:

Tunneling
A tunnel encapsulates the original packets in an additional IP header using the IP addresses of the accelerators in the enveloping packet. With tunnels, an overlay network consisting of static paths between pairs of accelerators must be configured.

Transparent Addressing
Here, the intervening network components continue to see the network addresses of the conversation end points. This means that the optimized packet addressing is identical to the addressing used if no optimization were being performed. This is accomplished by having the accelerators adopt the source/destination/port addressing of the conversation end points for each flow. In effect, the two appliances are spoofing any intermediate network components into thinking that they are the original source and destination end systems.

Actual Addressing
In this case, the accelerators have their own distinct network addresses, which means that the IP addresses of the originating conversation partners are replaced by the actual appliance addresses for the appliance-to-appliance hop. As a result, the intervening network components see the actual addresses of the acceleration appliances rather than the endpoint addresses.

In order to compare transparent addressing vs. actual addressing appliances based on the degree of transparency they offer, a network manager needs to consider the specific characteristics of his/her intervening network and how they would be affected by the differences in addressing. The following is a list of some of the intervening network elements and services that may be affected by the addressing difference:

Quality of Service (QoS) Classification
If packet classification takes place downstream of the accelerator (either in the WAN edge router or somewhere else in the intervening network) and the assigned traffic class depends at least partially on source and destination addresses, then actual addressing will require QoS reconfiguration while transparent addressing would not in most cases. Even with transparent addressing, reconfiguration would be required if the classification depends partially on application payload data (deep packet inspection or DPI) and the optimization process for applications has altered the payload (e.g., via compression). With a transparent addressing accelerator, the compressed traffic cannot be distinguished from the uncompressed bypass traffic. This can have an impact if the IT organization wants to give a different QoS priority level to traffic that is compressed and accelerated by the accelerator, compared to traffic that bypasses the accelerator.

Traffic Monitoring
If the probes monitoring WAN traffic are downstream of the accelerator, then actual addressing obscures the identity of the flow's end points because the probe sees all optimized traffic as originating and terminating at the accelerators. Visibility of the end system addresses (e.g., for identification of "top talkers") is obscured unless the probe is moved upstream or data is garnered from the accelerator. If transparent addressing is used, the traffic monitoring tools will continue to show network usage based on the originating end systems.

Access Control Lists (ACLs) and PBR
If ACLs have been deployed within intervening network devices to block certain traffic flows based on IP address information, they will continue to block the flows between accelerators regardless of the addressing scheme. This is true because the initial setup of the network session is always based on originating system addresses and would be blocked regardless of the addressing scheme for optimized traffic. For ACLs or PBR that enable or redirect traffic based on the IP addressing, some reconfiguration or workarounds will be necessary for actual addressing, but not for transparent addressing. If the PBR decision is partially based on DPI and the optimization process has altered the payload, reconfiguration may be required for both actual and transparent addressing.

Security Devices
If firewalls, IPS, or IDS systems are present within the intervening network, their policies based on the IP address 5-tuple (source and destination addresses, source and destination port numbers, and protocol) may need to be reconfigured to pass traffic between accelerators using actual addressing. If any of these security devices use policies based on DPI to examine payloads for malicious signatures, then this may not continue to work correctly even with transparent addressing if the application optimization has altered the payload sufficiently.

Asymmetric Routing
Asymmetric routing occurs when the packets flowing between end systems A and B use one path for traffic from A to B and another path for traffic from B to A. With asymmetric routing there are some conditions where a transparently addressed optimized packet would arrive at the destination end system instead of at the accelerator using the same address. If this problem occurs, resolution may require configuring WCCP on the WAN edge router. Because there is no duplication of addresses, accelerators using actual addressing are transparent to asymmetric routing.

From this analysis it is quite clear that the degree of transparency of a WAN optimization solution depends largely on the characteristics and services of the existing network between co-operating pairs of accelerators. For example, if most network services are performed at the edge of the network and the WAN core network is comparatively "dumb", adding accelerators of either type (actual or transparent addressing) will be likely to have network-wide transparency.

Application Front Ends (AFEs)

Background
As previously mentioned, an historical precedent exists to the current generation of AFEs (a.k.a. ADCs). That precedent is the Front End Processor (FEP) that was introduced in the late 1960s and was developed and deployed in order to support mainframe computing.
From a more contemporary perspective, the current generation of AFEs evolved from the earlier generations of Server Load Balancers (SLBs) that were deployed in front of server farms. While an AFE still functions as a SLB, an AFE provides more sophisticated functionality than a SLB does. In particular, the AFE has assumed, and will most likely continue to assume, a wider range of more sophisticated roles that enhance server efficiency and provide asymmetrical functionality to accelerate the delivery of applications from the data center to individual remote users. Among the functions that users can expect from a modern AFE are the following:

• Traditional SLB
AFEs can provide traditional load balancing across local servers or among geographically dispersed data centers based on Layer 4 through Layer 7 intelligence. SLB functionality maximizes the efficiency and availability of servers through intelligent allocation of application requests to the most appropriate server.

• SSL Offload
One of the primary new roles played by an AFE is to offload CPU-intensive tasks from data center servers. A prime example of this is SSL offload, where the AFE terminates the SSL session by assuming the role of an SSL Proxy for the servers. SSL offload frees up server resources, allowing existing servers to process more requests for content and handle more transactions. As a result, SSL offload can provide a significant increase in the performance of secure intranet or Internet Web sites.

• XML Offload
Another function that can be provided by the AFE (as well as by standalone devices) is to offload XML processing from the servers by serving as an XML gateway. XML is a verbose protocol that is CPU-intensive, and Web services and Web 2.0 applications are XML based. As was described in Chapter 3, one of the roles of an XML gateway is to offload XML processing from the general-purpose servers and to perform this processing on hardware that was purpose-built for this task. Another role of an XML gateway is to provide additional security functionality to protect against the kinds of attacks that were described in Chapter 3.

• Application Firewalls
AFEs may also provide an additional layer of security for Web applications by incorporating application firewall functionality. As described in the section of the handbook that deals with Next Generation Firewalls, application firewalls are typically based on Deep Packet Inspection (DPI), coupled with session awareness and behavioral models of normal application interchange. Application Firewalls are focused on blocking application-level attacks that are becoming increasingly prevalent. For example, an application firewall would be able to detect and block Web sessions that violate rules defining the normal behavior of HTTP applications and HTML programming. Application Firewalls complement traditional perimeter firewalls that are based on recognition of known network-level attack signatures and patterns. Application Firewalls also have the advantage of providing a measure of protection against zero day exploits by blocking the sessions of clients whose behaviors are outside the bounds of admissibility.
Features
AFEs support a wide range of functionality, including TCP optimization, HTTP multiplexing, Web compression, image compression, caching, as well as bandwidth management and traffic shaping.

• Asymmetrical Application Acceleration
AFEs can accelerate the performance of applications delivered over the WAN by implementing optimization techniques such as reverse caching, asymmetrical TCP optimization, and compression.

Asymmetrical TCP optimization is based on the AFE serving as a proxy for TCP processing. TCP proxy functionality is designed to deal with the complexity associated with the fact that each object on a Web page requires its own short-lived TCP connection. Processing all of these connections can consume an inordinate amount of the server's CPU resources. Acting as a proxy, the AFE terminates the client-side TCP sessions and multiplexes the numerous short-lived network sessions initiated as client-side object requests into a single longer-lived session between the AFE and the Web servers, minimizing the server overhead for fine-grained TCP session management.

With reverse caching, new user requests for static or dynamic Web objects can often be delivered from the cache rather than having to be regenerated by the servers. Reverse caching therefore improves user response time and minimizes the loading on Web servers.

The AFE can also offload Web servers by performing compute-intensive HTTP compression operations. HTTP compression is a capability built into both Web servers and Web browsers, and it is asymmetrical in the sense that there is no requirement for additional client-side appliances or technology. Moving HTTP compression from the Web server to the AFE is transparent to the client and so requires no client modifications.

• Response Time Monitoring
The application and session intelligence of the AFE also presents an opportunity to provide real-time and historical monitoring and reporting of the response time experienced by end users accessing Web applications. The AFE can provide the granularity to track performance for individual Web pages and to decompose the overall response time into client-side delay, network delay, AFE delay, and server-side delay. The resulting data can be used to support SLAs for guaranteed user response times, guide remedial action, and plan additional capacity to maintain service levels.

Selection Criteria
The AFE evaluation criteria are listed in Table 5.4. As was the case with Branch Office Optimization Solutions, this list is intended as a fairly complete compilation of possible criteria; a given organization or enterprise might apply only a subset of these criteria for a given purchase decision.

Table 5.4: Criteria for Evaluating AFEs

  Criterion                           Weight (Wi)   Score for Solution "A" (Ai)   Score for Solution "B" (Bi)
  Features
  Performance
  Scalability
  Transparency and Integration
  Solution Architecture
  Functional Integration
  Virtualization
  Security
  Application Availability
  Cost-Effectiveness
  Ease of Deployment and Management
  Business Intelligence
  Total Score                                       Sum of WiAi                   Sum of WiBi

Each of the criteria is described below.
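Table 5.4 implies a simple weighted-sum evaluation: each solution's total score is the sum of Wi times Ai over the criteria. A minimal sketch of that arithmetic, using made-up weights and scores purely for illustration:

```python
# Hypothetical weights (Wi) and per-criterion scores for two candidate
# solutions, following the Total Score = sum of Wi*Ai structure of Table 5.4.
criteria = {
    # criterion: (weight, score_A, score_B) -- illustrative values only
    "Performance":        (5, 8, 7),
    "Scalability":        (4, 6, 8),
    "Security":           (5, 7, 8),
    "Cost-Effectiveness": (3, 9, 5),
}

def total_score(which: int) -> int:
    """Sum of weight * score for the chosen solution column (1 = A, 2 = B)."""
    return sum(row[0] * row[which] for row in criteria.values())

print("Solution A:", total_score(1))  # → Solution A: 126
print("Solution B:", total_score(2))  # → Solution B: 122
```

An organization would substitute its own weights and per-criterion scores, and would typically score all twelve criteria; the arithmetic is the only point being illustrated.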
Performance
Performance is an important criterion for any piece of networking equipment, but it is critical for a device such as an AFE. Because data centers are central points of aggregation, the AFE needs to be able to support the extremely high volumes of traffic transmitted to and from the servers in the data center. A simple definition of performance is how many bits per second the device can support. While this is extremely important, in the case of AFEs other key measures of performance include how many Layer 4 connections can be supported, as well as how many Layer 4 setups and teardowns can be supported. Performance requirements for accessing data center applications and data resources are usually characterized in terms of both the aggregate throughput of the AFE and the number of simultaneous application sessions that can be supported. A related consideration is how device performance is affected as additional functionality is enabled. Third party tests of a solution can be helpful. It is critical, however, to quantify the kind of performance gains that the solution will provide in the particular application environment where it will be installed. As part of this quantification, it is important to identify whether the performance of the solution degrades either as additional functionality within the solution is activated or as changes are made to the application mix within the data center.

Scalability
Scalability of an AFE solution implies the availability of a range of products that span the performance and cost requirements of a variety of data center environments. In particular, if the organization intends the AFE to be able to support additional optimization functionality over time, it is important to determine if the hardware and software architecture can support new functionality without an unacceptable loss of performance and without unacceptable downtime.

Transparency and Integration
Transparency is an important criterion for any piece of networking equipment. It is very important to be able to deploy an AFE solution and not break anything, such as routing, security, or QoS, and the solution should not make troubleshooting any more difficult. The solution should also be as transparent as possible relative to both the existing server configurations and the existing security domains. However, unlike branch office optimization solutions, which are proprietary, AFEs are standards based. As such, AFEs will tend to be somewhat more transparent than other classes of networking equipment. The AFE also has to be able to easily integrate with other components of the data center, such as the firewalls and other appliances that may be deployed to provide application services. In some data centers, it may be important to integrate the Layer 2 and Layer 3 access switches with the AFE and firewalls so that all application intelligence (security, application acceleration, and server offloading) is applied at a single point in the data center network.

Solution Architecture
Taken together, scalability and solution architecture identify the ability of the solution to support a range of implementations and to extend to support additional functionality.

Functional Integration
In many data center environments there are programs in progress to reduce overall complexity by consolidating both the servers and the network infrastructure. An AFE solution can contribute significantly to network consolidation by supporting a wide range of application-aware functions that transcend basic server load balancing and content switching. Extensive functional integration reduces the complexity of the network by minimizing the number of separate boxes and user interfaces that must be navigated by data center managers and administrators. Reduced complexity generally translates to lower TCO and higher availability.

Virtualization
As is discussed in Chapter 9, virtualization is becoming a key technology for achieving data center consolidation and the related benefits. Prior to virtualization, a common practice was to run only one application per server to maximize operating system stability. Not only was the extra hardware this approach required expensive, but it also necessitated additional real estate and power, further increasing the cost. Server virtualization supports data center consolidation by allowing a number of applications running on separate virtual machines to share a single physical server. Virtualization also adds significantly to the flexibility of the data center by allowing applications to be easily moved from one physical server to another.

AFEs can also be virtualized by partitioning a single physical AFE into a number of logical AFEs or AFE contexts. Each logical AFE can be configured individually to meet the server load-balancing, acceleration, and security requirements of a single application or a cluster of applications. As a result, each virtualized AFE can consolidate the functionality of a number of physical AFEs dedicated to the support of single applications. In addition, with a virtual AFE mapped to a virtual machine, the AFE would not need to be reconfigured when an application is moved or automatically fails over to a new physical machine. The benefits of virtualization include lowering TCO through the consolidation of AFE physical devices, plus the associated savings in management costs and power and cooling costs, as well as higher availability when faced with a failover.

Security
The solution must be compatible with the current security environment, such as firewalls as well as IDS and IPS appliances. Since the AFE is in line with the Web servers and other application servers, the solution itself must not create any additional security vulnerabilities, while also allowing the configuration of application-specific security features that complement general purpose security measures. Security functionality that IT organizations should look for in an AFE includes protection against denial of service attacks, integrated intrusion protection, protection against SSL attacks, and sophisticated reporting.

Application Availability
The availability of enterprise applications is typically a very high priority. As previously mentioned, a traditional approach to defining application availability is to make sure that the AFE is capable of supporting redundant, high availability configurations that feature automated fail-over among the redundant devices. While this clearly is important, there are other dimensions to what is meant by application availability. For example, an architecture that enables scalability through the use of software license upgrades tends to minimize the application downtime that is associated with hardware-centric capacity upgrades.

Cost Effectiveness
This criterion is related to scalability. In particular, it is important to understand what the initial solution costs, and it is also important to understand how the cost of the solution changes as the scope and scale of the deployment increases.

Ease of Deployment and Management
As with any component of the network or the data center, an AFE solution should be relatively easy to deploy and manage. It should also be relatively easy to deploy and manage new applications, and so ease of configuration management is a particularly important consideration where a wide diversity of applications is supported by the data center.
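The notion of partitioning a single physical AFE into independently configured logical contexts can be sketched as follows. The context names, configuration fields, and addresses are hypothetical; a real AFE exposes this through its own configuration model rather than application code.

```python
from dataclasses import dataclass, field

@dataclass
class AfeContext:
    """One logical AFE: its own load-balancing, acceleration, and security policy."""
    name: str
    server_pool: list = field(default_factory=list)
    lb_method: str = "round_robin"
    compression: bool = False
    waf_enabled: bool = False

class PhysicalAfe:
    """A single physical device partitioned into isolated logical contexts."""
    def __init__(self):
        self.contexts = {}

    def add_context(self, ctx: AfeContext):
        self.contexts[ctx.name] = ctx

afe = PhysicalAfe()
afe.add_context(AfeContext("crm", server_pool=["10.0.1.10", "10.0.1.11"],
                           compression=True))
afe.add_context(AfeContext("erp", server_pool=["10.0.2.10"],
                           lb_method="least_connections", waf_enabled=True))

# Each context is configured individually; changing one does not touch the other.
afe.contexts["crm"].server_pool.append("10.0.1.12")
print(len(afe.contexts["crm"].server_pool), afe.contexts["erp"].server_pool)
```

The isolation shown here is what allows one device to stand in for several single-application AFEs, each with its own load-balancing and security settings.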
Business Intelligence
In addition to traditional network functionality, some AFEs also provide data that can be used to provide business level functionality. In particular, data gathered by an AFE can feed security information and event monitoring, business intelligence, fraud management, business process management, and Web analytics.

Managed Service Providers

Introduction
As previously noted, virtually all organizations are under increasing pressure to ensure acceptable performance for networked applications. Many IT organizations are responding to this challenge by enhancing their understanding of application performance issues and then implementing their own application delivery solutions based on the products discussed in the preceding chapter. Other IT organizations prefer to outsource all or part of application delivery to a Managed Service Provider (MSP). There is a wide range of potential benefits that may be gained from outsourcing to an Application Delivery MSP (ADMSP), including:

Reduce Capital Expenditure
In cases where the ADMSP provides the equipment as CPE bundled with the service, the need for capital expenditure to deploy application optimization solutions can be avoided.

Lower the Total Cost of Ownership (TCO)
In addition to reducing capital expenditure, managed application delivery services can also reduce the operational expense (OPEX) related to the technical training of existing employees in application optimization, or the hiring of additional personnel with this expertise. In terms of OPEX, the customer of managed services can also benefit from the lower cost structure of ADMSP operations, which can leverage economies of scale by supplying the same type of service to numerous customers.

Leverage the MSP's Expertise
In most cases, ADMSPs will have broader and deeper application-oriented technical expertise than an enterprise IT organization can afford to accumulate. This higher level of expertise can result in full exploitation of all available technologies and in optimal service implementations and configurations that can increase performance, improve reliability, and further reduce TCO. The ability to leverage the MSP's expertise is a factor that could cause an IT organization to use an MSP for a variety of services, including the provision of basic transport services. This criterion, however, is particularly important in the case of application delivery because the typical IT organization does not have personnel who have a thorough understanding of both applications and networks, as well as the interaction between them.

Leverage the MSP's Management Processes
The ADMSP should also be able to leverage sophisticated processes in all phases of application delivery, including application assessment, planning, optimization, management, and control. In addition, the ADMSP's scale of operations justifies its investment in highly automated management tools and more sophisticated management processes that can greatly enhance the productivity of operational staff. The efficiency of all these processes can further reduce the OPEX cost component underlying the service. Similar to the discussion in the preceding paragraph, the ability to leverage the MSP's management processes is a factor that could cause an IT organization to use an MSP for a variety of services. This criterion, however, is particularly important in the case of application delivery because, as will be shown in the next Chapter, ineffective processes are one of the most significant impediments to successful application delivery.
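The CapEx versus OPEX trade-off described above can be made concrete with a toy cost model. Every number below is hypothetical; only the structure of the comparison, up-front equipment spend plus recurring operational spend versus a bundled recurring fee, reflects the discussion.

```python
def three_year_cost(capex: int, annual_opex: int, years: int = 3) -> int:
    """Total cost of ownership over a fixed horizon (no discounting)."""
    return capex + annual_opex * years

# Hypothetical DIY deployment: buy WOC/AFE equipment, train and staff in-house.
diy = three_year_cost(capex=250_000, annual_opex=120_000)

# Hypothetical managed service: no equipment purchase, bundled annual fee.
managed = three_year_cost(capex=0, annual_opex=180_000)

print(diy, managed)  # → 610000 540000
```

A real comparison would also account for discounting, contract terms, and the staffing and training costs the text mentions; the model only shows where each benefit enters the arithmetic.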
Timely Deployment of Technology
Incorporating a complex application delivery solution in the enterprise network can be quite time consuming, especially where a significant amount of training or hiring is required. With an ADMSP, the learning curve is essentially eliminated, allowing the solution to be deployed in a much more timely fashion.

Leverage the MSP's Technology
Because of economies of scale, ADMSPs can take full advantage of the most advanced technologies in building their facilities to support service delivery. This allows the customer of managed application delivery services to gain the benefits of technologies and facilities that are beyond the reach of the typical IT budget.

Better Strategic Focus
The availability of managed application delivery services can free up enterprise IT staff, facilitating the strategic alignment of in-house IT resources with the enterprise business objectives. In addition, in-house IT can focus on a smaller set of technologies and in-house services that are deemed to be of greater strategic value to the business.

Enhanced Flexibility
Managed application delivery services also provide a degree of flexibility that allows the enterprise to adapt rapidly to changes in the business environment resulting from competition or mergers/acquisitions. In addition, with a managed service, the enterprise may be able to avoid being locked in to a particular equipment vendor due to large sunk costs in expertise and equipment.

Interest in Managed Services
Kubernan asked 200 IT professionals about their preference in a DIY approach vs. using a managed service provider for a number of tasks related to application delivery. Table 6.1 contains their responses.

Table 6.1: Preference for Performing Planning Functions

  Function                                              Perform it Themselves   Performed by 3rd Party   No Preference
  Profiling an application prior to deploying it        79.5%                   11.8%                    9.0%
  Baselining the performance of the network             79.6%                   7.5%                     13.1%
  Baselining the performance of key applications        76.6%                   13.1%                    10.6%
  Assessing the infrastructure prior to deploying a
  key new application such as VoIP                      75.3%                   15.9%                    9.0%

One conclusion that can be drawn from Table 6.1 is that between 75 and 80 percent of IT organizations prefer to perform the indicated planning functionality themselves. Conversely, another conclusion that can be drawn is that between 20 and 25 percent of IT organizations either prefer to have the indicated functionality performed by a third party or are receptive to that concept.

Different Types of Managed Application Delivery Services
Currently, there are two primary categories of managed application delivery service environments:
1. Site-based services comprised of managed WAN Optimization Controllers (WOCs) installed at participating enterprise sites
2. Internet-based services that deal with the acceleration of applications (e.g., web access and SSL VPN access) that traverse the Internet
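The conclusions drawn from Table 6.1 are simple column arithmetic: the share that is at least receptive to a third party is the sum of the "performed by 3rd party" and "no preference" columns. A sketch using illustrative values consistent with the 75 to 80 percent and 20 to 25 percent ranges cited in the text:

```python
# Rows of (perform it themselves, performed by 3rd party, no preference).
# Values are illustrative, chosen to fall in the ranges the text reports.
rows = {
    "Profiling an application prior to deploying it": (79.5, 11.8, 9.0),
    "Baselining the performance of the network":      (79.6, 7.5, 13.1),
    "Baselining the performance of key applications": (76.6, 13.1, 10.6),
    "Assessing the infrastructure for a new app":     (75.3, 15.9, 9.0),
}

for function, (diy, third_party, no_pref) in rows.items():
    # "Receptive" = prefers a 3rd party, or has no preference.
    print(f"{function}: {third_party + no_pref:.1f}% receptive")
```

For every row the receptive share lands between 20 and 25 percent, which is exactly the second conclusion the text draws from the table.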
1. Site-based Services
These services are based on the deployment of managed WOC CPE at the central data center and at each remote site participating in the application optimization project, as illustrated in Figure 6.1. Site-based services are generally based on MSP deployment of WOCs that are commercially available from the vendors that are addressing the enterprise market for application acceleration/optimization. The application optimization service may be offered as an optional add-on to a WAN service, or as a standalone service that can run over WAN services provided by a third party. Where the application delivery service is bundled with a managed router and WAN service, both the WOC and the WAN router would be deployed and managed by the same MSP.

Figure 6.1: Site-Based Application Delivery Services

The WAN depicted in the figure is typically a private leased line network or a VPN based on Frame Relay, ATM, or MPLS. As noted in Chapter 3, many IT organizations are moving away from a hub and spoke network based on frame relay and ATM and are adopting WAN services such as MPLS. Like any WAN service, MPLS has advantages and disadvantages. One of the advantages of MPLS is that it is widely deployed. One of the disadvantages of MPLS is that, similar to Frame Relay and ATM, it tends to be expensive, and there tends to be a long lead-time associated with deploying new MPLS services. An area where the use of MPLS is inappropriate is the support of home and nomadic workers. While it is possible to use MPLS to support home workers, it is typically prohibitively expensive. It is not possible to use MPLS to support nomadic workers who need connectivity from virtually anywhere, such as a hotel room, a coffee shop, or an airport.

2. Internet-based Services
As noted in Chapter 3, because the Internet supports meshed traffic flows, it is being used by many enterprises as an alternative to legacy WAN services such as frame relay and ATM. A major weakness of the Internet, however, is that it cannot provide low, predictable delay. Some MSPs have deployed services built on top of the Internet that are intended to mitigate this weakness. These services are focused on optimizing the delivery of applications over the Internet, in part to allow the Internet to be exploited as a lower cost WAN alternative. Internet-based services are based primarily on proprietary application acceleration and WAN optimization servers located at MSP points of presence (PoPs) distributed across the Internet, and they do not require that remote sites accessing the services have any special hardware or software installed.

Accelerating Web Applications
Figure 6.2 shows how an Internet-based service focused on the acceleration of Web applications is typically delivered. As noted above, there is no requirement for any remote site CPE or additional client software beyond the conventional Web browser. The remote site connects to a nearby ADMSP PoP, typically using a broadband connection. The AFE shown in the figure is performing firewall, load balancing, and similar functions that may or may not be included in the MSP offering. The servers deployed across the MSP's geographically dispersed PoPs form an intelligent distributed processing infrastructure and perform a variety of AFE/WOC functions to accelerate the web traffic and optimize traffic flows through the Internet.

Figure 6.2: Internet-Based Web Application Delivery Services

The techniques that are available to accelerate web traffic include:
• Dynamic mapping of the remote site to the optimum local PoP based on PoP server utilization and real-time data on Internet traffic loading.
• Caching Web content at the local PoP, with intelligent pre-fetch of data from the central or origin web servers. Local caching is much more effective than reverse caching at a central site because multiple Internet round-trip delays are avoided for every local cache hit.
• Compression of Web content transferred between the ingress and egress PoPs.
• TCP transport optimization among PoPs, mitigating the inefficiencies stemming from the slow start algorithm and the retransmission behavior of TCP. The beneficial impact of TCP optimization is greatly magnified when the delay and packet loss of the Internet are minimized with Dynamic Route Optimization.
• Dynamic Route Optimization to minimize latency and packet loss. The determination of the best route through the Internet requires that the servers at each PoP gather dynamic information on the performance characteristics of alternate paths through the Internet, including the delay and packet loss characteristics of every PoP-to-PoP route. This allows an optimum end-to-end route to be constructed from the highest performing PoP-to-PoP intermediary route segments. In comparison, the default Internet routes determined by BGP do not take into account either delay or packet loss and are therefore likely to yield significantly lower performance.

For application specific functions, these services typically require the installation of managed CPE Application Acceleration servers at the central site that interoperate with the Application Acceleration servers in the MSP PoPs. The central site CPE servers can also be used to extend the Dynamic Route Optimization and Transport Optimization functions to include the central site as well as the MSP PoPs. This can be especially beneficial where the central site maintains diverse connectivity to one or more ISPs.

Accelerating any IP-based Application
Another category of Internet-based service provides acceleration of any application that runs over the Internet Protocol (IP). As shown in Figure 6.3, the Application Delivery servers in the local PoPs and the Application Delivery servers at the central site CPE form pairs of cooperating WOCs, analogous to the pair of enterprise WOCs depicted in Figure 6.1. As is the case with the Internet-based web services, there is typically no requirement for remote site CPE or special client software. The application acceleration servers deployed for IP Application Delivery services can potentially exploit all of the acceleration techniques described above for the acceleration of web-based applications, but they can also support a broad range of additional application-specific WOC functions because of the presence of the central site Application Acceleration servers.
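Dynamic Route Optimization is, at bottom, a shortest-path computation over measured PoP-to-PoP metrics, rather than BGP's policy-driven route selection. A minimal sketch, with hypothetical PoPs and an arbitrary composite delay-plus-loss cost:

```python
import heapq

# Hypothetical PoP-to-PoP measurements: (delay_ms, loss_percent).
links = {
    ("NYC", "LON"): (80, 1.0),
    ("NYC", "FRA"): (95, 0.2),
    ("LON", "FRA"): (15, 0.1),
    ("FRA", "SIN"): (160, 0.3),
    ("LON", "SIN"): (180, 2.0),
}

def cost(delay_ms, loss_pct, loss_penalty=50.0):
    # Composite metric: loss is weighted heavily because TCP throughput
    # degrades sharply with packet loss. The penalty factor is arbitrary.
    return delay_ms + loss_penalty * loss_pct

def best_route(src, dst):
    """Dijkstra over the measured PoP mesh (treated as undirected)."""
    graph = {}
    for (a, b), (d, l) in links.items():
        graph.setdefault(a, []).append((b, cost(d, l)))
        graph.setdefault(b, []).append((a, cost(d, l)))
    heap = [(0.0, src, [src])]
    seen = set()
    while heap:
        total, node, path = heapq.heappop(heap)
        if node == dst:
            return path, total
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (total + c, nxt, path + [nxt]))
    return None, float("inf")

path, total = best_route("NYC", "SIN")
print(path, round(total, 1))  # → ['NYC', 'FRA', 'SIN'] 280.0
```

Default BGP forwarding ignores the measured loss entirely; here the lossier NYC-LON link is priced accordingly, so the end-to-end route is assembled from the best-performing intermediary segments, as the text describes.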
The central site Application Acceleration servers can also perform AFE functions, such as SSL processing, to support secure access to enterprise applications.

Figure 6.3: Internet-Based IP Application Delivery Services

Selection Criteria
The beginning of this Chapter listed a number of benefits that an IT organization may gain from using an MSP for application delivery. These benefits are criteria that IT organizations can use as part of their evaluation of ADMSPs. For example, IT organizations should evaluate the degree to which using a particular ADMSP would allow them to lower their total cost of ownership or leverage that ADMSP's management processes.

The choice between a site-based application delivery service and an Internet-based application delivery service is in part an architecture decision. As noted, site-based application delivery services require CPE at all of the customer sites. Internet-based application delivery services do not require CPE at the remote sites and may or may not require CPE at the central site; instead, they implement optimization functionality within the network. The choice between the two is also impacted by the ability of the Internet-based application delivery service to provide acceptable performance.

Independent of whether an IT organization is evaluating a site-based service or an Internet-based service, they should consider the following criteria:
• Is the MSP offering a turnkey solution with simple pricing?
• Does the MSP provide network and application performance monitoring?
• Does the MSP provide a simple to understand management dashboard?
• What functionality does the MSP have to troubleshoot problems in both a proactive and a reactive fashion?
• What professional services (i.e., assessment, design and planning, performance analysis and optimization, implementation) are available?
• What technologies are included as part of the service?
• What is the impact of these technologies on network and application performance?
• Does the MSP offer application level SLAs?
• What is the scope of the service? Does it include application management? Server management?
• Is it possible to deploy a site-based application delivery service and not deploy WAN services from the same supplier?

Management

Introduction
The primary management tasks associated with application delivery are to:
• Discover the applications running over the network and identify how they are being used.
• Gather the appropriate management data on the performance of the applications and the infrastructure that supports them.
• Provide end-to-end visibility into the ongoing performance of the applications and the infrastructure.
• Gain visibility into the operational architecture and dynamic behavior of the network.
• Identify the sources of delay in the performance of the applications and the infrastructure.
• Automatically identify performance issues and resolve them.

The Organizational Dynamic
As Chapter 2 mentioned, the end user, and not the IT organization, usually first finds application-performance issues. Kubernan asked more than 300 IT professionals: "If the performance of one of your company's key applications is beginning to degrade, who notices it first? The end user or the IT organization?" Three-quarters of the survey respondents indicated that it was the end user. IT organizations will not be considered successful with application delivery as long as the end user, and not the IT organization, first notices application degradation.

When an application experiences degradation, virtually any component of IT could be the source of the problem. The Consulting Architect commented on this dynamic within his company. He stated that, once a problem has been reported, identifying the root cause of the problem bounces around within the IT organization, and that "It's always assumed to be the network. Most of my job is defending the network."

As part of that survey, Kubernan also asked the respondents to indicate which component of IT was the biggest cause of application degradation. Figure 7.1 summarizes their answers. In Figure 7.1, the answer "shared equally" means that multiple components of IT are equally likely to cause application degradation.

Figure 7.1: Causes of Application Degradation (relative frequency with which the network, storage, servers, middleware, the application itself, and "shared equally" were cited)

The data in Figure 7.1 speaks to the technical complexity associated with managing application performance.

Kubernan asked several hundred IT professionals to identify which organization or organizations has responsibility for the ongoing performance of applications once they are in production. Table 7.1 contains their answers.

Table 7.1: Organization Responsible for Application Performance

  Group                                       Percentage of Respondents
  Network Group – including the NOC           64.9%
  Application development group               45.5%
  Server group                                48.6%
  Storage group                               17.9%
  Application performance-management group    24.3%
  Other                                       12.9%
  No group                                    3.6%

Kubernan recently asked over 200 IT professionals: "How would you characterize the current relationship between your company's application development organization and the network organization?" Their responses are depicted in Table 7.2.

Table 7.2: The Relationship between IT Groups

  Response                  Percentage of Respondents
  Highly adversarial        0.9%
  Moderately adversarial    6.1%
  Slightly adversarial      18.9%
  Neutral                   13.3%
  Slightly cooperative      20.1%
  Moderately cooperative    33.2%
  Highly cooperative        7.0%

The ASP Architect provided insight into the challenges of determining the source of an application-performance issue.
He stated, "We used to have a real problem with identifying performance problems. We would have to run around with sniffers and other less friendly tools to troubleshoot problems. The finger pointing was often pretty bad." He went on to say that, to do a better job of identifying performance problems, the IT organization developed some of its own tools, which are now being used by the traditional IT infrastructure groups as well as by some of the application teams. The reports generated by those tools helped to develop credibility for the networking organization with the applications-development organization.

The data in Table 7.2 speaks to the organizational dynamic that is associated with managing application performance. In roughly twenty-five percent of companies there is an adversarial relationship between the applications development groups and the network organization. Taken together with the data in Figure 7.1, managing application performance clearly is complex, both technically and organizationally. To be successful with application delivery, IT organizations need tools and processes that can identify the root cause of application degradation and which are accepted as valid by the entire IT organization.

The good news is that most IT organizations recognize the importance of managing application performance. Research conducted by Kubernan indicates that in only 2% of IT organizations is managing application performance losing importance; in slightly over half of the IT organizations it is gaining in importance, and it is keeping about the same importance in the rest of the IT organizations.

In order to put the technical and organizational complexity that is associated with application delivery into context, Kubernan asked 200 IT professionals: "When an application is degrading, how difficult is it for you to identify the root cause of the degradation?" An answer of neutral means that identifying the root cause of application degradation is as difficult as identifying the root cause of a network outage. Their responses are contained in Table 7.3.

Table 7.3: The Difficulty of Identifying the Cause of Application Degradation

  Response                  Percentage of Respondents
  Extremely Easy: 1         1.6%
  2                         6.6%
  3                         12.2%
  Neutral: 4                23.0%
  5                         23.4%
  6                         25.9%
  Extremely Difficult: 7    8.0%

The data in Table 7.3 indicates that identifying the root cause of application degradation is significantly more difficult than identifying the root cause of a network outage.

The Process Barriers
Kubernan asked hundreds of IT professionals if their companies have a formalized set of processes for identifying and resolving application degradation. Table 7.4 contains their answers.

Table 7.4: Existence of Formalized Processes

  Response                                                       Percentage of Companies
  Yes, and we have had these processes for a while               39.2%
  Yes, and we have recently developed these processes            22.2%
  No, but we are in the process of developing these processes    26.0%
  No                                                             7.6%
  Other                                                          5.0%

The data in Table 7.4 indicates that the vast majority of IT organizations either have formalized processes for identifying and resolving application degradation or are in the process of developing them; organizations either currently have such processes or soon will. Due to the dynamic nature of IT, however, these processes are, in many cases, inadequate.

Kubernan gave the same set of IT professionals a set of possible answers and asked them to choose the two most significant impediments to effective application delivery. Table 7.5 shows the answers that received the highest percentage of responses.

Table 7.5: Impediments to Effective Application Delivery

  Answer                                                              Percentage of Respondents
  Our processes are inadequate                                        33.6%
  The difficulty in explaining the causes of application
  degradation and getting any real buy-in                             31.6%
  Our tools are inadequate                                            24.4%
  The application development group and the rest of IT
  have adversarial relations                                          13.4%

The data in Table 7.5 indicates that three out of the top four impediments to effective application delivery have little to do with technology. Organizational discord and ineffective processes are at least as much of an impediment to the successful management of application performance as are technology and tools. The data in this table also provides additional insight into the data in Table 7.4: in many cases, the processes that organizations do have are inadequate. The next Chapter will discuss the interest on the part of IT organizations in leveraging ITIL (IT Infrastructure Library) to develop more effective IT processes.

The ASP Architect stated that the infrastructure component of the IT organization has worked hard to improve its processes in general, and its communications with the business units in particular. These improvements have greatly enhanced the reputation of the infrastructure organization, both within IT and between the infrastructure organization and the company's business units. He pointed out that the infrastructure organization is now ISO certified and is adopting an ITIL model for problem tracking. It has reached the point that the applications-development groups have seen the benefits and are working, with the help of the infrastructure organization, to also become ISO certified.

Discovery
Chapter 4 of this handbook commented on the importance of identifying which applications are running on the network as part of performing a pre-deployment assessment. Due to the dynamic nature of IT, it is also important to identify which applications are running on the network on an ongoing basis. Chapter 3 mentioned one reason why this is important: successful application delivery requires identifying and controlling the recreational applications that are running on the network.
The Port 80 Black Hole

As noted, identifying the applications that are running on a network is a critical part of managing application performance. Unfortunately, there are many applications whose behavior makes this a difficult task; in particular, those that use port hopping to avoid detection.

In IP networks, TCP and UDP ports are endpoints to logical connections and provide the multiplexing mechanism that allows multiple applications to share a single connection to the IP network. Port numbers range from 0 to 65535; the ports that are numbered from 0 to 1023 are reserved for privileged system-level services and are designated as well-known ports. As described in the IANA (Internet Assigned Numbers Authority) Port Number document (www.iana.org/assignments/port-numbers), a well-known port serves as a contact point for a client to access a particular service over the network. For example, port 80 is the well-known port for HTTP data exchange and port 443 is the well-known port for secure HTTP exchanges via HTTPS.

Since servers listen to port 80 expecting to receive data from Web clients, a firewall can’t block port 80 without eliminating much of the traffic a business may depend on. Taking advantage of this fact, many applications will port-hop to port 80 when their normally assigned ports are blocked by a firewall. This behavior creates what will be referred to as the port 80 black hole. Lack of visibility into the traffic that transits port 80 is a major vulnerability for IT organizations. The port 80 black hole can have four primary effects on an IT organization and the business it serves:

• Increased vulnerability to security breaches
• Increased difficulty in complying with government and industry regulations
• Increased vulnerability to charges of copyright violation
• Increased difficulty in managing the performance of key business-critical, time-sensitive applications

Figure 7.2: Occurrence of Recreational Applications

Figure 7.2 shows a variety of recreational applications along with how prevalent they are. These recreational applications are typically not related to the ongoing operation of the enterprise and, in many cases, they consume a significant amount of bandwidth. To put this in context, in a recent survey of IT professionals, 51 percent said they had seen unauthorized use of their company’s network for applications such as Doom or online poker. Since IT professionals probably don’t see all the instances of recreational traffic on their networks, the occurrence of recreational applications is likely higher than what Figure 7.2 reflects.

Port Hopping

Two applications that often use port hopping are instant messaging (IM) and peer-to-peer (P2P) applications such as Skype.

Instant Messaging

A good example of this is AOL’s Instant Messenger (AIM). AOL has been assigned ports 5190 through 5193 for its Internet traffic, and AIM is typically configured to use these ports. As a result, network managers might well think that by blocking ports 5190 – 5193 they are blocking the use of AIM when in reality they are not: if these ports are blocked, AIM will simply use port 80.
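To make the port 80 black hole concrete, the sketch below (not from the handbook; the port-to-application map is a small illustrative subset) shows why classifying traffic purely by destination port breaks down once an application port-hops:

```python
# Naive port-based traffic classification. Port-hopping applications defeat
# this approach: once AIM moves to port 80, it is indistinguishable from
# ordinary Web traffic.
WELL_KNOWN = {80: "HTTP", 443: "HTTPS", 25: "SMTP"}
REGISTERED = {5190: "AIM", 5191: "AIM", 5192: "AIM", 5193: "AIM"}

def classify_by_port(dst_port: int) -> str:
    """Label a flow purely by its destination port."""
    return WELL_KNOWN.get(dst_port) or REGISTERED.get(dst_port, "unknown")

print(classify_by_port(5190))  # AIM on its assigned port: identified
print(classify_by_port(80))    # AIM after hopping to port 80: labeled HTTP
```

This is precisely why the handbook argues for visibility solutions that identify applications by their behavior or payload rather than by port alone.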
Some of the reasons why a company might choose to block AIM include security and compliance. AIM can present a security risk because it is an increasingly popular vector for virus and worm transmission. Moreover, if AIM traffic is flowing through port 80 along with lots of other traffic, it is much harder to detect anomalous behavior or to configure network security devices to block the spread of an infection. As for compliance, a good example is the requirement by the Securities and Exchange Commission that all stock brokers keep complete records of all communications with clients. This requires that phone calls be recorded, and that both email and IM be archived. The point of discussing AIM is not to state whether or not a company should block AIM traffic; that is a policy decision that needs to be made by the management of the company.

Peer-to-Peer Networks and Skype

A peer-to-peer computer network leverages the connectivity between the participants in a network. Unlike a typical client-server network, where communication is typically to and from a central server along fixed connections, P2P nodes are generally connected via largely ad hoc connections. Such networks are useful for many purposes, including file sharing and IP telephony. Many IT organizations attempt to block peer-to-peer networks because they have been associated with distributing content in violation of copyright laws, and many security experts have warned about the dangers associated with them.

A good example is Skype, a peer-to-peer based IP telephony and IP video service developed by Skype Technologies SA. The founders of Skype Technologies SA are the same people who developed the file sharing application Kazaa. There is no standard ‘Skype port’ the way there is a ‘SIP port’ or an ‘SMTP port’; Skype is particularly adept at port-hopping with the aim of traversing enterprise firewalls. In Skype: The Future of Traffic Detection and Classification17, Antonio Nucci wrote that, “in order to avoid detection,” many peer-to-peer applications, including Skype, change the port that they use each time they start. Entering via UDP, TCP, or even TCP on port 80, Skype is usually very successful at passing typical firewalls. Once inside, it then intentionally connects to other Skype clients and remains connected, maintaining a ‘virtual circuit.’ If one of those clients happens to be infected, then the machines that connect to it can be infected with no protection from the firewall. Because Skype has the ability to port-hop, most network organizations will not even be aware of its existence.

17 Skype: The Future of Traffic Detection and Classification, http://www.pipelinepub.com/0906/VC1.html

FIX-Based Applications

Another component of the port 80 black hole is the existence of applications that are designed to use port 80 but which require more careful management than the typical port 80 traffic. A good example of this is virtually any application that is based on the Financial Information eXchange (‘FIX’) protocol. The FIX protocol is a series of messaging specifications for the electronic communication of trade-related messages. Since its inception in 1992 as a bilateral communications framework for equity trading between Fidelity Investments and Salomon Brothers, FIX has become the de-facto messaging standard for pre-trade and trade communications globally within equity markets, and it is now experiencing rapid expansion into the post-trade space, supporting Straight-Through-Processing (STP) from Indication-of-Interest (IOI) to Allocations and Confirmations. Use of the protocol is gathering increased momentum as it begins to be used across the Foreign Exchange, Fixed Income, and Derivative markets.

In our industry we often overuse the phrase business-critical. However, the claim can easily be made that the applications that support the business functions described above are indeed business-critical. Analogously, there is often a lot of subjectivity relative to whether or not an application is time-sensitive, but that is not the case for the applications that support the business functions described above. For example, if a stock broker is placing an order for millions of dollars in stocks, a small delay in placing the order can significantly drive up the cost of the stock. That is a textbook definition of a time-sensitive application.
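To give a feel for why FIX traffic is identifiable to a payload-aware monitoring tool even when it rides on port 80, the sketch below parses a synthetic FIX-style message (a sketch, not from the handbook). The tag meanings follow the FIX specification (8 = BeginString, 35 = MsgType, where ‘D’ is New Order Single, 55 = Symbol, 38 = OrderQty), but the message itself, and the use of ‘|’ in place of the SOH (\x01) field delimiter, are illustrative:

```python
# A synthetic FIX-style message. Real FIX separates tag=value fields with the
# SOH control character (\x01); '|' is used here only for readability.
SAMPLE = "8=FIX.4.2|35=D|49=BUYSIDE|56=SELLSIDE|55=IBM|54=1|38=100"

def parse_fix(msg: str, sep: str = "|") -> dict:
    """Split a FIX-style message into a {tag: value} dictionary."""
    return dict(field.split("=", 1) for field in msg.split(sep))

order = parse_fix(SAMPLE)
print(order["35"], order["55"], order["38"])  # D IBM 100
```

A monitoring device that recognizes this tag=value signature can separate time-sensitive FIX flows from ordinary Web traffic on the same port.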
End-to-End Visibility

Our industry uses the phrase end-to-end visibility in various ways. Given that one of this handbook’s major themes is that IT organizations need to implement an application-delivery function that focuses directly on applications, and not on the individual components of the IT infrastructure, this handbook will use the following definition: end-to-end visibility refers to the ability of the IT organization to examine every component of IT that impacts the communications, from the time users hit ENTER or click the mouse to when they receive a response from an application. It must be possible to view all relevant management data from one place, and it must be possible to do so regardless of the vendors who supply the components or the physical topology of the network.

End-to-end visibility is one of the cornerstones of assuring acceptable application performance. End-to-end visibility is important because it:

• Provides the information that allows IT organizations to notice application performance degradation before the end user does.
• Identifies the correct symptoms of the degradation and, as a result, enables the IT organization to reduce the amount of time it takes to remove the sources of the application degradation.
• Facilitates making intelligent decisions and getting buy-in from other impacted groups. For example, end-to-end visibility provides the hard data that enables an IT organization to know that it has to add bandwidth, or redesign some of the components of the infrastructure, because the volume of traffic associated with the company’s sales order tracking application has increased dramatically.
• Enables better cross-functional collaboration. As section 7.2 discussed, this type of collaboration is difficult to achieve if each group within IT has a different view of the factors causing application degradation. Having all members of the IT organization access the same set of tools, ones that are detailed and accurate enough to identify the sources of application degradation, facilitates cooperation.
• Allows the IT organization to measure the performance of critical applications before, during and after it makes changes. These changes could be infrastructure upgrades, configuration changes or the deployment of a new application. As a result, the IT organization is in a position both to determine if a change has had a negative impact and to isolate the source of any problem so that it can fix the problem quickly.

It also positions the IT organization to curb recreational use of the network.

Providing detailed end-to-end visibility is difficult due to the complexity and heterogeneity of the typical enterprise network. The typical enterprise network, for example, is comprised of switches and routers, firewalls, optimization appliances, application front ends, and intrusion detection and intrusion prevention appliances, as well as a virtualized network. An end-to-end monitoring solution must therefore profile traffic in a manner that reflects not only the physical network but also the logical flows of applications. As Chapter 4 discussed, IT organizations typically have easy access to management data from both SNMP MIBs
and from NetFlow. IT organizations also have the option of deploying either dedicated instrumentation or software agents to gain a more detailed view into the types of applications listed below. An end-to-end visibility solution should be able to identify:

• Well-known applications; e.g., Telnet, FTP, HTTPS and SSH.
• Web-based applications.
• Custom or homegrown applications.
• Complex applications; e.g., Oracle, SAP and Citrix Presentation Server.
• Multimedia applications.
• Applications that are not based on IP; e.g., applications based on IPX or DECnet.

Other selection criteria include the ability to:

• Scale as the size of the network and the number of applications grows.
• Support the monitoring of real traffic.
• Generate and monitor synthetic transactions.
• Add minimum management traffic overhead.
• Support flexible aggregation of collected information.
• Support granular data collection.
• Support real-time and historical analysis.
• Capture performance data as well as events such as a fault.
• Provide visibility into virtual networks such as ATM PVCs and Frame Relay DLCIs.
• Provide visibility into complex network configurations such as load-balanced or fault-tolerant, multichannel links.
• Provide visibility into encrypted networks.
• Support a wide range of topologies, in the access, distribution and core components of the network as well as in the storage area networks.
• Integrate with other management systems.

Network and Application Alarming

Static Alarms

Historically, one of the ways that IT organizations attempted to manage performance was by setting static, threshold-based performance alarms. In a recent survey, roughly three-quarters (72.8%) of the respondents said they set such alarms. The survey respondents were then asked to indicate the network and application parameters against which they set the alarms; they were instructed to indicate as many parameters as applied to their situation. Table 7.6 contains their answers to that question.

Table 7.6: Percentage of Companies that Set Specific Thresholds

As Table 7.6 shows, the vast majority of IT organizations set thresholds against WAN traffic utilization (81.5% of respondents) or some other network parameter, such as network response time (measured via ping, Telnet, FTP or a TCP connect) or LAN traffic utilization. Less than one-third of IT organizations set parameters against application-response time.

Many companies that set thresholds against WAN utilization use a rule of thumb that says network utilization should not exceed 70 or 80 percent.
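Several of the parameters in Table 7.6, such as ping, TCP connect, and synthetic transactions, are simple active measurements. As a rough illustration (a sketch, not from the handbook; the helper name and the commented example host are hypothetical), network response time via a TCP connect can be measured like this:

```python
import socket
import time

def tcp_connect_time(host: str, port: int = 80, timeout: float = 3.0) -> float:
    """Time a single TCP connect, a minimal synthetic transaction (seconds)."""
    start = time.perf_counter()
    # create_connection raises on failure, so a returned value means success.
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return time.perf_counter() - start

# Example use against a static threshold of the kind Table 7.6 describes:
# if tcp_connect_time("intranet.example.com") > 0.5: raise an alarm
```

Repeating such a probe on a schedule and recording the results is the simplest form of the synthetic-transaction monitoring listed in the criteria above.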
Companies that use this approach to managing network and application performance implicitly make two assumptions:

1. If the network is lightly utilized, the applications are performing well.
2. If the network is heavily utilized, the applications are performing poorly.

The first assumption is often true, but not always. It is quite possible to have the network operating at relatively low utilization levels and still have an application perform poorly. An example of this is any application that uses a chatty protocol over the WAN: even though the network is exhibiting low levels of delay, the application can perform badly because of the large number of application turns. The second assumption is often false. For example, if the company is primarily supporting email or bulk file transfer applications, heavy network utilization is unlikely to cause unacceptable application performance. This leads to an obvious conclusion: application management should focus directly on the application, and not just on factors that have the potential to influence application performance.

The Survey Respondents were also asked to indicate the approach that their companies take to setting performance thresholds. Table 7.7 contains their answers.

Table 7.7: Approach to Setting Thresholds

Approach                                                                                   Percentage of Companies
We set the thresholds at a high-water mark so that we only see severe problems.            64.3%
We set the thresholds low because we want to know every single abnormality that occurs.    18.3%
Other (Please specify).                                                                    17.4%

Of the Survey Respondents that indicated other, the most common response was that their companies set the thresholds at what they consider to be an average value.

One conclusion that can be drawn from Table 7.7 is that the vast majority of IT organizations set their thresholds high in order to minimize the number of alarms that they receive; in particular, most IT organizations implement static performance alarms by setting thresholds at the high-water mark. While this approach makes sense operationally, it means that the use of static performance alarms is reactive: the problems static performance alarms identify are only identified once they have reached the point where they most likely impact users.

The use of static performance alarms has two other limitations. One is that these alarms can result in a lot of administrative overhead, due to the effort required to initially configure the alarms as well as the effort needed to keep up with tuning the settings in order to accommodate a constantly changing environment. Another limitation is accuracy: in many cases the use of static performance alarms can result in an unacceptable number of false positives and/or false negatives. Most IT organizations ignore the majority of the performance alarms they receive.

Proactive Alarms

As noted, the use of static performance alarms is reactive. The goal of proactive alarming, by contrast, is to automatically identify and report on possible problems in real time so that organizations can eliminate them before they impact users. Proactive alarming is sometimes referred to as network analytics. One key concept of proactive alarming is that it takes the concepts of baselining, which Chapter 4 describes, and applies them to real-time operations. A proactive alarming solution needs to be able to baseline the network to identify normal patterns and then to identify, in real time, a variety of types of changes in network traffic. For example, the solution must be able to identify a spike in traffic, where a spike is characterized by a change that is both brief and distinct.
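As an illustration of baselining and spike detection (a minimal sketch, not from the handbook; the window size and the three-standard-deviation threshold are illustrative choices):

```python
from collections import deque
from statistics import mean, stdev

class Baseline:
    """Self-learned baseline over a sliding window. A spike is flagged as a
    brief, distinct departure from the learned normal range."""
    def __init__(self, window: int = 60, k: float = 3.0):
        self.hist = deque(maxlen=window)
        self.k = k

    def observe(self, value: float) -> bool:
        """Record a measurement; return True if it looks like a spike."""
        spike = False
        if len(self.hist) >= 10:  # wait until a baseline has been learned
            mu, sd = mean(self.hist), stdev(self.hist)
            spike = sd > 0 and abs(value - mu) > self.k * sd
        self.hist.append(value)
        return spike

b = Baseline()
for v in [10, 11, 9, 10, 12, 10, 11, 10, 9, 10, 11]:
    b.observe(v)          # learn normal traffic levels
print(b.observe(100))     # True: a brief, distinct spike
print(b.observe(10))      # False: back within the (now wider) normal range
```

Note that no static threshold is configured anywhere; the alarm level adapts as the baseline is relearned, which is exactly the property the criteria below call for.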
A proactive alarming solution must also be able to identify a significant shift in traffic, as well as longer-term drift. Some criteria organizations can use to select a proactive alarming solution include that the solution should:

• Operate off real-time feeds of performance metrics.
• Self-learn normal behavior patterns, including hourly and daily variations based on the normal course of user community activities.
• Not require any threshold definitions.
• Recognize spike, shift and drift conditions.
• Discriminate between physical and virtual network elements.
• Discriminate between individual applications and users.
• Eliminate both false positive and false negative alarms.
• Collect and present supporting diagnostic data along with an alarm.
• Integrate with any event console or enterprise-management platform.

Route Analytics

One of the many strengths of the Internet Protocol (IP) is its distributed intelligence. In particular, routers exchange reachability information with each other via a routing protocol such as OSPF (Open Shortest Path First), and based on this information each router makes its own decision about how to forward a packet. While this distributed intelligence is a strength of IP, it is also a weakness: because each router makes its own forwarding decision, there is no single repository of routing information in the network.

The lack of a single repository of routing information is an issue because routing tables are automatically updated, and the path that traffic takes to go from point A to point B may change on a regular basis. These changes may be precipitated by a manual process, such as adding a router to the network or the mis-configuration of a router, or by an automated process, such as automatically routing around a failure. In the latter case, applications may perform poorly after a failure; for example, the alternative paths might not be properly configured for QoS, or the network itself may be designed in a sub-optimum fashion. The variability of how the network delivers application traffic across its multiple paths over time can undermine the fundamental assumptions that organizations count on to support many other aspects of application delivery.

As Chapter 3 mentions, many organizations have moved away from a simple hub-and-spoke network topology and have adopted either a some-to-many or a many-to-many topology. By the nature of networks that are large and that have complex topologies, it is not uncommon for the underlying network infrastructure to change, to experience instabilities, and to become mis-configured. The rate of change might be particularly difficult to diagnose if there is an intermittent problem causing a flurry of routing changes, typically referred to as route flapping. Among the many problems created by route flapping is that it consumes a lot of the processing power of the routers and hence degrades their performance. In addition, routing instabilities can cause packet loss, latency, and jitter on otherwise properly configured networks. Any or all of these factors can have a negative impact on application performance.

Configuration errors that occur during routine network changes can also cause a wide range of problems that impact application delivery. These configuration errors can be detected if planned network changes can be simulated against the production network. Most importantly, an organization that has a large, complex network needs visibility into the operational architecture and dynamic behavior of the network.
Network analytics and route analytics have some similarities. The preceding section used the phrase network analytics as part of the discussion of proactive alarming. Whereas the goal of network analytics is to overcome the limitations of setting static performance thresholds, the goal of route analytics is to provide visibility, analysis and diagnosis of the issues that occur at the routing layer. Each of these techniques relies on continuous, real-time monitoring.

SNMP-based management systems can discover and display the individual network elements and their physical or Layer 2 topology; however, they cannot identify the actual routes that packets take as they transit the network, nor can they easily identify problems such as route flaps or mis-configurations, intermittent instability or slowdowns, and unanticipated network behavior. To compensate for this, a route analytics solution provides an understanding of precisely how IP networks deliver application traffic. This requires the creation and maintenance of a map of network-wide routes and of all of the IP traffic flows that traverse these routes, which in turn means that a route analytics solution must be able to record every change in the traffic paths as controlled and notified by the IP routing protocols. By integrating the information about the network routes and the traffic that flows over those routes, a route analytics solution can provide information about the volume, application composition and class of service (CoS) of traffic on all routes and all individual links. This network-wide routing and traffic intelligence serves as the basis for:

• Real-time monitoring of the network’s Layer 3 operations from the network’s point of view.
• Historical analysis of routing and traffic behavior, as well as for performing root cause analysis.
• Modeling of routing and traffic changes and simulating post-change behavior.

Factors such as route flapping can be classified as logical, as compared to a device-specific factor such as a link outage. Both logical and device-specific factors impact application performance. Examples of logical factors include sub-optimal routing, intermittent instability or slowdowns, and unanticipated network behavior; examples of device-specific factors include device or interface failures, a device out-of-memory condition or a failed link. To quantify how often a logical factor vs. a device-specific factor causes an application delivery issue, 200 IT professionals were given the following survey question: “Some of the factors that impact application performance and availability are logical in nature. However, some of the factors that impact application performance and availability are device specific. In your organization, what percentage of the time that an application is either unavailable or is exhibiting degraded performance is the cause logical? Is the cause device specific?” Respondents chose among: less than 10% logical vs. 90% device specific; up to 30% logical vs. 70% device specific; 50% logical, 50% device specific; 70% logical, 30% device specific; 90% logical, 10% device specific; or don’t know. The responses are contained in the middle column of Table 7.8. However, because a high percentage of survey respondents answered don’t know, the far right column of Table 7.8 reflects the responses of those survey respondents who provided an answer other than don’t know.

Table 7.8: Impact of Logical vs. Device Specific Factors

As Table 7.8 shows, logical factors are almost as frequent a source of application performance and availability issues as are device-specific factors.

One instance in which a route analytics solution has the potential to provide benefits to IT organizations occurs when the IT organization runs a complex private network. In this case, it might be of benefit to the IT organization to take what is likely to be a highly manual process of monitoring and managing routing and to replace it with a highly automated process. Another instance in which a route analytics solution has the potential to provide benefits occurs when IT organizations use MPLS services provided by a carrier who uses a route analytics solution. One reason that a route analytics solution can provide value to MPLS networks is that, based on the scale of a carrier’s MPLS network, these networks tend to be very complex and hence difficult to monitor and manage. The complexity of these networks increases when the carrier uses BGP (Border Gateway Protocol), as BGP is itself a complex protocol. For example, a mis-configuration in BGP can result in poor service quality and reachability problems as the routing information is transferred between the users’ CE (Customer Edge) routers and the service provider’s PE (Provider Edge) routers. Route analytics can also be useful in simulating and analyzing the network-wide routing and traffic impact of various failure scenarios, as well as the impact of planned network changes such as consolidating servers out of branch offices or implementing new WAN links or router hardware. The purpose of this simulation is to ensure that planned and unplanned changes will not have a negative effect on the network.

The key functional components in a route analytics solution are:

• Listening to and participating in the routing protocol exchanges between routers as they communicate with each other.
• Computing a real-time, network-wide routing map. This is similar in concept to the task performed by individual routers to create their forwarding tables; in this case, however, it is computed for all routers.
• Mapping NetFlow traffic data, including application composition, across all paths and links in the map.
• Detecting and alerting on routing events or failures as routers announce them, and reporting on the correlated traffic impact.
• Monitoring and displaying routing topology and traffic flow changes as they happen.
• Correlating routing events with other information, such as performance data, to identify underlying cause and effect.
• Recording, analyzing and reporting on historical routing and traffic events and trends.
• Modeling routing and traffic changes and simulating post-change behavior.
• Simulating the impact of routing or traffic changes on the production network.
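The second of the components listed above, computing a network-wide routing map, is essentially the same shortest-path-first (SPF) calculation that each OSPF router performs for itself, run for every router at once. A toy sketch (not from the handbook; the three-router topology and link costs are invented):

```python
import heapq

# Toy link-state database: router -> {neighbor: link cost}.
LINKS = {"A": {"B": 1, "C": 5}, "B": {"A": 1, "C": 1}, "C": {"A": 5, "B": 1}}

def spf(source: str) -> dict:
    """Dijkstra shortest-path-first over the link-state database, returning
    the lowest total cost from `source` to every reachable router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in LINKS[node].items():
            if d + cost < dist.get(nbr, float("inf")):
                dist[nbr] = d + cost
                heapq.heappush(heap, (d + cost, nbr))
    return dist

# From A, the best path to C goes via B (cost 2), not the direct link (cost 5).
print(spf("A"))  # {'A': 0, 'B': 1, 'C': 2}
```

A route analytics tool repeats this calculation for all routers and replays it against recorded routing events, which is what makes the historical “rewind” of the routing layer described below possible.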
Logical problems such as routing issues are sometimes the source of application degradation and application outages. Two hundred IT professionals were given the following question: “Sometimes logical problems such as routing issues are the source of application degradation and application outages. Which of the following describes how you resolve those types of logical issues?” The possible answers were: lots of hard work, typically by digging deeply into each device; employing specific tools such as route analytics; N/A or don’t know; waiting for it to happen again and trying to capture it in real time; and other. Their answers are shown in Table 7.9.

Table 7.9: Resolving Logical Issues
As Table 7.9 shows, many IT organizations still rely on laborious manual processes, or simply hope to be able to catch recurrences of issues for which there are no obvious physical or device explanations. This indicates that most network management toolsets lack the ability to address the logical issues that route analytics tools are useful for. Since many logical problems exhibit symptoms only intermittently, the ability of route analytics to rewind the entire recorded history of network-wide routing and traffic helps network engineers look into logical issues as if they were seeing them currently. This level of automation can greatly speed problem localization and root cause analysis, and getting to the root of these problems, rather than hoping to solve them in the future, can also help increase the overall stability of application delivery and performance.

One criterion that an IT organization should look at when selecting a route analytics solution is the breadth of routing protocol coverage. For example, based on the environment, the IT organization might need the solution to support protocols such as OSPF, EIGRP, IS-IS, BGP and MPLS VPNs. Another criterion is that the solution should be able to collect and correlate integrated routing and NetFlow traffic flow data across devices, interfaces, and links, and should be aware of both application and CoS issues. Ideally, this data is collected and reported on in a continuous, real-time fashion and is also stored in such a way that it is possible to generate meaningful reports that provide an historical perspective on the performance of the network. Finally, a route analytics solution should be capable of being integrated with network-agnostic application performance management tools that look at the endpoint computers that are clients of the network, as well as with traditional network management solutions that provide insight into specific points in the network and with other network management components.

Measuring Application Performance

Techniques for evaluating performance have been used in traditional voice communications for decades. For example, evaluating the quality of voice communications by using a Mean Opinion Score (MOS) is somewhat common. A Mean Opinion Score is the result of subjective testing in which people listen to voice communications and place the call into one of five categories; the score is defined in “Methods for Subjective Determination of Voice Quality” (ITU-T P.800) and, as that title suggests, it is subjective. Table 7.10 depicts the five categories and the numerical rating associated with each.

Table 7.10: Mean Opinion Scores and Speech Quality

MOS  Speech Quality
5    Excellent
4    Good
3    Fair
2    Poor
1    Bad

A call with a MOS of 4.0 or higher is deemed to be of toll quality. To increase objectivity, the ITU has developed another model of voice quality, referred to as the E-Model; Recommendation G.107 defines this model. The E-Model calculates the transmission-rating factor R based on transmission parameters such as delay and packet loss, and it is intended to predict how an average user would rate the quality of a voice call. Table 7.11 depicts the relationship between R-Values and Mean Opinion Scores.18

Table 7.11: Comparison of R-Values and Mean Opinion Scores

R-Value   Characterization                MOS
90 – 100  Very Satisfied                  4.3+
80 – 90   Satisfied                       4.0 – 4.3
70 – 80   Some Users Dissatisfied         3.6 – 4.0
60 – 70   Many Users Dissatisfied         3.1 – 3.6
50 – 60   Nearly All Users Dissatisfied   2.6 – 3.1
0 – 50    Not Recommended                 1.0 – 2.6

18 Overcoming Barriers to High-Quality Voice over IP Deployments, Intel
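The mapping in Table 7.11 follows from the analytic R-to-MOS conversion given in ITU-T G.107. A minimal sketch of that conversion (the code itself is not from the handbook):

```python
def r_to_mos(r: float) -> float:
    """ITU-T G.107 conversion from transmission rating factor R to an
    estimated Mean Opinion Score, clamped to the 1.0 - 4.5 MOS range."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# An R-value in the "Very Satisfied" band maps to a toll-quality MOS:
print(round(r_to_mos(93.2), 2))  # 4.41
# An R-value of 70 sits near the "Some Users Dissatisfied" boundary:
print(round(r_to_mos(70), 2))    # 3.6
```

Because R is computed from measurable transmission parameters such as delay and packet loss, this conversion lets a management tool estimate a MOS objectively, without panels of human listeners.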
Managing VoIP

The preceding section discussed the use of MOS to measure the quality of VoIP. This section expands upon that discussion and describes what it takes to successfully manage VoIP.

VoIP Characteristics

VoIP poses particular challenges to the network for two primary reasons: the new and different protocols that VoIP requires, and its stringent availability and performance requirements.

There are many different coding algorithms (codecs) available to handle the task of converting a conversation from analog to digital and back to analog again. Both sides of the call must use the same codec, and the negotiation to ensure this is handled by another set of protocols involved with call setup, such as H.323, the Session Initiation Protocol (SIP), the Media Gateway Control Protocol (MGCP), and Cisco's Skinny Client Control Protocol (SCCP). Call signaling (or call setup) is handled by one of those standards, or by a proprietary protocol.

In addition to these protocol challenges, VoIP is extremely sensitive to a number of network parameters that have much less effect on transactional applications. Users expect 100% availability and immediate dial tone. Fairly low levels of packet loss can severely impact voice quality. End-to-end delay is also critical: at about 150 ms, voice quality will likely begin to degrade, and beyond 250 ms a call will almost certainly be unusable. These are levels of latency that are barely noticeable on most transactional applications. A further concern is that, because of its real-time nature, VoIP almost universally relies on UDP rather than TCP. UDP does not have any flow control mechanisms to limit transmissions in the presence of congestion and, unlike TCP, does not offer any feedback information about whether or not packets that have been sent have been received.
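As an illustration of how an IT organization might apply these thresholds, the sketch below flags a call's health from measured end-to-end delay and packet loss. The 150 ms and 250 ms delay breakpoints come from the discussion above; the 1% loss threshold is a hypothetical figure chosen purely for illustration, since the text notes only that "fairly low" loss can degrade quality:

```python
def rate_call(end_to_end_delay_ms: float, loss_pct: float) -> str:
    """Roughly classify a VoIP call from end-to-end delay and loss.

    150 ms and 250 ms follow the delay discussion in the text;
    the 1% loss threshold is an assumed illustrative value.
    """
    if end_to_end_delay_ms > 250:
        return "unusable"
    if end_to_end_delay_ms > 150 or loss_pct > 1.0:
        return "degraded"
    return "acceptable"

print(rate_call(80, 0.2))    # a campus-LAN-like call
print(rate_call(180, 0.2))   # delay past the degradation point
print(rate_call(300, 0.0))   # delay past the usability point
```

A real voice management product would of course weigh these parameters together (as the E-Model does) rather than apply independent cutoffs; the point here is only that transactional applications would sail through delay figures that render a call unusable.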
In addition to packet loss and delay, jitter, which is the variation in arrival time from packet to packet, can also negatively impact voice quality. While jitter is a key parameter for VoIP, it has virtually no impact on the typical data application, and as a result most network management solutions don't measure it.

Separately, a number of vendors have begun to develop application-performance metrics based on an approach somewhat similar to the ITU E-Model. For example, the Apdex Alliance19 is a group of companies collaborating to promote an application-performance metric called Apdex (Application Performance Index), which the alliance states is an open standard that defines a standardized method to report, benchmark and track application performance.

19 http://www.apdex.org/index.html

Managing VoIP Effectively

The combination of the user expectation of 100% uptime in voice, VoIP's sensitivity to network conditions, and its cross-domain organizational demands mandates an integrated approach to network management, both organizationally and technically. This poses particular problems for voice management. IT organizations should avoid the all-too-common fragmented approach to network management, which generally results from the incremental adoption of point solutions to address new problems on an ad hoc basis. Instead, IT organizations should look for an integrated solution that relates voice-specific metrics such as MOS values to the underlying network behavior that influences them, and vice-versa. To deliver this integration, a voice management solution must, at a minimum, deliver information from three sources: call signaling, NetFlow, and SNMP, and, even more important, relate them one to another.
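The Apdex score mentioned above is simple to compute: given a response-time target T, samples at or under T count as "satisfied," samples between T and 4T as "tolerating," and anything slower as "frustrated"; the score is (satisfied + tolerating/2) divided by the total. A sketch with hypothetical response-time data:

```python
def apdex(samples_ms, t_ms):
    """Apdex score as published by the Apdex Alliance: with target T,
    responses <= T are 'satisfied', <= 4T are 'tolerating', and the
    rest 'frustrated'. Score = (satisfied + tolerating/2) / total."""
    satisfied = sum(1 for s in samples_ms if s <= t_ms)
    tolerating = sum(1 for s in samples_ms if t_ms < s <= 4 * t_ms)
    return (satisfied + tolerating / 2.0) / len(samples_ms)

# Ten hypothetical response-time samples against a 500 ms target:
samples = [120, 300, 450, 500, 600, 900, 1200, 1500, 1900, 2500]
print(apdex(samples, 500))   # 0.65: 4 satisfied, 5 tolerating, 1 frustrated
```

Like the E-Model's R factor, the appeal of Apdex is that it collapses a distribution of raw measurements into a single number that tracks user satisfaction rather than engineering detail.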
The ability of a solution to monitor call setup is critical. As previously described, during call setup the two ends of the conversation negotiate a common codec, establish the channels that will be used for transmitting and receiving, and generate a number of status codes. This information can be used to derive important measurements such as delay to dial tone. Without this data, IT organizations won't know what went wrong if users start complaining that they can't get a dial tone. Being able to monitor call signaling also implies being able to receive and integrate data from an IP PBX.

Relating this VoIP-specific data to network conditions requires network-specific data, and here SNMP is a requirement. Not only does SNMP deliver data on the health of devices, it can also be used to access data from other sources. For example, in a Cisco-based network, SNMP can give information from both Cisco IP SLA and the Cisco Class-Based QoS (CBQOS) MIB. Cisco IP SLA generates synthetic transactions that can be used to emulate voice traffic across important links and derive metrics critical to understanding voice quality. The CBQOS MIB provides information about the class-based queuing mechanism in a Cisco router, enabling IT organizations to ensure that their critical traffic is being treated appropriately when bandwidth is in short supply.

In addition, given the increasingly meshed nature of VoIP systems, the ability of NetFlow or RMON-2 to deliver data from many points in the network can be critical in managing VoIP. Both NetFlow and RMON-2 data can give IT organizations insight into the protocol and class-of-service composition of the traffic. The IT organization should be able to use this data to answer questions such as: Is the link being flooded by packets from a scheduled backup, or by rogue traffic from an illicit application? What other critical applications are being affected? It is worth pointing out that NetFlow data alone may not be sufficient to answer those questions. Since NetFlow is port based, and a growing number of applications use port 80, many of the NetFlow records will simply list port 80 as the application; that follows from the port 80 black hole that was discussed previously in this chapter. As a result, advanced monitoring tools that measure the delay, jitter, loss and MOS for actual voice calls are required in order for IT organizations to get real-time scores for calls in progress. This is particularly important for monitoring voice-specific metrics such as MOS.

Once all of this data has been collected, IT organizations must have a way of integrating it into a useful overview of voice and network performance. What is required, from the management console to the reports the solution generates, is a holistic overview that can relate voice quality to network performance. For example, the IT organization should be able to detect that MOS values are dropping on the link between HQ and the Los Angeles office, and then bring up management data from the appropriate devices to check the traffic composition on that link.

The Changing Network Management Function

Introduction

The preceding chapter discussed the organizational complexity and the process barriers associated with ensuring acceptable application performance. This chapter will describe the changing role of one of the key players in the application delivery function, the Network Operations Center (NOC), and will highlight the shift that most NOCs are making from focusing almost exclusively on the availability of networks to beginning to focus also on the performance of networks and applications. As part of that description, this chapter will examine the current and emerging role of the NOC and will describe how it is now somewhat common to have the NOC heavily involved in managing the performance of applications. This chapter will also examine how the NOC has to change
in order to reduce the mean time to repair that is associated with application performance issues, and will detail the ways that IT organizations justify an investment in performance management.

Today's NOC

The Function of the NOC

The set of functions that NOCs perform varies widely amongst IT organizations. For example, the Management Systems Manager pointed out that the NOC in which she works supports the company's WAN and some LANs; it does not support LANs in those sites where the people in the sites believe that they can support the LAN more cost-effectively themselves.

Perceptions of the NOC

The survey respondents were asked if they thought that working in the NOC is considered to be prestigious. The NOC-associated respondents20 were evenly split on this issue. That was not the case for the non-NOC respondents21: by roughly a 2 to 1 margin, these respondents indicated that they do not think that working in the NOC is prestigious.

The survey respondents were also asked a series of questions regarding senior IT management's attitude towards the NOC. The results are shown in Table 8.1, which lists, for each statement, the percentage of respondents who agree or tend to agree and the percentage who disagree or tend to disagree.

Our senior IT management believes that…
…the NOC provides value to our organization.
…the NOC is capable of resolving problems in an effective manner.
…the NOC works efficiently.
…the NOC meets the organization's current needs.
…the NOC will be able to meet the organization's requirements 12 months from now.
…the NOC is a strategic function of IT.

Table 8.1: IT Management's Perception of the NOC

Overall the data in Table 8.1 is positive; agreement with each of these statements ranged from roughly 70% to 90%. There are, however, some notable exceptions to that statement. In over a quarter of organizations, the NOC does not meet the organization's current needs.

The survey also asked the respondents about the most common type of event that causes NOC personnel to take action. The replies of the NOC-associated respondents who provided a response other than "don't know" are depicted in Figure 8.1. The data in Figure 8.1 indicates that roughly half the time, either someone in the NOC or an automated alert causes the NOC to take action. This data, however, does not address the issue of whether or not this occurs before the user is impacted. When it comes to how the NOC functions, one of the most disappointing findings is that: In the majority of cases, the NOC tends to work on a reactive basis, identifying a problem only after it impacts end users.

Figure 8.1: Events that Cause the NOC to Take Action

20 NOC-associated respondents refers to survey respondents who work in the NOC.
21 Non-NOC respondents refers to survey respondents who do not work in the NOC.
Where Does the NOC Spend Most of Its Time?

To identify the areas in which NOC personnel spend most of their time, the survey contained three questions:

• During the past 12 months, our NOC personnel have spent the greatest amount of time addressing issues with…
• During the past 12 months, our NOC personnel have spent the second greatest amount of time addressing issues with…
• During the past 12 months, our NOC personnel have seen the greatest increase in time spent addressing issues with…

where each question contained a number of possible answers: applications, servers, LAN, WAN, security and storage. Table 8.2 shows the answers of the NOC-associated respondents for each of the three questions across those six areas.

Table 8.2: Where the NOC Spends the Most Time

There are many conclusions that can be drawn from the data in Table 8.2, including:

NOC personnel spend the greatest amount of time on applications, and that is a relatively new phenomenon. Applications also showed the greatest increase in time spent. In addition, NOC personnel spend an appreciable amount of their time supporting a broad range of IT functionality.

The Medical Supplies CIO said that the percentage of time his organization spends monitoring and troubleshooting network problems varies from month to month, but is probably in the range of ten to twenty percent.

The conventional wisdom in our industry is that NOC efficiency is reduced because of the silos that exist within the NOC. In this context, silos means that the workgroups have few common goals, language, processes and tools. The survey respondents validated that conventional wisdom. Just under half of NOCs are organized around functional silos, and a majority of NOCs use many management tools that are not well integrated. The Manufacturing Analyst said that having management tools that are not well integrated "is a fact of life". He added that his organization has a variety of point products and does not currently have a unified framework for these tools.

The Medical Supplies CIO elaborated on that and stated that the biggest question he gets from the user is, "Why don't you know that my system is down? Why do I have to tell you?" He said that the fact that end users tend to notice a problem before IT does has the effect of eroding the users' confidence in IT in general. As was discussed in Chapter 2, the IT organization's inability to identify application degradation before it impacts the user makes the IT organization look like bumbling idiots.

What Do NOC Personnel Monitor?

The survey respondents were asked four questions about what NOC personnel in their organization monitor; the results from NOC-associated respondents are shown in Figure 8.2.

What Else Does the NOC Do?

We also asked the survey respondents about other tasks or responsibilities that NOC personnel are involved in. Figure 8.
3 shows the responses for both NOC-associated and non-NOC personnel.

One of the conclusions that can be drawn from the data in Figure 8.3 is that NOC personnel are involved in myriad tasks beyond simple monitoring. For example, NOC personnel are typically involved in traditional network activities such as configuration changes, the selection of new network technologies, and the selection of network service providers. The NOC is less likely to be involved in application rollout and the selection of security functionality. The responsibilities highlighted in Figure 8.3 are where NOC-associated and non-NOC respondents differed most in their responses; interestingly, the areas with the greatest differences are all traditional networking activities.

Figure 8.3: Tasks that the NOC is Involved In

Turning back to what NOC personnel monitor, the most obvious conclusion that can be drawn from Figure 8.2 is: The NOC is almost as likely to monitor performance as it is to monitor availability. And while there is still more of a focus in the NOC on networks, there is a significant emphasis on applications.

Figure 8.2: What the NOC Monitors

The Use of ITIL (IT Infrastructure Library)

There has been significant discussion over the last few years about using a framework such as ITIL to improve network management practices. To probe the use of ITIL, the survey respondents were asked if their organization either now has an IT service management process such as ITIL in place, or intended to adopt such a process within the next 12 months. The majority of respondents (62%) indicated that their organization did have such a process in place. Of those respondents who did not, a similar percentage (63%) believed that their organization would put such a process in place within the next 12 months. The fact that 86% of respondents stated that their organization either had, or would have within 12 months, a service management process in place indicates the emphasis being placed within the NOC on improving processes.

While the survey data made it look like there was very strong interest in ITIL, the interviewees were not as enthusiastic. The Manufacturing Analyst said that his organization focuses on the availability of networks. He added, however, that there is a project underway to change how the NOC functions; the goal of the project is to create a NOC that is more proactive and which focuses both on performance and availability. The Medical Supplies CIO said that
they tried to get involved in using ITIL to improve some of their processes. However, while he does not disagree with the benefits promised by ITIL, he finds that it seems too theoretical, and he lacks the resources to get deeply involved with it. The Management Systems Manager stated that her company has done a lot of ITIL training and that there is some interest in moving more in the direction of setting up a CMDB (Configuration Management Data Base). The Manufacturing Analyst stated that his organization has begun to use ITIL but that they "do not live by the [ITIL] book." He added his belief that ITIL will make a difference. In short, there is a lot of interest in ITIL, but it is too soon to determine how impactful the use of ITIL will be.

The Management and Security Manager stated that as recently as a year ago his organization had a very defensive approach to operations, with a focus on showing that the network was not the source of a trouble. His motto is "I don't care what the problem is, we are all going to get involved in fixing it." When asked if his motto was widely accepted within the organization, he replied, "Some of the mentality is changing," but this is still not the norm.

Routing Troubles

The vast majority of organizations have at least a simple escalation process in place for problem response. In particular, over 90% of Survey Respondents indicated that their organization has a help desk that assists end users, and over 80% agree that the help desk does a good job of routing issues that it cannot resolve to whatever group can best handle them. It should be noted that of the latter group of respondents (those agreeing), better than three-quarters stated that the help desk typically routes issues that it cannot resolve to the NOC.

One of the reasons that the help desk routes so many calls to the NOC is the following: in the majority of instances, the assumption is that the network is the source of application degradation. The Manufacturing Analyst stated that in his company, if there is an IT problem, the tendency of the user is to contact the NOC because "We have always had the tools to identify the cause of the problems". He added that the approach of sending most troubles directly to the NOC tends to increase the amount of time that it takes to resolve problems, because of the added time it takes to show that the network is not at fault.

In some organizations, the ability to improve processes is limited by the organizational structure. For example, one central IT group has set up a change management process that calls for them to get together once a week to review changes that cut across organizational silos. However, some sites control their own LANs, and if one of these sites makes a change to their LAN, it does not go through the change management process set up by the central IT group.

The Survey Respondents were asked to indicate their degree of agreement with the statement: "Our NOC personnel not only identify problems, but are also involved in problem resolution." Their responses are depicted in Figure 8.4. The data in Figure 8.4 is further evidence of the fact that NOC personnel do a lot more than just monitor networks: in the vast majority of instances, the NOC gets involved in problem resolution.

Figure 8.4: Role of the NOC in Problem Resolution
Change in the NOC

Factors Driving Change

The survey respondents were asked whether their organization would attempt to make any significant changes in their NOC processes within the next 12 months; their responses are shown in Figure 8.5. Almost two-thirds of the respondents indicated that their organization would. This interest in change is in line with the level of dissatisfaction noted earlier: as shown in Table 8.1, over a quarter of the total base of survey respondents indicated that the NOC does not meet the organization's current needs.

Figure 8.5: Interest in Changing NOC Processes

The survey respondents were also asked to indicate which factors would drive their NOC to change within the next 12 months. One clear conclusion that can be drawn from the data in Figure 8.6 is that a wide range of factors are driving change in the NOC. The top driver of change is the requirement to place greater emphasis on ensuring acceptable performance for key applications. A related driver, the need for better visibility into applications, is almost as strong a factor. Given that NOC personnel spend the greatest amount of time on applications (Table 8.2), that is not at all surprising.

In addition, two-thirds of the Survey Respondents indicated that a growing emphasis on security would impact their NOC over the next 12 months. As shown in Table 8.2, NOC personnel do not spend a lot of their time today on security. However, that may change in the next year, as roughly half of the Survey Respondents indicated that combining network and security operations would impact their NOC over the next 12 months.

Figure 8.6: Factors Driving Change in the NOC

The Medical Supplies CIO stated that in order to place greater emphasis on ensuring acceptable performance
for their key applications, they have formed an application delivery organization. He stated that on a going-forward basis he wants to place more emphasis on security, although he thought it would be difficult to combine security and network operations into a single group. His rationale for that statement was that security operations involves so much more than just networks.

Factors Inhibiting Change

Particularly within large organizations, change is difficult. To better understand the resistance to change, we asked the Survey Respondents to indicate what factors would inhibit their organization from improving their NOC. Their responses are shown in Figure 8.7.

Figure 8.7: Factors Inhibiting Change in the NOC

It was not surprising that the two biggest factors inhibiting change are the lack of personnel resources and the lack of funding. This is in line with the general trend whereby IT budgets are increasing on average by only single-digit amounts and headcount is often being held flat. It is also not surprising that internal processes are listed as a major factor inhibiting change. The lack of management vision and the NOC's existing processes are almost as big a barrier to change as are the lack of personnel resources and funding. The Management Systems Manager said that due to relatively constant turnover in personnel, "Management vision changes every couple of years. Some managers have been open to monitoring performance while others have not believed in the importance of managing network performance." On a related issue, she stated that her organization monitors network availability but does not monitor network performance. She added that her organization would like to monitor performance, but that "It is a resource issue. The only way we can monitor performance is if we get more people."

The Next-Generation NOC

Given the diversity of how IT organizations currently deploy a NOC, it is highly unlikely that there is a single description of a next-generation NOC that would apply to all organizations. While it is not possible to completely characterize the next-generation NOC, there are some clear trends in terms of how NOCs are evolving.

One of these trends is the shift away from having NOC personnel sitting at screens all day waiting for green lights to turn yellow or red. In particular, over a quarter of the NOC-associated respondents indicated that their company had "eliminated or reduced the size of our NOC because we have automated monitoring, problem detection and notification." It is important to note that in all likelihood a notably higher percentage of organizations had implemented automated monitoring but had not eliminated or reduced the size of their NOC.

Another clear trend is the focus on the performance of both networks and applications. One of the interviewees noted that roughly a year ago his organization first began to use some monitoring tools, and that these tools "Opened our eyes to a lot of things that can degrade and not cause any of the traditional green lights to turn yellow or red."

The siloed NOC, the interest in ITIL, and the need to make significant changes to NOC processes have been constant themes throughout this chapter.
There is also widespread interest in using ITIL to develop more effective processes, and wide admission that current NOC processes are often ineffective. Even so, it is unlikely that the typical NOC will significantly improve its processes in the next 12 to 18 months.

Other trends are less clear. One somewhat fuzzy trend is the integration of network and security operations. There is no doubt that there will be growing interaction between these groups; however, within the next two years only a minority of IT organizations will fully integrate the two. Finally, given where NOC personnel spend their time, the NOC should arguably be renamed the Application Operations Center (AOC). As will be explained in this section, the responsibility of the network organization is expanding to include the ongoing management of application performance; this is an example of how tasks that used to be performed by tier 2 and 3 personnel are now being performed by NOC personnel. This section describes those changes and their impact on network management.

Rethinking MTTR for Application Delivery

The Changing Concept of MTTR

Mean Time To Repair (MTTR) is the mean, or average, time that it takes the IT organization to repair a problem, and it is a critical metric for measuring performance. The understanding of MTTR, however, is changing, because the responsibility of the network organization is expanding to include the ongoing management of application performance.

There is a very wide range of approaches relative to how IT organizations approach MTTR. The IT professionals whose interviews are used in this section of the handbook described a range of approaches:

• The Manufacturing Specialist: The organization does not officially measure MTTR, but only estimates it. He stated that currently the MTTR is around 3 or 4 hours, but that his management is getting more demanding and wants him to reduce it.

• The Telecommunications Manager: The organization pays a lot of attention to MTTR, and they compute separate MTTR metrics for different priorities of problems. For example, the highest priority is a problem that causes a large number of users to not be able to access the information or applications that they need to do their jobs. The next highest priority is a problem that results in a large number of users being able to access the information and applications they need, but in a degraded fashion.

• The Education Director: The organization does measure MTTR, but it applies only to the availability, not the performance, of the infrastructure and the applications.

• The Financial Engineer: The organization measures MTTR for both availability and application performance.

The basic three-step process for troubleshooting is not changing:

• Problem Identification
• Problem Diagnosis
• Solution Selection and Repair

However, how these steps apply to a traditional network management task, such as fault management, is significantly different from how they apply to managing application performance.
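Whatever the measurement approach, the MTTR metric itself reduces to averaging repair durations, optionally grouped by problem priority as the Telecommunications Manager's organization does. A minimal sketch, using entirely hypothetical ticket data:

```python
from collections import defaultdict

def mttr_by_priority(tickets):
    """Mean time to repair, in hours, grouped by ticket priority.
    Each ticket is (priority, opened_hour, resolved_hour); the clock
    runs from ticket open to resolution."""
    totals = defaultdict(lambda: [0.0, 0])
    for priority, opened, resolved in tickets:
        totals[priority][0] += resolved - opened
        totals[priority][1] += 1
    return {p: total / count for p, (total, count) in totals.items()}

# Hypothetical tickets: priority 1 = widespread outage, 2 = degradation.
tickets = [(1, 0.0, 2.0), (1, 5.0, 9.0), (2, 1.0, 13.0)]
print(mttr_by_priority(tickets))   # {1: 3.0, 2: 12.0}
```

The arithmetic is trivial; as the rest of this section explains, the hard part is deciding when the clock starts and stops, and what "resolved" means for application performance problems.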
MTTR and Application Performance Management

The Manufacturing Specialist commented that within his organization, managing application performance is a shared responsibility. In those cases where the MTTR is getting large, his organization forms a group that they refer to as a Critical Action Team (CAT). The CAT is comprised of technical leads from multiple disciplines who come together to resolve difficult technical problems. He noted that because of the success they have had with the management tools they have deployed, other organizations call them seeking help with resolving problems.

The Financial Engineer stated that when a user calls in and complains about the performance of an application, a trouble ticket is opened. "The [MTTR] clock starts ticking when the ticket is opened and keeps ticking until the problem is resolved." In his organization there are a couple of meanings of the phrase "the problem is resolved." One of them is that the user is no longer affected. Another meaning is that the source of the problem has been determined to be an issue with the application; in these cases, the trouble ticket is closed and they open what they refer to as a bug ticket. He added that roughly 60% of application performance issues take more than a day to resolve and that, as a result, "The MTTR can get pretty large." He also stated that application degradation is taken more seriously inside of his company if he can quantify how much revenue was lost as a result of the degradation.

The Telecommunications Manager stated that they measure MTTR only for fault management and not for application performance, although they have begun to measure application degradation. He also stated that his organization has begun to become ISO 9001 certified, and he expects that their level of certification will increase as the demands for better application performance increase.

Improving processes, including training and the development of cross-domain responsibility, is an important part of improving the MTTR of application performance. The Manufacturing Specialist reinforced the importance of effective processes. He stated that in order to get better at managing the performance of applications, his organization has implemented ITIL-based processes. He stated that these processes are "a huge part of our success" and that the processes force you to "understand what you are doing and why you are doing it." He noted, however, that "you cannot lose track of existing issues." He added that fault management is still important and that they "are still tracking IOS bugs."

Problem Identification

Like every component of network management, application performance management can be done either proactively or reactively. In a reactive approach, network management organizations respond to a fault once end users have been impacted. In a proactive approach, the network management organization attempts to identify and resolve problems before they impact end users.

With fault management, it is fairly easy to set alarms indicating the failure of a component, and it's relatively easy to identify that a fault exists, since faults often lead to a readily noticeable outage. By contrast, identifying application degradation is much more difficult. As previously noted, most IT organizations do not have objectives for the performance of even their key, business-critical applications, and few monitor the end-to-end performance of their applications. As a result, the issue of whether or not an application has degraded is often highly subjective.
This means that, unlike a fault, application degradation can take a long time to identify. Analogously, it is easier to identify the component of the infrastructure that is not functioning than it is to identify the factor that is causing an application to perform badly.

Problem Diagnosis

In the case of fault management, the focus of diagnosis is to determine which component of the infrastructure is not working. Part of the difficulty of diagnosing the cause of an outage is that a single fault can cause a firestorm of alarms. Although one should not understate the difficulty of filtering out extraneous alarms to find the defective component, diagnosing the cause of application degradation is harder still. One of the reasons is that, as discussed in Chapter 7, each and every component of IT could cause the application to perform badly. This includes the network, the servers, the database and the application itself. As a result, diagnosing the cause of application degradation crosses multiple technology and organizational boundaries. In general, IT organizations tend to focus on one technology and on one organization, and they therefore find it difficult to solve problems that cross those boundaries.

The network is usually not the source of application performance degradation, although that is still the default assumption within many IT organizations. The Financial Engineer stated that in his last company, 90% of issues were originally identified as being network issues, even though the reality was that only 10% of issues actually were network-related. He pointed out that incorrectly assuming that the majority of issues are network-related has the effect of increasing the amount of time it takes to accurately diagnose a problem. He recommended that when a problem is called into the help desk, the person calling in should be encouraged to describe the symptoms of the problem in detail, and not just what they think the source of the problem is.

Diagnosis can also simply take a long time. The Manufacturing Specialist noted that 10% of issues can take longer than a day to resolve and that some of them can go unresolved for months. The Financial Engineer said that one problem they had recently involved the MAPI (Messaging Application Programming Interface) protocol: every 32 milliseconds the protocol would retransmit volumes of information. This took a long time to identify. He added that, "When something is fundamentally wrong," it can take a long time to identify and fix.

Reducing the Time to Diagnose

Given that it can take a long time to diagnose an application performance problem, we asked the survey respondents to indicate the average length of time it took to diagnose performance problems before and after purchasing their chosen application performance management solution. Their responses are shown in Table 8.3, which places average diagnosis time into six categories, from "less than 1 hour" through "more than 24 hours," both before and after deploying the solution.

Table 8.3: Length of Time to Diagnose a Problem

The results displayed in Table 8.3 are dramatic. Before deploying the solution, it was rare that a performance problem was diagnosed in less than 1 hour, and almost a third of all performance problems took more than 8 hours to diagnose. After deploying the solution, almost a third of all performance problems are diagnosed in less than 1 hour, and only about five percent take more than 8 hours.
Choosing and implementing the proper application performance management solution can greatly reduce MTTR and improve cooperation between different IT teams.

Solution Selection and Repair

In the case of fault management, once it has been determined which component has failed, the solution is obvious: fix that component. As a result, there typically is no solution selection step. The situation is entirely different with managing application performance, because the component of IT that is causing the application to degrade may not be the component that gets fixed or replaced. For example, as was previously described, sometimes the way the application was written will cause the application to perform badly. However, rewriting the application may not be an option, particularly if the application was acquired from a 3rd party. In that case, the IT organization will have to implement a work-around to compensate for how the application was written.

In an analogous fashion, the repair component of fault management differs somewhat from the repair component of application management. In the case of fault management, once you replace the defective part you fully expect the problem to be fixed. In the case of managing application performance, once you implement the chosen solution, you are less sure that the problem will go away. As a result, in some instances the IT organization has to repeat the problem diagnosis as well as the solution selection and repair processes.

Reducing MTTR requires both credible tools and an awareness of and attention to technical and non-technical factors.

Demonstrating the Value of Performance Management

Kubernan research indicates that 75% of IT organizations need to cost-justify an investment in performance management. Over a third of the IT organizations that are required to cost-justify an investment in performance management must perform the cost-justification both before and after implementing the solution.

The vast majority of IT organizations must cost-justify an investment in performance management.

Cost-justifying an investment in performance management can be a complex task. In many instances it can be as much a political process as a technological one. The IT Services Director commented that his organization has to cost-justify a solution prior to purchasing it and then go back after implementation and quantify the actual impact of the solution. He stated that this process has the effect of increasing the credibility of the IT organization. He pointed out that as part of the analysis they typically have to answer questions such as: "If you buy this tool, what tool or tools will be retired?" "What is the decommissioning cost?" "What staff costs and productivity enhancements can be anticipated?" "What are the related maintenance and training costs?" The IT Director added that the depth of the analysis management expects depends in part on the cost and scope of the project.

There is also some anecdotal evidence that the way that IT organizations perform this cost justification is changing. The IT Services Director commented that at one time his organization could justify an investment in management tools just based on their intuition that the tool would pay for itself. That is no longer the case; his organization now has a very formal process for evaluating return on investment (ROI).

In a recent survey, the respondents were asked to indicate which techniques they had used to justify an investment in performance management. Their responses are shown in Figure 8.8.
Some of these techniques focus on hard savings while others focus on somewhat softer savings.

Figure 8.8: Techniques to Cost-Justify Performance Management

As shown in Figure 8.8, there are myriad techniques used to cost-justify an investment in performance management. No one approach to cost-justifying performance management works in all environments, nor all the time in a single environment.

ROI Based on Hard Savings

In most cases hard savings (a reduction in the money that will leave the company as a result of an investment) are the easiest way to get management approval for any IT investment. The typical ROI analysis involves an IT investment that will result in a monthly savings, in which case the usual financial metric is the payback period: the amount of time before the initial investment is recovered. For instance, if a one hundred thousand dollar investment in IT results in a monthly savings of ten thousand dollars, the payback period is ten months. Another common metric is the classic rate of return on the investment22. For example, assume that an IT organization invests $120,000 in a new performance management solution and that this results in a monthly savings of $10,000 for a period of two years. The investment has a payback period of one year, and after two years there is a total savings of $240,000, which is equivalent to a 42% rate of return.

The Cost of Downtime

Another way to demonstrate the value of performance management involves the cost of downtime. For example, the author used to work for Digital Equipment Corporation (DEC). When he was at DEC, it was widely accepted that if communications with one of DEC's just-in-time manufacturing plants was lost, the cost to DEC was roughly one million dollars an hour in lost revenue. As a result, it was often possible to justify making IT investments in order to minimize the probability of losing communications with any of those plants. In contrast, while losing communications to one of DEC's administrative buildings was considered a major inconvenience, it was not seen as a situation that resulted in the company losing revenue. As a result, it was much more difficult to justify an IT investment to minimize the probability of losing communications with any of DEC's administrative buildings.

The cost of downtime varies widely between companies and can also vary widely within a given company.

The discussion of DEC highlights the fact that in order to use the cost of downtime to cost-justify acquiring a performance management solution, there has to be a widely-accepted cost of downtime for at least some component of the IT infrastructure. One company that has a widely-accepted cost of downtime is a US-based casino that cannot be mentioned by name in this handbook. Not too long ago, the casino had an intermittent LAN problem that would take parts of their slot machine floor off line. Before they implemented a performance management solution, it took four hours to find the source of the problem, but after the solution was implemented it took only one hour. According to the then network manager at the casino, "Reducing a four-hour down time to one hour is worth a minimum of $400K to us."

Another way to demonstrate the value of performance management is by showing that reducing downtime not only protects a company's revenue stream, it also protects the productivity of employees. For example, assume that the one thousand employees in the customer service organization at a hypothetical company have an average loaded salary of $50/hour. If the jobs of these employees required that they constantly access applications, it could be argued that an hour's outage costs the company $50,000 in lost productivity each time there is an outage. The success of this argument depends in large part on how much of a productivity loss senior managers actually ascribe to a one-hour outage.

In the preceding examples, the term downtime literally meant that the application was not available. It is also possible to demonstrate the value of performance management based on the degradation of an application. For example, consider a company that uses the SD (Sales and Distribution) component of SAP for sales order entry. If the SD component runs slowly, the company will see an impact on both productivity and lost revenues. Productivity declines because the company's sales organization has to wait for the SD component to respond. Revenues decline when customers get irritated enough to take their business elsewhere. A performance management solution that can quickly identify the cause of such problems should be relatively easy for that company to cost-justify.

22 Computing the ROI of an IT Investment, Jim Metzler, http://www.webtorials.com/main/resource/papers/metzler/pres1/index.htm

Control

Introduction

To effectively control both how applications perform, as well as who has access to which applications, IT organizations must be able to:
• Affect the routing of traffic through the network.
• Classify traffic based on myriad criteria.
• Identify and control the traffic that enters the IT environment over the WAN.
• Prioritize traffic that is business critical and delay sensitive.
• Perform traffic management and dynamically allocate network resources.
• Enforce company policy relative to what devices can access the network.
• Provide virtualized instances of key IT resources.
Route Control

One challenge facing IT organizations that run business-critical applications on IP networks is the variability in the performance of those networks. In particular, during the course of a day, both private and public IP networks exhibit a wide range of delay, packet loss and availability23. Another challenge is the way that routing protocols choose a path through a network. In particular, routing protocols choose the least-cost path through a network. The least-cost path is often computed to be the path with the least number of hops between the transmitting and receiving devices. Some sophisticated routing protocols, such as OSPF, allow network administrators to assign cost metrics so that some paths through the network are given preference over others. However it is computed, the least-cost path through a network does not necessarily correspond to the path that enables the optimum performance of the company's applications.

A few years ago, organizations began to deploy functionality referred to as route control. The goal of route control is to make more intelligent decisions relative to how traffic is routed through an IP network. Route control achieves this goal by implementing a four-step process. Those steps are:

1. Measurement: Measure the performance (i.e., availability, delay, packet loss and jitter) of each path through the network.
2. Analysis and Decision Making: Use the performance measurements to determine the best path. This analysis has to occur in real time.
3. Automatic Route Updates: Once the decision has been made to change paths, update the routers to reflect the change.
4. Reporting: Report on the performance of each path as well as the overall route optimization process.

SSL VPN Gateways

The SSL protocol24 is becoming increasingly popular as a means of providing secure Web-based communications to a variety of users, including an organization's mobile employees. Unlike IPSec, which functions at the network layer, SSL functions at the application layer and uses encryption and authentication as a means of enabling secure communications between two devices, which typically are a web browser on the user's PC or laptop and an SSL VPN gateway that is deployed in a data center location. All common browsers such as Internet Explorer include SSL support by default, but not all applications do. This necessitates either upgrading existing systems to support SSL or deploying an SSL VPN gateway in the data center.

One of the purposes of an SSL VPN gateway is to communicate directly with both the user's browser and the target applications and enable communications between the two. Another purpose of the SSL VPN gateway is to control both access and actions based on the user and the endpoint device. SSL provides flexibility in allowing enterprises to define the level of security that best meets their needs. Configuration choices include:
• Encryption: 40-bit or 128-bit RC4 encryption
• Authentication: Username and password (such as RADIUS), username and token + PIN (such as RSA SecurID), or X.509 digital certificates (such as Entrust or VeriSign)

23 Assessment of VoIP Quality over Internet Backbones, IEEE INFOCOM, 2002
24 IPSec vs. SSL: Why Choose?, http://www.securitytechnet.com/resource/rsc-center/vendor-wp/openreach/IPSec_vs_SSL.pdf
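Returning to route control: the four-step loop (measure, analyze, update, report) can be sketched in a few lines. Everything below is illustrative; the path names, the stand-in random probe values and the scoring weights are assumptions, not taken from any route-control product.

```python
import random  # stands in for real path probes

def measure(paths):
    """Step 1: measure delay (ms), loss (%) and availability per path.
    A real deployment would use active probes; random values stand in here."""
    return {p: {"delay": random.uniform(20, 120),
                "loss": random.uniform(0, 2),
                "available": True} for p in paths}

def best_path(metrics):
    """Step 2: analyze in real time by scoring each available path.
    The weights are invented; hop counts are deliberately ignored."""
    def score(m):
        return m["delay"] + 50 * m["loss"]   # penalize loss heavily
    candidates = {p: m for p, m in metrics.items() if m["available"]}
    return min(candidates, key=lambda p: score(candidates[p]))

def update_routes(path):
    """Step 3: push the decision to the routers (stubbed out here)."""
    print(f"preferring path {path}")

def report(metrics, chosen):
    """Step 4: report per-path performance and the optimization decision."""
    for p, m in metrics.items():
        print(f"{p}: delay={m['delay']:.0f}ms loss={m['loss']:.2f}%")
    print(f"chosen: {chosen}")

paths = ["ISP-A", "ISP-B"]
m = measure(paths)
chosen = best_path(m)
update_routes(chosen)
report(m, chosen)
```

The point of the sketch is the division of labor: measurement feeds a real-time scoring decision, and only then are routes updated, rather than letting hop-count-based least-cost routing decide.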
Some of the criteria that IT organizations should use when choosing an SSL VPN gateway include that the gateway should be:
• Easy to deploy, administer and use
• Low cost over the lifecycle of the product
• Transparent
• Capable of supporting non-traditional devices, e.g., smartphones and PDAs
• Able to check the client's security configuration
• Able to provide access to both data and the appropriate applications
• Highly scalable
• Capable of supporting granular authorization policies
• Able to support performance enhancing functionality such as caching and compression
• Capable of providing sophisticated reporting

Virtualization

Until recently, data centers were designed in such a way that they provided isolated silos of resources. That means that computing and storage resources were available only to the application that they were deployed to support. While this design was easy to support from the perspective of being able to control the IT resources, it led to significant under-utilization of the resources. For example, in a legacy data center, server utilization is typically between 10 and 30 percent and storage utilization is often less than 20 percent. Increasing the utilization of computing and storage resources can dramatically reduce the TCO associated with a data center. As a result, many IT organizations are adopting a data center design that is based on providing virtual server and storage resources that are available to all applications. This design, however, creates some significant control challenges for the network operations group. In particular, network operations groups are now challenged to ensure that the virtual resources are utilized enough that the TCO is significantly reduced. To achieve this goal, the network operations groups need to be able to control the virtual resources to ensure that these resources are not utilized so much that performance is impacted.

Another form of virtualization that is used to control IT resources is application virtualization, both client side and server side. Client-side application virtualization enables applications to be managed in a centralized application hub. Instead of being installed on each device, applications become an on-demand service: the application is streamed to the user's machine and run in a controlled, isolated environment. In a server-side application virtualization environment, a single instance of the client application is installed on one or more servers within the data center. The user interface (logical layer) is abstracted from the application processing (physical layer) that occurs on a centralized, secured server. The application executes entirely on the server while its interface is displayed on the user's device. This technology is often used for delivering client/server applications because it mitigates some of the complexities of deploying, managing, updating and securing client software on each individual user's access device. Technology such as caching is often employed to increase the availability of the application.

Recently there has been a similar movement to virtualize the desktop. As defined in Wikipedia25, desktop virtualization involves separating the physical location where the PC desktop resides from where the user is accessing the PC. The desktop typically resides, e.g., at the office or in a data center, while the user is located elsewhere, perhaps traveling, in a hotel room, at an airport or in a different city. A remotely accessed PC is typically either located at home, at the office or in a data center. The desktop virtualization approach can be contrasted with a traditional local PC desktop, where the user directly accesses the desktop operating system and all of its peripherals physically, using the local keyboard, mouse and video monitor hardware directly. When a desktop is virtualized, its keyboard, mouse and video display (among other things) are typically redirected across a network via a desktop protocol such as RDP, ICA, VNC, etc. The network connection carrying this virtualized desktop information is known as a "desktop access session."

One of the more common ways to virtualize desktops is through a model that is typically referred to as the shared desktop model. In this model, a multi-user server PC environment is used to host many users who all share a common PC desktop environment together on a server machine.

25 http://en.wikipedia.org/wiki/Desktop_Virtualization

Traffic Management and QoS

Traffic Management refers to the ability of the network to provide preferential treatment to certain classes of traffic. It is required in those situations in which bandwidth is scarce and there are one or more delay-sensitive, business-critical applications. Two examples of this type of application that have been discussed previously in this handbook are VoIP and the Sales and Distribution (SD) module of SAP. The focus of the organization's traffic management processes must be the company's applications, and not solely the megabytes of traffic traversing the network. Some of the key steps in a traffic management process include:

Discovering the Application
Application discovery must occur at Layer 7, because as discussed in Chapter 7 many applications share the same port, or even hop between ports. As a result, the traffic management solution must have application awareness. This often means detailed Layer 7 knowledge of the application. For example, information gathered at Layer 4 or lower allows a network manager to assign a lower priority to their Web traffic than to other WAN traffic. Without information gathered at Layer 7, however, network managers are not able to manage the company's applications to the degree that allows them to assign a higher priority to some Web traffic than to other Web traffic.

Profiling the Application
Once the application has been discovered, it is necessary to determine the key characteristics of that application.

Quantifying the Impact of the Application
Since many applications share the same WAN physical or virtual circuit, these applications will tend to interfere with each other. In this step of the process, the degree to which a given application interferes with other applications is identified.

Assigning Appropriate Bandwidth
Once the organization has determined the bandwidth requirements and identified the degree to which a given application interferes with other applications, it may now assign bandwidth to an application. In some cases it will do this to ensure that the application performs well: that is, to ensure that the application receives the required amount of bandwidth, or alternatively does not receive too much bandwidth.

Another important factor in traffic management is the ability to effectively control inbound and outbound traffic. Queuing mechanisms, which form the basis of traditional Quality of Service (QoS) functionality, control bandwidth leaving the network, but do not address traffic coming into the network, where the bottleneck usually occurs. Technologies like TCP Rate Control tell the remote servers how fast they can send content, providing true bi-directional management.
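The discover/profile/assign steps above can be sketched as follows. The Layer 7 "signatures" and the profile values are illustrative stand-ins (real products inspect application payloads in far more depth); the idea of a per-session allocation, such as 50 Kbps per SAP session, is discussed in this handbook.

```python
# Illustrative sketch of the discover/profile/assign steps.
# The payload markers below are invented; real products use deep inspection.

SIGNATURES = {          # Layer 7 discovery: payload marker -> application
    b"MAPI": "exchange",
    b"SAP_SD": "sap",
    b"RTP": "voip",
}

PROFILES = {            # profiling: how the application behaves on the WAN
    "voip":     {"latency_sensitive": True,  "kbps_per_session": 64},
    "sap":      {"latency_sensitive": True,  "kbps_per_session": 50},
    "exchange": {"latency_sensitive": False, "kbps_per_session": None},
}

def discover(payload: bytes) -> str:
    """Classify a flow by payload content, not just its port number."""
    for marker, app in SIGNATURES.items():
        if marker in payload:
            return app
    return "best-effort"

def assign_bandwidth(app: str, sessions: int) -> int:
    """Per-session allocation (e.g. 50 Kbps per SAP session) when profiled;
    0 means no reservation, i.e. the flow is treated as best effort."""
    profile = PROFILES.get(app)
    if profile and profile["kbps_per_session"]:
        return profile["kbps_per_session"] * sessions
    return 0

app = discover(b"...SAP_SD order entry...")
print(app, assign_bandwidth(app, 5))   # sap 250
```

The per-session form of the allocator shows why it frees the IT organization from predicting the number of simultaneous sessions: the reservation simply scales with the session count observed at run time.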
In other cases, the organization will assign bandwidth to an application primarily to ensure that the application does not interfere with the performance of other applications. In some solutions it is possible to assign bandwidth relative to a specific application such as SAP; for example, the IT organization might decide to allocate 256 Kbps for SAP traffic. In some other solutions it is possible to assign bandwidth to a given session; for example, the IT organization could decide to allocate 50 Kbps to each SAP session. The advantage of the latter approach is that it frees the IT organization from having to know how many simultaneous sessions will take place. Hence, due to the dynamic nature of the network and application environment, it is highly desirable to have the bandwidth assignment be performed dynamically in real time as opposed to using pre-assigned static metrics, thereby avoiding the problem of over-provisioning WAN bandwidth in an effort to ensure high performance.

In addition, when queues get oversubscribed (e.g., by voice traffic), degradation can occur across all connections. Some solutions provide QoS mechanisms to independently prioritize packets based on traffic class and latency sensitivity. Implementing QoS based on aggregate queues and class of service is often sufficient to prioritize applications. However, an "access control" or "per call" QoS is sometimes required to establish acceptable quality.

Many IT organizations implement QoS via queuing functionality found in their routers. Other IT organizations implement QoS by deploying MPLS-based services. The use of MPLS services is one more factor driving the need for IT organizations to understand the applications that transit the network. This is the case because virtually all carriers have a pricing structure for MPLS services that includes a cost for the access circuit and the port speed. In addition, carriers often charge for the CoS (class of service) profile. Most carriers also charge for a variety of advanced services, including network-based firewalls and IP multicast.

While most carriers offer between five and eight service classes, for simplicity assume that a carrier only offers two service classes. One of the service classes is referred to as real time and is intended for applications such as voice and video. The other is called best effort and is intended for any traffic that is not placed in the real-time service class. The CoS profile refers to how the capacity of the service is distributed over these two service classes. For example, if all of the traffic were assigned to the real-time traffic class, that would cost more than a 50-50 split in which half the traffic were assigned to real time and half to best effort. Analogously, a 50-50 split would cost more than if all of the traffic was assigned to best effort. As a result, assigning more capacity to the real-time class than is necessary will increase the cost of the service. However, assigning less capacity to this class than is needed will likely result in poor voice quality, since most carriers drop any traffic that exceeds what the real-time traffic class was configured to support.

Status of Quality of Service Implementation

This section of the Handbook identifies the current and planned deployment of QoS, the decision-making process involved, and the issues that must be considered in the implementation and ongoing auditing and managing of QoS functionality.

Motivation for Deploying QoS

Survey Respondents were asked to indicate whether or not they have implemented a QoS policy to prioritize traffic. Their responses are contained in Table 9.1.

Table 9.1: QoS Deployment (response options: yes, we've implemented QoS; no, but we're planning to within the next 12 months; no, and have no plans to implement it at this time; don't know)
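Returning to the class-of-service pricing trade-off described above, a small example makes it concrete. The per-Kbps rates below are invented for illustration; only the structure (real-time capacity costs more than best effort, so cost rises with the real-time fraction) reflects the text.

```python
# Hypothetical MPLS CoS tariff: capacity placed in the real-time class costs
# more per Kbps than best effort, so over-allocating real time wastes money,
# while under-allocating it risks dropped voice traffic. Rates are invented.

PRICE_PER_KBPS = {"real_time": 0.30, "best_effort": 0.10}  # $/Kbps/month, assumed

def monthly_cos_cost(port_kbps: int, real_time_fraction: float) -> float:
    """Monthly CoS charge for a given split of the port between two classes."""
    rt = port_kbps * real_time_fraction
    be = port_kbps - rt
    return rt * PRICE_PER_KBPS["real_time"] + be * PRICE_PER_KBPS["best_effort"]

for split in (0.0, 0.5, 1.0):
    print(f"{int(split * 100)}% real time: ${monthly_cos_cost(1536, split):,.2f}/month")
# The cost rises monotonically with the real-time fraction, as the text notes:
# all-best-effort is cheapest, a 50-50 split costs more, all-real-time most.
```

This is why understanding which applications transit the network matters: the CoS profile has to be sized from actual real-time traffic volumes, not guesses.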
The percentage of IT organizations that either have implemented QoS or plan to do so within 12 months is just over 83%. It is, of course, unlikely that all of those planning to implement actually will do so within the indicated timeframe, due to anything from a budget or headcount freeze to a change in priorities. Nonetheless, it would seem likely that somewhere on the order of 70% of IT organizations will have deployed some form of QoS within the next year.

The majority of IT organizations have already deployed QoS, and at least 70% will have done so within the next 12 months.

Survey Respondents were also asked to indicate the primary reason why they had implemented QoS. Their responses are contained in Table 9.2.

Table 9.2: Reasons for Implementing QoS (response options: to support VoIP; to support latency sensitive applications other than VoIP; as part of a WAN services offering, e.g., MPLS; to support VoIP/other latency sensitive applications on an MPLS WAN; in response to performance issues we were experiencing; don't know; other)

While VoIP is a major driver of QoS deployment, so are other latency-sensitive applications such as Citrix Presentation Server.

The Pharmaceutical Consultant stated that to date his organization has deployed QoS on all of their roughly 250 WAN routers; however, they have not deployed QoS on their LAN routers. He stated that initially their deployment of QoS was driven by the need to support a variety of Citrix Presentation Server-based applications as well as their use of video over IP. They are also starting to use QoS to support their ongoing deployment of VoIP. Because of that, they are in the process of extending QoS functionality to their LAN routers. He added that they now also use QoS on a tactical basis; that is, they include the possibility of using QoS as part of their problem-solving portfolio when confronting an issue such as a poorly-performing application or a congested link.

The Medical Manager stated that his organization had deployed QoS on all of their WAN routers, and he pointed out that they have not implemented VoIP. The Medical Manager added that they deployed QoS primarily to support Citrix-based applications running over an MPLS network.

The Mechanics of QoS Deployments

Establishing Priorities
Survey Respondents were asked to indicate how they either did or will decide which applications are given priority. Their responses are contained in Table 9.3.

Table 9.3: Decision Process for Prioritization (response options: network team made the decision; discussed with the application group; discussed with business managers; recommendation from third party; don't know; other)

Although the primary responsibility for setting QoS priorities rests with the network groups, in most IT organizations the decision is discussed with other affected groups.

The Medical Manager stated that they decided which applications were given priority after discussions with the application group and business managers. The Pharmaceutical Consultant indicated that in addition to discussing application prioritization with the application group, his organization also used recommendations from a third party as part of their deployment of QoS. In particular, he stated that they followed the Cisco guidelines for QoS deployment. Those guidelines specify eight classes of service and indicate which traffic types are assigned to each class. Some of the recommended traffic types are fairly obvious, such as voice, video and traffic that is both mission-critical and transactional. Some of the traffic types, however, are somewhat less obvious. This includes network management traffic, as well as routing information.

His organization implemented four classes of service, and placed Citrix Presentation Server, SQL and some network management traffic in the highest service class. They placed internal email in the next highest service class, followed by Web traffic in the next service class, and finally external email in the best effort service class.

The Survey Respondents were also asked to indicate the degree of granularity of their QoS policy. Their responses, which are depicted in Figure 9.1, show the percentage of respondents who have implemented the indicated number of QoS categories.

Figure 9.1: Granularity of QoS Implementations (percentage of respondents by number of QoS categories, from 2 through 8 or more)

Most organizations use four or fewer classes of service.

Planning and On-Going Management
One of the key tasks that many IT organizations perform prior to implementing QoS is to baseline the network to determine the traffic characteristics. Both The Medical Manager and The Pharmaceutical Consultant stated that they baselined their network prior to implementing QoS. The Medical Manager stated that this provided his organization with information that they used in the discussions they had with the application group and business managers relative to identifying which types of traffic should have priority. They use this information in order to intelligently assign QoS categories. The Pharmaceutical Consultant stated that the fact that they baselined their network prior to implementing QoS enabled them to tweak their initial plans and have a more successful deployment.

While important, few IT organizations establish response time objectives for priority applications prior to deployment. The Medical Manager stated that his organization did not do this, and The Pharmaceutical Consultant stated that his organization did.

Both The Medical Manager and The Pharmaceutical Consultant stated that their organizations perform ongoing monitoring of their QoS implementation. The Medical Manager stated that they regularly check router reports to make sure that their QoS policy is being enforced properly and that the Citrix Presentation Server traffic is receiving the network resources that it needs. The Pharmaceutical Consultant said that for selected applications they provide ongoing monitoring of the application's response time. They do this in part to monitor their QoS deployment, but just for video. They also do this to identify other issues, such as a poorly performing server, that can impact application performance.
Baselining of pre-QoS performance and ongoing monitoring of QoS implementations are important tools for guaranteeing success.

Next Generation WAN Firewall

The Use of Well-Known Ports, Registered Ports, and Dynamic Ports

Chapter 7 pointed out that the ports numbered from 0 to 1023 are reserved for privileged system-level services and are designated as well-known ports. As a reminder, a well-known port serves as a contact point for a client to access a particular service over the network. For example, port 80 is the well-known port for HTTP data exchange and port 443 is the well-known port for secure HTTP exchanges via HTTPS. Port numbers in the range 1024 to 49151 are reserved for Registered Ports, which are statically assigned to user-level applications and processes. For example, SIP uses ports 5060-5061. Port numbers between 49152 and 65535 are reserved for Dynamic Ports, which are sometimes referred to as Private Ports. A number of applications do not use static port assignments, but instead select a port dynamically as part of the session initiation process.

Current Generation Firewalls

The first generation of firewalls was referred to as packet filters. These devices functioned by inspecting packets to see if the packet matched the packet filter's set of rules. Packet filters acted on each individual packet (i.e., on the 5-tuple consisting of the source and destination addresses, the protocol and the port numbers) and did not pay any attention to whether or not a packet was part of an existing stream or flow of traffic.

Today most firewalls are based on stateful inspection. According to Wikipedia26, "A stateful firewall is able to hold in memory significant attributes of each connection, from start to finish. These attributes, which are collectively known as the state of the connection, may include such details as the IP addresses and ports involved in the connection and the sequence numbers of the packets traversing the connection. The most CPU intensive checking is performed at the time of setup of the connection. All packets after that (for that session) are processed rapidly because it is simple and fast to determine whether it belongs to an existing, pre-screened session."

One of the primary reasons that stateful inspection was added to traditional firewalls was to track the sessions of whitelist applications that use dynamic ports. The firewall observes the dynamically selected port number, opens the required port at the beginning of the session, and closes the port at the end of the session. Once the session has ended, its entry in the state-table is discarded.

A recent enhancement of the current generation firewall has been the addition of some limited forms of application level attack protection. For example, some current generation firewalls have been augmented with IPS/IDS functionality that uses deep packet inspection to screen suspicious-looking traffic for attack signatures or viruses. One reason that traditional firewalls focus on the packet header is that firewall platforms generally have limited processing capacity due to architectures that are based on software running on an industry standard CPU. These limitations in processing power prevent deep packet inspection from being applied to more than a small minority of the packets traversing the device.

Most current generation firewalls make two fundamental assumptions, both of which are flawed. The first assumption is that the information contained in the first packet of a connection is sufficient to identify the application and the functions being performed by the application. However, in many cases it takes a number of packets to make this identification, because the application end points can negotiate a change in port number or perform a range of functions over a single connection.

26 http://en.wikipedia.org/wiki/Stateful_firewall
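The state-table behavior described above can be illustrated with a minimal sketch. This is Python purely for illustration; real firewalls implement the logic in optimized packet-processing paths, and the rule format here is an assumption of the example.

```python
from typing import NamedTuple

# A connection is identified by its 5-tuple.
class FiveTuple(NamedTuple):
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"

class StatefulFirewall:
    """Toy stateful-inspection firewall: expensive rule checks happen
    only at session setup; established sessions take the fast path;
    the state-table entry is discarded when the session ends."""

    def __init__(self, allow_rules):
        # allow_rules: set of (dst_port, protocol) pairs permitted at setup
        self.allow_rules = allow_rules
        self.state_table = set()  # established, pre-screened sessions

    def admit(self, conn: FiveTuple, syn: bool = False) -> bool:
        if conn in self.state_table:
            return True                      # fast path: known session
        if syn and (conn.dst_port, conn.protocol) in self.allow_rules:
            self.state_table.add(conn)       # CPU-intensive check at setup only
            return True
        return False

    def close(self, conn: FiveTuple) -> None:
        self.state_table.discard(conn)       # entry discarded at session end
```

For a dynamically negotiated port, the firewall would add a second state-table entry when the control session announces the new port, which is exactly the tracking of dynamic-port applications described above.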
The second assumption is that the TCP and UDP well-known and registered port numbers are always used as specified by IANA. However, while that may well have been the case twenty years ago, it is often not the case today. In particular, some applications have been designed with the ability to hop between ports. In a recent survey, 82% of the survey respondents indicated that they were concerned about the fact that traditional firewalls focus on well-known ports and hence are not able to distinguish the various types of traffic that transit port 80.

Another blind spot of current generation firewalls is HTTP traffic that is secured with SSL (HTTPS). HTTPS is normally assigned to well-known TCP port 443. Since the payload of these packets is encrypted with SSL, the traditional firewall cannot use deep packet inspection to determine if the traffic either poses a threat or violates enterprise policies for network usage.

These two blind spots are growing in importance because they are being exploited with increasing frequency by application-based intrusions and policy violations.

Enterprise Requirements

The Global Architect stated that they found themselves in the situation where they had a security policy and no ability to enforce it, as they could not be sure of what applications were being used. In particular, they house their Web servers inside their DMZ so they can control the applications that run on those servers. As a result, they were not concerned about managing the inbound connections to their Web servers. However, they do not know what applications are running on their other servers, and so they do not know what outbound traffic is being generated. Part of the concern of The Global Architect is that if they were running programs such as BitTorrent, they were vulnerable to being charged with breaking copyright laws. In addition, if they were supporting recreational applications such as Internet Radio, they were wasting a lot of WAN bandwidth.

The Global Architect said that in theory an IT organization could mitigate the limitations of a traditional firewall by implementing a traditional firewall combined with other security related functionality such as an IPS. However, he stressed that this is only a theory, because the IT organization would never have enough knowledge of the applications to make this work. He summed up his feelings by saying, "You think that you are in a secure environment. Unfortunately, at the end of the day a lot of applications that were declared as outlaws are still running on your network."

Asked about the limitations of traditional firewalls, The Senior Director said that traditional firewalls do not provide any application layer filtering, so if you are attacked above Layer 3 "you are toast". This point was picked up on by The Senior Director, who stated that his organization had been looking at adding other security functionality such as IDS, IPS and NAC. What he wanted, however, was to avoid the complexity of having a large number of security appliances. He preferred to have a 'firewall on steroids' provide all this functionality.

It is understandable that IT organizations have deployed work-arounds to attempt to make up for the limitations of traditional firewalls. Unfortunately, due to the lack of a 'firewall on steroids' that could provide the necessary security functionality, IT organizations have resorted to implementing myriad firewall helpers27. This approach, however, has serious limitations, including the fact that the firewall helpers often do not see all of the traffic, and the deployment of multiple security appliances significantly drives up the operational costs and complexity.

As pointed out in Chapter 7, firewalls are typically placed at a point where all WAN access for a given site coalesces. As such, this is the logical place for a policy and security control point for the WAN.

A Next Generation Firewall

The comments of The Global Architect and The Senior Director serve to underscore some of the unnatural networking that has occurred over the last decade.

27 Now Might Be a Good Time to Fire Your Firewall, http://ziffdavisitlink.leveragesoftware.com/blog_post_view.aspx?BlogPostID=603398f2b87548ef9d51d35744dcdda4
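The port 80 and port 443 blind spots exist because classification stops at the packet header. Deep packet inspection instead matches the payload against application signatures. The sketch below is illustrative only: the BitTorrent entry is that protocol's documented handshake prefix, while the other "signatures" are deliberately simplified stand-ins, not entries from a real signature library.

```python
# Toy signature-based application identification: look into the payload,
# not just the header 5-tuple, so traffic sharing port 80 (or hopping
# ports) can be told apart.

APP_SIGNATURES = {
    "bittorrent": b"\x13BitTorrent protocol",  # real handshake prefix
    "http":       b"GET ",                     # simplified; real HTTP parsing is richer
    "sip":        b"INVITE sip:",              # simplified SIP request line
}

def identify_application(payload: bytes) -> str:
    """Return the first matching application name, or 'unknown'."""
    for app, signature in APP_SIGNATURES.items():
        if payload.startswith(signature):
            return app
    return "unknown"
```

Note that this only works on cleartext payloads; for HTTPS the firewall must first decrypt the SSL payload, which is exactly why SSL proxy functionality appears in the attribute list that follows.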
In order for the firewall to avoid these limitations and reestablish itself as the logical policy and security control point for the WAN, given today's complex applications, what is needed is a next generation firewall with the following attributes:

Application Identification
The firewall must be able to use deep packet inspection to look beyond the IP header 5-tuple into the payload of the packet to find application identifiers. Application identification will eliminate the port 80 blind spot and allow the tracking of port-hopping applications. Since there is no standard way of identifying applications, there needs to be an extensive library of application signatures developed that includes identifiers for all commonly used enterprise applications, recreational applications, and Internet applications. The library needs to be easily extensible to include signatures of new applications and custom applications. Firewall programmability continues to grow in importance, with the number of new vulnerabilities cataloged by CERT hovering in the vicinity of 8,000/year.

Extended Stateful Inspection
By tracking application sessions beyond the point where dynamic ports are selected, the firewall will have the ability to support the detection of application-level anomalies that signify intrusions or policy violations.

SSL Decryption/Re-encryption
The firewall will need the ability to decrypt SSL-encrypted payloads to look for application identifiers/signatures. Once this inspection is performed and policies applied, allowed traffic would be re-encrypted before being forwarded to its destination. SSL proxy functionality, together with application identification, will eliminate the port 443 blind spot.

Control
Traditional firewalls work on a simple deny/allow model. In this model, everyone can access an application that is deemed to be good, and nobody can access an application that is deemed to be bad. This model had more validity at a time when applications were monolithic in design and before the Internet made a wide variety of applications available. Today's reality is that an application that might be bad for one organization might well be good for another. Analogously, an application that might be bad for one part of an organization might be good for other parts of the organization. On an even more granular level, a component of an application might be bad for one part of an organization, but that same component might well be good for other parts of the organization. What is needed, then, is not a simple deny/allow model, but a model that allows IT organizations to set granular levels of control that allow the good aspects of an application to be accessed by the appropriate employees while blocking all access to the bad aspects of an application.

Multi-gigabit Throughput
In order to be deployed in-line as an internal firewall on the LAN or as an Internet firewall for high-speed access lines, the next generation firewall will need to perform the above functions at multi-gigabit speeds. These high speeds will be needed to prevent early obsolescence as the LAN migrates to 10 GbE aggregation and core bandwidths, and as Internet access rates move to 1 Gbps and beyond via Metro Ethernet. Application identification and SSL processing at these speeds require a firewall architecture that is based on special-purpose programmable hardware rather than on industry standard general-purpose processors.

When asked about the attributes that he expects in a next generation firewall, The Global Architect said that the ability to learn about applications on the fly was a requirement, as was the need to run at multi-gigabit speed. Critical to The Global Architect is the ability to tie an event to a user. To exemplify that he said, "If somebody is communicating using BitTorrent, my ability to tie that application to a user is critical. I can do that with a traditional firewall, but it is a management nightmare." He also stressed the importance of reporting and alerting when he said, "I need the ability to push security-related information from engineering to the help desk." The next generation firewall must not be so complicated that the average help desk analyst cannot input a rule set. It must also be simple enough for the people at the help desk to be able to use it to analyze what is going on. The Senior Director agreed on the importance of application level visibility and high performance.

Conclusion

For the foreseeable future, the importance of application delivery is much more likely to increase than it is to decrease. Analogously, for the foreseeable future the impact of the factors that make application delivery difficult, many of which sections 4 and 7 discussed, is much more likely to increase than it is to decrease. To deal with these two forces, IT organizations need to develop a systematic approach to application delivery. Given the complexity associated with application delivery, this approach cannot focus on just one component of the task, such as network and application optimization. To be successful, IT organizations must implement an approach to application delivery that integrates the key components of planning, network and application optimization, management and control.

This handbook identified a number of conclusions that IT organizations can use when formulating their approaches to ensuring acceptable application delivery. Those conclusions are:

• The complexity associated with application delivery will increase over the next few years.
• Given the breadth and extent of the input from both IT organizations and leading edge vendors, this handbook represents a broad consensus on a framework that IT organizations can use to improve application delivery.
• A goal of this handbook is to help IT organizations develop the ability to minimize the occurrence of application performance issues and to both identify and quickly resolve issues when they occur.
• If you work in IT, you either develop applications or you deliver applications.
• Application delivery is more complex than just network and application acceleration.
• Application delivery needs a top-down approach, with a focus on application performance.
• Companies that want to be successful with application delivery must understand their current and emerging application environments.
• Successful application delivery requires the integration of tools and processes.
• Successful application delivery requires the integration of planning, network and application optimization, management and control.
• Senior IT management needs to ensure that their organization evolves to where it looks at application delivery holistically and not just as an increasing number of stove-piped functions.
• In the vast majority of instances when a key business application is degrading, the end user, not the IT organization, first notices the degradation.
• In situations in which the end user is typically the first to notice application degradation, IT ends up looking like bumbling idiots.
• The current approach to managing application performance reduces the confidence that the company has in the IT organization.
• To be successful, application delivery solutions must function in a highly dynamic environment. This drives the need for both the dynamic setting of parameters and automation.
• In the majority of cases, when people access an application they are accessing it over the WAN.
• One effect of data-center consolidation and single hosting is additional WAN latency for remote users.
• A relatively small increase in network delay can result in a very significant increase in application delay.
• Only 14% of IT organizations claim to have aligned application delivery with application development.
• Eight percent (8%) of IT organizations state they plan and holistically fund IT initiatives across all of the IT disciplines. Twelve percent (12%) of IT organizations state that troubleshooting IT operational issues occurs cooperatively across all IT disciplines.
• In the vast majority of situations, there is at most a moderate emphasis during the design and development of an application on how well that application will run over a WAN.
• People use the CYA approach to application delivery to show it is not their fault that the application is performing badly. In contrast, the goal of the CIO approach is to identify and then fix the problem.
• Hope is not a strategy.
• Successful application delivery requires that IT organizations identify the applications running on the network and ensure the acceptable performance of the applications that are relevant to the business while controlling or eliminating irrelevant applications.
• It is extremely difficult to make effective network and application-design decisions if the IT organization does not have well-understood and adhered-to targets for application performance.
• Successful application delivery requires careful planning coupled with extensive measurements and effective proactive and reactive processes.
• Many IT professionals view the phrase Web 2.0 as either just marketing hype that is devoid of any meaning, or they associate it exclusively with social networking sites such as MySpace.
• In addition to a services focus, Web 2.0 characteristics include featuring content that is dynamic, rich and, in many cases, user created.
• Emerging application architectures (SOA, RIA, Web 2.0) have already begun to impact IT organizations, and this impact will increase over the next year.
• The webification of applications introduces chatty protocols into the network. In addition, some of these protocols (e.g., XML) tend to greatly increase the amount of data that transits the network and is processed by the servers.
• The existing generation of network and application optimization solutions does not deal with a key requirement of Web 2.0 applications: the need to massively scale server performance.
• While server consolidation produces many benefits, it can also produce some significant performance issues.
• Just as WAN performance impacts n-tier applications more than monolithic applications, WAN performance impacts Web services-based applications significantly more than it impacts n-tier applications.
• Every component of an application-delivery solution has to be able to support the company's traffic patterns, whether they are one-to-many, many-to-many or some-to-many.
• A tool that is unduly complex is of no use to an IT organization.
• The vast majority of IT organizations see significant value from a tool that can be used to test application performance throughout the application lifecycle.
• Small amounts of packet loss can significantly reduce the maximum throughput of a single TCP session.
• With a 1% packet loss and a round trip time of 50 ms or greater, the maximum throughput is roughly 3 megabits per second no matter how large the WAN link is.
• When an application experiences degradation, virtually any component of IT could be the source of the problem.
• Identifying the root cause of application degradation is significantly more difficult than identifying the root cause of a network outage.
• To be successful with application delivery, IT organizations need tools and processes that can identify the root cause of application degradation and which are accepted as valid by the entire IT organization.
• IT organizations will not be successful with application delivery as long as the end user, and not the IT organization, first notices application degradation.
• A primary way to balance the requirements and capabilities of the application development and the application-delivery functions is to create an effective architecture that integrates those two functions.
• The application-delivery function needs to be involved early in the applications development cycle.
• IT organizations will not be regarded as successful if they do not have the capability to both develop applications that run well over the WAN and also plan for changes such as data center consolidation and the deployment of VoIP.
• Organizations should baseline their network by measuring 100% of the actual traffic from real users.
• IT organizations need to modify their baselining activities to focus directly on delay.
• To deploy the appropriate network and application optimization solution, IT organizations need to understand the problem they are trying to solve.
• In order to understand the performance gains of any network and application-optimization solution, organizations must test that solution in an environment that closely reflects the environment in which it will be deployed.
• When choosing a network and application optimization solution, organizations must ensure that the solution can scale to provide additional functionality over what they initially require.
• IT organizations often start with a tactical deployment of WOCs and expand this deployment over time.
• The deployment of WAN Optimization Controllers will increase significantly.
• An AFE provides more sophisticated functionality than an SLB does.
• In the vast majority of instances, the NOC tends to work on a reactive basis, identifying a problem only after it impacts end users.
• Organizational discord and ineffective processes are at least as much of an impediment to the successful management of application performance as are technology and tools.
• There is a wide range of approaches relative to how IT organizations approach MTTR.
• Improving processes (including training and the development of cross-domain responsibility) is an important part of improving the MTTR of application performance.
• NOC personnel spend an appreciable amount of their time supporting a broad range of IT functionality.
• NOC personnel spend the greatest amount of time on applications, and that is a relatively new phenomenon.
• Given where NOC personnel spend their time, the NOC should be renamed the Applications Operations Center.
• Just under half of NOCs are organized around functional silos.
• A majority of NOCs use many management tools that are not well integrated.
• The NOC is almost as likely to monitor performance as it is to monitor availability.
• In the majority of cases, the NOC gets involved in problem resolution.
• In over a quarter of organizations, the NOC does not meet the organization's current needs.
• The top driver of change in the NOC is the requirement to place greater emphasis on ensuring acceptable performance for key applications.
• The lack of management vision and the NOC's existing processes are almost as big a barrier to change as are the lack of personnel resources and funding.
• Most IT organizations ignore the majority of the performance alarms.
• The network is usually not the source of application performance degradation, although that is still the default assumption within most IT organizations.
• Logical factors are almost as frequent a source of application performance and availability issues as are device-specific factors.
• Application management should focus directly on the application and not just on factors that have the potential to influence application performance.
• End-to-end visibility refers to the ability of the IT organization to examine every component of IT that impacts the communications, from when users hit ENTER or click the mouse to when they receive responses from an application.
• To enable cross-functional collaboration, it must be possible to view all relevant management data from one place.
• Lack of visibility into the traffic that transits port 80 is a major vulnerability for IT organizations.
• There is a lot of interest in ITIL, but it is too soon to determine how impactful the use of ITIL will be.
• Choosing and implementing the proper application performance management solution can greatly reduce MTTR and improve cooperation between different IT teams.
• Reducing MTTR requires both credible tools and an awareness of and attention to technical and nontechnical factors.
• Although the primary responsibility for setting QoS priorities rests with the network group, in most IT organizations the decision is discussed with the other affected groups. In many instances it can be as much a political process as a technological one.
• The focus of the organization's traffic management processes must be the company's applications, and not merely the megabytes of traffic traversing the network.
• Most organizations use four or fewer classes of service.
• While VoIP is a major driver of QoS deployments, so are other latency-sensitive applications such as Citrix Presentation Server.
• The majority of IT organizations have already deployed QoS, and at least 70% will have done so within the next 12 months.
• Baselining of pre-QoS performance and ongoing monitoring of QoS implementations are important tools for guaranteeing success.
• The cost of downtime varies widely between companies and can also vary widely within a given company.
• The vast majority of IT organizations must cost-justify an investment in performance management.
• No one approach to cost-justifying performance management works in all environments, nor all the time in a single environment.
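Two of the quantitative conclusions above can be checked with back-of-the-envelope formulas: a chatty protocol pays one round trip per application turn, so a small RTT increase is multiplied by the number of turns; and the widely used Mathis approximation bounds a single TCP session's throughput at roughly (MSS / RTT) x (1.22 / sqrt(p)) regardless of link capacity. The traffic numbers below are hypothetical, chosen only to illustrate the effects.

```python
from math import sqrt

def app_response_time_s(turns: int, rtt_s: float,
                        payload_bytes: float, bandwidth_bps: float) -> float:
    """Response time of a chatty application: every application turn
    pays one round trip, plus serialization of the payload."""
    return turns * rtt_s + payload_bytes * 8 / bandwidth_bps

def tcp_max_throughput_bps(mss_bytes: float, rtt_s: float, loss: float,
                           c: float = 1.22) -> float:
    """Mathis approximation: throughput <= (MSS / RTT) * C / sqrt(p)."""
    return (mss_bytes * 8 / rtt_s) * c / sqrt(loss)

# A hypothetical 200-turn protocol moving 100 KB over a 10 Mbps link:
# raising RTT from 1 ms (LAN) to 50 ms (WAN) raises response time from
# ~0.3 s to ~10 s -- a small network-delay increase, a very large
# application-delay increase.
lan = app_response_time_s(200, 0.001, 100_000, 10_000_000)
wan = app_response_time_s(200, 0.050, 100_000, 10_000_000)

# 1% loss, 50 ms RTT, 1460-byte MSS: ~2.85 Mbps no matter how big the
# link -- consistent with the "roughly 3 Mbps" conclusion above.
cap = tcp_max_throughput_bps(1460, 0.050, 0.01)

print(f"LAN {lan:.2f} s, WAN {wan:.2f} s, TCP cap {cap / 1e6:.2f} Mbps")
```

The second function also shows why small amounts of packet loss matter so much: throughput scales with 1/sqrt(p), so going from 0.01% to 1% loss cuts the bound by a factor of ten.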
survey finds http://www.networkworld.networkworld.networkworld.html Network managers plugged into the importance of application delivery http://www.html http://www.networkworld.com/newsletters/frame/2006/1113wan1.com/newsletters/frame/2005/0627wan2.html Application Acceleration that Focuses on the Application.html Why adding bandwidth does nothing to improve application performance http://www. Part 2 http://www.com/newsletters/frame/2005/0321wan1.networkworld.html http://www.com/newsletters/frame/2006/0403wan1.html How TCP acceleration could be used for WAN optimization http://www.networkworld. Microsoft: The battle over the branch office.com/newsletters/frame/2005/0314wan1.networkworld.com/newsletters/frame/2006/0821wan1.networkworld.html http://www.html WAN optimization tips Insight from the road How TCP ensures smooth end-to-end performance http://www.networkworld.networkworld.com/newsletters/frame/2006/0925wan1.networkworld.com/newsletters/frame/2005/0815wan2.APPlicATion Delivery HAnDbook | februAry 2008 The gap between networks and applications lingers Is Cisco AON the new-age message broker? Controlling TCP congestion http://www.com/newsletters/frame/2005/0221wan2.networkworld.networkworld.networkworld.com/newsletters/frame/2005/0620wan1.networkworld.html Morphing tactical solutions to becoming strategic ones http://www.networkworld.html Application Acceleration that Focuses on the Application.com/newsletters/frame/2006/1106wan2.html Cisco gets serious about application delivery http://www.networkworld.com/newsletters/frame/2005/0425wan1.com/newsletters/frame/2005/0321wan2.networkworld.networkworld.networkworld.com/newsletters/frame/2006/0807wan2.com/newsletters/frame/2006/0731wan2.html CIOs don’t take enough notice of application delivery issues http://www.com/newsletters/frame/2006/1211wan2.com/newsletters/frame/2005/0718wan1.networkworld.com/newsletters/frame/2006/1218wan1.com/newsletters/frame/2005/0502wan1. 
management: A careful balancing act When applications perform badly.html A Guide to Decision Making 90 .networkworld.com/newsletters/frame/2006/0821wan2.networkworld.html http://www.networkworld.html Network managers reveal extent of network misuse on their nets http://www.com/newsletters/frame/2005/0704wan1.html The benefits of thinking strategic when deploying network optimization http://www.networkworld.html http://www. is the CYA approach good enough? http://www.html The trick of assigning network priority to application suites http://www.networkworld.com/newsletters/frame/2005/0530wan1.com/newsletters/frame/2005/0516wan2.html Bandwidth vs.com/newsletters/frame/2006/0807wan1.com/newsletters/frame/2006/1023wan1.com/newsletters/frame/2006/1127wan2.
com/newsletters/frame/2006/0220wan1.networkworld.html http://www.com/newsletters/frame/2005/0829wan1.com/newsletters/frame/2005/1114wan1.com/newsletters/frame/2005/1031wan1.com/newsletters/frame/2005/0502wan1.com/newsletters/frame/2005/1121wan1.networkworld.com/newsletters/frame/2005/0718wan1.APPlicATion Delivery HAnDbook | februAry 2008 Where best to implement network and application acceleration. Part 1 http://www.networkworld.html How TCP acceleration could be used for WAN optimization http://www.html http://www.com/newsletters/frame/2006/0213wan2.html Survey finds users are becoming proactive with WAN mgmt.networkworld.com/newsletters/frame/2005/1114wan2.html Users don’t want WAN optimization tools that are complex to manage http://www.com/newsletters/frame/2006/0213wan1.com/newsletters/frame/2005/0620wan1.com/newsletters/frame/2005/0321wan1.com/newsletters/frame/2005/0530wan1. Microsoft attempts to address CIFS’ limitations in R2 WAFS could answer CIFS’ limitations The trick of assigning network priority to application suites http://www.com/newsletters/frame/2005/0627wan2.com/newsletters/frame/2005/0516wan2.html http://www.com/newsletters/frame/2006/0227wan2. survey finds http://www.html What makes for a next-generation application performance product? http://www.html http://www.com/newsletters/frame/2005/0425wan1.com/newsletters/frame/2005/0704wan1.com/newsletters/frame/2006/0327wan1.com/newsletters/frame/2005/1031wan2.com/newsletters/frame/2006/0130wan2.html QoS.networkworld.networkworld.networkworld.html http://www.com/newsletters/frame/2005/0314wan1.html http://www.com/newsletters/frame/2006/0130wan1.com/newsletters/frame/2005/0627wan1.html http://www.html WAN optimization helps speed up data replication for global benefits firm http://www.html A new convergence form brings together security and application acceleration The gap between networks and applications lingers Is Cisco AON the new-age message broker? 
Controlling TCP congestion http://www.networkworld.html http://www.com/newsletters/frame/2005/1107wan2.html Cisco’s FineGround buy signals big change in the WAN optimization sector The thorny problem of supporting delay-sensitive Web services Uncovering the sources of WAN connectivity delays http://www.networkworld.html How are you optimizing your branch-office WAN? http://www.networkworld.html Application accelerators take on various problems http://www.com/newsletters/frame/2006/0327wan2.html A Guide to Decision Making 91 .networkworld. Part 2 http://www.html Advancing the move to WAN management automation Application benchmarking helps you to determine how apps will perform How do you feel about one-box solutions? http://www.networkworld.networkworld.html Organizations are deploying MPLS and queuing for QoS.networkworld.networkworld.networkworld.networkworld.networkworld.com/newsletters/frame/2005/1212wan1.html Why adding bandwidth does nothing to improve application performance http://www.networkworld.com/newsletters/frame/2005/0502wan2.html Mechanisms that directly influence network throughput Increase bandwidth by controlling network misuse http://www.com/newsletters/frame/2005/0307wan1.html Disgruntled users and the centralized data center Antidote for ‘chatty’ protocols: WAFS http://www. 
users say http://www.html http://www.html http://www.networkworld.html http://www.com/newsletters/frame/2005/0321wan2.networkworld.html http://www.html WAFS attempts to soothe the problems of running popular apps over WANs http://www.com/newsletters/frame/2005/1017wan1.com/newsletters/frame/2005/1107wan1.networkworld.html http://www.networkworld.networkworld.networkworld.com/newsletters/frame/2005/0418wan2.html Where best to implement network and application acceleration.html TCP acceleration and spoofing acknowledgements http://www.com/newsletters/frame/2006/0403wan2.networkworld.networkworld.networkworld.networkworld.networkworld.com/newsletters/frame/2005/0815wan2.html http://www. visibility and reporting are hot optimization techniques.networkworld.html http://www.networkworld.html The limitations of today’s app acceleration products What slows down app performance over WANs? Automating application acceleration How TCP ensures smooth end-to-end performance http://www.networkworld.networkworld.networkworld.
htm Moving Past Static Performance Alarms Supporting Server Consolidation Takes More than WAFS http://www.webtorials.com/main/resource/papers/netscout/briefs/ brief-07-07.htm http://www.com/main/resource/papers/kubernan/brief-1-1.com/main/resource/papers/netscout/briefs/ brief-09-06.com/main/resource/papers/kubernan/ref2.com/main/resource/papers/packetdesign/paper10.APPlicATion Delivery HAnDbook | februAry 2008 What the next generation Web services mean to your WAN Bandwidth vs.com/main/resource/papers/kubernan/ref4.webtorials.html http://www.networkworld.htm IT Impact Briefs The Value of Performance Management http://www.pdf A Guide to Decision Making 92 .htm http://www.htm http://www.com/main/resource/papers/kubernan/paper7.com/main/resource/papers/netscout/briefs/ brief-08-07.htm http://www.webtorials.webtorials.htm The Logical Causes of Application Degradation http://www.com/newsletters/frame/2005/0214wan2.html http://www.webtorials.webtorials.webtorials.com/main/resource/papers/netscout/briefs/ brief-10-06.com/main/resource/papers/kubernan/brief-1-2.htm http://www.com/main/resource/papers/flukenetworks/paper19.htm Rethinking MTTR http://www.webtorials.htm Business Process Redesign http://www.webtorials.com/main/resource/papers/kubernan/ref2.webtorials.htm The Cost and Management Challenges of MPLS Services http://www.webtorials.htm Showing The Value of Network Management The Business Value of Effective Infrastructure Management http://www.webtorials.webtorials.com/main/resource/papers/kubernan/brief-1-4.webtorials.htm The Movement to Implement ITIL Eliminating The Roadblocks to Effectively Managing Application Performance Closing the WAN Intelligence Gap http://www.com/main/resource/papers/netscout/briefs/ brief-03-07.htm Does IT Provide Business Value? 
http://www.com/main/resource/papers/netscout/briefs/ brief-05-06.com/main/resource/papers/netscout/briefs/ brief-06-07.com/main/resource/papers/netscout/briefs/03-06/ NetScout_iib_Metzler_0306_Managing_VoIP_Deployments.htm Network Misuse Revisited Taking Control of Secure Application Delivery http://www.com/main/resource/papers/netscout/briefs/ brief-11-06.com/main/resource/papers/netscout/briefs/ brief-01-07.com/main/resource/papers/netscout/briefs/04-06/ NetScout_iib_Metzler_0406_Deploy_MPLS.webtorials.htm http://www.webtorials.webtorials.com/main/resource/papers/netscout/briefs/ brief-07-06. management: A careful balancing act The Hows and the Whys of Quality of Service http://www.webtorials.webtorials.htm http://www.com/main/resource/papers/netscout/briefs/ brief-04-07.webtorials.webtorials.com/main/resource/papers/netscout/briefs/ brief-06-06.htm http://www.webtorials. Planning and Network Optimization The Performance Management Mandate The Road to Successful Application Delivery http://www.com/main/resource/papers/netscout/briefs/ brief-09-07.htm http://www.htm http://www.webtorials.htm The Integration of Management.htm http://www.htm Management and Application Delivery Route Analytics: Poised to Cross the Chasm http://www.networkworld.webtorials.com/newsletters/frame/2005/0221wan2.com/main/gold/netscout/it-impact/briefs.webtorials.com/main/resource/papers/kubernan/brief-1-3.pdf Managing VoIP Deployments The Port 80 Black Hole http://www.htm The Movement to Deploy MPLS Demonstrating the Value of Performance Management http://www.com/main/resource/papers/netscout/briefs/ brief-02-07.webtorials.com/main/resource/papers/netscout/briefs/ brief-10-07.webtorials.htm WAN Vicious Applications http://www.com/main/resource/papers/kubernan/ref1.webtorials.webtorials.htm Kubernan Briefs The Data Replication Bottleneck: Overcoming Out of Order and Lost Packets across the WAN http://www.htm Proactive WAN Application Optimization – A Reality Check Analyzing the Conventional 
Wisdom of Network Management Industry Trends http://www.
com/main/resource/papers/netscout/briefs/07-05/ Lack_of_Alignment_in_IT.pdf Articles Contributed by the Sponsors Cisco Overview of Cisco Application Delivery Network solutions www.com/library/whitepapers/voip_best_practices.pdf The Challenges of Managing in a Web Services Environment http://www.webtorials.htm NetScout Network Performance Management Buyers Guide http://www.webtorials.webtorials.com Cutting through complexity of monitoring MPLS Networks The Mandate to Implement Unified Performance Management www.com/main/resource/papers/netscout/briefs/0605/0605Application.htm Guide for VoIP Performance Management in Converged Networks http://www.APPlicATion Delivery HAnDbook | februAry 2008 Network and Application Performance Alarms: What’s Really Going On? The Successful Deployment of VoIP http://www.htm http://www.com/library/whitepapers/MPLS_multi_protocol_label_ switching.com www.cisco.htm http://www.htm A Guide to Decision Making 93 .cisco. asp?cpid=metzler-hbk http://www.asp?cpid=metzler-hbk Crafting SLAs for Private IP Services http://www.pdf http://www.com/abstracts/Avaya27.webtorials.com/go/microsoft Identifying Network Misuse http://www.com/en/US/netsol/ns340/ns394/ns224/ns377/networking_solutions_package.com/main/resource/papers/netscout/briefs/01-06/ NetScout_iib_Metzler_0106_NetFlow_Application_Awareness.pdf The Lack of Alignment in IT http://www.netscout.asp?cpid=metzler-hbk The Three Components of Optimizing WAN Bandwidth Branch Office Networking Streamlining Network Troubleshooting .netscout.com/main/resource/papers/netscout/briefs/09-05/ Data_Center.htm Netflow – Gaining Application Awareness http://www.webtorials.kubernan.com/main/resource/papers/netscout/briefs/11-05/ Management_Web_Services.cisco.cisco.html The Rapidly Evolving Data Center http://www.webtorials.webtorials.cisco.com/abstracts/NetworkPhysics6.cisco.htm Management Issues in a Web Services Environment Buyers Guide: Application Delivery Solutions http://www.netscout.pdf Solutions 
Portals for Cisco Application Delivery Products www. asp?cpid=metzler-hbk White Papers Innovation in MPLS-Based Services www.com/abstracts/Crafting%20SLAs%20for%20 Private%20IP%20Services.webtorials.com/library/buyersguide/default.com/go/oracle www.com/main/resource/papers/netscout/briefs/08-05/ Whats_Driving_IT.pdf Why Performance Management Matters http://www.webtorials.webtorials.webtorials.webtorials.com/main/resource/papers/netscout/briefs/02-06/ NetScout_iib_Metzler_0206_Network_Application_Alarms.com http://www.com/main/resource/papers/netscout/ briefs/05-05/0505_Network_Misuse.netscout.com/go/ans http://www.com/go/optimizemyapp www.com/abstracts/SilverPeak3.com/abstracts/ITBB-BON-2004.com/abstracts/Why%20Performance%20 Management%20Matters.webtorials.pdf Best Security Practices for a Converged Environment The Movement to Deploy Web Services http://www. Stupid http://www.com/go/ace It’s the Application.com/go/waas www.Monitoring and Continous Packet Capture Provides Top-Down Approach to Lower MTTR http://www.webtorials.pdf Cisco Data Center Assurance Program (DCAP) for Applications www.com/redirect_pdf/appnote_network_troubleshooting.com/abstracts/The%20Successful%20 Deployment%20of%20VoIP.cisco.kubernan.webtorials.kubernan.pdf What’s Driving IT? http://www.com/main/resource/papers/netscout/briefs/10-05/ NetScout_iib_Metzler_1005_Web_Services_Deployment_SOA.webtorials.
packeteer.pdf http://www.com/managed_services http://www.netqos.com/resources/prod-sol/ControlDrillDown.asp http://www.netqos.asp Improve Networked Application Performance Through SLAs NetPriva WAN Control for Microsoft Windows End Point QoS Protection for Citrix End Point QoS for Cisco WAAS http://www.com/resourceroom/whitepapers/forms/trustverify.com Controlling WAN Bandwidth and Application Traffic (QOS Technologies): http://www.asp http://www.netpriva.com//whitepapers.com/resources/prod-sol/iSharedArchitectureWP.aspx?whitepaperId=3 http://www.networkperformancedaily.com/content/view/124/344/ http://www.com/resourceroom/whitepapers/forms/metrics.aspx?whitepaperId=1 http://www.com/redirect_pdf/appnote_tech_reducing_mttr.com/ipa_brochure http://www. asp?cpid=metzler-hbk NetQoS Network Performance Management News and Analysis http://www.com/resourceroom/whitepapers/forms/voip.packeteer.aspx?whitepaperId=21 http://www.pdf Best Practices for Monitoring Business Transactions Akamai Akamai Web Application Accelerator http://www.com/resourceroom/whitepapers/forms/netflownew.html Shunra Testing Secure Enterprise SOA Applications across the WAN without Leaving the Lab http://www.com/content/view/259/435/ http://www.asp Application Delivery – An Enhanced Internet Based Solution Performance-First: Performance-Based Network Management Keeps Organizations Functioning at Optimum Levels 6 Reasons to Allocate Network Costs Based on Usage http://www.netpriva.asp How to Optimize Application QoE Before Rollout The Virtual Enterprise: Eliminating the Risk of Delivering Distributed IT Services http://www.com/resourceroom/whitepapers/forms/performancefirst.akamai.com/resourceroom/whitepapers/forms/voip_quality_experience.netqos.com/content/view/256/437/ http://www.netqos.com/waa_brochure http://www.packeteer.shunra.com//whitepapers.netqos.asp Application Acceleration: Merits of Managed Services http://www.asp A Guide to Decision Making 94 
.com//whitepapers.netqos.com/resourceroom/whitepapers/forms/proveit.pdf Acceleration Technologies Overview VoIP Quality of Experience http://www.netqos.com/resources/prod-sol/Intelligent_LifeCycle_ Introduction.asp Network Diagnostics Provide Requisite Visibility for Managing Network Performance IP Application Accelerator http://www.netqos.netqos.netqos.akamai.asp Solving Application Response Time Problems .asp Intelligent LifeCycle for Delivering High Performance Networked Applications Over the WAN Seven Ways to Improve Network Troubleshooting http://www.com/ipa_whitepaper Managing the Performance of Converged VoIP and Data Applications http://www.netpriva.netqos.com//whitepapers.pdf http://www.shunra.com/resourceroom/whitepapers/forms/retrospective_ network_analysis.akamai.netscout.netqos.com/resources/prod-sol/VisibilityDrillDown.asp Best Practices for NetFlow/IPFIX Analysis and Reporting MPLS Network Performance Assurance: Validating Your Carrier’s Service Level Claims It’s not the Network! Oh yeah? Prove IT! Predicting the Impact of Data Center Moves on Application Performance http://www.com/solutions/allocate/whitepaper/allocate.Metrics that Matter http://www.APPlicATion Delivery HAnDbook | februAry 2008 A Unique Approach for Reducing MTTR and Network Troubleshooting Time Packeteer Gaining Visibility into Application and Network Behavior http://www.com/resourceroom/whitepapers/forms/monitoring_business_transactions.com/resourceroom/whitepapers/forms/improvedSLAs.aspx?whitepaperId=10 http://www.com/resourceroom/whitepapers/forms/network_diagnostics.packeteer.shunra.shunra.akamai.
Pdf Easing Data Center Migration with Traffic Explorer http://www.orange-business.networkworld.riverbed.com/documents/Easing_Data_Center_Migration_ with_Traffic_Explorer. 360 Architecture turns to Riverbed for a successful deployment.Orange Business Service case study.riverbed.pdf Route Analysis for Converged Networks: Filling the Layer 3 Gap in VoIP Management http://www.com/news/press_releases/press_112607.com/documents/Tex-WPv1.pdf ASP.ipanematech.packetdesign.com/New/DocUpload/Docupload/mc_wp_ Acceleration_en_070604.com/mnc/press/press_releases/2007/070307_ busi_acc.0.orange-business.com/solutions/ http://www.com/documents/Regaining%20MPLS%20WAN%20 Visibility.mnc.com/content/pdf/OBS/library/brochure/ broch_business_accmnc.com/lg/white_paper_forrester.html A suite of services to improve visibility.com/products/ Ensuring your communications infrastructure becomes efficient and effective http://www.pdf AS1000 Appliance Datasheet - http://www.mnc.com/newsletters/accel/2007/1105netop1.strangeloopnetworks.php http://www.byteandswitch.0. including Optimizing ASP.orange-business.packetdesign.php?FillCampaignId=701700000 00HMDM&FillLeadSource=Banner+Ad&FillSourceDetail=TEMPLATE_IDC+RO I+Case+Study&mtcCampaign=3721&mtcPromotion=%3E%3E App-ID Classification Technology http://www.com/content/pdf/OBS/library/case_studies/cs_Thistle_CST-015(2)PVA_Single.com/documents/IP%20Route%20Analytics%20 White%20Paper.pdf The Guoman and Thistle hotel group .pdf Regaining MPLS VPN WAN Visibility with Route Analytics Orange What is Business acceleration? http://www.pdf A Guide to Decision Making 95 .” http://www.com/technology/appid.pdf After 6-months of headaches with Cisco’s WAAS solution.com/content/pdf/OBS/library/white_ papers/wp_ensuring_your_comms_infrastructure.paloaltonetworks.strangeloopnetworks. 
management and performance of applications http://www.Net apps http://www.NET Performance resources.orange-business.pdf http://www.html http://www.” PA-4000 Series Firewalls http://www.packetdesign.com/products/resources/ Business Acceleration provides insights and tools to enhance your applications’ http://www.ipanematech.com/lg/case_study_idc.html http://www.html Panaorama Centralized Management What are the key benefits of deploying a mobile WAN optimization solution? Find out in this new Forrester study “Optimizing Users and Applications in a Mobile World.pdf Network World Optimization .com/New/DocUpload/Docupload/mc-ipasolution_ overview_en_070716_USFORMAT.packetdesign.com/products/panorama.html Palo Alto Networks Riverbed IDC applies their ROI methodology to our WDS solution in “Adding Business Value with Wide-Area Data Services.packetdesign.asp?doc_id=133898&page_number=2 http://www. Why Riverbed? Network-Wide IP Routing and Traffic Analysis: An Introduction to Traffic Explorer http://www.Specialist works to accelerate ASP.com/documents/Route-Analysis-VoIP-v1.com/products/pa4000.php?FillCampaignId=7017 0000000HOvK&FillLeadSource=Web&FillSourceDetail=&mtcCampaign=3721 &mtcPromotion=%3E%3E Packet Design IP Route Analytics: A New Foundation for Modern Network Operations: Network World Names Riverbed Customer as Enterprise All-Star Award Winner http://www.riverbed.pdf Acceleration: Bottlenecks.strangeloopnetworks.mnc.com/en/mnc/campaign/business_acceleration/ index.riverbed. pitfalls and tips http://www.0 train wreck – Jim Metzler http://www.APPlicATion Delivery HAnDbook | februAry 2008 Strangeloop http://www.html http://www. Ipanema Maximizing Application Performance http://www.com/files/PDF/products/Strangeloop_ AS1000_datasheet.orange-business.paloaltonetworks.com/document.paloaltonetworks.NET applications for Performance and Scalability whitepaper and The Coming Web 2.
Application Delivery Handbook | February 2008

Interviewees

In order to gain additional insight into application delivery from the perspective of IT organizations, a number of IT professionals were interviewed. The following table depicts the title of each of the interviewees, the type of industry that they work in, as well as how they are referred to in this report.

Job Title | Industry | Reference
COO | Electronics | The Electronics COO
Chief Architect | Entertainment | The Motion Picture Architect
CIO | Diverse Industrial | The Industrial CIO
Network Engineer | Automotive | The Automotive Network Engineer
Global Network Architect | Consulting | The Consulting Architect
Enterprise Architect | Application Service Provider (ASP) | The ASP Architect
Team Leader, Network Architecture, Strategy and Performance | Energy | The Team Leader
CIO | Engineering | The Engineering CIO
CEO | Mobile Software | The Mobile Software CEO
CTO | Business Intelligence | The Business Intelligence CTO
Director of IT Services | Insurance | The IT Services Director
CIO | Government | The Government CIO
Manager | Telecommunications | The Telecommunications Manager
Network Engineer | Financial | The Financial Engineer
Senior MIS Specialist | Manufacturing | The Manufacturing Specialist
Network Architect | Global Semiconductor Company | The Global Architect
Senior Director of IT | Medical Conglomerate | The Senior Director
Network Management Systems Manager | Electronic Records Management Company | The Management Systems Manager
CIO | Medical Supplies | The Medical Supplies CIO
Network Analyst | Manufacturing | The Manufacturing Analyst
Manager of Network Management and Security | Non-Profit | The Management and Security Manager
Director of Common Services | Education | The Education Director
Business Consultant | Storage | The Consultant
Manager of Network Services | Hosting | The Manager
Senior Consultant | Pharmaceutical | The Pharmaceutical Consultant
Manager of Network Administration | Medical Industry | The Medical Manager

Appendix - Advertorials
Enabling Application Delivery Networks

Using Cisco's Application Networking Services (ANS) and Application Fluent Foundation Networks to better optimize and enhance business applications

Overview

The Application Networking Services (ANS) product portfolio, which consists of the Wide Area Application Services (WAAS), Application Control Engine (ACE), and ACE XML Gateway application delivery controller products, is designed for enterprise, mid-market and service provider IT organizations that need to optimize and deliver business applications, such as ERP, CRM, websites/portals, and web services, across the organization. The Cisco ANS portfolio, in conjunction with the application fluency embedded in Cisco's switching and routing foundation products, creates a true end-to-end Application Delivery Network. Cisco has created a set of network-wide, secure, integrated solutions that provide the availability, security, acceleration and visibility needed to ensure applications are successfully delivered.

Application delivery networks from Cisco help IT departments accomplish the following objectives:
• Deliver business and web applications to any user, anywhere, with high performance
• Centralize branch server and storage resources without compromising performance
• Better utilize existing resources, including WAN bandwidth and server processing cycles
• Rapidly deploy new applications over an integrated infrastructure

Attributes of Cisco Application Delivery Networks

Cisco has built and implemented the ANS portfolio to be deployed as part of the data center and branch infrastructure. While taking advantage of the foundation elements and integration benefits, the ANS products support multi-device, tested and documented solutions. This allows Cisco the unique capability to deliver on the following attributes, which are critical to an application delivery network's success:

Adaptability

Cisco has integrated application delivery functionality directly into the core switching and routing infrastructure, allowing customers to rapidly adapt to changing application types, traffic patterns and volume across the network. This is crucially important as services such as voice are added to the existing data network. The Cisco ANS solution incrementally delivers:

• Virtualization – an often used but misunderstood term. Cisco has created secure, virtual partitions in both the switching and routing infrastructure. In the Application Control Engine (ACE), for example, an IT manager can very rapidly partition off a new application on the same device; the partition remains completely separate (on the data, control and management planes) and, more importantly, secure from other partitions.
• Transparent solutions – network services can be added (or subtracted) without major overhauls or reconfigurations of the current network.
• Scalable solutions – the highest possible scalability, without the need to upgrade or deploy new devices. Examples include the possibility to add up to four active ACE modules within the same Catalyst chassis, or the possibility to scale a WAAS design due to its integration with ACE.
• License-based performance – many Cisco ANS products can grow in performance simply by purchasing a software license that can easily and rapidly be installed to support additional demand.
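The virtualization bullet above describes partitioning one physical device into isolated virtual contexts, each with its own configuration and resource allocation. The idea can be illustrated with a minimal sketch; the class names, the bandwidth-percentage model, and the whole API below are invented for illustration and are not Cisco ACE's actual configuration interface.

```python
# Conceptual sketch of device virtualization: one physical device hosts
# several isolated "contexts", each with its own configuration state and
# resource share, so a change in one context cannot leak into another.
# (Hypothetical model for illustration only; not the ACE CLI or API.)

class VirtualContext:
    def __init__(self, name, bandwidth_pct):
        self.name = name
        self.bandwidth_pct = bandwidth_pct  # share of device capacity
        self.config = {}                    # per-context management plane

    def configure(self, key, value):
        # Isolated: touches only this context's state.
        self.config[key] = value

class PartitionedDevice:
    def __init__(self, name):
        self.name = name
        self.contexts = {}

    def add_context(self, name, bandwidth_pct):
        # Refuse to oversubscribe the physical device's capacity.
        allocated = sum(c.bandwidth_pct for c in self.contexts.values())
        if allocated + bandwidth_pct > 100:
            raise ValueError("insufficient capacity for new context")
        ctx = VirtualContext(name, bandwidth_pct)
        self.contexts[name] = ctx
        return ctx

ace = PartitionedDevice("ace-module-1")
crm = ace.add_context("crm", bandwidth_pct=40)
erp = ace.add_context("erp", bandwidth_pct=40)
crm.configure("vip", "10.1.1.10")
# The ERP context is unaffected by the CRM change:
assert "vip" not in erp.config
```

The point of the sketch is the isolation property: configuring one context never touches another's state, and the device refuses to oversubscribe its capacity, mirroring the claim that partitions stay separate and secure from one another.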
Extensibility

The network system underlying the application must be designed to easily allow additions of new capabilities into the overall system, so that new functionality can be developed and integrated into the network. Let's take an example: the delivery of email.

• Application recognition – the first step towards optimizing the delivery of the application is to recognize it. Using both IP and application recognition mechanisms integrated into the Catalyst 6500 or the Cisco routers, the network can classify, then optimize or re-prioritize, the application throughout the network.
• Server Scaling – in order to better scale the ability of the application to deliver email, the Application Control Engine can be used not only to provide server load balancing, but "last-line-of-defense" security. Client sessions can now be scaled across multiple servers, and by offloading functions such as SSL, URL filtering and parsing, and TCP into the network, the server can be better utilized.
• WAN Optimization – the user at the branch office will be limited by the amount of bandwidth available on the WAN. Using the Wide Area Application Services (WAAS) solution, the network can optimize the delivery of that traffic over the WAN using innovative TCP optimization, compression and data redundancy elimination, ensuring rapid content delivery of email traffic. This provides a solution for a complete range of TCP-based applications with the simple addition of policies.

These demonstrate how broad the solution needs to be, but what about how "deep"? Cisco's solutions contain customizable elements that allow the network to respond to new threats, functionalities and capabilities as they become available. The Cisco ACE XML Gateway, for example, supports programmable plug-ins; Cisco offers plug-ins to exchange authentication messages with Identity Management systems or to detect dangerous packets through the RegX capabilities in ACE.

Integration

These capabilities are all critical to the delivery of applications from the data center to users, but the mechanism by which they're delivered is important as well. The creation of an overlay network – for server load balancing or WAN optimization, for instance – adds complexity, increases management burdens, and adds to the already-overcrowded data center. Additionally, an overlay network does not provide any hooks into the existing network infrastructure that can make application delivery more effective. Tying application delivery functionality into various places in the network ensures the most optimal application delivery and the best user experience. These include:

• Integration of application intelligence into the foundation, such as with Network Based Application Recognition in the Catalyst 6500 Supervisor 32 engine
• Integration of application switching and firewalling into the Catalyst 6500 through the Application Control Engine (ACE) and Firewall Services Module (FWSM)
• Integration of the WAAS solution, which, while it can be purchased in an appliance form factor, is also integrated into the Cisco Integrated Services Routers (ISR) as well as into Cisco IOS software
• Using ACE virtualization, in conjunction with virtual LANs (VLANs) on the Catalyst 6500, a secure, virtually partitioned network can be created for a new application on a common platform
• The Catalyst 6500 can detect local physical connectivity failure and can inform the ACE module to perform a failover to a secondary device
• ACE can map a virtual partition to a virtual route forwarding (VRF) engine, extending a "private" network through to the data center
• WAAS can be inserted transparently into the existing network, meaning that no changes to quality-of-service or security access control lists are needed
• The WAAS transparency elements allow the network to deliver valuable statistics through NetFlow to network management and monitoring tools
• The WAAS transparency elements allow the network to deliver valuable statistics through NetFlow to network management and monitoring tools.
com A Guide to Decision Making 99 . over the WAN and data center networks. such as Exchange. The network adapts to all types of applications. to better optimize specific applications. while also supporting active-active stateful resiliency between ACE modules to ensure application sessions remain intact. Visibility Visibility is an often-overlooked element in the deployment of an application delivery network. and will continue to play.ApplicAtion Delivery HAnDbook | februAry 2008 How Cisco’s ANS Portfolio Enables the Application Delivery Network By integrating into the Cisco data center. including XML. the ACE Global Site Selector ensures users can be re-routed to a secondary data center should the primary become congested or fail. the Cisco ANS portfolio delivers against the key challenges for application owners. Cisco is working with application vendors. an ever-increasing role in delivering IT services to users. most manageable. which can yield up to 100x end user performance improvements. Conversely. Within the data center. These challenges include security. with ACE for load balancing and server www. from networklevel Layer 2 to application level. and what end users are experiencing. By combining the application-fluent network foundation – Cisco’s industry-leading switches and routers – and the Application Networking Services capabilities. IT management and end users. Availability Availability is table-stakes in application delivery networking. Through integration of application delivery functions into the network. such as generic TCPbased applications. attacks. Cisco delivers integrated security. Finally. enabling a business to increase efficiency and deliver results. Acceleration Cisco delivers the most complete set of application acceleration features in order to provide performance improvements for all clients and applications. whether it’s a networking. Conclusion The network has played. 
as well as application-level security both for clientto-application access and for server-to-server communication in multi-tier designs. network managers can understand (and limit) the traffic coming into the network. such as Microsoft. Cisco can provide the most complete visibility solution today. Finally. Cisco provides the most complete. branch and campus foundation. Starting with application recognition in the Catalyst 6500 and ISR routers. applications and network elements. Cisco can optimize specific HTTP-based sessions with ACE web acceleration features. Cisco has designed its application delivery solution to adapt to all types of failures. WAAS can add additional benefits specific for each application. Security features in the ACE module provide the last line of defense prior to reaching the physical server itself. Security Cisco delivers the strongest end-to-end security in order to protect hosts. through partners such as NetQoS. recovering and/ or working around the failure. For example. and most integrated application delivery network solution available today. In conjunction with specific application optimization engines for protocols like CIFS. including across the WAN (where many WAN optimization solutions change IP header information). The Cisco network can adapt to all attack types. including embedded firewall and intrusion protection services on the Cisco ISR routers as well as a high-speed firewall in the Catalyst 6500. NetFlow statistics can be reported and monitored at all places in the network. the ACE XML Gateway (AXG) can be leveraged to protect XML-based applications from newer attacks. server or application component failure. availability. Transparency throughout the network assures a uniform measurement. Cisco can deliver end-to-end performance visibility to truly gauge how applications are operating over the application delivery network. and WAAS for WAN acceleration. acceleration and visibility. 
the ACE module works with the Catalyst 6500 to ensure route health in the network and reroute around failures. offload.cisco. yet it is often the most critical as it is how network and application managers monitor performance and end user productivity. Additionally.
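The data redundancy elimination mentioned in the WAN optimization discussion above can be illustrated with a short sketch. This is a generic model of chunk-level deduplication, not Cisco WAAS's actual algorithm; the class name, fixed chunk size, and shared cache are simplifications invented for the example (production systems use content-defined chunking and per-peer synchronized caches).

```python
import hashlib

class RedundancyEliminator:
    """Toy model of WAN data redundancy elimination (DRE).

    Both ends of the WAN link keep a cache of previously seen chunks;
    repeated chunks travel as short digests instead of full payloads.
    For simplicity, one cache here stands in for both peers' caches.
    """

    CHUNK = 256  # fixed-size chunking; real DRE uses content-defined chunk boundaries

    def __init__(self):
        self.cache = {}  # digest -> chunk bytes

    def encode(self, data: bytes):
        """Split data into chunks; emit ('ref', digest) for cached chunks, ('raw', bytes) otherwise."""
        out = []
        for i in range(0, len(data), self.CHUNK):
            chunk = data[i:i + self.CHUNK]
            digest = hashlib.sha1(chunk).digest()
            if digest in self.cache:
                out.append(("ref", digest))      # 20-byte reference replaces the chunk
            else:
                self.cache[digest] = chunk
                out.append(("raw", chunk))
        return out

    def decode(self, tokens):
        """Rebuild the original byte stream from raw chunks and cache references."""
        return b"".join(t[1] if t[0] == "raw" else self.cache[t[1]] for t in tokens)
```

Sending the same attachment a second time produces only references, which is why repetitive traffic such as email distributions compresses so dramatically over an optimized WAN link.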
Take the Lead on WAN Application Delivery by Putting Performance First

IT organizations have spent billions of dollars implementing fault management tools and processes to maximize network availability. In fact, infrastructure reliability has improved to a point where 99.9% availability is not uncommon. At the same time, application performance issues are growing dramatically due to trends such as data center consolidation, the rise of voice and multimedia traffic, and growing numbers of remote users. While availability management is critical, relying solely on infrastructure availability and utilization is no longer enough to address these challenges, especially as network professionals are becoming increasingly responsible for application delivery across the network infrastructure.

The Performance-first Imperative

Network engineers and managers must take the lead on application delivery with a performance-first approach to managing their complex networks. IT professionals must switch from an up/down model of network management to a performance-first model that quantifies end-user experience. Performance-first management starts with measuring application response times and drilling into other metrics as needed, including VoIP quality of experience, traffic flow analysis, device performance, and long-term packet capture.

According to Aberdeen Group's 2007 research report, The Real Value of Network Visibility, "Network performance projects are no longer cost centers; they are becoming the major components of enterprise strategies for better customer service, profitability, and revenue growth."

"Performance is the thousand shades of gray between those red and green lights indicating whether devices are up or down," said Joel Trammell, CEO of Austin, Texas-based NetQoS, Inc., a provider of network performance management software and services to Global 4000 companies.

The performance-first paradigm inverts the traditional device monitoring approach and begins with top-down visibility into application performance. It starts with measuring end-to-end response times to get an overall view, identify the severity and pervasiveness of problems, and plan new infrastructure rollouts and upgrades, and expands into other key performance metrics as needed. By shifting the focus from fault management – which is largely under control – to performance-based management, network professionals can concentrate on how the network is affecting service delivery and make themselves more relevant to the business units they serve.

Application response time is the best measurement to use when deciding how to optimize the network. This is driven by the fundamental purpose of the network: to transport data from one end of the system to the other as rapidly as possible. The more efficiently data flows at the transport layer, the better the application performance.

The Dashboard for Performance-first Management

Recognizing the absence of tools to support the performance-first management paradigm, NetQoS set out to fill the void. With top-down performance analysis spanning enterprise views of the end-user experience down to deep-packet inspection, the NetQoS Performance Center provides global visibility into the core metrics needed to sustain and optimize application delivery – end-user experience, traffic flows, and infrastructure health. The NetQoS Performance Center offers best-in-class modules that scale to support the world's largest and most complex networks and leverage industry-standard instrumentation without the use of desktop or server agents.

Application Response Time Monitoring: Understanding application response time baselines is the essential starting point for making strategic decisions about network performance and application delivery. The deconstruction of total end-user transaction time allows network professionals to tie end-user performance back to the IT infrastructure and analyze the behavior of networks, servers, and applications.

Network Traffic Analysis: With end-to-end performance metrics captured and the source of latency isolated, further analysis is much more focused. Traffic analysis enables network engineers to understand the composition of traffic on specific links where latency is higher than normal or expected. This yields the information needed to redirect or reprioritize application traffic and to do capacity planning. In addition, visibility into new or anomalous traffic patterns pinpoints performance problems and identifies security risks.

Device Performance Management: Managing network infrastructure, devices, and services is also a critical component of the performance-first approach. For instance, if end-to-end performance monitoring shows the source of latency is isolated to an infrastructure component – a busy router or a server memory leak, for example – network professionals need device performance management capabilities to poll the device in question and find the root cause so that corrective action can be taken.

VoIP Quality of Experience: With the introduction of VoIP applications on enterprise networks, a performance-first approach must include monitoring call quality and the impact of convergence on all application performance. Network-based call setup and call quality monitoring enable the network team to track the user quality of experience and isolate performance issues to speed troubleshooting.

Long-term Packet Capture and Analysis: When problems do occur, engineers need to view and analyze detailed packet-level information before, during, and after the problem. Once the packets are stored, the data can be analyzed for actionable information to solve problems quickly, without having to recreate the problem, for both short-term troubleshooting and long-term planning.

Advantages of Performance-first Management

Survey results from Aberdeen indicate that organizations taking a performance-centric approach achieve superior application and network performance and more cost-efficient operations than their peers. In fact, those organizations reported a 92 percent success rate in resolving issues with application performance before end users are impacted, compared to a 40 percent success rate for all the others.

Other important benefits of performance-first network management include:

Deliver consistent application performance and measure it: Without real-time visibility into end-user response times, it's impossible to manage application performance proactively. Too often IT managers have no way of knowing how well their organization or service provider is meeting its performance targets. By measuring how networked applications perform under normal circumstances, IT organizations can ensure problems are resolved quickly, mitigate risk, and take measured steps to optimize application performance. In addition, application response time monitoring enables network professionals to decide where WAN optimization and application acceleration technologies are most needed and to measure the before-and-after impact.
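The response-time baselining described above reduces to a simple statistical idea: characterize normal behavior, then flag measurements that deviate from it. The sketch below is an illustrative minimum, not NetQoS's method; the function names and the three-sigma threshold are assumptions for the example.

```python
from statistics import mean, stdev

def baseline(samples):
    """Summarize normal response-time behavior as (mean, standard deviation)."""
    return mean(samples), stdev(samples)

def is_degraded(current_ms, base_mean, base_std, n_sigma=3.0):
    """Flag a response time that exceeds the baseline by n_sigma standard deviations."""
    return current_ms > base_mean + n_sigma * base_std
```

In practice a product would baseline per application, per site, and per time of day, and would use robust statistics (percentiles) rather than a plain mean, but the before-and-after comparison used to justify a WAN optimization deployment works the same way: compare the post-deployment distribution against the recorded baseline.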
This was the case for NetQoS customer National Instruments, a leading provider of computer-based measurement, automation and embedded applications. National Instruments delivers applications to more than 4,500 employees in 40 countries via two main data centers and its global MPLS network. Prior to implementing NetQoS products, National Instruments had followed the traditional network management approach of focusing on availability of network devices. Faced with implementing a major Oracle upgrade worldwide, the network team knew that successful application delivery depended on their ability to measure network and application performance, not just availability.

"When we would deploy a new application or make some sweeping change to the network, we would have no idea what impact it had except for availability. Without NetQoS, we were flying blind," said Sohail Bhamani, senior network engineer at National Instruments. Bhamani states, "NetQoS Performance Center gives us the insight we need into application performance across our network and combines end-to-end response time and traffic flow reports that are easy to create and use."

Make more informed infrastructure investments: When infrastructure managers make uninformed upgrade decisions, the cost can be high. Often the anticipated results don't materialize, ROI is poor, and performance problems persist. London-based ICI Group, one of the world's largest producers of specialty products and paints, deployed the NetQoS Performance Center to gain visibility into network traffic from 320 locations in 46 countries. "NetQoS Performance Center management reports give us centralized visibility into traffic patterns across our global locations," said Hitesh Parmer, ICI Group's global network technology manager. "This data enables us to justify bandwidth upgrades, reduce network costs by identifying underutilized sites, and analyze the impact of new application rollouts to prioritize traffic appropriately."

Work collaboratively and more effectively to reduce MTTR: Network managers need tools that give them real-time global visibility and historical information to optimize the network infrastructure for application performance and to work with peer groups to plan for changes. Also, knowing the source of problems means the right technicians may be assigned immediately to address problems quickly. Columbus, Ohio-based Belron US, a multi-faceted automotive glass and claims management service organization, deployed NetQoS to deliver optimal network and application performance to more than 7,000 employees and more than two million customers across 50 states. "The correlation of the end-to-end response time and traffic analysis data in a consolidated dashboard via the NetQoS Performance Center allows us to more effectively monitor issues and quickly resolve them, especially in identifying the source of problems for faster and more accurate troubleshooting," said Gary Lewis, Belron's manager of data networking and security.

"We saw immediate value when we deployed NetQoS products about two years ago," said Philip Potloff, executive director of infrastructure operations for Edmunds.com, the premier online resource for automotive information, based in Santa Monica, CA. "We continue to rely on the products to give us the data we need for troubleshooting, capacity planning, understanding how performance is impacted by infrastructure and application changes, and maintaining application service levels. We are building custom dashboards for each of our IT groups, including network engineering, application operations, systems administration, and database administrators, to have their own views into the NetQoS Performance Center data."

Conclusion

IT organizations can no longer manage networks in isolation from the applications they support, requiring a shift from a device-centric to a performance-centric focus.

NetQoS is the only company that delivers a comprehensive network performance management suite for applying the performance-first approach. NetQoS products are used to manage large enterprise, service provider and government networks, including a majority of the world's 100 largest companies. Representative customers include: AIG, Avnet, Barclays Global Investors, Bed Bath & Beyond, Chevron, Deutsche Telekom, NASA, Schlumberger, Turner Broadcasting Systems, and Verizon.

www.netqos.com
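The VoIP quality-of-experience metrics discussed earlier – jitter, packet loss, and MOS – are typically combined by monitoring tools into a single score using some variant of the ITU-T E-model. The sketch below is a deliberately simplified, illustrative version of that mapping, not the formula any particular product uses; the constants and the jitter weighting are common rules of thumb, assumed here for the example.

```python
def mos_estimate(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    """Rough Mean Opinion Score estimate via a simplified E-model (illustrative only).

    Jitter is folded into an effective latency, the R-factor is degraded by
    delay and packet loss, and R is then mapped onto the 1.0-4.5 MOS scale.
    """
    eff_latency = latency_ms + 2 * jitter_ms + 10.0   # rule-of-thumb jitter/codec weighting
    if eff_latency < 160:
        r = 93.2 - eff_latency / 40                   # mild penalty below ~160 ms
    else:
        r = 93.2 - (eff_latency - 120) / 10           # steep penalty beyond it
    r -= 2.5 * loss_pct                               # each percent of loss costs ~2.5 R points
    r = max(0.0, min(100.0, r))
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)
```

A healthy LAN call (20 ms latency, 1 ms jitter, no loss) scores well above 4.0, while a congested WAN call (300 ms latency, 40 ms jitter, 5% loss) falls below 3.0, which is the kind of degradation the network team wants to catch before users complain.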
Reduce MTTR

Introduction

The increasing size and scope of today's mission-critical enterprise networks means that users are more dependent than ever on uptime and availability, and the size and complexity of these networked applications makes identifying and resolving performance issues that much more difficult. When applications operate at less-than-peak performance, it costs enterprises a fortune in lost productivity. IT organizations need both best-practices processes and comprehensive network and application performance management solutions to quickly find and fix problems. NetScout's Sniffer and nGenius solutions are designed to aid in early detection and rapid diagnosis of problems impacting networked applications, thus improving IT productivity and application delivery to users.

Problem Lifecycle Evolution

Network and application problems experience a rather predictable lifecycle: the problem originates while still minimally invasive to users and business processes and, without a problem lifecycle management plan, continues to grow into more damaging stages where it impacts end users and customers. Traditionally, once users call their help desks and report the problem, the IT organization initiates processes to identify the root cause, implement corrective actions, and remediate user-affecting issues. Ideally, identifying problems and diagnosing the cause in this earliest stage will reduce MTTR and minimize business interruption; discovering problems earlier helps reduce MTTR.

Staying ahead of such problems means developing an aggressive strategy to identify, pinpoint root cause, and remediate user-affecting issues. Since problems that degrade the performance of networked applications follow a typical lifecycle, troubleshooting them can also follow a standardized four-stage process:

• Detect – Identify a problem as soon as possible.
• Diagnose – Evaluate the problem and determine its likely causes. This evaluation is the basis for formulating and evaluating a course of corrective actions to resolve the problem.
• Verify – Assess network and application performance to validate that the actions taken have resolved the initial problem and verify their effectiveness.
• Monitor – Ongoing performance management gains long-term visibility into networked application traffic, providing proactive detection of new problems.

Who Finds Performance Problems

A problem emerging with networked applications has often been first identified by end users experiencing the problem and then calling their help desk or IT staff to report it. In fact, when Ashton, Metzler & Associates and NetScout Systems surveyed more than 230 networking professionals in the spring of 2007, more than 75% of the respondents indicated it was their end users' complaints that alerted them to new problems in their network. However, many IT organizations desire a more proactive means of identifying emerging troubles: small problems that go unnoticed or unchecked gain in size and impact over time until multiple users are affected, business is impacted, revenue is lost and customers vanish. The addition of latency-intolerant applications (e.g. VoIP, IP video) and other time-sensitive applications (e.g. multicast updates for stock prices) makes it essential that network and application problems are discovered in the earliest stages.

[Figure: survey results on who finds performance problems. Note: respondents may have chosen more than one answer. Source: Ashton, Metzler & Associates with NetScout Systems, spring 2007 survey of 232 networking professionals.]

The Best Practices Approach to Problem Lifecycle Management

The steps of an effective lifecycle management process can be addressed in much more detail for detecting, diagnosing, and verifying corrective actions to resolve problems.

Step 1: Detection answers the question, "Is there a problem in our network?" Continuous monitoring of networked application traffic flows for trending, alarming and analytical evidence will produce key performance indicators (KPIs) such as:

• Immediate notification of link or application utilization increases, to pinpoint a growing congestion problem before it impacts other traffic or end users
• Identification of new applications added to the network, to further evaluate whether the capacity in key parts of the network can handle the additional load without degrading other business applications or converged services such as VoIP
• Metrics related to quality of user experience, such as abnormal increases in average application response times or changes in VoIP quality indicators like unacceptable jitter, packet loss and MOS scores

These KPIs help IT organizations quickly realize an emerging problem and rapidly locate the source of network degradations before users notice quality issues. Having the actual packet flow data from your network available in real time, trended in historical reports, helps to quickly ascertain whether a problem is emerging. It also directly enables all appropriate personnel to collaborate on the next phases: diagnosis and alternative corrective actions.

Step 2: Diagnosis answers the question, "What is the root cause of the problem?" IT organizations will start their diagnostic analysis by asking, "What are the symptoms of the network or application problem?" Following a top-down approach to troubleshooting the issue, IT staff can begin by using early warning alerts and then analyze the applications in use that are contributing to the alerts. Proper diagnosis will depend on rich application and conversation flow visibility and, when necessary, the actual packet evidence for in-depth forensic analysis. Analysis in real time or over time should reveal:

• The networked applications traversing the enterprise network, to uncover dangerous or unapproved traffic (e.g. viruses or streaming internet radio) that might degrade performance of business-critical applications
• The conversations between clients and data center application servers, as well as the volume and utilization of these applications and conversations, and other resources vying for the same bandwidth
• Upward or downward trends in traffic flows, both inbound and outbound, which will allow the IT team to "right-size" bandwidth more accurately and in a more timely fashion
• All applications within assigned QoS classes, to discover possible misconfigurations that might be causing delivery issues
• Sophisticated bounce charts, decode filters, contextual drill-downs to the actual packets at the center of the problem, and playback analysis based on a continuous stream of recorded packets, to troubleshoot the most persistent, complex problems affecting business applications

Without this insight, valuable time is lost going down the wrong path guessing at the problem and at possible actions to rectify it. One enterprise feared that their mission-critical application server backups using multicast were impacting voice quality and turned off these backups for a period of time, only to discover that the problem did not disappear. As the complexity of converged networks and virtualized application services makes troubleshooting an order of magnitude more challenging, leveraging the best packet-flow data available and contextual drill-down to the packets at the center of the problem are essential. Using playback analysis to reconstruct the problem and the network conditions at the time of the problem, combined with expert analysis, makes diagnosing even the most elusive of problems a more successful proposition. Asking IT to run a converged network without this information is like trying to navigate a foreign city without a GPS system: there will be many guesses and wrong turns.

Step 3: Verification determines, "Have we resolved the issue?" Based on the proper diagnosis, a corrective action plan needs to be determined. Alternatives may include adding bandwidth, correcting misconfigurations, migrating to an MPLS WAN, initiating traffic engineering changes or scheduling disruptive traffic to traverse the network off-hours, or working with a third-party application developer to improve the architecture of a poorly written, chatty application. This step may take some time, first to determine the corrective action and then to implement and verify it. What should be accomplished in this phase is confirmation that the problem is in fact corrected and the application services are not suffering any further issues. The performance management system will confirm whether the actions taken have satisfactorily restored service levels, such as:

• Returned response times for key business applications to acceptable levels
• Improved VoIP quality of experience, verified by analyzing metrics such as jitter, packet loss and MOS scores
• Restored application and conversation levels to within previous baselines

Step 4: Ongoing Management analyzes, "How is our network growing and changing over time?" The correction of a problem marks the start of ongoing management, which continues the detection and diagnostic activities already discussed to maintain positive performance and user quality of experience across overall projects and associated networked services – such as consolidating data centers, upgrading business applications, or introducing VoIP services to remote offices – as well as the widespread launch and subsequent rollout of any new initiative. Broad elements of ongoing management include:

• Proactive alarms and warnings of potential problems before they have the chance to have a noticeable effect on user experience
• Access to real-time data for troubleshooting detected issues as they emerge
• Analysis in historical reports of trended data for planning and traffic engineering

Many IT organizations will find themselves serving as a sort of "referee" between business use and recreational use of the network, or between bandwidth-hungry services like video teleconferencing and time-sensitive, existing business applications. With in-depth application visibility from packet flow analysis, they will be able to recognize degradations early, rectify bandwidth contention issues quickly, and initiate traffic engineering changes or schedule disruptive traffic to traverse the network off-hours. Reducing user-affecting problems is the bottom line of ongoing management.

Having a Real Effect on Reducing MTTR

Companies who have already implemented strong performance management solutions are already realizing significant time savings in reducing mean time to restore quality application delivery, or at least improving mean time to resolve problems that do occur. A recent survey shows dramatic reductions in MTTR.

[Figure: survey results on MTTR improvement. Source: March 2007 Ashton, Metzler and Associates survey, 138 participants.]

Summary

Good network and application performance depends on a structured approach to performance management throughout a problem's lifecycle. When some manufacturing organizations report that one hour of downtime represents $12 million in lost production, or a stock exchange processes hundreds of thousands of transactions per hour, it is easy to accept the need to reduce the time it takes to resolve network and application degradations. Other organizations have already had demonstrable improvement in their troubleshooting time. By following the process outlined in this discussion and leveraging the most in-depth packet flow analysis and evidence from your network, using NetScout's nGenius and Sniffer network and application performance management solutions, you too can detect, diagnose, verify, and manage the projects and problems that may arise in the future. This is an essential step in reducing MTTR and helping to maintain high-quality service delivery throughout your enterprise community.

http://netscout.com
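The first KPI in the detection step above – early warning when link or application utilization departs from its normal level – can be sketched as a rolling-baseline alarm. This is an illustrative sketch, not NetScout's detection logic; the class name, window size, and 50% deviation threshold are assumptions chosen for the example.

```python
from collections import deque

class UtilizationMonitor:
    """Proactive detection sketch: alarm when a utilization sample departs
    sharply from the rolling baseline of recent samples."""

    def __init__(self, window=20, threshold_pct=50.0):
        self.window = deque(maxlen=window)   # recent utilization samples (percent)
        self.threshold_pct = threshold_pct   # alarm if current exceeds baseline by this much

    def observe(self, utilization: float) -> bool:
        """Record a sample; return True if it should raise an early-warning alarm."""
        alarm = False
        if len(self.window) == self.window.maxlen:
            base = sum(self.window) / len(self.window)
            alarm = base > 0 and utilization > base * (1 + self.threshold_pct / 100)
        self.window.append(utilization)
        return alarm
```

Alarming on deviation from a learned baseline, rather than on a fixed threshold, is what lets the monitoring system flag a growing congestion problem on a normally quiet link before users notice, while staying silent on links that always run hot.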
• Application performance can slow the business. they found that users accessing files over WAN links experienced severe degradation in response – not only for file access. they create additional strain on infrastructure and open up new requirements for effective management. decisions optimize for a narrow view. The Challenge: How to Align Business Objectives with the Entire Infrastructure Too often. When applications slow down. a network out of capacity. cut costs and simplify system management. voice. • Silos within the IT organization diffuse responsibility. When a global 100 company consolidated branch office servers into data centers to get better control over data. ERP to financial transactions. the assurance of application performance continues to elude IT managers. From voice to video conferencing. Yet despite these investments. networks and application software – hundreds of billions every year. When you look at your organization – how are your infrastructure investments fairing? Are they delivering the assurance your business requires? Even as IT performance challenges persist. it’s the business that suffers. a poorly written application – when ERP doesn’t work. emerging trends promise to open up new worlds of opportunity and efficiency. but for ERP applications that shared the same link. • Service Oriented Architectures (SOA) create a more flexible framework for leveraging and distributing information between systems and people. • Wireless technologies melt away the barriers of location. In the example above. the business slows down. Whether it is an underpowered server farm. communications. • Virtualization promises to free logic and resources from physical constraints. and data integration. we can see how good intentions quickly turn into performance nightmares. If we look at recent examples like branch office server consolidation. • Unified Communications enable the seamless integration of presence. Applications require systems. 
the applications – and potential for issues – are diverse. a decision to optimize servers cost the whole organization.ApplicAtion Delivery HAnDbook | februAry 2008 Delivering the Network That Thinks Enterprises continue to invest heavily in IT systems. Are you ready for these new opportunities? Can you effectively utilize and manage them while assuring the delivery of services your business demands? It is no trivial task. Yet as these emerging architectures are embraced by business. Value creation is lost and IT is forced to react simply to restore normal operations. networks A Guide to Decision Making 109 .
Our extensible framework interconnects with leading framework vendors to feed our real-time application view into a heterogeneous environment.

An Application-Centric and Network-Delivered World

As you deliver business globally, 90% of workers reside outside of HQ and the data center. Performance issues almost always involve the distributed network – but it is the applications that run the business. Assuring the effective delivery of services requires networks, systems and application logic to work together, but many IT organizations aren't built that way. Communication barriers, narrow views and self-interest plague our ability to resolve problems and plan effectively. Too often we deploy point tools to solve one issue, unaware of related challenges that need to be managed in other areas; a fragmented tool base mirrors the organization and creates islands of optimization. Why deploy network probes to diagnose a specific issue before considering a more complete solution – one which integrates better visibility AND the ability to resolve issues as a complete solution the entire organization benefits from? The challenge is to bridge the gap between the network and applications to assure the effective delivery of services. That requires the technologies to assure performance, and the ability to communicate and integrate across the organization.

Intelligent Service Assurance – Delivering The Network That Thinks

Packeteer delivers the network that thinks about applications, users and systems. Delivering the Network That Thinks means providing a powerful, centralized, intelligent system to assure performance:

• A unified view of end user performance
• An application fluent perspective of the network
• Adaptive response & optimization for changing conditions

Our integrated system provides unique intelligence to identify issues across the entire system. Automatically discover every application, measure utilization and provide 100+ statistics to deliver the information needed to intelligently find, fix and proactively manage issues. For example, track the total delay of specific application transactions, such as an SAP Order Entry operation, and isolate network and server issues that emerge – or monitor the Mean Opinion Score (MOS) of your IP Telephony – across your entire system. With role-based portals, you can create custom views for your voice group, NOC group, or ERP applications group. IntelligenceCenter provides a real-time, centralized view into your network and a real-time, end-to-end view of key applications.

Beyond the brute basics of acceleration, caching and compression, we automatically respond at the application level, in real time, dynamically allocating resources to the most critical needs – like automatically constricting recreational traffic to assure your next voice call receives the service level it requires – to achieve your business outcomes.
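Packeteer's actual policy engine is proprietary, but the adaptive allocation idea described above – guaranteeing minimums to critical classes and constricting lower-priority traffic under contention – can be sketched in a few lines. All class names, rates and the link size below are illustrative, not Packeteer's policy model:

```python
# Hypothetical sketch of adaptive, priority-based bandwidth allocation.
# Class names and rates are illustrative, not Packeteer's actual policy model.

LINK_KBPS = 1544  # e.g. a T1 access link

# Each traffic class declares a priority (lower = more important)
# and a guaranteed minimum rate.
CLASSES = {
    "voice":        {"priority": 1, "min_kbps": 256},
    "erp":          {"priority": 2, "min_kbps": 512},
    "recreational": {"priority": 3, "min_kbps": 0},
}

def allocate(demand_kbps):
    """Grant guaranteed minimums first, then share leftovers by priority."""
    grants = {}
    remaining = LINK_KBPS
    by_priority = sorted(CLASSES.items(), key=lambda kv: kv[1]["priority"])
    # Pass 1: satisfy guaranteed minimums in priority order.
    for name, cls in by_priority:
        grant = min(demand_kbps.get(name, 0), cls["min_kbps"], remaining)
        grants[name] = grant
        remaining -= grant
    # Pass 2: give leftover bandwidth to unmet demand, highest priority first.
    for name, cls in by_priority:
        extra = min(demand_kbps.get(name, 0) - grants[name], remaining)
        if extra > 0:
            grants[name] += extra
            remaining -= extra
    return grants

# A voice call arrives while recreational traffic wants the whole link:
print(allocate({"voice": 200, "erp": 600, "recreational": 2000}))
# -> {'voice': 200, 'erp': 600, 'recreational': 744}
```

Voice and ERP are fully served, and recreational traffic is automatically constricted to whatever the link has left over.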
ERP. A Best Practices Approach to Performance Issues Helping you sort through hundreds of applications. contain problematic traffic.ApplicAtion Delivery HAnDbook | februAry 2008 The Solution. Accelerate: Overcome latency & protocol issues to enhance performance and capacity. what approaches to take to resolve issues. A Guide to Decision Making 111 . and continuously monitor performance. CRM. and determine what tools to employ to fix problems requires a strategic partner for your success. The approach is a simple series of four steps that begins with intelligence and leads to high performance application delivery. Packeteer’s vision for Intelligent Service Assurance means we will continue to provide the strategic solution to deliver your desired business outcome. unified communications and SOA strategies. Whether your infrastructure challenges are immediate to support high quality voice. optimize and assure the performance of all applications in the network as the foundation for Intelligent Service Assurance. video. Packeteer’s Intelligent LifeCycle exemplifies our commitment by providing a guide to help you navigate the complexity of application delivery and sustain the level of service performance your business demands. transactions and other key business applications. Packeteer unifies all the tools required to find. consolidation. Extend: Create an intelligent overlay that adapts current infrastructure to new and emerging issues and integrates into key service management frameworks. Start with Intelligence Today. file. Assess: Identify what applications are running on the network. Provision: Create policies to align network resources with the business – protect key applications. or you are planning ahead for virtualization. focus on the key issues.
The Results: Applying Packeteer Technologies to Achieve Performance Results

Packeteer delivers a full spectrum of integrated solutions to optimize application performance, rather than a narrow set of point performance tools. Whether facing strategic changes in your infrastructure or looking to immediately resolve a pain point, these examples attest to the gains you can achieve with Packeteer solutions:*

IP Telephony & Video
Results: • Monitor MOS scores in real time • Reduce jitter by 60%¹ • Reduce video conference session setup by 70%²
Primary Technologies Employed: • IPT/VoIP Quality Monitoring • Application classification • Per call Application QoS

ERP/CRM & Line of Business Applications
Results: • Track end user response times • Accelerate performance by 75-95%¹ • Reduce WAN bandwidth by 70-98%
Primary Technologies Employed: • Response time monitoring • Application QoS • Compression • TCP acceleration for higher latency links

File Access (CIFS)
Results: • Accelerate file access by 98% • Reduce bandwidth by up to 97% • Reduce storage by 2 TB³
Primary Technologies Employed: • Wide Area File Services (WAFS) • CIFS Acceleration • Document caching

SharePoint & SMS Traffic
Results: • Accelerate SharePoint access times by 90% • Reduce bandwidth by 50-95%+
Primary Technologies Employed: • Protocol acceleration • Bulk caching • Local SMS delivery

Server Consolidation Services
Results: • Reduce WAN bandwidth by 80-100%
Primary Technologies Employed: • Local delivery of services: print, SMS, DNS/DHCP

Recreational Traffic
Results: • Contain to 2-10% during peak usage • Allow usage as excess bandwidth available
Primary Technologies Employed: • Traffic AutoDiscovery & Application Classification • Application QoS

Malicious Traffic
Results: • Identify infected hosts • Contain propagation traffic • Maintain WAN availability
Primary Technologies Employed: • Application classification • Connection diagnostics • Application QoS

¹ Inergy Automotive Case Study  ² Logitech Case Study  ³ Nortel Case Study
* Performance results may differ based on a mix of variables such as applications, usage patterns and WAN link speeds.

For more information, contact us at:
Email: info@packeteer.com  Web: www.packeteer.com  Phone: 800-440-5035 (North America), 408-873-4400 (International)
Transforming the Internet into a Business-Ready Application Delivery Platform
Ensuring application performance supports your business goals.

Key Challenges in Delivering Applications

As organizations expand globally, they need to make a variety of business-critical applications – including extranet portals, supply chain management, customer relationship management, sales order processing, product lifecycle management, VPNs, and voice over IP – available to employees, business partners and customers throughout the world. These organizations must also be sensitive to the economic pressures driving IT consolidation and centralization initiatives. Because of its lower cost, quick time to deploy, and expansive reach, IT organizations are increasingly turning to the Internet to support their globalization efforts.

Though global delivery of enterprise applications provides remote users with essential business capabilities, businesses face significant challenges when delivering applications via the Internet to their global user communities: poor performance due to high latency and chatty protocols like HTTP and XML, spotty application availability caused by high packet loss, and inadequate application scalability to deal with growing user bases and spiky peak usage. Each of these problems severely undermines the application's effectiveness and the company's return on investment. Business applications must perform quickly, securely, and reliably every time. If they don't, application use and adoption will suffer, and poor application performance can quickly turn the user experience into a costly, productivity-sapping exercise – threatening not just the benefits linked to the applications, but the overall success of the business. The performance issues associated with the Internet are not new. They are, however, having more of an impact because of business trends such as globalization. At the same time, new applications and business processes are introducing additional performance issues.

Akamai's Application Performance Solutions

Today, more than 2,500 businesses trust Akamai to distribute and accelerate their content. Akamai's Application Performance Solutions (APS) are a portfolio of fully managed services that are designed to accelerate performance and improve reliability of any application delivered over the Internet – with no significant IT infrastructure investment – resulting in greater adoption through improved performance, higher availability, and an enhanced user experience. APS comprises two solutions: Web Application Accelerator and IP Application Accelerator.

Web Application Accelerator accelerates dynamic, highly-interactive Web applications securely. It ensures consistent application performance, regardless of where users are located. Application performance improvements are gained through several Akamai technologies, including SureRoute route optimization, which ensures that traffic is always sent over the fastest path; the Akamai Protocol, a high performance transport protocol that reduces the number of round trips over the optimized path; caching and compression techniques; and packet loss reduction to maintain high reliability and availability.

IP Application Accelerator, like Web Application Accelerator, is built on an optimized architecture for delivering all classes of applications to the extended enterprise. Applications delivered by any protocol running over IP, such as SSL, IPSec, UDP and Citrix ICA, will benefit from IP Application Accelerator. Akamai APS also addresses performance problems associated with the delivery of applications to wireless handheld devices, such as PDAs and smart phones, ensuring increased application performance, productivity and availability for remote wireline and wireless users. Examples of applications delivered by APS include Web-based enterprise applications, Web services, Software as a Service (SaaS), Voice over Internet Protocol, live chat, secure file transfers, and client/server or virtualized versions of enterprise business processes.

Akamai leverages a highly distributed global footprint of 28,000 servers, ensuring that users are always in close proximity to the Akamai network. Users are dynamically mapped to edge servers, in real-time, based on considerations including Internet conditions, server proximity, content availability and server load, to ensure that users are served successfully and with minimal latency.

[Figure: Akamai's distributed EdgePlatform for application delivery]

Customer Benefits

Akamai's Application Performance Solutions offer a number of performance and business benefits:
• Superior Application Performance – Akamai provides unsurpassed application performance by accelerating cacheable and dynamic content, where and when it's needed, and delivers capacity on demand.
• Superior Application Availability – Akamai provides unique protections to ensure that Internet unreliability never gets in the way of end user access to your application.
• Rigorous Application Security – Akamai adheres to stringent security requirements for network, software, and administration functions and procedures.
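Akamai's mapping system is proprietary, but the idea of dynamically choosing an edge server from measured conditions can be sketched as a simple scoring function. The server names, weights and metrics below are invented for illustration:

```python
# Illustrative edge-server selection: score candidate servers by measured
# latency and current load, skipping servers that lack the content; pick the
# lowest score. Weights and server data are hypothetical, not Akamai's
# actual mapping logic.

def pick_edge(servers, w_latency=1.0, w_load=50.0):
    viable = [s for s in servers if s["has_content"]]
    return min(viable,
               key=lambda s: w_latency * s["latency_ms"] + w_load * s["load"])

servers = [
    {"name": "edge-lon", "latency_ms": 18, "load": 0.90, "has_content": True},
    {"name": "edge-fra", "latency_ms": 25, "load": 0.30, "has_content": True},
    {"name": "edge-ams", "latency_ms": 12, "load": 0.40, "has_content": False},
]
print(pick_edge(servers)["name"])  # -> edge-fra
```

Note that the nearest server (edge-ams) is skipped because it lacks the content, and the next-nearest (edge-lon) loses to a slightly farther but much less loaded peer – exactly the multi-factor trade-off described above.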
• Complete Visibility and Flexible Control – Akamai's EdgeControl Management Center provides clear and effective tools that allow IT to manage and optimize their extended application infrastructure. In addition to sophisticated historical reporting and real-time monitoring functionality, Akamai provides alert capabilities that notify IT when origin site problems are detected or user performance may have degraded. In addition, a sophisticated secure Network Operations Command Center continually monitors Akamai's globally distributed network.

[Figure: Akamai's EdgeControl Management Center provides real-time reporting]

About Akamai

Akamai is the leading global service provider for accelerating content and business processes online. Thousands of organizations have formed trusted relationships with Akamai, improving their revenue and reducing costs by maximizing the performance of their online businesses. Leveraging the Akamai EdgePlatform, these organizations gain business advantage today and have the foundation for the emerging Internet solutions of tomorrow. http://www.akamai.com
Next Generation Web Application Delivery
Accelerating, securing and ensuring the availability of critical business applications

The ubiquity of the Web simplifies many aspects of delivering application services. However, inherent performance and security inefficiencies of networking protocols negatively impact the user experience. The proliferation of new, advanced web application development techniques and associated new protocols and formats (e.g. Web 2.0, Ajax, XML, RSS, wikis, etc.) make understanding applications even more important. Application users need quick response times, improved availability, and application-layer security for the mission-critical Web applications used today. Web application delivery appliances provide advanced application-level acceleration, availability and security functionality to address these issues while lowering the cost of delivering these applications.

Bridging the gap between the business's applications and the underlying network/infrastructure – in essence directing network behavior based upon an application's behavior – is at the core of an application delivery appliance's responsibility. To deliver true business value, an application delivery appliance must:
• make the applications perform faster
• enhance the applications' security
• improve the applications' availability

An application delivery appliance needs to understand the behavior of the applications themselves. This is, of course, impossible if the application's behavior is opaque to the application delivery appliance. Next generation application delivery appliances need to be application aware.

Comprehensive Application Delivery Functionality

Citrix Systems, the global leader in application delivery infrastructure, provides application delivery appliances that combine application intelligence with a high degree of networking savvy. Citrix NetScaler's success is based on its ability to integrate multiple acceleration, availability and security functions – at both the networking and the application layers – into a single, integrated appliance. Only Citrix NetScaler provides all of the following in a single, integrated appliance:

Accelerated Application Performance
Citrix NetScaler can increase application performance by 5X or more. Citrix® AppCompress™ improves end-user performance and reduces bandwidth consumption by compressing Web application data, regardless of whether it is encrypted or unencrypted. Citrix® AppCache® speeds content delivery to users by providing fast, in-memory caching of both static and dynamically generated HTTP application content. In addition, Citrix NetScaler delivers multiple TCP optimizations to improve the performance of the network and server infrastructure.

Intelligent Load Balancing and Content Switching
NetScaler delivers fine-grained direction of client requests to ensure optimal distribution of traffic. In addition to layer 4 information (protocol and port number), traffic management policies for TCP applications can be based upon any application-layer content, as well as L4-7 header information such as URL, application data type or cookie. Administrators can granularly segment application traffic based upon information contained within an HTTP request body or TCP payload. Numerous load balancing algorithms and extensive server health checks provide greater application availability by ensuring client requests are directed only to correctly behaving servers.

Comprehensive Application Security
Citrix NetScaler appliances integrate comprehensive Web application firewall inspections that protect Web applications from application-layer attacks such as SQL injection, cross-site scripting, forceful browsing and cookie poisoning. By inspecting both requests and responses at the application layer, Citrix NetScaler blocks attacks that
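The content-switching idea – directing a request to a server pool based on Layer 7 attributes such as URL or cookie – can be illustrated with a minimal rule table. The rules and pool names below are hypothetical and are not NetScaler policy syntax:

```python
# Sketch of L7 content switching: route a request to a server pool based on
# URL and cookie attributes. Rules and pool names are illustrative only.

RULES = [
    (lambda req: req["url"].startswith("/api/"),          "api-pool"),
    (lambda req: req["cookies"].get("tier") == "premium", "premium-pool"),
    (lambda req: req["url"].endswith((".png", ".css")),   "static-pool"),
]
DEFAULT_POOL = "web-pool"

def switch(req):
    """Return the first pool whose predicate matches, else the default."""
    for predicate, pool in RULES:
        if predicate(req):
            return pool
    return DEFAULT_POOL

print(switch({"url": "/api/orders", "cookies": {}}))             # api-pool
print(switch({"url": "/home", "cookies": {"tier": "premium"}}))  # premium-pool
print(switch({"url": "/index.html", "cookies": {}}))             # web-pool
```

First-match ordering matters: a premium user requesting /api/orders still lands in the API pool because that rule is evaluated first.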
In addition. providing page-level visibility of Web application performance.ApplicAtion Delivery HAnDbook | februAry 2008 are not even detected by traditional network security products. not all increases in traffic are DoS attacks. application security and SSL offload into a single integrated solution. event management. content caching. Application-layer security prevents theft and leakage of valuable corporate and customer data. protecting against the theft and leakage of valuable corporate and customer information and aiding in compliance with security regulations such as PCI-DSS. application acceleration of up to 30X has been achieved. expenses by consolidating multiple capabilities such as content compression. with improved throughput of 67% and a 40% reduction in application response times. NetScaler tightly integrates proven protection for Web applications against today’s most dangerous security threats. Testing done in conjunction with Microsoft. and aids in complying with regulatory mandates such as the Payment Card Industry Data Security Standard (PCI-DSS). NetScaler reduces ongoing operational www. NetScaler enables IT organizations to improve resource efficiencies and simplify management while consolidating data center infrastructure. However. performance management and SSL certificate administration. ESRI lab test results further demonstrate the performance advantages of Citrix NetScaler. Oracle. ESRI and others have shown tangible benefits. Hyperion demonstrated the performance. the separately available Citrix Command Center provides centralized administration of multiple NetScaler Appliances enabling more efficient system configuration. The intuitive AppExpert Visual Policy Builder enables application delivery policies to be created without the need for coding complex programs or scripts. Content inspection capabilities enable Citrix NetScaler to identify and block application-based attacks such as GET floods and site-scraping attacks. 
interoperability. SAP’s CRM software suite showed a response time improvement of roughly 80% with CPU utilization dropping nearly 60%.com A Guide to Decision Making 117 . Response time measurements are combined with detailed statistics on the trip durations of requests and responses across the Web site infrastructure. providing granular visibility into how Web applications are behaving from the end user’s perspective. and ease of deployment of their enterprise applications with Citrix NetScaler appliances. Application Tested Citrix NetScaler has demonstrated improvements that not only address traditional server availability concerns. but also accelerate application performance. Citrix NetScaler appliances include highperformance. Citrix NetScaler is also available in a FIPS-compliant model that provides secure key generation and storage. In addition. Legitimate surges in application traffic that would otherwise overwhelm application servers are automatically handled with configurable Surge Protection and Priority Queuing features. SAP. For example. built-in defenses against denial of service (DoS) attacks. SSL acceleration reduces CPU utilization on servers. EdgeSight for NetScaler transparently instruments HTML pages. For managing multiple NetScaler appliances. Microsoft® SharePoint™ experienced up to an 82% reduction in latency for various workflows. Citrix NetScaler appliances are purpose-built to speed Web application performance by up to 5 times or more. The Citrix NetScaler family of Web application delivery systems is a comprehensive approach to optimizing the delivery of business resources in a fully integrated solution. freeing server resources for other tasks.citrix. End-user Experience Visibility Citrix NetScaler integrates Citrix EdgeSight™ for NetScaler end-user experience monitoring. 
Reduced Deployment and Operating Costs Citrix NetScaler cuts application delivery costs by reducing the number of required servers and by optimizing usage of available network bandwidth. SSL Acceleration NetScaler integrates hardware-based SSL acceleration to offload the compute-intensive processes of SSL connection set-up and bulk encryption from Web servers. Summary Citrix NetScaler appliances enable the network to bring direct business value to the business’s application portfolio. monitoring Web page response time from the application users’ perspective.
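The Surge Protection and Priority Queuing behavior described in the Citrix NetScaler section above – admitting requests up to a server's safe concurrency and parking the overflow in priority order rather than dropping it – can be sketched as follows. The concurrency limit, request names and priority values are illustrative, not NetScaler configuration:

```python
# Sketch of surge protection with priority queuing: admit requests up to a
# concurrency limit and hold the overflow in a priority queue, drained
# highest-priority first. Limit and priorities are illustrative.
import heapq

MAX_IN_FLIGHT = 2  # hypothetical safe concurrency for the back-end server

def admit(requests):
    """requests: list of (name, priority); lower number = higher priority."""
    in_flight, queue = [], []
    for seq, (name, priority) in enumerate(requests):
        if len(in_flight) < MAX_IN_FLIGHT:
            in_flight.append(name)
        else:
            # seq breaks ties so equal-priority requests stay in arrival order
            heapq.heappush(queue, (priority, seq, name))
    queued = [heapq.heappop(queue)[2] for _ in range(len(queue))]
    return in_flight, queued

reqs = [("checkout", 0), ("browse", 2), ("search", 1), ("report", 3), ("pay", 0)]
served, waiting = admit(reqs)
print(served)   # admitted immediately, up to the limit
print(waiting)  # overflow, drained highest-priority first
```

The surge is absorbed rather than overwhelming the server: the first two requests proceed, and the queued "pay" request jumps ahead of lower-priority traffic.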
WAN Optimization: from a tactical to a strategic approach

Executive Summary

The WAN is critical to the business of modern enterprises. Despite technological progress such as MPLS and xDSL, WAN bandwidth is a constrained resource and network delay is bound by physical constraints. The need for WAN Optimization Controllers (WOCs) has emerged over the past few years as a way to address application performance hurdles in selected sections of the network. Enterprises today tactically deploy WAN Optimization for networked business applications in sites that show poor end-user experience. Such an opportunistic approach has great advantages, as in a significant number of cases it permits the end-user to gain immediate benefits in sites where the technology is deployed.

But not all networks are compatible with such a tactical approach to application performance. Most Enterprises, especially large ones, require WAN Optimization benefits to be delivered globally to consistently serve the quality of experience requirements of their whole distributed workforce. Yet very few large Enterprises have generalized the deployment of WOCs in their network. There are four key reasons for that:
• Scalability – Many vendors today are able to enhance application performance on 10 or 20 sites. A select few are able to scale WOC benefits to hundreds or thousands of sites.
• Efficiency – Putting WOCs in selected sites in a large network often does not improve performance. Modern networks have meshed topologies that cannot be properly handled by traditional WOCs.
• Investment costs – Even if the technology tends to be more affordable, WOCs still cost many times more than a branch router.
• Management costs – WOCs are high-tech devices that need to be individually configured. Each device needs to be configured with each other, and yet all must reflect local requirements.

All Ipanema's customers are globally deploying WAN Optimization! The Ipanema Business Network Optimization solution has been specifically designed for strategic deployments of WAN Optimization. It is scalable: Ipanema customers typically deploy the solution to cover the needs of tens, hundreds and thousands of sites. It also shows dramatically low management costs, thanks to Ipanema's objective-based approach that automates configuration of devices and leads to management costs that are nearly independent from the number of deployed devices!

[Figure: Ipanema deployment topologies – One to Any (single data center), Some to Any (multiple data centers), Any to Any (branch offices with inter-site traffic) – combining physical devices, tele-managed (unequipped) sites and real-time cooperation between devices.]
The Ipanema Business Network Optimization solution is efficient on the most complex networks and supports progressive deployments of devices and features. Ipanema Technologies provides a Business Network Optimization solution that automatically manages and maximizes WAN application performance through the combined use of Visibility, Optimization, Acceleration and Network Application Governance features. The solution bridges the gap between the enterprise's business priorities and the WAN infrastructure. It is the only automated and scalable solution that adapts to any network condition to deliver the 3 key components of WAN application performance management: the ability to control network and application behavior, to guarantee the performance of critical applications under all circumstances, and to accelerate business applications everywhere. The Ipanema solution integrates all these features to address application performance over the entire network.

The Ipanema Business Network Optimization solution also enables innovative and cost effective deployment options. For example, companies can obtain visibility and guarantee flow performance over their whole network by only putting devices in datacenters. A first level of acceleration can even be obtained using our patented asymmetrical TCP acceleration, which does not require any device at the branch.

1. Visibility
The Ipanema solution's Visibility features enable full control over application behavior on the network. Visibility features allow the end-user to:
• Automatically discover applications over the entire network using Layer 3 to 7 Classification.
• Accurately measure the performance of all application flows in real-time.
• Report on usage and performance throughout the organization, with alarming and drill down.
• Combine proactive and reactive helpdesk functions via bird's-eye-views of performance (Maps).

2. Optimization
The Optimization features allow the performance guarantee of critical applications under all circumstances. The Ipanema solution's devices communicate together to synchronize the actions taken on WAN traffic. All traffic flows are managed to handle both competition at a site, like in a hub and spoke topology, and competition between sites, like in modern any-to-any MPLS networks. This leads to total control of network performance. Optimization features allow the end-user to:
• Define Application Performance Objectives per user and enforce them globally over the WAN.
• Guarantee the performance of critical applications under the toughest conditions.
• Globally manage meshed flows with Cooperative Tele-Optimization.
• Dynamically protect interactive applications and enable voice/video/data convergence with Smart Packet Forwarding.
• Automatically select the best access link with Objective-Based Routing.

[Figure: from link performance SLAs to application performance SLAs. Active per-link testing (PING, SAA…) yields link quality metrics and link performance SLAs such as "link delay < 50 ms during 99% of the time" and "link loss < 1% during 99% of the time". Exhaustive per-flow measurement of real traffic yields application quality indicators (AQS, MOS) and application performance SLAs such as "AQS > 9 during 99% of the time" and "MOS > 4 during 99% of the time", reflecting overall users' satisfaction before and after Optimization. A companion chart maps the business criticality of applications – SAP, Citrix (CRM), VOIP, Oracle, Citrix (MS Office), FTP, E-mail and other traffic – across sites.]

3. Acceleration
Acceleration features reduce the response time of applications over the WAN. Acceleration features allow the end-user to:
• Accelerate while protecting critical application performance.
• Locally cache and compress data using Multi-Level Redundancy Elimination.
• Transparently Accelerate legacy applications using Intelligent Protocol Transformation.
• Unleash TCP acceleration without branch devices using Tele-Acceleration.
• Implement both a strategic and tactical approach to Acceleration.

4. Network Application Governance
Network Application Governance functions are unique to Ipanema. Network Application Governance features allow the end-user to:
• Enable the shift to Application Service Level Agreements.
• Allocate responsibilities between the WAN and IT domains.
• Rightsize the bandwidth according to the desired service levels.
• Encourage good practices through cost allocation based on usage and delivered performance.
• Simplify change management.

About Ipanema Technologies
Ipanema Technologies is a provider of advanced application traffic management solutions that align the network with business goals. The Business Network Optimization solution is simple, automated and scalable, and allows enterprises to easily control, guarantee and accelerate the performance of their critical applications regardless of network conditions, accelerate operations and minimize TCO. It relies on Ipanema's Autonomic Networking System to provide full Visibility of application flows over the network; global and dynamic Optimization of network resources; transparent application Acceleration; Network Application Governance functions; and Scalable Service Delivery capabilities. Because of its design, Ipanema's solution allows network managers to concentrate for the first time on high-level activities that make it easy to deliver on the network's perennial promise to be a strategic business asset.

Ipanema's Business Network Optimization solutions are deployed in more than 75 countries. Network integrators market Ipanema's Business Network Optimization solutions to enterprises, while telecom service providers and network managed service providers offer those as a service. For more information, visit www.ipanematech.com.
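An Application Performance Objective of the form "AQS > 9 during 99% of the time", as used in the Ipanema section above, reduces to a simple percent-of-samples check. A minimal sketch (the AQS samples are made up for illustration; Ipanema's actual AQS computation is proprietary):

```python
# Sketch: evaluate an objective of the form "metric > threshold during X% of
# the time" against measured samples. The sample values are invented.

def compliance(samples, threshold, required_fraction):
    """Return (fraction of samples above threshold, objective met?)."""
    ok = sum(1 for s in samples if s > threshold)
    fraction = ok / len(samples)
    return fraction, fraction >= required_fraction

# 1000 hypothetical one-minute AQS samples: 990 good, 10 degraded.
aqs_samples = [9.6] * 990 + [8.1] * 10
frac, met = compliance(aqs_samples, threshold=9, required_fraction=0.99)
print(f"AQS > 9 for {frac:.1%} of samples; objective met: {met}")
```

The same function checks a link-level SLA ("link delay < 50 ms during 99% of the time") by feeding it negated delay values or by flipping the comparison; the point is that application and link SLAs share one evaluation shape.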
Software at the Endpoint – When Every Second Counts
Guaranteeing application performance… through visibility and control at the end point…

The problem of congestion on wide area networks (WAN) that service users in remote and branch offices is a growing one, with ever more bandwidth hungry applications competing for the available bandwidth resource.

Competition for bandwidth

Applications that business depends on, such as SAP, Citrix, PeopleSoft and others, are time sensitive in that they must be delivered to users with consistent response times for users to rely upon them. Other time sensitive examples include voice or video traffic, where even a slight disruption results in an unacceptable user experience. Non time sensitive business application examples include emails, file transfers or downloads, or synchronization. Without rules, both time and non time sensitive traffic compete for the available bandwidth on a "first come, first served" basis, and the experience is that users of time sensitive applications lose productivity, complain of poor response, and add to help desk queues. This defeats the purpose of using the wide area network to streamline operations or reduce costs.

Current solution approaches

Until now, the problem of identifying and managing the causes of congestion has been tackled using specialized network appliances, including:
• Appliances that monitor and report network traffic statistics. This is a reactive solution approach.
• Appliances that provide real time network traffic monitoring and Quality of Service (QoS) management, installed at points in the network where major traffic congestion occurs. This is a proactive approach, but not economical for smaller remote and branch offices.
• Appliances that accelerate particular type(s) of traffic, also known as WAN traffic optimization (WTO) appliances. While these can be said to be proactive, they only work for some types of traffic. They may not assist interactive business transaction traffic (there is little if any data to compress and little if any duplication in such data), and at times of congestion, if there is no effective QoS control in place, the target type(s) of data may not be accelerated in actual practice.
• Multi function appliances, also known as the "branch office box" (BOB), that provide a range of functions, some of which may be surplus – and it is still an appliance with cost issues.

The issue with these is that they are not economical or easily manageable to suit the majority of small remote office and branch office locations. The expense of these appliances also means they tend to be limited to use at data centre locations or on high speed links. Users in smaller remote offices miss out.

NetPriva has developed a totally new software based solution approach to the problem of WAN congestion which involves no appliance equipment and is economical for even the smallest of offices. The NetPriva solution suits enterprises for their enterprise WAN, network service providers for their customers, and Internet portals that service business (MSPs) and other users that depend on network responsiveness.
the software can also use the capacity of user PCs in remote or branch office locations. • It provides monitoring/visibility and proactive policy based network traffic control (QoS). management and QoS control can be had by simply installing NetPriva software on an existing “gateway” device such as a Microsoft ICS server or a VPN server. including custom applications and even encrypted data. can be identified even if it is masquerading as some form of legitimate traffic. Full visibility. the NetPriva software can absolutely determine the identity of any application and user without the limitations and resource usage issues of interpreting matching data patterns as is the case with deep packet inspection techniques. These solutions are again not effective at times of network congestion. the NetPriva software filters and provides monitoring and shaping control by IP address. and Citrix ICA tag. and shaping capabilities thatprovide visibility and control to match or exceed that of many of the WAN traffic optimization appliances. Protocol. like any traffic. New Software Solution Platform NetPriva has developed new generation software platform for cost effective wide area network performance management for all size branch offices in an enterprise. There. Port number. And its ability to identify any traffic includes the growing amount and variety of peer to peer traffic much of which may be unsanctioned from a business point of view. ICS or VPN server. URL. Policy design and deployment is quick and easy. that is inside the user’s PC. Network layer 7++ visibility and shaping When deployed at network end points (servers or user PCs) the NetPriva software provides filtering to monitor and shape traffic at the application layer. NetPriva software functionality Network layer 3 and 4 visibility and shaping When deployed at the edge of the network. Layer 7 in addition to Layers 3 and 4 as described above. where the LAN links to the WAN. In addition. 
This is aimed at individual users “on the road”. users. and also eliminates the “man-in the middle” issues that appliances typically suffer from through not being able to positively or economically identify compressed or encrypted traffic. The basis of NetPriva’s Layer 7++ functionality lies in the fact that the NetPriva software does its work at the very end point of the network where network traffic originates and terminates. The NetPriva software platform provides a comprehensive range of filtering. and traffic policies are managed via the web portal. monitoring. These are limited “point solutions” for some wide area network users. • For additional ease of use and visibility. and user login as well as the capability to classify any traffic including custom applications and encrypted traffic. A Guide to Decision Making 123 . NetPriva terms this capability Layer 7++ on the basis of the additional facility to monitor and shape by application executable name.ApplicAtion Delivery HAnDbook | februAry 2008 • Software “clients” for user PCs to reduce the data on slow network links. • It has been designed to be extensible to integrate with or complement traffic acceleration methods. • It automatically classifies all types of traffic. NetPriva’s Layer 7++ functionality includes the identification of encrypted traffic which is a growth area in traffic.e. and managed through a web browser or PC based Console. making for a cost effective solution for even the smallest remote or branch offices. Multiple locations. Peer to peer traffic. i. It can be remotely installed and managed as part of a standard operating environment (SOE) on user PCs anywhere 24x7. • An appliance is not required as the software utilizes an existing MS Routing. NetPriva’s software solution achieves its highest level of visibility and control through unique peer-topeer network management.
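The Layer 3 and 4 filtering described above can be sketched as a first-match rule table keyed on protocol and port. The rule values and class names below are illustrative assumptions, not NetPriva's actual policy format:

```python
# Sketch: Layer 3/4 traffic classification against ordered policy rules.
# Rule values and class names are illustrative assumptions only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src_ip: str
    protocol: str   # "tcp" or "udp"
    dst_port: int

# Ordered rules: first match wins; omitted fields act as wildcards.
RULES = [
    ({"protocol": "tcp", "dst_port": 1494}, "time-sensitive"),      # Citrix ICA
    ({"protocol": "udp", "dst_port": 5060}, "time-sensitive"),      # VoIP signalling
    ({"protocol": "tcp", "dst_port": 25},   "non-time-sensitive"),  # email (SMTP)
]

def classify(pkt: Packet) -> str:
    """Return the traffic class for a packet based on Layer 3/4 fields."""
    for match, traffic_class in RULES:
        if all(getattr(pkt, field) == value for field, value in match.items()):
            return traffic_class
    return "best-effort"

print(classify(Packet("10.0.0.5", "tcp", 1494)))  # time-sensitive
print(classify(Packet("10.0.0.5", "tcp", 8080)))  # best-effort
```

Without such rules, every packet lands in the same "best-effort" class, which is exactly the first-come, first-served contention described above.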
DSCP packet marking / colouring

The NetPriva agent-based software is able to mark or colour traffic by DiffServ code point according to the policy for each application. This can be used to "groom" particular traffic from the user's desktop for routing purposes, such as for MPLS Class of Service. This has the potential to extend the MPLS Class of Service model right to the end points of the network.

Analysis and reporting

NetPriva holds granular network statistics, at the level of per-second application flow details, in an SQL database format. Data may be retained online according to resource limitations or retention policies. Data schema details are provided for SQL queries and database extracts and reporting.

Products

End Point Direct ("EPDirect"): Client-side software for true endpoint control, deployed on end-point user Windows PCs on the LAN at smaller office network end points.

Edge Virtual Gateway ("EdgeVG"): Branch office application software (Windows) for full visibility, management and control (QoS) through a branch office "gateway" server, such as a shared Windows ICS/RAS "gateway" PC at smaller office network edge points.

[The original presents a product comparison table for EdgeVG and EPDirect covering: typical location(s); configuration (local "always on" PC or server, or local/remote PC/server; on each end-point user PC with EPDirect clients); network device type and connectivity (Ethernet; VPN connected to LAN/WAN; Layer 3 routing host such as Windows RAS or ICS); key functions (traffic monitoring/shaping; real-time visibility and troubleshooting; statistics retention, analysis and capacity planning); WAN optimization (TCP flow optimization, future for EPDirect); components (QoS engine, collector, management console); policy settings (priority; min./max. bandwidth; Layer 3, 4; Layer 7++ automation; Citrix ICA; packet marking (Diffserv); drop packet conditions; shape or monitor only; statistics retention); and application/user identification (Layers 3 and 4 by IP address and port; URL or URL substring via DPI; Citrix ICA priority tag; automatic Layer 7++ traffic identification for all applications).]

http://www.netpriva.com
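The DiffServ marking described above can be sketched at the socket level: the DSCP value occupies the upper six bits of the IPv4 TOS byte, so an end-point agent can set it per application with a standard socket option. The application-to-DSCP mapping below is an illustrative assumption, not NetPriva's policy:

```python
# Sketch: per-application DiffServ (DSCP) marking from the end point.
# The APP_DSCP mapping is an illustrative assumption.
import socket

APP_DSCP = {
    "voice": 46,    # EF (expedited forwarding)
    "citrix": 26,   # AF31
    "email": 0,     # best effort
}

def dscp_to_tos(dscp: int) -> int:
    """DSCP occupies the upper six bits of the IPv4 TOS byte."""
    return dscp << 2

def open_marked_socket(app: str) -> socket.socket:
    """Create a UDP socket whose outgoing packets carry the app's DSCP."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(APP_DSCP[app]))
    return s

s = open_marked_socket("voice")
print("TOS byte for voice:", dscp_to_tos(APP_DSCP["voice"]))  # 184
s.close()
```

A downstream MPLS edge router can then map these code points to its Classes of Service, which is the "grooming" idea described above.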
To enhance the efficiency levels required for multinational enterprises to successfully face competitive challenges, Orange Business Services offers Business Acceleration: a full suite of services that brings focus, control and greater speed to those applications that are central to consistent delivery of quality services. To be competitive, we help you perform at peak performance, wherever your business operates around the globe. You gain world-class management of your end-to-end communications and application environment, along with a service level agreement that ensures the performance of your business.

We help you:
• analyze: gain visibility into your communications infrastructure and business-critical applications
• manage: ensure the efficient and consistent operation of your infrastructure and application environment
• optimize: get the most of your infrastructure and improve the performance of your business

We understand information communications and technology. We know what drives change in IT organizations: mergers and divestments, new product releases, cost-reduction initiatives, changing application requirements, new technologies and evolving business strategies. Our recommendations and guidance promote a better end-user experience as well as a measurable return on investment.

our Business Acceleration methodology

With our Business Acceleration methodology, we can get your business moving. Business Acceleration provides insights and tools to enhance your applications' visibility, management and performance. Our approach aligns your business and IT, network optimization, application performance and strategic business objectives, so everything is working optimally. Working with you, we define a business case, quick wins and a transition plan to meet your service assurance expectations.

analyze
• benchmark and align IT transactions to business priorities
• model your end-to-end environment in order to study various change scenarios
• validate application and infrastructure performance individually and holistically

Business Acceleration begins by helping you gain end-to-end visibility into your communications infrastructure and business-critical applications. First, we show you what's running on your network. We conduct assessments of your underlying infrastructure, a variety of end-user profiles, applications and processes. Then our consultants help you understand the impact of your usage patterns on the performance levels of your applications.
manage

With Business Acceleration, your infrastructure and application environment runs efficiently and harmoniously, ensuring that employees are more productive and business goals are met. You gain control when you have the ability to proactively manage performance and network resources. Leave the day-to-day administration to us and focus your resources on growing your business.

consistent quality
Our service management improves the operational efficiency of your application environment through specialized personal support and delivers global operational monitoring, detailed analysis and monthly reporting. Ongoing monitoring and alerts by our three major service centers ensure that quality of service is maintained and that end-user expectations are met or exceeded. We also protect your infrastructure from attacks and threats to ensure resiliency and availability.

manage
• dashboards for ongoing performance improvements through QoS and web traffic prioritization
• service level management including monitoring, reporting and real-time fault management for improved control

application SLAs
To guarantee the best possible service, we create an application-based service level agreement so you get the performance you need, no matter where your business takes you. Covering all critical applications, the service level agreement directly addresses your performance requirements on response times and availability. This means lower response times and increased availability so your business can operate at full speed.

optimize

The third phase of Business Acceleration approaches optimization from two angles – application and infrastructure. Leveraging techniques such as caching, compression and acceleration, we make your applications perform as efficiently as possible. We allocate bandwidth according to your business priorities and implement policies governing your applications globally.

you can optimize
• compression to maximize bandwidth use and throughput
• caching at appropriate regional locations to minimize application latency
• consolidation of servers, applications and network equipment

IT infrastructure optimization
Through consolidation, we simplify your infrastructure to reduce your operational costs and ease ongoing management. With server, storage and application management services, you gain more control of your environment while achieving consistently high service levels.

the result
Business Acceleration gives you an overall improvement in application experience backed by an application-level service level agreement. It maximizes your existing investment in applications and infrastructure.

why Orange Business Services?
As a global integrated operator, we can bring your business to a new level of performance. We can improve the visibility, management and performance of your applications with single-source convenience.

http://www.orange-business.com/
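The bandwidth gain from the compression technique listed above is easy to see in a short sketch; the repetitive payload below is an illustrative stand-in for typical redundant WAN traffic, not actual customer data:

```python
# Sketch: why compression stretches WAN bandwidth. Repetitive business
# data (logs, documents, web requests) compresses well; the payload is
# an illustrative assumption.
import zlib

payload = b"GET /report?quarter=Q1 HTTP/1.1\r\nHost: intranet.example\r\n" * 200
compressed = zlib.compress(payload, level=6)

ratio = len(payload) / len(compressed)
print(f"{len(payload)} bytes -> {len(compressed)} bytes "
      f"({ratio:.0f}x less WAN bandwidth)")
```

The same principle underlies WAN optimization appliances and managed acceleration services: the less redundant data crosses the link, the more of the link is left for everything else.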
network-Wide Routing and Traffic Analysis

Packet Design Solutions: Packet Design's IP routing and traffic analysis solutions empower network management best practices in the world's largest and most critical enterprise, service provider and government OSPF, IS-IS, EIGRP and RFC2547bis MPLS VPN networks, enabling network managers to maximize network assets, streamline network operations, and increase application and service up-time.

Route Explorer: Industry-Leading Route Analytics Solution

Overview of Route Explorer
Route Explorer works by passively monitoring the routing protocol exchanges (e.g. OSPF, IS-IS, EIGRP, BGP, RFC2547bis MPLS VPNs) between routers on the network, then computing a real-time, network-wide topology that can be visualized, analyzed and serve as the basis for actionable alerts and reports. This approach provides the most accurate, real-time view of how the network is directing traffic. Unstable routes and other anomalies – undetectable by SNMP-based management tools because they are not device-specific problems – are immediately visible.

Route Explorer appears to the network simply as another router, though it forwards no traffic and is neither a bottleneck nor a failure point. Since it works by monitoring the routing control plane, it does not poll any devices and adds no overhead to the network. A single appliance can support any size IP network, no matter how large or highly subdivided into separate areas.

As the network-wide topology is monitored and updated, Route Explorer records every routing event in a local data store. An animated historical playback feature lets the operator diagnose inconsistent and hard-to-detect problems by "rewinding" the network to a previous point in time, while letting them step through individual routing events to discover the root cause of the problem. Histograms displaying past routing activity allow the network engineer to quickly go back to the time when a specific problem occurred. Engineers can model failure scenarios and routing metric changes on the as-running network topology. Traps and alerts allow integration with existing network management solutions.

Optimize IP Networks with Route Explorer
• Gain visibility into the root cause of a significant percentage of application performance problems
• Prevent costly misconfigurations
• Ensure network resiliency
• Increase IT's accuracy, confidence and responsiveness
• Speed troubleshooting of the hardest IP problems
• Empower routing operations best practices
• Complement change control processes with real-time validation of routing behavior

Deployed in the world's largest IP networks
250+ of the world's largest enterprises, service providers, government and military agencies and educational institutions use Packet Design's route analytics technology to optimize their IP networks.
[Figure: Route Explorer passively listens to routing protocol exchanges (IGP and BGP routing adjacencies across autonomous systems running OSPF, IS-IS and EIGRP, including a BGP route reflector) to provide visibility into network-wide routing across Autonomous Systems, areas and protocols.]

Traffic Explorer: Network-Wide, Integrated Traffic and Route Analysis and Modeling Solution

Optimize IP Networks with Traffic Explorer
• Monitor critical traffic dynamics across all IP network links
• Operational planning and modeling based on real-time, network-wide routing and traffic intelligence
• IGP and BGP-aware peering and transit analysis
• Visualize impact of routing failures/changes on traffic
• Departmental traffic usage and accounting
• Network-wide capacity planning
• Enhance change control processes with real-time validation of routing and traffic behavior

Traffic Explorer Architecture: Traffic Explorer consists of three components:
• Flow Recorders: Collect Netflow information gathered from key traffic source points and summarize traffic flows based on routable network addresses received from Route Explorer
• Flow Analyzer: Aggregates summarized flow information from Flow Recorders and calculates traffic distribution and link utilization across all routes and links on the network. Stores replayable traffic history
• Modeling Engine: Provides a full suite of monitoring, alerting, analysis, and modeling capabilities

Traffic Explorer Applications

Data Center Migration Simulation and Analysis: Traffic Explorer ensures application performance by increasing the accuracy of network planning when moving server clusters between data centers, by simulating and analyzing precisely how traffic patterns will change across the entire network and identifying resulting congestion hot spots, using the actual routed topology and traffic loads.

Disaster Recovery Planning: Traffic Explorer can simulate link failure scenarios and analyze continuity of secondary routes and utilization of secondary and network-wide links.

Forensic Troubleshooting: Traffic Explorer increases application performance by speeding troubleshooting with a complete routing and traffic forensic history.

Overview of Traffic Explorer
Traffic Explorer is the first solution to combine real-time, integrated routing and traffic monitoring and analysis with "what-if" modeling capabilities. Unlike previous traffic analysis tools that only provide localized, link-by-link traffic visibility, Traffic Explorer delivers the industry's only integrated analysis of network-wide routing and traffic dynamics.

Traffic Explorer's knowledge of IP routing enables visibility into network-wide routing and traffic behavior, including the routed path for any flow, the network-wide traffic impact of any routing changes or failures, and the number of flows and hops affected. This information helps operators prioritize their response to those situations with the greatest impact on services. An interactive topology map and deep, drill-down tabular views allow engineers to quickly perform root cause analysis of important network changes. Standard reports and threshold-based alerts help engineers track significant routing and utilization changes in the network.

Traffic Explorer provides extensive "what-if" planning features to enhance ongoing network operations best practices. Powerful "what-if" modeling capabilities empower network managers with new options for optimizing network service delivery. Traffic Explorer lets engineers model changes on the "as running" network. Engineers can simulate a broad range of changes, such as adding or failing routers, moving or changing prefixes, and adjusting IGP metrics, BGP policy configurations, link capacities or traffic loads. Simulating the effect of these changes on the actual network results in faster, more accurate network operations and optimal use of existing assets, leading to reduced capital and operational costs and enhanced service delivery.

[Figure: Traffic Explorer collects Netflow data exported from routers at key traffic sources (e.g. data center(s), internet gateways, WAN links), computes traffic flows across the network topology using routing data from Route Explorer, and displays, reports and enables modeling based on actual network-wide routing and traffic data.]

http://www.packetdesign.com  Email: info@packetdesign.com
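The Flow Analyzer's core job, mapping summarized flows onto their routed paths and summing per-link utilization, can be sketched as below. The routes, flows, device names and rates are illustrative assumptions, not Traffic Explorer data:

```python
# Sketch: computing per-link utilization by combining NetFlow-style
# flow records with routed paths. All names and rates are illustrative.
from collections import defaultdict

# (src, dst) -> routed path, as a route analytics feed would supply it
routes = {
    ("datacenter", "branch1"): ["dc-rtr", "core1", "edge1"],
    ("datacenter", "branch2"): ["dc-rtr", "core1", "edge2"],
}

# NetFlow-style records: (src, dst, mbps)
flows = [
    ("datacenter", "branch1", 40),
    ("datacenter", "branch2", 70),
]

def link_utilization(routes, flows):
    """Sum flow rates over every link (router pair) on each routed path."""
    load = defaultdict(float)
    for src, dst, mbps in flows:
        path = routes[(src, dst)]
        for a, b in zip(path, path[1:]):
            load[(a, b)] += mbps
    return dict(load)

util = link_utilization(routes, flows)
print(util[("dc-rtr", "core1")])  # 110.0 — the shared link carries both flows
```

The value of routing awareness shows up on the shared link: a purely local, per-interface view sees two modest flows, while the path-aware view reveals that both converge on one link, which is where a migration or failure would create a hot spot.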
Next Generation Firewalls: providing visibility and control of users and applications

Introduction
IT administrators today are faced with an application landscape that has evolved in a dramatic fashion. End-user applications that are being installed on the network have been designed specifically to act evasively, avoiding network detection and the associated security infrastructure. Even well-meaning corporate applications utilize similar tactics to accelerate deployment, facilitate widespread access and minimize disruption. Added to this are hackers looking for financial gain through the theft of personal information, passwords and corporate information.

IT administrators know that there are applications on their network that their network security infrastructure cannot identify, and that without application visibility it is not possible to effectively control traffic on the network. The ramifications resulting from this inability to identify and control the applications traversing the network range from benign to serious. End-user productivity, bandwidth consumption, and PC performance degradation due to non-work-related processing are just a few of the relatively benign ramifications that administrators face. The more threatening ramifications include regulatory compliance and information leakage. IT administrators are managing as best they can with a patchwork of existing technologies.

A Fresh Approach to Network Security
In order to keep pace with the evolving application landscape, administrators are coming to the stark realization that only a fresh approach will enable them to accurately identify and therefore control all application traffic flowing in and out of the network. What's needed is a fresh approach to the firewall, one that takes an application-centric approach to traffic classification and is capable of bringing policy-based application control back to the network security team. Palo Alto Networks is taking a new approach to build a solution for today's network security needs:

• The solution starts with network traffic classification that identifies the actual application irrespective of port, protocol, SSL encryption or evasive tactic employed. All traffic on all ports is classified in this way, providing application identification as a comprehensive visibility foundation for all security functions to leverage.
• Graphical visualization and policy control of application usage. Simple and intuitive visualization tools provide visibility into the traffic currently on the network, helping to set appropriate application use policy. Application policy control includes allowing, blocking, marking for QoS, controlling file transfers, and inspecting traffic for viruses, spyware, and vulnerability exploits.
• Policy-based decryption, identification and control of SSL traffic provides visibility into one of the largest blind spots on the network today. The policy controls enable gradual introduction of SSL decryption as well as granular enforcement of corporate policy.
• Real-time protection from threats embedded in applications allows network-based threat prevention without impact to user experience. Rather than using multiple threat prevention devices that often proxy file transfers to look for viruses and spyware, Palo Alto Networks utilizes a single, hardware-accelerated prevention engine supported by a common signature format to detect a wide range of malware and threats.
• Rounding out this fresh approach is a purpose-built, high-speed platform that makes it possible to provide visibility and control for all applications on all ports. The platform utilizes different processing technologies, applied to specific functions, complemented by large amounts of RAM to maintain multi-gigabit throughput and low latency even under load with all functions turned on.

The result is a solution that can help mitigate today's emerging security risks through tighter control of the application traffic traversing the network.

Palo Alto Networks PA-4000 Series
Palo Alto Networks is taking a fresh approach to deliver a next-generation firewall that classifies traffic from an application-centric perspective. Based upon a new traffic classification technology called App-ID™, the Palo Alto Networks PA-4000 Series accurately identifies applications irrespective of the protocol or port that they may use for communications, thereby enabling organizations to accurately identify and control applications flowing in and out of the network. The PA-4000 Series brings new levels of application visibility, control and protection to the enterprise firewall market.

With the resultant visibility into the actual identity of the application, security administrators can regain control of their networks at the gateway to achieve the following business benefits:
• Mitigate risk through policy-based application usage control and threat detection
• Enable growth by embracing web-based applications in a controlled and secure manner
• Facilitate efficiency by minimizing the amount of manpower associated with monitoring desktops and removing unwanted applications

The PA-4000 Series is a purpose-built, high-performance platform with dedicated processing for management, traffic classification and threat mitigation, allowing it to meet the performance demands of protecting a high-speed network.

Application Identification
At the heart of the PA-4000 Series is an application-centric classification technology called App-ID. App-ID is an industry first, using up to four traffic classification techniques to analyze the actual session data and identify the application, even those applications that use random ports, tunnel inside and emulate other applications, or use SSL encryption. Armed with this in-depth knowledge, customers can deploy policy-based application usage control for both inbound and outbound network traffic. Unlike traditional security approaches that rely solely on protocol and port, the PA-4000 Series can accurately identify which applications are flowing across the network, regardless of the protocol and port being used.

The four traffic classification mechanisms in App-ID are:
• Application signatures: application context-aware pattern matching designed to look for the unique properties and information exchanges of applications to correctly identify them, irrespective of port or protocol.
• SSL decryption: decrypts outbound SSL traffic using a forward SSL proxy to identify and control the traffic inside before re-encrypting it to its destination.
• Application decoding: a powerful engine that continuously decodes application traffic to identify the more evasive applications as well as create the foundation for accurate threat prevention.
• Protocol/port: helps narrow the application identification process, but is primarily used to control which ports applications are allowed to use.

Figure 1: App-ID uses four traffic classification techniques to accurately identify the application.
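The layered idea behind the mechanisms listed above, payload signatures deciding the application while the port serves only as a hint or fallback, can be sketched in a few lines. The signatures and port mappings below are simplified illustrative assumptions, not App-ID's actual mechanisms:

```python
# Sketch: signature-first traffic classification with a port fallback.
# Signatures and port hints are illustrative assumptions only.
import re

SIGNATURES = {
    "http":       re.compile(rb"^(GET|POST|HEAD) \S+ HTTP/1\.[01]"),
    "bittorrent": re.compile(rb"^\x13BitTorrent protocol"),
    "ssl":        re.compile(rb"^\x16\x03[\x00-\x03]"),  # TLS handshake record
}

PORT_HINTS = {80: "http", 443: "ssl", 6881: "bittorrent"}

def identify(payload: bytes, dst_port: int) -> str:
    """A signature match wins regardless of port; the port is only a hint."""
    for app, sig in SIGNATURES.items():
        if sig.match(payload):
            return app
    return PORT_HINTS.get(dst_port, "unknown")

# HTTP running on a random port is still identified as HTTP
print(identify(b"GET /index.html HTTP/1.1\r\n", 9090))  # http
# Unrecognized payload on port 443 falls back to the port hint
print(identify(b"\x00\x00\x00", 443))                   # ssl
```

This is why port-only filtering fails against evasive applications: an application on a non-standard port is invisible to a port rule but still matches its payload signature.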
The application-centric nature of App-ID means that it can not only identify and control traditional applications like HTTP, FTP, SNMP, and other applications commonly found on enterprise networks, but it can also accurately delineate specific instances of IM (AIM, Yahoo!IM, Meeboo, etc), Webmail (Yahoo!Mail, Hotmail, gmail, etc), and peer2peer (Bittorrent, emule, Neonet, FlashMatch, etc). Once the application is identified and decoded using App-ID, the traffic can be more tightly controlled through security policies. Once accurately identified, appropriate security policies can be implemented to enforce application usage rights, and any traffic that is ultimately allowed onto the network can be inspected more completely for all manner of malware.

User Visibility
Through transparent integration with Microsoft's Active Directory (AD), both ACC and App-Scope will display who is using the application based on their identity from Active Directory, as well as their IP address. Positive identification of which actual user is using specific applications is key to providing visibility into application usage on the network, and subsequently to being able to create an appropriate security policy that is based on actual users and user groups. In addition to being displayed in ACC and App-Scope, user identity is also accessible as part of the policy editor.

Policy and Configuration Control
With increased visibility comes the ability to deploy policies for more granular control over traffic traversing the network. ACC allows a security team to analyze the data collected and make informed security policy decisions, which can then be implemented using the intuitive management interface, with logging and reporting giving administrators a consistent view of network activity. From the familiar rule-base editor, an application usage control policy can be created, reviewed and deployed. Administrators can pick and choose from over 500 applications, listed by their commonly used names, which are dynamically updated as new applications are added to the Palo Alto Networks update service. Alternatively, application control can be implemented based on the 16 different application categories.

Conclusion
The Palo Alto Networks PA-4000 Series brings welcome relief to security teams struggling to gain control of, and protect the network from, new threats borne by the next generation of applications, both personal and business, that are specifically designed to evade today's port-based security offerings. With its fresh, from-the-ground-up approach, the PA-4000 Series brings application visibility and control back to the network security team.

Palo Alto Networks, 2130 Gold Street, Alviso, CA 95002, 408.786.0001, www.paloaltonetworks.com

Copyright 2007 Palo Alto Networks, Inc. All rights reserved. Palo Alto Networks, the Palo Alto Networks Logo, PAN-OS, App-ID and Panorama are trademarks of Palo Alto Networks, Inc. in the United States. All specifications are subject to change without notice. Palo Alto Networks assumes no responsibility for any inaccuracies in this document or for any obligation to update information in this document. Palo Alto Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
the cio's new guide to design of global it infrastructure: Five Principles Driving Radical Redesign

Technology has enabled businesses to become highly distributed. Businesses now operate everywhere, all the time. The ability to take advantage of business opportunities, people and resources in previously distant markets has created a vast new set of challenges that, when taken together, present a difficult dilemma to a CEO or CIO: Continue to deliver acceptable IT services by throwing money, bandwidth and infrastructure at the problem? Or, save money by consolidating at the expense of the end users? Or use IT to drive new business initiatives? Is it possible to do both? How does a business in today's global marketplace bring the world closer?

This paper will explore the business imperatives that are driving enterprise IT design today, and then presents five key principles CIOs are using to redesign business infrastructure at companies of all sizes. Finally, this paper discusses the importance of wide-area data services (WDS) solutions and explains how they can help to cohesively tie together distributed and highly mobile organizations.

The Business Imperative
Before considering an information technology strategy for a globalized world, it is valuable to understand the fundamental trends that are pushing businesses to redesign their operations around a small set of broad-based imperatives.

1. Flexibility. Businesses that operate across traditional borders must be able to respond to opportunities and challenges faster than ever before. In order to compete, a business has to be faster to deliver a product or service as good, or better, than that of potentially any other company in the world.

2. Continuity. "Follow the sun" (global 24/7) operations have shrinking maintenance windows and a need for applications to be running at all times, at any time of the day or night. As a result, the need for anytime, anywhere application access has become a requirement.

3. Simplicity. Less has always been more for enterprises. While per unit costs of technology are always decreasing, in aggregate companies see an increase in cost, as an increase in technology has typically led to increased complexity. The IT team is typically in a difficult position, assessing where to try and cut costs while still moving forward with a plan to continually enhance IT services to the business. Smart CIOs are investing in technologies like continuous data protection, virtualization, and wireless connectivity to help IT slim down its footprint while increasing their business's competitive advantages.

4. Security. With the growing importance of digital applications and data, the sources of threats to enterprise data have multiplied dramatically. Delay or loss of data for any reason – system failure, natural disasters – has a domino-like effect across the entire organization. While businesses do everything that they can to stop threats in the first place, they still must be prepared to recover from threats as quickly as possible.

Redefining the Enterprise Workplace
Historically, decision-making power was concentrated in the headquarters, where most employees worked. As a result, IT infrastructure development mostly focused on that location. Data centers were routinely housed as close to headquarters as possible, and "remote" workers were often relegated to small, disconnected islands of branch offices. Mobile workers were almost an anomaly – those who were traveling among offices were simply out of touch, with no ability to access applications and data. Typically these were sales representatives who only needed to receive information from headquarters, and had few decision-making requirements while they were on the road. The major decisions – including the tools and data to make those decisions – were essentially in one place.

Today's enterprise looks significantly different. Headquarters isn't where all of the action is. With about two-thirds of the workforce operating in locations other than HQ, and an estimated 450 million mobile workers1 around the world, decision-making has become significantly more decentralized, with mobile workers and branch office employees making critical decisions on a regular basis. The cross-functional nature of the distributed workforce significantly changes how a business needs to resource and support both the branch office and the mobile worker. Many organizations have made a 180-degree change in how they view employees who work outside of headquarters: before, decisions were handed down to them, and now remote workers' decisions help define the path of the organization, making it impossible for the CIO to develop an IT strategy without accounting for a distributed workforce – and a mobile one as well. In fact, a recent Forrester survey notes that 80% of businesses are trying to set a strategy and policies for mobility in their organizations this year.

With such a pressing need for redesigning IT strategies to encompass global, follow-the-sun business practices, how does the CIO begin to sketch out the path forward?

The Principles of the Global Workforce
There are five key principles that CIOs must take into account when thinking about how business is changing today.

1. Business never stops. With a globalized workforce – and a rapidly globalizing customer base – businesses cannot afford their operations to be stopped for even a few minutes. Issues like hurricanes or a flu pandemic might force workers to operate from home for an unspecified period of time. Compromised data centers may require the enterprise to rapidly switch operations to secondary locations with no loss in information.

2. Distance doesn't matter. Location – headquarters, branch office, home office, or no office – simply doesn't matter anymore. Employees now expect to be able to collaborate in real-time with any coworker. They expect to have access to whatever data or services the company offers no matter where they happen to be. Waiting 20 minutes for a file to be sent between workers – even if they are across the world from each other – is no longer acceptable for the employee or for the customer project that they are working on.

3. Applications and data must be available everywhere but be all in one place. Knowledge must be harnessed – and data must be managed. Consolidating data makes it easier to track, protect, and restore, and consolidation goes a long way toward eliminating the 'islands' of storage and data that evolve over time. CIOs are demanding that data be brought back from remote offices. At the same time, businesses recognize that the data and applications were "out there" for a reason – that's where they needed to be accessed. While data lives in a data center, it can be used by anyone everywhere. So while consolidation is an important strategy for data protection and cost control, it can negatively impact business operations unless LAN-like performance can be maintained everywhere.

4. New offices and merged/acquired businesses must come online quickly, or move in order to take advantage of an opportunity; flexibility in moving data and applications is essential. With organizations being required to react quickly in the face of change, CIOs must be able to quickly move massive amounts of data, and potentially set up application infrastructure in remote locations overnight.

5. IT must be an enabler for the way business needs to operate.

1 IDC, Worldwide Mobile Worker Population 2005-2009 Forecast and Analysis
What if distance were no longer an issue? How would that change the way document management systems. regardless of size. The impacts of WDS are tangible. virtualized infrastructures to live anywhere and migrate at faster speeds than ever before. at the same level of application performance. Applications. With an effective branch WDS solution. without significant investment in bandwidth or infrastructure. As such. to allow workers anywhere to feel as if the files. Branch Office WDS solutions have typically been known for accelerating the branch office Branch office acceleration forms the basis of a WDS fabric. WDS solutions have a primary role in ensuring that users everywhere can access applications with LAN-like performance even if they are accessing data from low-bandwidth Wi-Fi connections. networking. 5. storage. There are no second-class enterprise citizens. and sales representatives that are on the move who are often responsible for bringing in new revenue and dealing with the customer in times of crisis. WDS solutions combine the benefits of WAN optimization and application acceleration. CIOs can engage in meaningful consolidation projects that are de-risked by the fact that application performance will still be maintained. once IT implements a WDS fabric. out to the three key areas of the business: the branch office. WDS solutions allow individuals to collaborate more easily. WDS solutions are architected – and should be evaluated . Every member of the enterprise needs to have access to the same applications. Mobile Worker CIOs today have a strong focus on the mobile worker. 1. and data they need are always local.with three characteristics in mind: Speed. collaborating Wide-area data services: bringing the distributed enterprise closer No single technology can allow a CIO to accomplish these large goals for an enterprise. Once in place. But in order to fully take advantage of WDS in the enterprise. 
one thing remains certain: a widearea data services (WDS) solution is the fabric that can tie all of these elements together. and backup systems are architected? The possibilities are endless. But no matter which of these technologies – and which vendors – are chosen. The introduction of a mobile user use case adds a number of requirements for any proposed WDS solution: Does the mobile solution provide acceleration of the same level to mobile workers as to branch offices? Is the WDS solution architected so that the Mobile accelerator connects directly to the existing appliance solution? Can the appliances support potentially thousands of mobile workers effectively? Does the mobile software use the same code base and functionality as the appliance solution? IT-empowered mobile workers can also enable new and innovative work arrangements within an organization. the way that they implement services can be dramatically simplified. Moreover. and security will all play into the mix. businesses that are hoping to expand to a new region often want to hire professionals in that region. The days of the “important” people working at corporate HQ are rapidly fading. ERP systems. Scale and Simplicity. 2. engineers. the workers can “source” work from other offices. IT to complete tasks like backup and consolidation more quickly and effectively. CIOs must choose a solution that can reach A Guide to Decision Making 135 . CIOs and IT managers may no longer prioritize workers based on their geographic location. and the data center. the mobile worker. across a range of different IT projects. it’s essential that these employees have fast access to any and all of the corporate resources that are available to employees at the office. It’s those executives.ApplicAtion Delivery HAnDbook | februAry 2008 quickly be absorbed into the fabric of the existing organization by providing them immediate access to new systems and data. For example. applications. 
Employees who work in branch offices can more effectively share data with colleagues across the organization. Employees everywhere are now empowered to make important decisions.
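A quick sketch of why "LAN-like performance" over distance is hard: a single classic TCP connection's throughput is capped by its window size divided by round-trip time, regardless of how fat the link is. The window, file size, and link numbers below are illustrative assumptions, not figures from this handbook.

```python
# Why "waiting 20 minutes for a file" happens even on a fast link:
# classic TCP throughput is capped by window_size / RTT once the
# window, rather than the link, is the bottleneck. All numbers here
# are illustrative assumptions, not vendor measurements.

def tcp_throughput_bps(window_bytes, rtt_s, link_bps):
    """Achievable throughput: the lesser of the link rate and window/RTT."""
    return min(link_bps, window_bytes * 8 / rtt_s)

def transfer_seconds(file_bytes, window_bytes, rtt_s, link_bps):
    """Time to move a file at the throughput ceiling."""
    return file_bytes * 8 / tcp_throughput_bps(window_bytes, rtt_s, link_bps)

window = 64 * 1024          # a common default TCP window of 64 KB
file_size = 500 * 1024**2   # a 500 MB file shared between offices
link = 45e6                 # a 45 Mbps WAN link

for rtt_ms in (1, 80, 250):
    t = transfer_seconds(file_size, window, rtt_ms / 1000, link)
    print(f"RTT {rtt_ms:4d} ms -> {t / 60:6.1f} minutes")
```

At 1 ms the link itself is the limit, but at intercontinental round-trip times the same file takes over half an hour, which is why WDS products attack latency, protocol chattiness, and window limits rather than just adding bandwidth.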
The conditions encountered in a branch office environment, plus those of a mobile user, comprise a wide set of conditions that require an intelligent, adaptable WDS solution.

3. Data Center

The idea of application acceleration has a special place in the data center. Of course it must tie in to what is happening in the branch office, but a different set of challenges awaits with the sometimes massive amount of data that needs to be managed among data centers. Massive backup and replication jobs are now a regular occurrence: many companies are regularly trying to move terabytes each day, in a window that is continually shrinking to support 24x7 operations. Datacenter migration for storage and applications, moving virtual images of servers, and snapshots are becoming essential. Large-scale solutions between datacenters need to be able to handle different bandwidth conditions as well: high-bandwidth connections and high latency between DR centers. These requirements in themselves require a WDS solution that can scale up to handle massive data transfers, and also be clustered to handle the simultaneous load of inter-datacenter transfers as well as datacenter-to-branch transfers.

Conclusion

The way that businesses operate is always changing. With the need for anytime, anywhere collaboration and for business continuity in times of change or disaster, CIOs must be prepared to adapt their IT infrastructure in a way that supports distributed employees. With WDS, CIOs now have a way to bring their distributed enterprise closer together. Mobile and branch office workers can have the same level of application performance as users at headquarters. Infrastructure can be consolidated without performance loss to far-off locations, yet the flexibility to move data and applications can be retained. Data centers are more protected, often providing faster response than ever before, and can use WDS to respond faster in the event of disaster. CIOs now have a technology that can tie together their distributed enterprises.

www.riverbed.com
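The replication numbers above are easy to make concrete: moving terabytes inside a shrinking nightly window implies a sustained-bandwidth requirement that simple arithmetic exposes. The 10x data-reduction ratio below is an assumed, illustrative figure, not a measured WDS result.

```python
# Back-of-envelope check on datacenter replication: sustained bandwidth
# needed to move N terabytes inside a nightly window. The 10x
# data-reduction ratio is an assumed, illustrative figure, not a
# measured WDS result.

def required_mbps(terabytes, window_hours, reduction_ratio=1.0):
    """Sustained Mbps needed to move the (reduced) data in the window."""
    bits_to_send = terabytes * 1e12 * 8 / reduction_ratio
    return bits_to_send / (window_hours * 3600) / 1e6

raw = required_mbps(5, 8)                        # 5 TB in an 8-hour window
reduced = required_mbps(5, 8, reduction_ratio=10)

print(f"raw: {raw:.0f} Mbps sustained")
print(f"with 10x reduction: {reduced:.0f} Mbps sustained")
```

Five terabytes in eight hours needs well over a gigabit per second sustained with no data reduction; a reduction technique that cuts the bytes on the wire shrinks the requirement proportionally, which is the economic argument behind WAN optimization between data centers.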
Shunra Virtual Enterprise

WAN emulation solutions for the entire application development lifecycle.

Shunra Virtual Enterprise Suite

Shunra Virtual Enterprise (VE) is a highly robust, comprehensive network emulation solution that creates a virtual network environment in your performance and pre-deployment network lab. It delivers a powerful and accurate way to model complex network topologies and test the performance of your applications and network equipment under a wide variety of network impairments – exactly as if they were running in a real production environment. With this tool you can uncover and resolve production-related problems – before rollout. Learn more at www.shunra.com

Shunra Virtual Enterprise Desktop

Now you can quickly test your applications to ensure production network readiness. Simulate any wide area network link to test applications under a variety of current and potential network conditions – directly from the desktop. A Windows-based product, Shunra Virtual Enterprise (VE) Desktop is a must-have tool for application developers and QA teams. Use production network behavior recorded with Shunra VE Network Catcher to capture real-world testing conditions in the lab. Go to www.shunra.com for a free 30-day trial.
Virtual Enterprise Desktop Professional v4.0

Now it's easier than ever for multi-user enterprise network engineers and developers to test applications under a variety of current and potential network conditions – directly from the desktop. Shunra Virtual Enterprise (VE) Desktop Professional offers a methodology that integrates test plans, including business processes and network scenario files. The enhanced VE Desktop Server enables the administrator to plan, prepare, manage and distribute tests and reports. Additionally, it allows end-users to conduct tests based on these plans, experience the impact of impairments on transaction times, and upload test results to the VE Desktop Server.

Increased Testing Value

• Agent Support: provides an Application Programming Interface (API) which enables clients to be seamlessly integrated with automation and load tools
• Transaction Timer: lists contents of business processes to guide users in conducting tests; enables timing of transactions and designation of pass or fail of transactions; test results can be exported to Comma Separated Values (.csv)
• Packet List: enables the capture of packets during tests; Packet List saves data in .enc format, which can then be used by VE Analyzer or other packet analysis software
• VE Analyzer Integration: tests conducted using a Packet List to capture traffic can be analyzed using Shunra VE Analyzer or other packet analysis software
• Business Process Editor: enables the creation of business processes and the subsequent transactions which take place for each process
• Test Results Repository: a central repository for test results is available in the VE Desktop Server; test results that include HTML Reports, packet capture files (.enc) and Shunra .vet test result files can be uploaded to the central repository for viewing and distribution
• Enhanced Professional Mode: integrates the use of Test Plans that include Business Processes (list of transactions that need to be conducted) and Network Scenario Files (impairment parameters) to guide users in conducting tests
• VE Reporter: VE Desktop Client is installed with a basic version of VE Reporter that displays runtime and offline data in easy-to-read reports; reports can also be saved in HTML and MS Word format
• Test Data Upload to VE Desktop Server: test results can be uploaded from the VE Desktop Client to the VE Desktop Server central repository

Enhanced Functionality for Administrators

• Silent Install: allows enterprise deployment and installation by network administrators without end-user interaction
• Vista Support: all end-user desktops running Windows Vista are fully supported

Redesigned User Interface

• Redesigned Basic Mode GUI: enhanced and more user-friendly options deliver a much easier way of defining testing parameters; parameters include traffic source and destination (by cities), connection quality and connection type
Turn-key Managed Services Deliver Immediate Results

Now, you choose how Shunra Virtual Enterprise is delivered! Let's face it: the need for our solutions doesn't always correspond neatly with a budget cycle. Flexibility to receive the value of our technology the way you require it is now more essential than ever.

The Dilemma

Oftentimes our customers need to quickly add one or more VE appliances or software solutions to manage surges in their demand for testing projects. Other customers test applications prior to deployment using ad-hoc teams assembled temporarily to quickly certify an application's readiness for production. Finally, many organizations that are looking at our technology for the first time are between budget cycles and don't have the funds for a capital purchase, or the time to process the request through internal channels.

The Solution

Shunra has responded by creating a Services team dedicated to delivering the benefits of our solutions when you require them. If you need the power of Virtual Enterprise to emulate and test your applications and networks on a temporary basis, we will deliver both the technology and a skilled technician to work with your staff, delivering:

• Detailed, Actionable Reports and Analysis
• Best Practices and Documented Methodology
• Repository of Project Artifacts
• Expert Knowledge Transfer, Mentoring, and Training

Typical projects include:

• WAN Optimization Vendor Selection (with our ROI Metric)
• Application Performance Profiling/Production Readiness
• Package or Custom Application Roll-outs
• Data Center Moves/Server Consolidations

The Benefit

No capital purchase, no learning curve, no more failed application rollouts. The expenditure includes the onsite expertise as well as the short-term use of the Virtual Enterprise suite. When the project is completed, the Shunra team exits, technology in hand, until future needs arise.

CONTACT SHUNRA VIRTUAL ENTERPRISE TODAY TO LEARN MORE!
North America +1 877 474 8672
Europe and Africa +44 208 382 3757
Israel, APAC, Mediterranean +972 9 763 4228
www.shunra.com
Traditional solutions aren’t enough Ensuring performance and scalability for these new web applications requires new tools that can support dynamic content and adapt to changing usage patterns.files. Web 1. the communication extends to application-to-application communication. A Guide to Decision Making 140 . chances are they were developed using Microsoft’s ASP. such as AJAX and Microsoft’s Silverlight let developers deliver a more complex and “desktop-like” user experience. While these features can be used to improve application performance and scalability.0 applications have inherent bottlenecks that cannot be removed by increasing server or network capacity. really web applications. Traditional acceleration devices offload the server for static applications. ranging from caching to session state. embracing a new approach to generating and distributing Web content.NET If you run web applications in a Microsoft environment. time-consuming and expensive forcing your development organization or software vendor to tradeoff new features for Communication is two-way Content generation has shifted from the majority of the content built by the website owner the bulk of the content contributed by users.0 was characterized by web sites designed to disperse information. And as these new sites.net applications in the network A New Performance Challenge IT staff supporting today’s rich. taking advantage of them through coding can be complex. Today’s Web 2. characterized by dynamic pages stitched together from database lookups and web services whose content is regenerated every time a visitor reloads the page. one-way communication vehicles with largely static HTML pages . Additional tools. changed rarely and were displayed exactly the same way each time a visitor requested the page. The dynamic web has transformed into a medium for sharing and collaboration. that although large. 
Applications designed using development methods to deliver and deployed with infrastructure to support an average load can experience order-of-magnitude traffic increases overnight. Pages are dynamic With this change from static web sites to dynamic web applications come new and difficult performance challenges. and the visibility of the application and the data lookups to offload work for dynamic applications. add APIs. but dynamic web 2. highly interactive and dynamic web applications are facing new and difficult performance challenges.0 Applications How enterprises can have their performance and new features too by automatically optimizing Microsoft ASp. but they lack the dynamic features needed to fully exploit browser caching. NET framework which includes many features designed to support application performance and scalability.0 sites are full blown applications. such as image and video. Traditional solutions – adding servers or more network capacity and deploying traditional application acceleration devices – provide some gain. Traffic volumes and patterns become unpredictable and can spike without warning. Automatically optimize Microsoft ASP.ApplicAtion Delivery HAnDbook | februAry 2008 High Performance Web 2.
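One way to see why traditional static-content caching stops helping: a full-page cache keyed by URL works when every visitor receives identical bytes, but once pages are personalized per user, almost every request is a miss. The toy simulation below is a generic illustration of that effect, not a description of how any particular appliance works.

```python
# Why a traditional static cache stops helping once pages are dynamic:
# a naive full-page cache keyed by URL works when every visitor gets
# identical bytes, but per-user personalized pages defeat it. This is
# a generic toy illustration, not any vendor's implementation.

def hit_rate(requests, key_fn):
    """Fraction of requests served from a cache keyed by key_fn."""
    cache, hits = set(), 0
    for req in requests:
        key = key_fn(req)
        if key in cache:
            hits += 1
        else:
            cache.add(key)
    return hits / len(requests)

# 1,000 requests to one page from 500 users, two requests each.
requests = [("/home", f"user{i % 500}") for i in range(1000)]

static_like = hit_rate(requests, key_fn=lambda r: r[0])           # URL only
personalized = hit_rate(requests, key_fn=lambda r: (r[0], r[1]))  # URL + user

print(f"static cache hit rate: {static_like:.1%}")
print(f"per-user dynamic hit rate: {personalized:.1%}")
```

The URL-keyed cache serves nearly every request once warmed, while the personalized workload halves the hit rate even in this best case of two visits per user; with truly per-request content it drops toward zero, which is why dynamic applications need optimization that understands the application rather than just the bytes.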
The Strangeloop AS1000 appliance takes over the difficult task of performance and scaling optimization, removing that burden from the developers. Development can focus on new features, and can still use the time- and effort-saving features of the ASP.NET Framework and third-party controls that make it easier to bring new features and applications to market faster. The AS1000 reduces the need for time-consuming performance optimization and reduces the ongoing cost of delivering and supporting ASP.NET applications. It gives network managers a way to meet performance challenges without adding more gear or handing the problem back to development. And the AS1000's deep understanding of ASP.NET behavior allows for dynamic tuning as the application evolves, in a way that's simply not possible when you rely on traditional application acceleration appliances or hand-coding for performance.

"With ASP.NET and Strangeloop, you can keep up with the dynamic nature of today's Web 2.0 applications." – Keith Smith, Developer Tools Team, Microsoft Corp

Deployment and Performance Management

The Strangeloop AS1000 Appliance is designed for simple, rapid installation and configuration. The Strangeloop Manager provides an easy-to-use GUI that walks users through setting up the appliance and configuring the application treatments to accelerate any ASP.NET application. Treatments can be applied on a per-application or per-URL basis. The Strangeloop Manager also provides access to Strangeloop Analytics Reporting features.

[Figures: Strangeloop Manager; Strangeloop Analytics: Browser Load Time]

Conclusion

ASP.NET sites that serve more than a few dozen simultaneous users or perform complex tasks inevitably face performance challenges. With the AS1000 in your network, you can avoid the tradeoff between new features and performance. Using the Strangeloop AS1000 appliance, organizations can quickly deliver richer user experiences featuring increased performance and improved bandwidth utilization.

http://www.strangeloopnetworks.com/