
ASP.NET

10 Tips for Writing High-Performance Web Applications
Rob Howard

This article discusses:
- Common ASP.NET performance myths
- Useful performance tips and tricks for ASP.NET
- Suggestions for working with a database from ASP.NET
- Caching and background processing with ASP.NET

This article uses the following technologies: ASP.NET, the .NET Framework, and IIS

Contents
Performance on the Data Tier
Tip 1—Return Multiple Resultsets
Tip 2—Paged Data Access
Tip 3—Connection Pooling
Tip 4—ASP.NET Cache API
Tip 5—Per-Request Caching
Tip 6—Background Processing
Tip 7—Page Output Caching and Proxy Servers
Tip 8—Run IIS 6.0 (If Only for Kernel Caching)
Tip 9—Use Gzip Compression
Tip 10—Server Control View State
Conclusion

Writing a Web application with ASP.NET is unbelievably easy. So easy, many developers don't take the time to structure their applications for great performance. In this article, I'm going to present 10 tips for writing high-performance Web apps. I'm not limiting my comments to ASP.NET applications, because they are just one subset of Web applications. This article won't be the definitive guide for performance-tuning Web applications—an entire book could easily be devoted to that. Instead, think of this as a good place to start.

Before becoming a workaholic, I used to do a lot of rock climbing. Prior to any big climb, I'd review the route in the guidebook and read the recommendations made by people who had visited the site before. But, no matter how good the guidebook, you need actual rock climbing experience before attempting a particularly challenging climb. Similarly, you can only learn how to write high-performance Web applications when you're faced with either fixing performance problems or running a high-throughput site.
My personal experience comes from having been an infrastructure Program Manager on the ASP.NET team at Microsoft, running and managing www.asp.net, and helping architect Community Server, which is the next version of several well-known ASP.NET applications (ASP.NET Forums, .Text, and nGallery combined into one platform). I'm sure that some of the tips that have helped me will help you as well.

You should think about the separation of your application into logical tiers. You might have heard of the term 3-tier (or n-tier) physical architecture. These are usually prescribed architecture patterns that physically divide functionality across processes and/or hardware. As the system needs to scale, more hardware can easily be added.

There is, however, a performance hit associated with process and machine hopping, thus it should be avoided. So, whenever possible, run the ASP.NET pages and their associated components together in the same application. Because of the separation of code and the boundaries between tiers, using Web services or remoting will decrease performance by 20 percent or more.

The data tier is a bit of a different beast since it is usually better to have dedicated hardware for your database. However, the cost of process hopping to the database is still high, thus performance on the data tier is the first place to look when optimizing your code.

Before diving in to fix performance problems in your applications, make sure you profile your applications to see exactly where the problems lie. Key performance counters (such as the one that indicates the percentage of time spent performing garbage collections) are also very useful for finding out where applications are spending the majority of their time. Yet the places where time is spent are often quite unintuitive.

There are two types of performance improvements described in this article: large optimizations, such as using the ASP.NET Cache, and tiny optimizations that repeat themselves. These tiny optimizations are sometimes the most interesting. You make a small change to code that gets called thousands and thousands of times. With a big optimization, you might see overall performance take a large jump. With a small one, you might shave a few milliseconds on a given request, but when compounded across the total requests per day, it can result in an enormous improvement.

Performance on the Data Tier

When it comes to performance-tuning an application, there is a single litmus test you can use to prioritize work: does the code access the database? If so, how often? Note that the same test could be applied for code that uses Web services or remoting, too, but I'm not covering those in this article.

If you have a database request required in a particular code path and you see other areas such as string manipulations that you want to optimize first, stop and perform your litmus test. Unless you have an egregious performance problem, your time would be better utilized trying to optimize the time spent in and connected to the database, the amount of data returned, and how often you make round-trips to and from the database.

With that general information established, let's look at ten tips that can help your application perform better. I'll begin with the changes that can make the biggest difference.

Tip 1—Return Multiple Resultsets

Review your database code to see if you have request paths that go to the database more than once. Each of those round-trips decreases the number of requests per second your application can serve. By returning multiple resultsets in a single database request, you can cut the total time spent communicating with the database. You'll be making your system more scalable, too, as you'll cut down on the work the database server is doing managing requests.

While you can return multiple resultsets using dynamic SQL, I prefer to use stored procedures. It's arguable whether business logic should reside in a stored procedure, but I think that if logic in a stored procedure can constrain the data returned (reduce the size of the dataset, time spent on the network, and not having to filter the data in the logic tier), it's a good thing.

Using a SqlCommand instance and its ExecuteReader method to populate strongly typed business classes, you can move the resultset pointer forward by calling NextResult.
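As a sketch of the kind of command setup that precedes the reading code in Figure 1 (the stored procedure name and connection string here are hypothetical, not from the article):

```csharp
using System.Data;
using System.Data.SqlClient;

// Hypothetical data-layer method: a single stored procedure returns
// both resultsets in one round-trip to the database.
public void LoadSuppliersAndProducts(string connectionString)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command =
        new SqlCommand("getSuppliersAndProducts", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        connection.Open();
        using (SqlDataReader reader = command.ExecuteReader())
        {
            // read each resultset here, advancing with reader.NextResult()
        }
    } // the using blocks guarantee Close/Dispose, returning the
      // connection to the pool even if an exception is thrown
}
```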

Figure 1 shows a sample conversation populating several ArrayLists with typed classes. Returning only the data you need from the database will additionally decrease memory allocations on your server.

Figure 1 Extracting Multiple Resultsets from a DataReader

// read the first resultset
reader = command.ExecuteReader();

// read the data from that resultset
while (reader.Read()) {
    products.Add(PopulateProductFromIDataReader( reader ));
}

// read the next resultset
reader.NextResult();

// read the data from that second resultset
while (reader.Read()) {
    suppliers.Add(PopulateSupplierFromIDataReader( reader ));
}

Tip 2—Paged Data Access

The ASP.NET DataGrid exposes a wonderful capability: data paging support. When paging is enabled in the DataGrid, a fixed number of records is shown at a time. Additionally, paging UI is also shown at the bottom of the DataGrid for navigating through the records. The paging UI allows you to navigate backwards and forwards through displayed data, displaying a fixed number of records at a time.

There's one slight wrinkle: paging with the DataGrid requires all of the data to be bound to the grid. For example, your data layer will need to return all of the data, and then the DataGrid will filter all the displayed records based on the current page. If 100,000 records are returned when you're paging through the DataGrid, 99,975 records would be discarded on each request (assuming a page size of 25). As the number of records grows, the performance of the application will suffer, as more and more data must be sent on each request.

One good approach to writing better paging code is to use stored procedures. Figure 2 shows a sample stored procedure that pages through the Orders table in the Northwind database. In a nutshell, all you're doing here is passing in the page index and the page size. The appropriate resultset is calculated and then returned.

In Community Server, we wrote a paging server control to do all the data paging. You'll see that I am using the ideas discussed in Tip 1, returning two resultsets from one stored procedure: the total number of records and the requested data.

The total number of records returned can vary depending on the query being executed. For example, a WHERE clause can be used to constrain the data returned. The total number of records to be returned must be known in order to calculate the total pages to be displayed in the paging UI. For example, if there are 1,000,000 total records and a WHERE clause is used that filters this to 1,000 records, the paging logic needs to be aware of the total number of records to properly render the paging UI.

Figure 2 Paging Through the Orders Table

CREATE PROCEDURE northwind_OrdersPaged
(
    @PageIndex int,
    @PageSize int
)
AS
BEGIN

DECLARE @PageLowerBound int
DECLARE @PageUpperBound int
DECLARE @RowsToReturn int

-- First set the rowcount
SET @RowsToReturn = @PageSize * (@PageIndex + 1)
SET ROWCOUNT @RowsToReturn

-- Set the page bounds
SET @PageLowerBound = @PageSize * @PageIndex
SET @PageUpperBound = @PageLowerBound + @PageSize + 1

-- Create a temp table to store the select results
CREATE TABLE #PageIndex
(
    IndexId int IDENTITY (1, 1) NOT NULL,
    OrderID int
)

-- Insert into the temp table
INSERT INTO #PageIndex (OrderID)
SELECT OrderID
FROM Orders
ORDER BY OrderID DESC

-- Return total count
SELECT COUNT(OrderID) FROM Orders

-- Return paged results
SELECT O.*
FROM Orders O, #PageIndex PageIndex
WHERE O.OrderID = PageIndex.OrderID AND
      PageIndex.IndexID > @PageLowerBound AND
      PageIndex.IndexID < @PageUpperBound
ORDER BY PageIndex.IndexID

END
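A sketch of calling this procedure from the data layer, reading the count from the first resultset and the rows from the second (connection handling is elided, and PopulateOrderFromIDataReader is a hypothetical mapper in the style of Figure 1):

```csharp
// Sketch: one round-trip returns both the total count and the page.
SqlCommand command = new SqlCommand("northwind_OrdersPaged", connection);
command.CommandType = CommandType.StoredProcedure;
command.Parameters.Add("@PageIndex", SqlDbType.Int).Value = pageIndex;
command.Parameters.Add("@PageSize", SqlDbType.Int).Value = pageSize;

SqlDataReader reader = command.ExecuteReader();

// first resultset: the total record count, used to render the paging UI
int totalRecords = 0;
if (reader.Read())
    totalRecords = reader.GetInt32(0);

// second resultset: only the requested page of orders
reader.NextResult();
ArrayList orders = new ArrayList();
while (reader.Read())
    orders.Add(PopulateOrderFromIDataReader(reader));
reader.Close();
```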

Tip 3—Connection Pooling

Setting up the TCP connection between your Web application and SQL Server™ can be an expensive operation. Developers at Microsoft have been able to take advantage of connection pooling for some time now, allowing them to reuse connections to the database. Rather than setting up a new TCP connection on each request, a new connection is set up only when one is not available in the connection pool. When the connection is closed, it is returned to the pool, where it remains connected to the database, as opposed to completely tearing down that TCP connection.

Of course you need to watch out for leaking connections. Always close your connections when you're finished with them. I repeat: no matter what anyone says about garbage collection within the Microsoft® .NET Framework, always call Close or Dispose explicitly on your connection when you are finished with it. Do not trust the common language runtime (CLR) to clean up and close your connection for you at a predetermined time. The CLR will eventually destroy the class and force the connection closed, but you have no guarantee when the garbage collection on the object will actually happen.

To use connection pooling optimally, there are a couple of rules to live by. First, open the connection, do the work, and then close the connection. It's okay to open and close the connection multiple times on each request if you have to (optimally you apply Tip 1) rather than keeping the connection open and passing it around through different methods. Second, use the same connection string (and the same thread identity if you're using integrated authentication). If you don't use the same connection string, for example customizing the connection string based on the logged-in user, you won't get the same optimization value provided by connection pooling. And if you use integrated authentication while impersonating a large set of users, your pooling will also be much less effective. The .NET CLR data performance counters can be very useful when attempting to track down any performance issues that are related to connection pooling.

Whenever your application is connecting to a resource, such as a database, running in another process, you should optimize by focusing on the time spent connecting to the resource, the time spent sending or retrieving data, and the number of round-trips. Optimizing any kind of process hop in your application is the first place to start to achieve better performance.

The application tier contains the logic that connects to your data layer and transforms data into meaningful class instances and business processes. For example, in Community Server, this is where you populate a Forums or Threads collection and apply business rules such as permissions; most importantly, it is where the Caching logic is performed.

Tip 4—ASP.NET Cache API

One of the very first things you should do before writing a line of application code is architect the application tier to maximize and exploit the ASP.NET Cache feature. If your components are running within an ASP.NET application, you simply need to include a reference to System.Web.dll in your application project. When you need access to the Cache, use the HttpRuntime.Cache property (the same object is also accessible through Page.Cache and HttpContext.Cache).

There are several rules for caching data. First, if data can be used more than once, it's a good candidate for caching. Second, if data is general rather than specific to a given request or user, it's a great candidate for the cache. If the data is user- or request-specific but is long lived, it can still be cached, but may not be used as frequently. Third, an often overlooked rule is that sometimes you can cache too much. Generally on an x86 machine, you want to run a process with no higher than 800MB of private bytes in order to reduce the chance of an out-of-memory error. Therefore, caching should be bounded. In other words, you may be able to reuse a result of a computation, but if that computation takes 10 parameters, you might attempt to cache on 10 permutations, which will likely get you into trouble. One of the most common support calls for ASP.NET is out-of-memory errors caused by overcaching, especially of large datasets.

Common Performance Myths

One of the most common myths is that C# code is faster than Visual Basic code. There is a grain of truth in this, as it is possible to take several performance-hindering actions in Visual Basic that are not possible to accomplish in C#, such as not explicitly declaring types. But if good programming practices are followed, there is no reason why Visual Basic and C# code cannot execute with nearly identical performance. To put it more succinctly, similar code produces similar results.

Another myth is that codebehind is faster than inline, which is absolutely false. It doesn't matter where your code for your ASP.NET application lives, whether in a codebehind file or inline with the ASP.NET page. Sometimes I prefer to use inline code, as changes don't incur the same update costs as codebehind; for example, with codebehind you have to update the entire codebehind DLL, which can be a scary proposition.

Myth number three is that components are faster than pages. This was true in Classic ASP, when compiled COM servers were much faster than VBScript. With ASP.NET, however, both pages and components are classes. Whether your code is inline in a page, within a codebehind, or in a separate component makes little performance difference. Organizationally, it is better to group functionality logically this way, but again it makes no difference with regard to performance.

The final myth I want to dispel is that every functionality that you want to occur between two apps should be implemented as a Web service. Web services should be used to connect disparate systems or to provide remote access to system functionality or behaviors. They should not be used internally to connect two similar systems. While easy to use, there are much better alternatives. The worst thing you can do is use Web services for communicating between ASP and ASP.NET applications running on the same server, which I've witnessed all too frequently.

Figure 3 ASP.NET Cache

There are several great features of the Cache that you need to know. The first is that the Cache implements a least-recently-used algorithm, allowing ASP.NET to force a Cache purge—automatically removing unused items from the Cache—if memory is running low.
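In code, basic use of the Cache boils down to a check-then-populate pattern. The following is a minimal sketch; the "Forums" key, the LoadForumsFromDatabase call, and the one-hour absolute expiration are all illustrative assumptions rather than code from the article:

```csharp
// Check the Cache first; on a miss, hit the database once and cache
// the result for subsequent requests. LoadForumsFromDatabase is a
// hypothetical data-layer call.
ArrayList forums = HttpRuntime.Cache["Forums"] as ArrayList;
if (forums == null)
{
    forums = LoadForumsFromDatabase();
    HttpRuntime.Cache.Insert("Forums", forums, null,
        DateTime.Now.AddHours(1), Cache.NoSlidingExpiration);
}
```

On a cache hit, the database round-trip is skipped entirely; on a miss, the cost is paid once and amortized across requests until the entry expires or is purged.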

Secondly, the Cache supports expiration dependencies that can force invalidation. These include time, key, and file. Time is often used, but with ASP.NET 2.0 a new and more powerful invalidation type is being introduced: database cache invalidation. This refers to the automatic removal of entries in the cache when data in the database changes. For more information on database cache invalidation, see Dino Esposito's Cutting Edge column in the July 2004 issue of MSDN® Magazine. For a look at the architecture of the cache, see Figure 3.

Tip 5—Per-Request Caching

Earlier in the article, I mentioned that small improvements to frequently traversed code paths can lead to big, overall performance gains. One of my absolute favorites of these is something I've termed per-request caching. Whereas the Cache API is designed to cache data for a long period or until some condition is met, per-request caching simply means caching the data for the duration of the request. A particular code path is accessed frequently on each request, but the data only needs to be fetched, applied, modified, or updated once. This sounds fairly theoretical, so let's consider a concrete example.

In the Forums application of Community Server, each server control used on a page requires personalization data to determine which skin to use, the style sheet to use, as well as other personalization data. Some of this data can be cached for a long period of time, but some data, such as the skin to use for the controls, is fetched once on each request and reused multiple times during the execution of the request.

To accomplish per-request caching, use the ASP.NET HttpContext. An instance of HttpContext is created with every request and is accessible anywhere during that request from the HttpContext.Current property. The HttpContext class has a special Items collection property; objects and data added to this Items collection are cached only for the duration of the request. Just as you can use the Cache to store frequently accessed data, you can use HttpContext.Items to store data that you'll use only on a per-request basis. The logic behind this is simple: data is added to the HttpContext.Items collection when it doesn't exist, and on subsequent lookups the data found in HttpContext.Items is simply returned.

Tip 6—Background Processing

The path through your code should be as fast as possible, right? There may be times when you find yourself performing expensive tasks on each request or once every n requests. Sending out e-mails or parsing and validating incoming data are just a few examples.

When tearing apart ASP.NET Forums 1.0 and rebuilding what became Community Server, we found that the code path for adding a new post was pretty slow. Each time a post was added, the application first needed to ensure that there were no duplicate posts, then it had to parse the post using a "badword" filter, parse the post for emoticons, tokenize and index the post, add the post to the moderation queue when required, validate attachments, and finally, once posted, send e-mail notifications out to any subscribers. Clearly, that's a lot of work.

It turns out that most of the time was spent in the indexing logic and sending e-mails. Indexing a post was a time-consuming operation, and it turned out that the built-in System.Web.Mail functionality would connect to an SMTP server and send the e-mails serially. As the number of subscribers to a particular post or topic area increased, it would take longer and longer to perform the AddPost function. Indexing and e-mail didn't need to happen on each request. Ideally, we wanted to batch this work together and index 25 posts at a time or send all the e-mails every five minutes.
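A rough sketch of that kind of interval-based batching using the .NET Timer class in System.Threading; the BackgroundBatcher class and its ProcessPendingWork callback are hypothetical stand-ins for the indexing and e-mail work:

```csharp
using System;
using System.Threading;

public class BackgroundBatcher
{
    // Keep a static reference so the timer isn't garbage collected.
    private static Timer batchTimer;

    public static void Start()
    {
        // Invoke the callback on a ThreadPool thread every five
        // minutes, starting five minutes from now.
        batchTimer = new Timer(ProcessPendingWork, null,
            TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
    }

    private static void ProcessPendingWork(object state)
    {
        // Hypothetical batch work: index queued posts in groups of 25,
        // send any queued e-mail notifications.
    }
}
```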

We decided to use the same code I had used to prototype database cache invalidation for what eventually got baked into Visual Studio® 2005: the Timer class, found in the System.Threading namespace, which is a wonderfully useful, but less well-known, class in the .NET Framework. Once created, the Timer will invoke the specified callback on a thread from the ThreadPool at a configurable interval. This means you can set up code to execute without an incoming request to your ASP.NET application, an ideal situation for background processing. You can do work such as indexing or sending e-mail in this background process too.

There are a couple of problems with this technique, though. If your application domain unloads, the timer instance will stop firing its events. In addition, since the CLR has a hard gate on the number of threads per process, you can get into a situation on a heavily loaded server where timers may not have threads to complete on and can be somewhat delayed. ASP.NET tries to minimize the chances of this happening by reserving a certain number of free threads in the process and only using a portion of the total threads for request processing. However, if you have lots of asynchronous work, this can be an issue.

There is not enough room to go into the code here, but you can download a digestible sample at www.rob-howard.net. Just grab the slides and demos from the Blackbelt TechEd 2004 presentation.

Tip 7—Page Output Caching and Proxy Servers

ASP.NET is your presentation layer (or should be); it consists of pages, user controls, server controls (HttpHandlers and HttpModules), and the content that they generate. If you have an ASP.NET page that generates output, whether HTML, XML, images, or any other data, and you run this code on each request and it generates the same output, you have a great candidate for page output caching. By simply adding this line to the top of your page

<%@ OutputCache Duration="60" VaryByParam="none" %>

you can effectively generate the output for this page once and reuse it multiple times for up to 60 seconds, at which point the page will re-execute and the output will once again be added to the ASP.NET Cache. This behavior can also be accomplished using some lower-level programmatic APIs. There are several configurable settings for output caching, such as the VaryByParam attribute just described. VaryByParam just happens to be required, but allows you to specify the HTTP GET or HTTP POST parameters to vary the cache entries. For example, default.aspx?Report=1 or default.aspx?Report=2 could be output-cached by simply setting VaryByParam="Report". Additional parameters can be named by specifying a semicolon-separated list.

Many people don't realize that when the Output Cache is used, the ASP.NET page also generates a set of HTTP headers that downstream caching servers, such as those used by the Microsoft Internet Security and Acceleration Server or by Akamai, can use. When HTTP Cache headers are set, the documents can be cached on these network resources, and client requests can be satisfied without having to go back to the origin server.

Using page output caching, then, does not make your application more efficient, but it can potentially reduce the load on your server, as downstream caching technology caches documents. Of course, this can only be anonymous content; once it's downstream, you won't see the requests anymore and can't perform authentication to prevent access to it.
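Those lower-level programmatic APIs amount to setting the response's cache policy yourself. Here is a sketch that roughly mirrors a 60-second output-cache directive; placing it in Page_Load is an illustrative choice, not the article's own code:

```csharp
// Roughly the programmatic equivalent of a 60-second output cache,
// set from within a page (for example, in Page_Load).
Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetValidUntilExpires(true);
```

Setting HttpCacheability.Public is also what allows the downstream proxy and caching servers described above to keep a copy of the page.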

Tip 8—Run IIS 6.0 (If Only for Kernel Caching)

If you're not running IIS 6.0 (Windows Server™ 2003), you're missing out on some great performance enhancements in the Microsoft Web server. In Tip 7, I talked about output caching. In IIS 5.0, a request comes through IIS and then to ASP.NET. When caching is involved, an HttpModule in ASP.NET receives the request and returns the contents from the Cache.

If you're using IIS 6.0, there is a nice little feature called kernel caching that doesn't require any code changes to ASP.NET. When a request is output-cached by ASP.NET, the IIS kernel cache receives a copy of the cached data. When a request comes from the network driver, a kernel-level driver (no context switch to user mode) receives the request, and if cached, flushes the cached data to the response and completes execution. This means that when you use kernel-mode caching with IIS and ASP.NET output caching, you'll see unbelievable performance results.

At one point during the Visual Studio 2005 development of ASP.NET, I was the program manager responsible for ASP.NET performance. The developers did the magic, but I saw all the reports on a daily basis. The kernel mode caching results were always the most interesting. The common characteristic was network saturation by requests/responses and IIS running at about five percent CPU utilization. It was amazing! There are certainly other reasons for using IIS 6.0, but kernel mode caching is an obvious one, at least for Web developers.

Tip 9—Use Gzip Compression

While not necessarily a server performance tip (since you might see CPU utilization go up), using gzip compression can decrease the number of bytes sent by your server. This gives the perception of faster pages and also cuts down on bandwidth usage. Depending on the data sent, how well it can be compressed, and whether the client browsers support it (IIS will only send gzip compressed content to clients that support gzip compression, such as Internet Explorer 6.0 and Firefox), your server can serve more requests per second. In fact, just about any time you can decrease the amount of data returned, you will increase requests per second.

The good news is that gzip compression is built into IIS 6.0, and it is much better than the gzip compression used in IIS 5.0. Unfortunately, when attempting to turn on gzip compression in IIS 6.0, you may not be able to locate the setting on the properties dialog in IIS. The IIS team built awesome gzip capabilities into the server, but neglected to include an administrative UI for enabling it. To enable gzip compression, you have to spelunk into the innards of the XML configuration settings of IIS 6.0 (which isn't for the faint of heart). By the way, the credit goes to Scott Forsyth of OrcsWeb, who helped me figure this out for the www.asp.net servers hosted by OrcsWeb.

Rather than include the procedure in this article, just read the article by Brad Wilson at IIS6 Compression. There's also a Knowledge Base article on enabling compression for ASPX, available at Enable ASPX Compression in IIS. It should be noted, however, that dynamic compression and kernel caching are mutually exclusive on IIS 6.0 due to some implementation details.

Tip 10—Server Control View State

View state is a fancy name for ASP.NET storing some state data in a hidden input field inside the generated page. When the page is posted back to the server, the server can parse, validate, and apply this view state data back to the page's tree of controls. View state is a very powerful capability since it allows state to be persisted with the client, and it requires no cookies or server memory to save this state. Many ASP.NET server controls use view state to persist settings made during interactions with elements on the page, for example, saving the current page that is being displayed when paging through data.

There are a number of drawbacks to the use of view state, however. First of all, it increases the total payload of the page, both when served and when requested. There is also an additional overhead incurred when serializing or deserializing view state data that is posted back to the server. Lastly, view state increases the memory allocations on the server. Several server controls, the most well known of which is the DataGrid, tend to make excessive use of view state, even in cases where it is not needed.

The default behavior of the ViewState property is enabled, but if you don't need it, you can turn it off at the control or page level. Within a control, you simply set the EnableViewState property to false, or you can set it globally within the page using this setting:

<%@ Page EnableViewState="false" %>

If you are not doing postbacks in a page or are always regenerating the controls on a page on each request, you should disable view state at the page level.
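For instance, a control that is re-bound on every request can have its view state switched off in codebehind; the OrdersGrid control and the GetOrders call below are hypothetical:

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // The grid is repopulated on every request, so its view state
    // would only add page payload without ever being used.
    OrdersGrid.EnableViewState = false;
    OrdersGrid.DataSource = GetOrders(); // hypothetical data-layer call
    OrdersGrid.DataBind();
}
```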