The Windows Azure Platform: Articles from the Trenches

Volume One
Editor and copy and paste guru: Eric Nelson, and 15 authors smarter than him
22nd June 2010 (v0.9)

Cover art by Andrew Fryer

Developers have been exploring the possibilities opened up by the Windows Azure Platform for Cloud Computing. This book pulls together great articles from many of those developers who have been active with the Windows Azure Platform to hopefully help others become successful. There are twenty articles in this first volume covering everything from getting started to implementing best practices for elastic applications.

TABLE OF CONTENTS

INTRODUCTION
- From the Editor
- Would you like to become an author for a future edition?
- Introduction to the Windows Azure Platform
- AE – Acronyms Explained

CHAPTER 1: GETTING STARTED
- 5 steps to getting started with Windows Azure (Step 1: Creating an Azure account; Step 2: Provisioning a SQL Azure database; Step 3: Building a Web Application for Azure; Step 4: Packaging the Web Application for Windows Azure; Step 5: Deploying the Web Application to Azure)
- The best tools for working with the Windows Azure Platform (Category: The usual suspects; Category: Windows Azure Storage; Category: Windows Azure diagnostics; Category: SQL Azure; Category: General Development)

CHAPTER 2: WINDOWS AZURE PLATFORM
- Architecting For Azure – Building Highly Scalable Applications (Principles of Azure Architectures; Partition Data; Colocation; Cache; State; Distribute Workloads Effectively; Maximise Resources; Summary)
- The Windows Azure Platform and Cost-Oriented Architecture (Cost is important; What costs to consider; Conclusion)
- De-risking Your First Windows Azure Project (Popular Risks; Non-Technical Tactics for Reducing Risk; Technical Tactics for Reducing Risk; Developer Responsibility)
- Trials & tribulations of working with Azure when there's more than one of you (Development Environment; Test Environment; Certificates; When things go wrong; Summary)
- Using a Continuous Integration build to achieve an automated deployment of your latest build (Getting the right "bits"; Packaging for deployment; Deploying)
- Using Java with the Windows Azure Platform (Accessing Windows Azure Storage from Java; Running Java Code on Windows Azure; AzureRunme)

CHAPTER 3: WINDOWS AZURE
- Auto-Scaling Windows Azure Compute Instances (Introduction; A Basic Approach; The Scale Agent; Monitoring: Retrieving Diagnostic Information; Rules: Establishing When To Scale; Trust: Authorising For Scale; Scaling – The Service Management API; Summary)
- Building a Content-Based Router Service on Windows Azure
- Bing Maps Tile Servers using Azure Blob Storage
- Azure Drive (Guest OS; VHD; CloudDrive; Development Environment)
- Azure Table Service as a NoSQL database (Master-Detail structures; Dynamic schema; Column names as data; Table names as data; Summary)
- Queries and Azure Tables (CreateQuery<T>(); Contexts; Querying on PartitionKey and RowKey; Continuation; DataServiceQuery; CloudTableQuery)
- Tricks for storing time and date fields in Table Storage
- Using Worker Roles to Implement a Distributed Cache (Configuring the Cache; Using the Distributed Cache)
- Logging, diagnostics and health monitoring of Windows Azure Applications (Collecting diagnostic data; Persisting diagnostic data; Analysing the diagnostic data; More information)
- Service Runtime in Windows Azure (Roles and Instances; Endpoints; Service Upgrades; Service Definition and Service Configuration; RoleEntryPoint; Role; RoleEnvironment; RoleInstance; RoleInstanceEndpoint; LocalResource)

CHAPTER 4: SQL AZURE
- Connecting to SQL Azure in 5 Minutes (Prerequisite – Get a SQL Azure account; Working with the SQL Azure Portal; Create a database through the Server Administration; Configuring the firewall; Connecting using SQL Server Management Studio; Application credentials; Keep in mind – the target database)

CHAPTER 5: WINDOWS AZURE PLATFORM APPFABRIC
- Real Time Tracing of Azure Roles from Your Desktop (Custom Trace Listener; Send Message Console Application; Trace Service; Service Host Class; Service; Summary)

MEET THE AUTHORS
Eric Nelson, Marcus Tillett, Richard Prodger, Saksham Gautam, Steve Towler, Rob Blackwell, Juliën Hanssens, Simon Munro, Sarang Kulkarni, Steven Nagy, Grace Mollison, Jason Nappi, Josh Tucholski, David Gristwood, Neil Mackenzie, Mark Rendle

INTRODUCTION

FROM THE EDITOR
Hello all,

The Windows Azure Platform is changing the way we architect, implement, deploy and manage solutions. In early 2010 it went live, and in the first six months we have already seen an impressively diverse range of solutions developed to take advantage of the services offered. This book pulls together great articles from many of those developers who have been active with the Windows Azure Platform to hopefully help others be successful. There are twenty articles in this first volume covering everything from getting started to implementing best practices for elastic applications. You are not expected to read it in order from start to finish; instead I would encourage you to head straight to the chapters or the individual articles that look most relevant or interesting.

The book was put together in May and early June 2010, which means that it pre-dates the 1.2 release of the Windows Azure SDK. The 1.2 release adds some great new features, especially for Visual Studio 2010 and .NET Framework 4.0 in areas such as debugging and IDE integration. Volume Two of this book will cover off those new features (and more!).

Once you have had a chance to look at the articles please give us your feedback at http://bit.ly/azureebook1feedback (it should take less than one minute).

Thank you and happy reading.

Eric Nelson
Developer Evangelist, Microsoft UK
Website: http://www.ericnelson.co.uk
Email: eric.nelson@microsoft.com
Blog: http://geekswithblogs.net/iupdateable
Twitter: http://twitter.com/ericnel

WOULD YOU LIKE TO BECOME AN AUTHOR FOR A FUTURE EDITION?
Developers value the sharing of best practices, knowledge and experiences – knowledge and experiences such as your own. If you have insight into the Windows Azure Platform then you are a great candidate for becoming an author involved in the next volume of this book, as the Windows Azure Platform continues to evolve and broaden. Please email me (eric.nelson@microsoft.com) with your proposed article(s) and, if possible, a "sample of your work" such as a link to your blog.

INTRODUCTION TO THE WINDOWS AZURE PLATFORM
The Windows Azure Platform contains three technologies which can be used individually or together to build solutions which run "in the cloud". Solutions can either run entirely on the Windows Azure Platform or as a hybrid, with some of the solution running on-premise or elsewhere on the Internet. For the first time you are able to run your code and store your data in Microsoft datacenters and let Microsoft take on some of the responsibility for keeping your solution running great and able to respond to the changing demands of business. The three key technologies are Windows Azure, SQL Azure and Windows Azure Platform AppFabric:

Windows Azure
Windows Azure is the cloud services operating system for the Windows Azure Platform. Windows Azure provides developers with on-demand compute and storage to run your code and store your data. It can store and retrieve structured, semi-structured and unstructured data, with the advantage of high availability through the storage of multiple copies of your data. Windows Azure is an open platform that supports both Microsoft and non-Microsoft languages and technologies, and welcomes third-party tools and technologies such as Eclipse, Ruby, PHP and Python. Windows Azure supports a consistent development experience through its integration with Visual Studio 2008 and Visual Studio 2010.

SQL Azure
Microsoft SQL Azure delivers the capabilities of Microsoft SQL Server to Windows Azure applications or applications running outside of the Windows Azure Platform. It enables relational queries, search, and data synchronization with mobile users, remote offices and business partners.

Windows Azure Platform AppFabric
AppFabric provides secure connectivity as a service to help developers bridge cloud, on-premise and hosted deployments. AppFabric comprises Service Bus and Access Control. AppFabric Service Bus gives developers the flexibility to choose how their applications communicate – from simple eventing scenarios to complex protocol tunneling – addressing the challenges presented by firewalls, NATs, dynamic IP and disparate identity systems. AppFabric Access Control enables simple, secure authorization for RESTful web services that federate with a variety of identity providers.

There are many articles, videos and screencasts designed to help you get up to speed with the Windows Azure Platform, and a great place to start is http://bit.ly/startazure. We also have a Getting Started chapter within this book.

AE – ACRONYMS EXPLAINED
If you are new to the Windows Azure Platform then you may need a little help with some of the acronyms and industry terms used in this book:

- Cloud Computing – the running of code and storage of data off-premise. (Also see the 100+ alternative definitions of Cloud Computing, e.g. http://en.wikipedia.org/wiki/Cloud_computing)
- Elastic Computing – as more processing power is needed or as more data needs to be stored, elastic computing (in our case the Windows Azure Platform) promises to rapidly respond to those demands and provision out additional compute and storage resources.
- PaaS – Platform as a Service is one approach to Cloud Computing that favors abstraction and simplicity over flexibility, e.g. the Windows Azure Platform.
- IaaS – Infrastructure as a Service is one approach to Cloud Computing that favors flexibility over abstraction and simplicity, e.g. Amazon Web Services.
- Codename "Dallas" – a 4th member of the Windows Azure Platform, currently in CTP. http://www.microsoft.com/WindowsAzure/dallas/
- CTP – Community Technology Preview. In simple terms – not quite as solid as a traditional Beta.
- REST and RESTful – Representational State Transfer. A style of software architecture to enable clients and servers to interact.
- WCF – Windows Communication Foundation. A technology shipped initially in .NET Framework 3.0 to allow communication to take place between code running in different "locations".

CHAPTER 1: GETTING STARTED

5 STEPS TO GETTING STARTED WITH WINDOWS AZURE
By Jason Nappi

Getting started with a new technology can be daunting, but generally once you get going things become familiar and learning accelerates. I'd like to focus on providing a few of the basic steps that I recently went through, in the hope that it will both answer some of the basic questions and knock down some of the barriers to accelerated learning. The following are some of the primary design considerations for what I think of as a typical business application, and the implications of building those same types of applications in the Azure cloud.

STEP 1: CREATING AN AZURE ACCOUNT
The first step, whether creating a new application or moving an existing one to the cloud, is to set up an Azure account. Since Windows Azure is a cloud service, you'll need to create an account in the cloud and provision a cloud environment. You can create an Azure account at the Windows Azure Developer portal. This is a pretty straightforward registration process that will require you to create a Windows Live ID if you don't already have one, and will require a credit card. At the conclusion of the registration process you should have access to Windows Azure, SQL Azure and AppFabric. At this point you haven't created any cloud services; you've only created an account under which the services you create can be provisioned and deployed.

STEP 2: PROVISIONING A SQL AZURE DATABASE
This step may not be required by everyone, but most of the applications I've built have been database driven. Therefore, I think it's going to be a fairly common question to ask where the database lives and how you connect to it. The reasonable answer is that if my application is going to be hosted in the cloud, my database needs to be in the cloud too. The Windows Azure Platform provides Windows Azure Storage as well as SQL Azure for storing data. Naturally I'm inclined towards SQL Azure to get started: SQL Azure is most similar to the relational databases of the typical business application, so while Azure Storage may have scalability and cost advantages, SQL Azure provides the more familiar paradigm.

Given that, in order to create my cloud database I'll need to return to the Azure account that I set up in step 1 and navigate to the SQL Azure section of the portal at https://sql.azure.com. To create a SQL Azure server, you'll need to provide a username and password, and the SQL Azure Developer Portal will create a server using a generated unique name similar to crkvq7vdhu.database.windows.net. There is also an additional requirement that you configure firewall rules to allow access; for the sake of simplicity, you can just grant your local machine's IP address access to the SQL Azure server. With the SQL Azure server created, you can now create the database.
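Once the database exists, connecting from .NET code looks just like connecting to a regular SQL Server. A minimal sketch follows; the database name and credentials are placeholders, and the server name is the generated one from above. Note the user@server form of the login and the encrypted connection that SQL Azure expects:

using System.Data.SqlClient;

class SqlAzureSmokeTest
{
    static void Main()
    {
        // Server, database and credentials are placeholders for this sketch.
        const string connectionString =
            "Server=tcp:crkvq7vdhu.database.windows.net;Database=MyAppDb;" +
            "User ID=myuser@crkvq7vdhu;Password=<password>;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open(); // fails unless your IP has a firewall rule
            // ... issue commands exactly as against on-premise SQL Server
        }
    }
}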

Lastly, you might be wondering, as I did, whether the newly created SQL Azure database is accessible via the familiar SQL Server Management Studio tools. I was able to successfully connect after downloading SQL Server Management Studio 2008 R2.

STEP 3: BUILDING A WEB APPLICATION FOR AZURE
Having provisioned our cloud database and proven that you can connect to it with the familiar SQL Server Management Studio tools, and assuming you've created the tables required by your application, you're ready to begin building your application. In order to do so you'll need to install the Windows Azure SDK and the Windows Azure Tools for Microsoft Visual Studio 1.1. The good news about both of these is that they support Visual Studio 2008 and Visual Studio 2010.

Once you fire up Visual Studio you'll notice a new project template for "Windows Azure Cloud Service". After choosing the cloud service template you will be prompted to choose from one of the cloud service 'roles': Web, Worker and WCF Service Roles. Assuming you've chosen "ASP.NET Web Role", a solution containing two projects will be created: a cloud services project and the familiar ASP.NET Web project. The only real difference between a standard ASP.NET web project and the ASP.NET Web Role project is the existence of a WebRole.cs file, which serves as the entry point for Azure (a sketch of it follows below). Now you'll also see that you have the ability to 'Run' the service. When you hit F5 your Azure application starts up and runs inside the Development Fabric. The Development Fabric simulates the Windows Azure cloud environment, enabling you to run, test and debug Azure applications on the desktop!
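For reference, a minimal sketch of that entry point – roughly what the SDK template generates; the namespace is illustrative:

using Microsoft.WindowsAzure.ServiceRuntime;

namespace MyCloudService.Web // illustrative namespace
{
    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Role-wide initialisation (wiring up diagnostics, configuration
            // change handlers, etc.) happens here before requests are served.
            return base.OnStart();
        }
    }
}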

STEP 4: PACKAGING THE WEB APPLICATION FOR WINDOWS AZURE
Packaging up the application for publishing to Azure turns out to be fairly simple. From within Visual Studio you can right click on the Cloud Services project and choose Publish from the context menu. This will package the web application into a .cspkg file, and also create the ServiceConfiguration.cscfg file. These two files are all you need to deploy your application to Windows Azure.

STEP 5: DEPLOYING THE WEB APPLICATION TO AZURE
Now that you've packaged your ASP.NET Web Role, you'll need to return to the Windows Azure account you created in Step 1 and create your Windows Azure service. Under the Windows Azure tab choose "New Service" > "Hosted Service" and provide a name and description for your new cloud service. Once the service is created there'll be two hosted service locations, staging and production, and under each will be a 'Deploy' button. Choose Deploy under Staging. This will bring up a screen asking for the two files created in Step 4. Provide both files, and deploy. After deploying the package and the configuration you'll be provided with a unique URL for accessing your application.
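For reference, the .cscfg produced in Step 4 is a small XML document. The sketch below is roughly what the 1.x tooling generates for a single web role; the service name, role name and setting value are illustrative:

<!-- Illustrative ServiceConfiguration.cscfg for one web role -->
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>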

The application won't be accessible via the URL until you Run it, so press Run, and wait for it... and wait for it... and wait for it. It takes a while to provision the Windows Azure infrastructure for your application, but once you get the green light you should be good to go.

These are just a few of the baby steps I've taken to become familiar with Windows Azure. With these steps I've been able to demonstrate that developing for Windows Azure is largely the same development experience that I'm accustomed to. However, one of the more intriguing considerations when building for Windows Azure is the potential use of Windows Azure Storage as a data store instead of the more conventional relational database provided by SQL Azure.

THE BEST TOOLS FOR WORKING WITH THE WINDOWS AZURE PLATFORM
By Sarang Kulkarni

"A platform is known by the tooling available around it!" Much clichéd, but it still holds true. Windows Azure, though a fairly nascent cloud platform, is aptly supported by some fantastic tooling which makes development fun and a developer's life easy. Let us get the usual suspects out of the way first, to make way for some more interesting kids on the block.

CATEGORY: THE USUAL SUSPECTS

Microsoft Visual Studio 2010®
Visual Studio 2010 (VS2010) is a stable development platform for Windows Azure. Though there are very few changes specific to Azure when compared with VS2008, the overall development experience is definitely superior. Windows Azure VMs support .NET Framework 4.0 from OS version 1.2, and therefore it makes sense to use VS2010 to take advantage of the new features of .NET 4.0 in the cloud. As always, the Express edition is free.

Microsoft SQL Server Management Studio® 2008 R2
The R2 release is recommended for working with SQL Azure. I don't think I need to wax poetic about this one; it is bread and butter. The biggest advantage is the comfort of an SQL IDE we have grown up with. Download it from: http://www.microsoft.com/downloads/details.aspx?familyid=56AD557C-03E6-4369-9C1DE81B33D8026B&displaylang=en. Again, the Express edition is free and recommended, as it serves most needs.

User Accounts and Local Security Policy Control Panel applets
I know there's nothing specific to Azure here, but it comes in very handy to have a user with permissions as laid out at http://msdn.microsoft.com/en-us/library/dd573355.aspx, to avoid any surprises related to user rights while running in the fabric.

CATEGORY: WINDOWS AZURE STORAGE
What: Cerebrata – Cloud Storage Studio
Why: Cerebrata Cloud Storage Studio (CSS) is a WPF based client for managing Azure Storage as well as hosted applications. CSS started as a commendable effort by a small firm to provide intuitive visual access to Azure Storage, putting the Storage APIs to good use. It now stands as a one-stop solution to manage everything under Azure Storage, as well as a lot of things in the hosted applications.

Figure 1: Cloud Storage Studio – Connect to Azure Account

You can design a table schema in CSS, perform CRUD operations on existing tables, download/upload table contents to/from disk and filter table contents. Basic querying support is also provided, which supports the WCF Data Services (formerly ADO.NET Data Services) query syntax. LINQ query support would have been a welcome add-on.

Blob storage is a forte of CSS, and all possible operations on blobs and containers are available. You can create containers; view, rename, copy and move blobs; list blobs in a container replete with the folder structure; upload/download page/block blobs; create a signed URL for a blob; create and view blob snapshots (very useful); and configure access policies. MIME type configuration support is icing on an already nice cake. My only grudge is the very basic breadcrumb while navigating the container structure.

CSS also features a simple yet effective service management UI. The regular service management operations are available: connecting to hosted services, deploying, suspending and deleting services, swapping deployment slots, managing API certificates and managing affinity groups. The design closely resembles that of the actual Azure developer portal; the same features are offered, plus a few more. A very useful feature is a nifty little checkbox at the bottom of the create service deployment dialog which reads "Automatically run the deployment after creation" – a nice touch.

Figure 2: Cloud Storage Studio – Deploy a Service

It costs a totally worthwhile $60 per license. Notable alternatives are:
- Cerebrata's own CSS/e (https://onlinedemo.cerebrata.com/cerebrata.cloudstorage/default.aspx), a Silverlight application providing very basic but useful storage service administration
- The open source Azure Storage Explorer (http://azurestorageexplorer.codeplex.com/)
- Finally, the far from perfect yet still useful open source Azure MMC Snap-in (http://code.msdn.microsoft.com/windowsazuremmc). Azure MMC is in its second version, covers almost all the same bases as Cloud Storage Studio, and deserves a worthy mention.

Figure 3: Windows Azure MMC

What: LINQPad – http://www.linqpad.net/
Why: It would not be an overstatement to call LINQPad, by Joseph Albahari, the best querying scratchpad available for LINQ. LINQPad can query a varied set of data sources. Of particular interest to this discussion are SQL Azure, WCF Data Services (think codename "Dallas") and Windows Azure Table Storage. Yes, Table Storage! LINQPad steps in where Cloud Storage Studio stops being adequate – the querying capabilities are superior and the interface more powerful.

Figure 4: LINQPad – Sample Query on the WADPerformanceCounters table

As usual, some of the best tools come free, and LINQPad surely fits the definition. There is also a pro version available with some bells and whistles like auto-complete and Visual Studio integration.
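For a flavour of what Figure 4 depicts, here is the kind of expression you might run in LINQPad against the diagnostics table. The WADPerformanceCountersTable name follows the Azure Diagnostics naming convention; the counter path and threshold are made up for the example:

// C# expression (LINQPad style): recent CPU samples above 75%, per role instance
from c in WADPerformanceCountersTable
where c.CounterName == @"\Processor(_Total)\% Processor Time"
      && c.CounterValue > 75
orderby c.Timestamp descending
select new { c.RoleInstance, c.Timestamp, c.CounterValue }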

CATEGORY: WINDOWS AZURE DIAGNOSTICS
What: Cerebrata – Azure Diagnostics Manager – http://www.cerebrata.com/Products/AzureDiagnosticsManager/Default.aspx
Why: Azure diagnostics has taken some time to reach the final form we see today. There are few tools which provide the comfort of an Event Viewer or a comprehensive management dashboard for working with the diagnostic data. Azure Diagnostics Manager (in public beta at the time of writing) attempts to achieve just that. The feature set is fairly comprehensive, covering the following:

- You can either connect to an Azure storage account to read the diagnostics information, find the deployments from there and connect to the listed deployments, or choose to connect directly to a subscription and get a list of hosted services to monitor.
- The Dashboard provides a bird's eye view of all the diagnostic information collected. One may choose to view Event Viewer, Trace Logs, Infrastructure Logs, Performance Counters, IIS Logs, IIS Failed Request Logs, Crash Dumps and On Demand Transfer.
- If you have only deployed a service and are collecting none of these, fret not: you can enable/disable any of the diagnostic information being collected, or alter the verbosity/frequency.
- Azure Diagnostics Manager also provides access to the diagnostic monitor inside your Roles, as well as individual role instances, through the Remote Diagnostics API.

Figure 5: Azure Diagnostics Manager – Performance Counter Graphs

CATEGORY: SQL AZURE
What: SQL Azure Migration Wizard – http://sqlazuremw.codeplex.com/
Why: As most of us working with cloud solutions might have already noticed, the largest chunk of the work coming to systems integrators is the migration of existing applications to the cloud, and one of the key aspects of this is database migration. The SQL Azure Migration Wizard helps simplify database migration: with it we can analyze scripts for SQL Azure compliance, generate scripts, and migrate databases – schema and data. Migration is supported from SQL Server to SQL Azure, SQL Azure to SQL Server and SQL Azure to SQL Azure. Even in its 3.2 version it still has its share of quirks, but it is vastly improved and great for the mundane tasks in DB migration.

CATEGORY: GENERAL DEVELOPMENT
What: Fiddler – http://www.fiddler2.com/fiddler2/
Why: Fiddler is a web debugging proxy. It allows us to inspect all incoming and outgoing HTTP(S) traffic on a machine. This is particularly helpful while working with Azure Storage, the Azure Service Management API, the Remote Diagnostics API and anything REST. Looking at the HTTP traffic gives an insight into how requests/responses are constructed, what responses are received, and a host of other information that every web service developer/consumer will find handy.

Figure 6: Fiddler – Statistics

The Fiddler scripting engine can be used to filter in/out requests and/or responses, and also to issue preconfigured responses. Fiddler can also target specific processes to filter traffic only from those processes. Fiddler provides an API which can be used in a .NET application to programmatically track network traffic and use almost all of Fiddler's features. This has enabled some nifty Fiddler extensions like Watcher, a passive security audit tool (http://websecuritytool.codeplex.com/); Chad Sowald's Request to Code (http://www.chadsowald.com/software/fiddler-extension-request-to-code), which gives the required code to issue captured HTTP requests; and the JSON Viewer (http://jsonviewer.codeplex.com/), which visualizes JSON objects.
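To illustrate the programmatic API mentioned above, here is a minimal sketch using the FiddlerCore assembly. The member names are from memory of that library and should be treated as assumptions rather than a definitive reference:

using System;
using Fiddler; // FiddlerCore assembly (assumed API surface)

class TraceAzureTraffic
{
    static void Main()
    {
        FiddlerApplication.BeforeRequest += session =>
        {
            // Only trace traffic bound for Azure storage endpoints
            if (session.hostname.EndsWith(".core.windows.net"))
                Console.WriteLine(session.fullUrl);
        };
        // port, register as system proxy, no HTTPS decryption
        FiddlerApplication.Startup(8877, true, false);
        Console.ReadLine();
        FiddlerApplication.Shutdown();
    }
}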

CHAPTER 2: WINDOWS AZURE PLATFORM

ARCHITECTING FOR AZURE – BUILDING HIGHLY SCALABLE APPLICATIONS
By Steven Nagy

Two key reasons organisations move to the cloud are to reduce cost and leverage economies of scale. Unfortunately, not every type of application is suited to the cloud, and more often than not, those that are suited for the cloud are not architected for scalability. Furthermore, the Windows Azure Platform has a pricing model that, if not considered during your architecture phase, can negate the cost benefits of moving to the cloud to begin with. This article will address the key things to consider when architecting highly scalable applications that are cost-optimised for the Azure platform.

PRINCIPLES OF AZURE ARCHITECTURES
The Windows Azure Platform already provides elasticity, redundancy, and abstractions from the distributed platform on which it is run. This gives us a flying head start when designing systems for the cloud, but there are still key measures we need to take to ensure our application doesn't become its own worst enemy. Here we define five key tenets to keep in mind throughout the design and implementation phases of your project.

PARTITION DATA
Data partitioning is not a new concept (see http://msdn.microsoft.com/en-us/library/ms190787%28v=SQL.100%29.aspx). Traditionally it has helped us break up massive databases into smaller, more manageable pieces, and to improve query performance by splitting unrelated data into different partitions. In scalable applications it is important for those same reasons, but it also allows us to scale more effectively: imagine serving 500 requests per minute on a single database versus 50 requests per minute across 10 databases.

Further, storage is cheap. Consider SQL Azure pricing versus Azure Table Storage (http://www.microsoft.com/windowsazure/pricing/) for 1GB of storage: $10 and $0.15 per month respectively. Both are at least 3 times redundant. However, not only is Azure Table Storage cheaper, it has inbuilt partitioning mechanisms that allow you to allocate every single entity (row) of data to a horizontal partition, or shard (http://en.wikipedia.org/wiki/Shard_%28database_architecture%29), based on the partition key you provide. In Table Storage, each partition is a physically different storage node, which means queries and requests can scale extremely efficiently. If you don't have complex relational queries, this is the ideal choice. Denormalising your data can help immensely by removing those relationships and allowing ease of partitioning; this is essentially the premise of the 'NoSQL' movement (http://en.wikipedia.org/wiki/Nosql). You should also consider data duplication for further performance increases. Consider a search function for customers by age demographic or by city: by having two copies of the data in different partitions, your query and retrieval time is highly efficient. The flip side to this approach is the added complexity of managing multiple copies of data.
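As a concrete illustration of choosing partition keys for the customer-search example above, here is a minimal sketch using the StorageClient library from the 1.x SDK; the entity shape is hypothetical:

using Microsoft.WindowsAzure.StorageClient; // 1.x SDK storage client

// One table keyed by city; a second denormalised copy of the same data,
// keyed by age band, would live in another table for the other query path.
public class CustomerByCity : TableServiceEntity
{
    public CustomerByCity() { } // required for serialization

    public CustomerByCity(string city, string customerId)
    {
        PartitionKey = city;  // every city lives on its own partition
        RowKey = customerId;  // unique within that partition
    }

    public string Name { get; set; }
    public int Age { get; set; }
}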

Vertical partitioning is not supported by default; however, it makes sense to store smaller amounts of data together when the additional fields are not needed on the majority of requests. Partitioning support in Azure can be summarised as follows:

- Table entities are horizontally partitioned on partition key
- Blobs are partitioned based on their container
- Queues are partitioned on a per-queue basis
- SQL Azure supports no partitioning

COLOCATION
SQL Azure, Azure Compute roles, Azure Storage and the AppFabric all have bandwidth costs for data moving in and out of the data centre. Azure already lets us choose our data centres and, more importantly, we can co-locate components of our system via Affinity Groups such that network traversal is minimal and faster. Luckily this is a deployment consideration and not so important with up front design, but it makes sense to keep it in mind when building our applications.

CACHE
A more important consideration is the various opportunities to utilise caching mechanisms – from end user HTTP requests, for underlying data stores, or for memoization purposes (http://en.wikipedia.org/wiki/Memoization). When almost everything in the platform is accessible via a REST interface, it pays to invest effort into caching. There are many ways that cache can be harnessed to minimise transactions. Some cache concepts to consider are:

- Client side timed cache – content that expires after a certain amount of time, preventing client browsers from requesting a page and serving a local copy instead
- Entity Tags (ETags; http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html) – allow you to specify a 'version' in an HTTP header field. The server can indicate the version has not changed, in which case no other data is exchanged; otherwise it can return all the data for that request (a conditional request sketch follows this list)
- ASP.NET page level cache
- Distributed cache (http://msdn.microsoft.com/en-us/magazine/dd942840.aspx) – has multiple nodes that either all share the same content (shared everything) or have unique sections of the cache (shared nothing); shared everything distributed caches work well in Azure because of the throwaway nature of commodity hardware and ease of scale
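To illustrate the ETag mechanism, here is a minimal conditional GET in plain .NET. The URL is a placeholder, and error handling is reduced to the 304 case; this is a sketch, not a complete caching layer:

using System;
using System.Net;

static class ETagExample
{
    // Returns true when the cached copy is still valid (304 Not Modified).
    static bool IsCachedCopyCurrent(string url, string cachedETag)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Headers[HttpRequestHeader.IfNoneMatch] = cachedETag;
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                // 200 OK: a newer version exists; the caller should re-download
                // and remember response.Headers[HttpResponseHeader.ETag].
                return false;
            }
        }
        catch (WebException ex)
        {
            var response = ex.Response as HttpWebResponse;
            if (response != null && response.StatusCode == HttpStatusCode.NotModified)
                return true; // nothing transferred; serve the local copy
            throw;
        }
    }
}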

STATE
State has often been cast as the enemy of concurrent programming, and the same applies at higher levels of abstraction as well, such as multiple compute instances. Mutable state requires locking and tracking in concurrent environments, which adds overhead and complexity to applications. Therefore reducing, or even removing, state is an ideology worth pursuing.

Sometimes state is specific to a single user, for example session state. Load balancers in the Azure data centres are round-robin, therefore as soon as you have more than one web front end you can no longer store session state in process (the default). However, session state is typically abused and is generally not actually required for the situations in which it is used. As an alternative to sessions, consider claim based security, such that any page request is accompanied by a set of claims; the AppFabric Access Control Services can assist with this. If session state is critical to your application, look to move it to SQL Azure or Table Storage instead.

DISTRIBUTE WORKLOADS EFFECTIVELY
Typically when multiple sources need to access a resource there is a level of contention: locks and leases need to be taken, and other threads are blocked until contention is resolved. As with state, this problem exists in all forms of concurrent programming, and is as important in multi-instance work sharing scenarios. Worker roles need to pick up items for processing, but when there are multiple instances of the same worker role, how do we ensure that each instance does not pick up the same work item? The 'Asynchronous Work Queue Pattern' is one such solution, and the Windows Azure Storage Queue service is an ideal candidate (a sketch follows below). By providing a robust, redundant queuing mechanism that guarantees unique distribution of work items, the workers are ignorant of leases and locks and can focus on the job of processing work items. Such a queue will be reusable for many different work types. There are other messaging architectures that allow us to decouple our components; for example, AppFabric allows a 'NetEventRelayBinding' for Publish/Subscribe scenarios.
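Here is a minimal sketch of the work queue pattern discussed above, as a worker role polling an Azure Storage queue. The connection setting name and queue name are assumptions; a dequeued message stays invisible to other instances until its visibility timeout lapses, which is what gives the unique distribution:

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // "DataConnectionString" and "workitems" are placeholders for the sketch.
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("workitems");
        queue.CreateIfNotExist();

        while (true)
        {
            // Message is hidden from other workers for 2 minutes while we process it.
            CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromMinutes(2));
            if (msg == null) { Thread.Sleep(1000); continue; } // empty queue: back off
            ProcessWorkItem(msg.AsString); // hypothetical application-specific handler
            queue.DeleteMessage(msg);      // delete only after successful processing
        }
    }

    private void ProcessWorkItem(string payload)
    {
        // application-specific work goes here
    }
}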

MAXIMISE RESOURCES
One could argue that if your CPU is not at 100% it is being underutilised. In Azure you pay for the core regardless of usage, so it makes sense to get the most bang for your buck. If your worker (or web role, for that matter) has lots of IO work, it makes sense to use multiple threads; when using worker roles, multi-threaded architectures are often forgotten. Typically an IT department will maintain enough servers to cope with their peak periods. Since adding another instance means an additional hourly cost, first ensure you are getting the most out of your current instances. Auto-scaling resources is worth investigating also: consider instead starting at trough capacity and using auto-scaling functionality to add instances dynamically. When load starts to taper off, start scaling down, cutting costs as you do.

Currently you can also utilise content delivery services (CDN) to push blobs out to localised edges, which will help improve latency for your customers. Also consider what could qualify for blob storage – essentially anything static is a contender:

- PDF and Word documents
- Videos
- Website images
- Website CSS and JavaScript libraries
- Any static HTML website pages
- Silverlight files (XAP)

Blob storage currently allows blobs to be stored in the root container. This feature was specifically included so that Silverlight applications running from blob storage could place a cross domain policy file at the root of the URL namespace (a requirement for cross domain policy files).

SUMMARY
While not extensive, this article gave you a brief overview of some key principles to keep in mind when architecting applications to run on the Windows Azure Platform. By following these guidelines you should be able to achieve the core objectives of scalability and cost recovery in the cloud.

THE WINDOWS AZURE PLATFORM AND COST-ORIENTED ARCHITECTURE
By Marcus Tillett

COST IS IMPORTANT
Cost-orientated development is nothing new. A low cost approach to building an application or product is desirable, but the methodology used to achieve this is not always very sophisticated. While a traditional on premise or hosted architecture may not consider cost as a significant factor, when considering a cloud platform such as Azure the cost implications of the chosen architecture can be significant and require a more sophisticated approach.

WHAT COSTS TO CONSIDER
The development process can be a significant cost consideration for Azure. As an example, consider the use of software factories. With a software factory that uses a strict assembly process, the cost of using the production platform for development may be prohibitive, due to the expense of both the platform and the training required. These concerns would drive a cost-oriented architecture where all Azure specific components are abstracted from the developer or potentially replaced with non-Azure components. There is a continuum of development strategies for Azure: from, at one extreme, using the Azure environment for development, to, at the other extreme, developing without any reference to Azure. There are cost implications and significant other pros and cons across this continuum. While this may be an extreme example, it does highlight one of several areas to be considered.

Another significant topic is the methodology applied to the migration of an existing application, or the consideration for setting up the data required by a new application. Migration and set up need to include both the application and the data. The time, processes and procedures needed to transfer large volumes of data or complex data may be a significant undertaking, particularly given the potential complexity of managing changes to a live data source.

The cost implication of the platform itself is, perhaps, the obvious area to necessitate a cost-oriented architecture. It is natural to be drawn to, for instance, the dramatic price difference for data storage between SQL Azure and Windows Azure storage. While this may be critical to some applications, it is better on balance to construct a solid architecture, as this will provide the best long term approach, than to focus initially on cost. There are a range of costs that need to be considered, and these costs need to be considered in the context of Azure and of the end to end development and application lifecycle management processes.

Summary:
- Cost is much more of an architectural consideration for Azure than for a traditional on premise or hosted solution.
- Cost implications of the chosen architecture can be significant.
- Costs should be considered in the context of the end to end development and application lifecycle management processes.
- Model costs for the chosen architecture, but most importantly test the model.

This should be supported by modelling the costs of all the components of the application,

thereby providing an understanding of how the application design and the charging mechanisms of Azure impact the cost model. With this information the architecture can be reviewed for significant cost savings. For any aspects that are cost critical, it is even more important to test this model. As a way to augment the full cost modelling process, monitoring should be included in the final application and used to tune the system, while ensuring that the evolution of Azure and the application are analysed for significant cost implications. Indeed, monitoring the whole system as a means to verify costs and SLA is another architectural consideration.

However, there are some scenarios where the cost of the platform suggests a cost-orientated architecture. One of these is multi-tenanting of an application where there are high tenant numbers. A basic on premise or hosted server model with a pair of servers can enable the creation of a separate IIS web site and SQL Server database for each tenant; this model supports 10's or perhaps 100's of tenants for near the same cost as a single tenant. Translated to Azure, the same architecture might consist of a Windows Azure Web Role and a 1GB SQL Azure database per tenant. This would equate to a monthly cost of approximately US $100 per tenant, and the cost of this Azure architecture scales linearly with tenant numbers. This is not to state that Azure is not suitable for multi-tenanted applications, but that where cost is a critical factor for the application a different architectural approach may be required.

CONCLUSION
Whether the considerations described here could be termed cost-driven [8] or cost-oriented [9] architecture, the terminology is less important than the realisation that cost is much more of an architectural consideration for Azure than for a traditional on premise or hosted solution [10].

[8] Lessons Learned: Building Multi-Tenant Applications with the Windows Azure Platform – http://microsoftpdc.com/Sessions/SVC33
[9] Thinking of... Delivering Solutions on the Windows Azure Platform? – http://www.amazon.co.uk/Thinking-Delivering-Solutions-Platform-Questions/dp/0956155634/
[10] Windows Azure Platform for Enterprises – http://msdn.microsoft.com/en-us/magazine/ee309870.aspx

DE-RISKING YOUR FIRST WINDOWS AZURE PROJECT
By Simon Munro

Developer enthusiasm for building solutions based on Azure is not always shared by business, where the urge is to maintain the status quo in our (currently) risk-averse environment. While it is great (and perhaps obvious to us) that the cloud is 'the way of the future', some individuals, organizations and vendors are ready for the change while others are not. Vendors are scrambling for attention and pushing their biased, marketing-oriented opinions through the biggest dinosaurs of all – the print media. The term 'the cloud' has become synonymous with 'the web' and is indistinct from the 'cloud computing' platforms that we are interested in – the unfortunate side effect being that the behaviour of Google, Facebook, Apple and other web-consumer facing properties that willy-nilly change terms of service and sell personal data for profit casts a shadow over business oriented cloud computing services.

It is unsurprising, then, that the people we need to make decisions about cloud computing in our own organizations are confused, wary and reluctant to make a commitment to our latest idea of running our solution on Azure. Most anti-cloud and vendor-bashing opinion plays on fear and its business cousin, risk. While the dust may settle at some point in future, if we want to build a solution on Azure any time soon, we will have to take responsibility for helping business understand the issues in order to gain their support. While we may prefer to deal only with technical issues, the current reality is that in most environments we have to proactively discuss the perceived risks and demonstrate that we are actively managing and reducing risk.

POPULAR RISKS
Risks to data are by far the most publicised, because once data is in databases that are outside of an organization's locked-down data centre, a degree of control and authority over the data is lost. Unlike students that go and live in co-ed dorms, data does not get drunk and put pictures of itself up on Facebook when it leaves home. While the risk to data may increase, the actual risk, in most cases, is greatly exaggerated and manageable – but the suspicion still remains that off premise data is a high risk.

Process related risks are also well known, centred on the involvement of other parties in the operational aspects of the solution. No longer can business dictate service levels, or even have the confidence in an external supplier of services that they may have had with their own internal IT. Like with data, there are real issues here that have fairly complex contractual ramifications as customers attempt to reduce vendor lock-in, guarantee service levels and maintain operational, security, performance and other standards. Not all vendors have technologies for the cloud, and many businesses that culturally could not even cope with the changes brought on by the Internet – along with products, industries and jobs – will go as the cloud wave washes them out to sea.

COVERT RISKS
While mainstream CIO information sources popularise some risks by extensive coverage, there are many risks that are just as real but less well known, often due to their more technical nature. The most obvious is the lack of skills and experience in creating secure, reliable and performant cloud computing solutions. Even Microsoft, as our trusted provider of platforms and tools, still has risks embedded within Azure. The lack of on-premise alternatives to cloud technologies such as Azure tables and queues makes the commitment to the platform quite high (a kind of vendor lock-in), and the tooling is still immature and unable to easily support accepted engineering practices such as continuous integration (see 'Using a Continuous Integration build to achieve an automated deployment of your latest build' by Grace Mollison). This also relates to the problem of development engineering costs, which could be higher than simply throwing hardware at performance bottlenecks.

NON-TECHNICAL TACTICS FOR REDUCING RISK
While ultimate responsibility for managing risk falls to project managers and other people within the organization, the identification of risk still remains the responsibility of everybody on the team. By downloading this book you have more knowledge of cloud computing than many of your co-workers, so before getting into the technical aspects, you will need to shoulder additional responsibility and deal with some aspects of reducing risk that do not involve code.

ENGAGE EARLY
Even if your project is a low profile skunkworks development, you need to engage with legal, compliance, audit, operations, finance and other parts of the business sooner than usual. Normally we would not worry about throwing up a new website onto our existing data centre, but if you surprise people with a rogue cloud computing application it may get shot down.

CHOOSE THE CORRECT APPLICATION
Choose something simple that is better suited to cloud computing, such as an application that is public facing and may have demand peaks. Build on those successes before tackling applications that contain sensitive data, integrate with a lot of other systems, are a migration of an existing legacy system, or contain a lot of traditional database storage and reporting.

UNDERSTAND THE PRICING AND OPERATIONAL MODEL
As much as it may look simple on the surface, digging deeper into the pricing, billing, SLAs and related aspects of cloud computing platforms can become complicated. You have to at least put the Azure prices in a spreadsheet with your estimated requirements, and put an annotated printout of the SLA in your project sponsor's hands.

UNDERSTAND THE IMPLICATIONS
While it may be unnecessary to do a full threat model, you need to understand the possible financial, reputational and other risks if your application is compromised or the data gets lost, with broad reaching impacts on legal positions, compliance and interdepartmental feuds.

UNDERSTAND THE APPETITE FOR RISK
Culturally, startups can absorb cloud computing risks as part of their overall risk exposure, compared to more risk averse organizations such as banks that are, at least this year, less likely to absorb additional risk. More mature organizations have processes and committees for managing risks, and although it may ultimately be the project sponsor's responsibility, you need to get a feel for the ability of the organization to take on risk before you pitch your big idea.

FAMILIARISE YOURSELF WITH ON-PREMISE RISKS
Because cloud computing is seen to have security risks, the focus on security often means that the solution is more secure than its on-premise counterparts. Not all solutions, networks and other infrastructure can actually deliver the availability and security that they promise. Whenever defending the risks of cloud computing, make sure that you compare them to the existing everyday risks of the existing on-premise platform.

TECHNICAL TACTICS FOR REDUCING RISK

HOW EXTREME?
Microsoft has made it quite simple to take a good ol' ASP.NET web application with an underlying SQL database and throw it up onto the Azure cloud with minimal changes. On the other hand, building a well architected solution that has been optimised for a cloud computing environment is more difficult, involved and risky. If your system is being built within a risk averse environment and does not need to be built for the cloud, forgo Azure storage, worker processes, federated identity management and other cloud specific technologies, and build a simple solution with web roles and a SQL database. Ultimately Azure storage, worker roles, caching and other technologies need to be considered in any good Azure architecture, but you need to figure out how much of the fancy new stuff you really need and make those decisions early. Azure will support you well whichever approach you choose.

DEFINE THE APPROACH TO DATA
When it comes to cloud computing risks, data is the most sensitive and active topic, and it needs to be addressed early on in the solution design. Understanding the effects of loss should influence your approach to what data is stored on the cloud, for how long, and whether it is moved to on-premise storage. Fortunately SQL Azure addresses many of the concerns and risks around the NoSQL-like Azure tables by providing a familiar database platform, if such familiarity is required. There may be a requirement to move or copy data from Azure to an on-premise database for reporting, integration with other systems, or even just the feeling that the data is safer. Whatever the bias for storage in the Azure cloud, there is still the issue that the data is in the cloud, and it needs to be dealt with in your architecture.

MANAGE THE ENGINEERING COST

Unless you have built a reasonable sized application on Azure and deployed it in a live environment, there are going to be unforeseen technical challenges that will present themselves. By reading this book you are clearly on the right track and trying to learn from the experiences of others, but you need to do a lot more than just read or learn on the job. You need to install the tools, write code, deploy, debug, put it under load, scale up, scale down, diagnose, and try out a lot of unfamiliar patterns and technologies, just to reduce the impact of unforeseen quirks.

IMPLEMENT WITH GOOD ENGINEERING PRACTICES
The future of your first Azure application is fairly unsure – cast your mind out two years and you cannot be sure that your architectural choices were correct, that technical components have not been added or abandoned, that regulations have not changed, or that the attitudes of your organization towards cloud computing have not altered. Testability and extensibility are amplified concerns in such an environment, which is years from settling down. The Azure combination of a well established platform in the .NET ecosystem with some new technologies, approaches and thinking thrown in means that we have both the need and the frameworks to craft solutions properly to reduce the risk that we are exposed to. The concerns raised by the software craftsmanship movement – maintainability, testability, inversion of control, loose coupling and other techniques – are well supported, understood and debated on the .NET platform, and are therefore (reasonably) portable onto Azure. You need to hone these skills, as single layered, monolithic architectures that seem easy at first and are encouraged by Microsoft marketing and tooling will result in an approach with high and unnecessary risk in an already risky space.

DEVELOPER RESPONSIBILITY
While technologists may be excited by the technical opportunities of cloud computing, business and other decision makers are probably more wary of the cloud than of any other (recent) computing technology shift. They are reading conflicting messages from vendor marketers and self proclaimed cloud experts, while their own staff are both protecting existing jobs and whispering discord in the passageways. So while risk management and the selling of architectures may not be amongst the most exercised developer skills, cloud computing requires that we take cloud computing to the business and take some responsibility for allaying fears.

TRIALS & TRIBULATIONS OF WORKING WITH AZURE WHEN THERE'S MORE THAN ONE OF YOU
By Grace Mollison

I had enormous fun working on an Azure project, See the Difference, that took 7 weeks from start of development to handing over to the client. The technology stack used was: Windows Azure hosting, Windows Azure Storage, SQL Azure, ASP.NET MVC, N2CMS, Spark View Engine, Castle Windsor, xVal, PostSharp.

There was one bug bear in that the Azure development experience is NOT designed for a team of developers, and I needed to get that sorted out. So where did I start? With a list of course. Here were the big ticket items:

- The ability to set up three environments – Development, Testing and UAT – accessible by all members of the team
- Shared access to the hosted environment
- Automated deployments to the cloud as part of a CI build. After all, no self-respecting development team doesn't have a continuous integrated build, do they?

DEVELOPMENT ENVIRONMENT
For the development environment we stuck to Visual Studio 2008 SP1. Visual Studio 2010 was in beta2/RC when we undertook the development, but with all the potential unknowns with Azure that was a step too far. The Azure developer tools were installed on each developer workstation and the Azure SDK on the build server. In addition to Visual Studio, we also supplemented the development environments with a few extra tools that provided a more complete development experience. There was an upgrade to the Azure SDK during the development cycle which the development team said was needed, which meant updating the various machines that constituted the environment manually (alas, no WSUS). Fortunately this only happened once during the development cycle.

TEST ENVIRONMENT
The Test environment proved to be more challenging. The most pragmatic way to sort it out was to provision another development workstation running the development fabric. But (yes, I know there's always a "but") the development fabric runs against the local loopback address. To get round this, an SSH tunnel had to be set up between the target machine and the client machines that needed to access it; a sketch of what such a tunnel looks like follows.
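A minimal example of such a tunnel, using standard SSH local port forwarding; the machine name, user and ports are placeholders (the dev fabric's actual ports varied per deployment, which was exactly the problem described above):

# Forward local port 8080 on the tester's machine to the dev fabric web role
# bound to 127.0.0.1:81 on the workstation "devbox01"; the tester then
# browses http://localhost:8080/ to reach the role.
ssh -L 8080:127.0.0.1:81 tester@devbox01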

Alas, this proved to be slightly less than user friendly, plus the fact that the random allocation of ports for the local storage fabric had to be resolved after each new deployment made it basically unworkable. We resorted to using Azure Staging as our Test environment. The differences between the Development fabric and the Azure fabric were also impacting the team deliverables, as we ended up seeing differences in behavior or could only test certain functionality in the staging environment. One thing we quickly learnt was that in the early days of development, suspending then deleting was the safest approach to deploying a new package.

I was anticipating an easy ride from here on but, yes, it's another of those "buts": as soon as we got to UAT we had to give up our staging environment and minimise changes to the staging URL, as both the client and a 3rd party needed to know the URL. The loss of this environment for system testing meant we were forced to press my personal Azure account into service as the Staging environment. The small team meant it was easy to communicate the change of URL this caused.

CERTIFICATES
The team members needed to either use their own self signed certificates or to use a certificate I generated, which is then uploaded onto Azure. As the team was small and fluid, the decision went with using one I generated. It is bad practise to share certificates in this way, but pragmatism was the order of the day. This turned out to be a good call, as we did have problems with certificate connections apparently timing out after some time, for some team members, for no obvious reason. Because there was only one certificate to worry about, it was relatively painless to resolve the problems around its use. For a larger team with a longer development cycle I would advocate each developer using a personal certificate, which can then be easily revoked.

WHEN THINGS GO WRONG
It's a fairly nerve racking experience when things go wrong, as often you can do nothing but wait for Azure to barf and throw a Dr Watson, and there's no real feedback when Azure tries to spin up the roles. We did get the automated deployment in place, but it's a tale too long to describe in this article.

SUMMARY
The Windows Azure Platform may not be quite ready for team development out of the box, but once you understand what needs to be addressed, the barriers to team development are easily overcome. You can, with a small amount of work up front, treat development for the Windows Azure Platform as you would any other application developed using your familiar team development tools.

USING A CONTINUOUS INTEGRATION BUILD TO ACHIEVE AN AUTOMATED DEPLOYMENT OF YOUR LATEST BUILD
By Grace Mollison

This article assumes familiarity with Team Foundation Build and MSBuild concepts such as tasks and properties.

Setting up a Continuous Integration (CI) build to automatically push a successfully built package directly to Azure cannot be achieved straight out of the box, but requires some additional work. This article outlines an approach taken whilst delivering the See the Difference project using the Windows Azure Platform.

GETTING THE RIGHT "BITS"
The first thing that was done was to collate and configure the components that would be needed to allow the build server to access the target Windows Azure portal via a command line. To do this requires using the Azure Service Management API. I downloaded the Windows Azure Service Management PowerShell CmdLets and also the Windows Azure Service Management API Tool, which are both handy for remotely accessing the Azure portal via the Service Management API. At this stage I had no idea which one I would be using. I tried them both as part of a build and found that I preferred using the service management API tool csmanage (despite being a big fan of PowerShell).

Using the API requires an X.509 certificate. I created a self-signed one using the makecert tool, which is part of the Windows SDK. An example of how to do this is shown below:

"c:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\makecert" -r -pe -a sha1 -n "CN=Windows Azure Authentication Certificate" -ss My -len 2048 -sp "Microsoft Enhanced RSA and AES Cryptographic Provider" -sy 24 MySelfSignedCert.cer

The blog post Creating and using Self Signed Certificates for use with Azure Service Management API explains in detail how to configure the certificate on the target Azure portal and the machine that needs to communicate with the portal. That blog post also illustrates the use of the X.509 certificate, the API and PowerShell to deploy to the Azure staging environment.

PACKAGING FOR DEPLOYMENT
Next I looked at packaging the application ready for deployment. There are two key things when packaging the application from the command line:
1. Obtain the role types and names, as these will be needed to construct the package
2. Make sure the location of the service definition file is known

The ServiceDefinition.csdef file contains the role types and names, as these are needed to construct the package using the Windows Azure command line tool cspack. Below is a snippet from a ServiceDefinition.csdef file illustrating a simple example with one web role. The number of instances does not matter to cspack:

<ServiceDefinition name="SeeTheDifference.Cloud" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="SeeTheDifference.Web" enableNativeCodeExecution="true">
    <InputEndpoints>
    ...

If cspack is not run from the correct place, the package will not be constructed correctly – hence why the location of the ServiceDefinition.csdef file is so important.

DEPLOYING
At this stage I was able to package the application and deploy to the Azure portal via MSBuild. We had concerns with this approach with regard to problems with the actual package affecting the deployment. In particular, we were concerned about what to do after handover to the client, when a little more caution would be called for. A change of plan was decided upon: the new plan was to push the package to blob storage, and then the client would be able to carry out the deployment at their convenience. It was decided that storing the configuration (.cscfg) file in blob storage was also a good idea, as it would reduce the risk of non-production configuration settings being used. During testing I was unable to get the Service Management API to use the stored configuration file – it was only able to use one stored on the local system – but as the end-to-end deployment process we were implementing actually required a pause for breath before the push to Azure Staging or Production, this issue did not affect the implementation of the CI build process.

To push the package to blob storage, a C# console application I called LoadBlob was written that could be called from the MSBuild script. This application pushed the package to a pre-determined container. Finally, after testing all the constituent parts, they were incorporated as part of the CI build. Below is a snippet from a TFSbuild.proj file where I overrode the target AfterDropBuild. The AfterDropBuild task is called after dropping the built binaries, and I used it to insert some commands to allow the build to use cspack (equivalent to zipping the dlls and configuration files) to package the cloud service package, which is then pushed up to blob storage ready for deploying to Staging or Production.

<PropertyGroup>
  <PathToAzureTools>c:\Program Files\Windows Azure SDK\v1.0\bin\cspack.exe</PathToAzureTools>
  <cPkgCmd>"$(PathToAzureTools)" SeeTheDifference.Cloud.csx\ServiceDefinition.csdef /role:SeeTheDifference.Web;SeeTheDifference.Cloud.csx\roles\SeeTheDifference.Web\approot;SeeTheDifference.Web.dll</cPkgCmd>
  <LoadblobPath>c:\TOOLS\AzureDeployment</LoadblobPath>
  <LoadBlobCmd>$(LoadblobPath)\Loadblob.exe</LoadBlobCmd>
</PropertyGroup>

cspkg" WorkingDirectory="c:\Drops\SD_Deploy" /> </Target> The screenshots below show the uploaded cskpg in blob storage: The deployment could then be completed by using a user friendly tool like Cerebreta Cloud Storage Studio.cspkg" WorkingDirectory="c:\Drops\test\$(BuildNumber)\Release\SeeTheDifference" /> <!-.Load blob to Azure into deployment container set via config file settings target container will be cleared before uploading --> <Message Text =" Copying '$(BuildNumber)'.cspkg to deployment container in Azure " /> <Exec Command ="$(LoadBlobCmd) -upload $(BuildNumber). 34 .The Windows Azure Platform: Articles from the Trenches <Target Name="AfterDropBuild" DependsOnTargets="DeriveDropLocationUri" Condition=" '$(IsDesktopBuild)'!='true' "> <Message Text=" cspack creating a package for deployment"/> <Exec Command="$(cPkgCmd) /out:c:\Drops\SD_Deploy\$(BuildNumber).

USING JAVA WITH THE WINDOWS AZURE PLATFORM
By Rob Blackwell

With a name like Windows Azure, you could be forgiven for thinking that Microsoft's cloud computing offering is a Microsoft-only technology. In fact it has a lot to offer Java developers through its use of open standards and RESTful APIs.

ACCESSING WINDOWS AZURE STORAGE FROM JAVA
WindowsAzure4J is an open source library that can be used to access Windows Azure Storage from Java applications, running on Windows Azure or elsewhere. Download the JAR file from http://www.windowsazure4j.org/. You'll also need to grab some other dependencies:

commons-collections-3.2.1.jar
commons-logging-1.1.1.jar
dom4j-1.6.1.jar
httpclient-4.0-beta2.jar
httpcore-4.0-beta2.jar
httpcore-nio-4.0-beta2.jar
httpmime-4.0-beta2.jar
jaxen-1.1.1.jar
log4j-1.2.9.jar

To get started, you'll need an account name and account key from the Windows Azure portal. Paste these into the sample code provided with WindowsAzure4J to use Blobs, Queues or Tables.

If you are an Eclipse user, you can also install the Windows Azure Tools for Eclipse: http://www.windowsazure4e.org/

[Screenshot: the Windows Azure Storage Explorer running in Eclipse]

RUNNING JAVA CODE ON WINDOWS AZURE
If you want to host a Java application in Windows Azure, there are a number of considerations. The first thing to note is that even if your Java application is a web application, you probably won't want to use an Azure Web Role. The principal difference between web roles and worker roles is whether Internet Information Services (IIS) is included. Most Java developers will want to use a Java-specific web server or framework, so it's usually best to go with a worker role and include your choice of web server within your deployment package. Both web roles and worker roles are provisioned behind a load-balancer, so either is suitable for hosting web applications. In a worker role you just have to do some additional plumbing to connect up your web server to the appropriate load-balanced input endpoint. So, for example, the public facing port 80 of yourapp.cloudapp.net might get mapped to, say, port 5100 in your worker role. The following code allows you to determine this port at runtime:

RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Http"].IPEndpoint.Port

You'll also need to bootstrap Java from a small .NET program that will essentially invoke the Java runtime through a Process.Start call.
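That bootstrapper isn't shown in the article. A rough sketch of the idea – the file layout, endpoint name and entry-point arguments are assumptions, not the accelerator's actual code – might be:

using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Discover the local port the load balancer mapped for this instance
        int port = RoleEnvironment.CurrentRoleInstance
            .InstanceEndpoints["Http"].IPEndpoint.Port;

        // Launch the JRE shipped inside the deployment package, passing the
        // port so the Java web server can listen on it
        var startInfo = new ProcessStartInfo
        {
            FileName = @"jre\bin\java.exe",
            Arguments = "-cp MyApp.jar Start " + port,
            UseShellExecute = false
        };

        using (Process java = Process.Start(startInfo))
        {
            java.WaitForExit(); // keep the role instance alive while Java runs
        }
    }
}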

Fortunately, both the Tomcat Solution Accelerator and AzureRunMe handle all of these technicalities for you.

The Tomcat Solution Accelerator is a good choice if you have a traditional Java-based web application, possibly packaged as a WAR file. It supports Java Servlet and Java Server Pages applications, and can be downloaded from http://code.msdn.microsoft.com/winazuretomcat. The accelerator walks you through the process of creating an Azure cloud services package file that contains your application as well as the Tomcat server and Java Runtime, and automatically handles the necessary configuration. Just upload the resulting cspkg file to Windows Azure, wait for it to deploy, then bring up your web browser and browse to http://yourapp.cloudapp.net.

AZURERUNME
AzureRunme (http://azurerunme.codeplex.com/) doesn't assume any particular web server or framework. I've used it successfully with both Restlet (http://www.restlet.org/) and Jetty (http://jetty.codehaus.org/jetty/). In fact you could just run a straightforward command line application with no visible user interface.

Imagine that you were going to run your application from a USB drive and that you weren't allowed to install any software onto the machine – you'd have to include the Java Runtime Environment (JRE), all the library JAR files and any data, all in subdirectories of the USB stick. You'd probably create a .BAT file at the top level to run everything. Like this:

cd MyApp
..\jre\bin\java -cp MyApp.jar;lib\* Start %1

Notice that the batch file takes a parameter %1 – this is the port that you should use if you want to bring up a web server; the load balancer will direct all HTTP traffic to your application on this port.

AzureRunme takes a similar approach – put all these files together in a single ZIP file and upload it to Blob storage. Download the AzureRunMe cspkg file and use this to bootstrap your Java code. AzureRunme comes with a Trace listener that uses the Service Bus to relay standard output and any log4j messages back to a command window on your desktop machine. It makes it easy to see trace messages, watch your application's progress and see any exception messages.

[Screenshot: the AzureRunMe Trace Listener showing log messages relayed via the AppFabric Service Bus]

For more information about interoperability on the Microsoft platform see http://www.interoperabilitybridges.com/

CHAPTER 3: WINDOWS AZURE

AUTO-SCALING WINDOWS AZURE COMPUTE INSTANCES
By Steven Nagy

INTRODUCTION
There are many reasons applications need to scale. Some applications have on/off periods of batch processing (for example, overnight render farms), some have predictable peak loads (for example, share market applications peak during open and close of the market) and some might have unpredictable peak periods (for example, your website gets linked by Slashdot). In the case of predictable peak loads we can easily log in to the Windows Azure portal and adjust our configuration file to increase the number of instances of our web and worker roles. However, when application load peaks unexpectedly, this might be when we least expect it; for applications with global reach, we want our applications to respond immediately. Without appropriate monitoring techniques we may not even know the extent to which we are failing to serve requests. On the flip side, we are paying for every CPU core hour we use, so we want to be able to scale down instances that are underutilised. We need to know how to auto-scale: our applications need to become smart.

A BASIC APPROACH
There are a number of jigsaw pieces that need to fit together to build the auto-scaling picture. The first piece is monitoring, which lets us pull information from the roles that need to auto-scale. The next piece is about establishing rules and measuring against thresholds to determine when to scale up and scale down. The third piece establishes trust between the service that is doing the monitoring (referred to from here on as the 'Scale Agent') and the roles that are being monitored. Finally, the Scale Agent needs to instruct the Windows Azure Portal to add or remove instances of those roles as it deems necessary.

[Diagram: Monitoring and Rules feed the Scale Agent, which uses Trust to Instruct the platform]

THE SCALE AGENT
The Scale Agent is responsible for monitoring your application, applying rules and instructing the API to scale your roles, and it can be hosted in different ways. One option is hosting the agent as another process on your existing Azure roles. However, a role can have many identical instances, so which instance would it run on? And the agent will take some CPU resources – could that impact on its ability to assess the other work running on the same role?

It makes more sense to move the Scale Agent to a separate location that doesn't interfere with the standard workload. The agent can be hosted as another worker role, separate to the main work being done by the application, where its own workload won't pollute the statistics. This worker role would never need to scale, and could be geo-located and co-located near the compute instances that it needs to monitor; this removes external bandwidth costs and allows for faster processing/assessment. A dedicated worker role is usually the best option, but also the hardest to configure for trust, as we'll see further on. You could also host the agent off site completely, perhaps in your own data centre, as a Windows service. This means you have more control over the agent, but the agent will be slightly slower communicating to the instances, getting performance counter logs, and issuing scale commands.

MONITORING: RETRIEVING DIAGNOSTIC INFORMATION
Before we can make decisions about scaling, we need to know some simple statistics about the services we want to scale. There are lots of counters to choose from, but we usually want to monitor memory usage, CPU usage, and number of requests per second; if any one of those exceeds an upper threshold then we want to scale up. These statistics in turn let us make informed decisions.

The role that needs to auto-scale will be responsible for gathering its own performance information and dumping it into a storage table, so this will require an Azure Storage project. Diagnostic helper processes will put performance counter information into table and blob storage. This is done via configuration classes, available in the Microsoft.WindowsAzure.Diagnostics namespace. We create a configuration item for a performance counter we want to track – in this example we want information about CPU utilisation. The average utilisation will be gathered over 5 second intervals:

var perfConfig = new PerformanceCounterConfiguration();
perfConfig.CounterSpecifier = @"\Processor(0)\% Processor Time";
perfConfig.SampleRate = TimeSpan.FromSeconds(5);

We then add the performance counter to the list of items we want the DiagnosticMonitor to track for us:

var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
config.PerformanceCounters.DataSources.Add(perfConfig);
config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

DiagnosticMonitor.Start("DiagnosticsConnectionString", config);

The DiagnosticMonitor runs in a separate process on the virtual machine instance, so it won't interfere with our normal application code. Every minute, new performance counter information will be written back to a storage account, as specified in the 'DiagnosticsConnectionString', into a table called 'WADPerformanceCountersTable'. We can verify the counter information made it into the table using 3rd party tools: you can see that the table has an entity with a property called 'CounterValue' which contains our CPU utilisation. I won't go into the code required to view an entity in table storage – this is very well documented already11. Your Scale Agent will retrieve these values by polling the table occasionally and keeping track of the utilisation, scaling when needed.

11 http://blogs.msdn.com/jnak/archive/2010/01/06/walkthrough-windows-azure-table-storage-nov-2009-and-later.aspx
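Since the table-reading code is left to the referenced walkthrough, here is a minimal sketch only of the sort of polling query a Scale Agent might run. The entity class is an assumption modelled on the WAD table's documented properties, and the tick-based PartitionKey prefix filter is a common convention rather than code from this article:

public class PerformanceCounterEntity : TableServiceEntity
{
    public long EventTickCount { get; set; }
    public string CounterName { get; set; }
    public double CounterValue { get; set; }
}

public static double AverageCpuSince(CloudStorageAccount account, DateTime sinceUtc)
{
    CloudTableClient tableClient = account.CreateCloudTableClient();
    TableServiceContext context = tableClient.GetDataServiceContext();

    // WAD prefixes partition keys with "0" followed by a tick count,
    // so a string comparison restricts the scan to recent samples
    string partitionFloor = "0" + sinceUtc.Ticks;

    var samples = (from entity in context.CreateQuery<PerformanceCounterEntity>("WADPerformanceCountersTable")
                   where entity.PartitionKey.CompareTo(partitionFloor) >= 0
                      && entity.CounterName == @"\Processor(0)\% Processor Time"
                   select entity).AsTableServiceQuery().ToList();

    return samples.Count == 0 ? 0.0 : samples.Average(s => s.CounterValue);
}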

RULES: ESTABLISHING WHEN TO SCALE
The Scale Agent now knows what levels your various role instances are at, based on the performance counter information. However, deciding when to scale up/down is difficult and can easily become an exercise in advanced mathematics. Although the rules are different for every application, here are some common issues to consider:
• You usually need a certain amount of head room, in case you get a sudden spike in load before your Scale Agent can spin up more instances
• Immediately after scaling up, your original instances might still be over the threshold – prevent your agent from scaling up again immediately until enough time has passed that you can be positive that more scale is needed
• Aggregate your usage from all instances – if a single instance is spiking but the rest are under normal load, you don't really need to scale
• If you do need more instances, scale up based on how many instances you currently have. For example, if you only have 5 instances, you might want to add 2 more (40% increase) before checking again. If you have 50 you may only want to add 10 (20% increase)
• Try to predict load based on patterns of behaviour. For instance, if over the last 15 minutes you've been steadily climbing by 5% utilisation per minute, you can predict that you will probably go over your threshold in X number of minutes. Why wait until you are overloaded and losing connections before scaling? Analysing these kinds of patterns can let you scale up "just in time"
• Predictive patterns can get very complicated – if at 4pm every day you seem to have additional load, prepare in advance for scale rather than waiting for auto-scale to kick in
• Keep in mind that long running requests can provide false positives – if all web threads are used for an instance but all those threads are held up in IO requests, you will still have low CPU utilisation, so consider a range of performance counters specific to your type of application and architecture
• Hard limits – if your average is 3 instances, would you want your application to be allowed to auto-scale up to 500 instances? That's probably not a credit card bill you want to receive, so consider imposing some hard limits to scale, or provide some reasonable alerting (SMS, email, etc) so that if your app DOES scale to 500, you can find out immediately and hop online to see why

TRUST: AUTHORISING FOR SCALE
There is a rich management API that can be used to control your Windows Azure projects; however, in order to issue commands there needs to be trust between the Scale Agent and the API of the account hosting the roles – this trust is established via X509 certificates. Generating certificates is also well documented. Once created, we need to provide our certificate in 3 places:
• The Windows Azure account – for the Service Management API to check requests against
• The virtual machine issuing commands – in our case, where the Scale Agent is hosted
• The service configuration and definition for our Scale Agent project


In the Windows Azure portal for the account you wish to manage, there is an 'Account' tab where you can upload DER-encoded certificates with a .CER extension.

You must also upload the certificate in the Personal Information Exchange format, with a .PFX extension and the matching password, to your service project so that the certificate becomes available to any virtual machine instance provisioned from that entire project. This can be found under the Certificates section of your service deployment.

Click on 'Manage' and upload the .PFX version of your certificate. It is important to note that this is not installing the certificate to the role instances under this service. Instead it is making the certificate available to any role that requests it. To make that request we have to complete the third step and tell our Scale Agent role that it will require that certificate.


While it is possible to enter the required XML manually, it is much easier to use the property pages instead. For the role that needs the certificate (i.e. your Scale Agent role), find it in your Cloud Service project, right click and select Properties. In the property pages, find the Certificates tab on the left. Select 'Add Certificate' from the top and enter the details. The important part here is finding your certificate under the right Store Location and Name. This screen presumes the certificate is installed locally, as it uses local machine stores to search for it. If you don't have it installed locally, you can just paste in the thumbprint manually.

That wraps up all 3 parts of the certificate process. When your role is deployed to Windows Azure, it will ask for the certificate with that thumbprint to be installed into the virtual machine.
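The article doesn't show the agent-side code that uses the installed certificate, but the standard pattern is short. In this sketch the thumbprint, subscription id and request URI are placeholders, not values from the article:

// Load the certificate that the fabric installed into the instance's store
X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);
X509Certificate2 certificate = store.Certificates
    .Find(X509FindType.FindByThumbprint, "YOUR-THUMBPRINT-HERE", false)[0];
store.Close();

// Attach it to a Service Management API request
var request = (HttpWebRequest)WebRequest.Create(
    "https://management.core.windows.net/your-subscription-id/services/hostedservices");
request.Headers.Add("x-ms-version", "2009-10-01");
request.ClientCertificates.Add(certificate);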

SCALING – THE SERVICE MANAGEMENT API
We know we need to scale, we have established trust; all we need to do is issue the command: scale! All API calls are RESTful, but there is no API that exists solely for scaling up and down. Instead this is done through the service configuration file, which is maintained separately from the service deployment. You can at any time go and change the configuration for your deployment through the portal, and the API is just an extension of this functionality. The steps required are:
1. Request the configuration file for a service deployment
2. Find the XML element for the instance count on the role you are scaling
3. Make the change
4. Post the configuration file back to the service API
A representative configuration fragment for steps 2 and 3 is shown below.
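For illustration, the element that steps 2 and 3 manipulate looks like this (service and role names here are made up):

<ServiceConfiguration serviceName="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="ScalableWorker">
    <!-- Edit the count, then POST the whole file back (step 4) -->
    <Instances count="4" />
  </Role>
</ServiceConfiguration>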

If you don't want to manually manipulate the REST API yourself, Microsoft has posted code samples to assist you, including samples on scale12 and the Service Management API13.

12 http://code.msdn.microsoft.com/azurescale
13 http://code.msdn.microsoft.com/windowsazuresamples

SUMMARY
This short article provides you with the theory to scale up your applications reactively. Scheduled scale up/down can also be automated with the same technique defined above, but instead of scaling reactively you can also scale proactively. While this article has presented just one way of scaling automatically, there are other derivatives and approaches you could follow. For example, the Scale Agent could pull diagnostic information from the roles via the Diagnostic Manager classes, rather than the roles pushing that information. The open source framework Lokad.Cloud14 takes another approach by allowing roles to auto-scale themselves. Find the approach that's right for you and capitalise on economies of scale today!

14 http://code.google.com/p/lokad-cloud/

BUILDING A CONTENT-BASED ROUTER SERVICE ON WINDOWS AZURE
By Josh Tucholski

Some applications, depending on their nature, require priority processing based on request content. It is typical in these scenarios to develop an application layer to route requests from the client to a specific business component for further processing. Implementing this in Windows Azure is not straightforward due to its built-in load balancer: the Windows Azure load balancer only exposes a single external endpoint that clients interact with. In order to filter requests by content, an internal LoadBalancer class is created, and therefore it is necessary to know the unique IP address of the instance that will be performing the work. IP addresses are discoverable via the Windows Azure API when marked as internal (configured through the web role's properties). While this tutorial may seem more of an exercise on WCF than on Windows Azure, it is important to understand how to perform inter-role communication without the use of queues.

Following is the class definition for the LoadBalancer to detect endpoints and recover from unexpected failures that occur. This class ensures requests are routed to live endpoints and not dead nodes. The LoadBalancer will need to account for endpoint failure and guarantee graceful recovery by refreshing its routing table and passing requests to other nodes capable of processing.

public class LoadBalancer
{
    public LoadBalancer()
    {
        if (IsRoutingTableOutOfDate())
        {
            RefreshRoutingTable();
        }
    }

    private bool IsRoutingTableOutOfDate()
    {
        //Retrieve all of the instances of the Worker Role
        var roleInstances = RoleEnvironment.Roles["WorkerName"].Instances;

        //Check current amount of instances and confirm sync with the
        //LoadBalancer's record
        if (roleInstances.Count() != CurrentRouters.Count())
        {
            return true;
        }

        foreach (RoleInstance roleInstance in roleInstances)
        {
            var endpoint = roleInstance.InstanceEndpoints["WorkerEndpoint"];
            var ipAddress = endpoint.IPEndpoint;
            if (!IsEndpointRegistered(ipAddress))
            {
                return true;
            }
        }
        return false;
    }

    private void RefreshRoutingTable()
    {
        var currentInstances = RoleEnvironment.Roles["WorkerName"].Instances;
        AddMissingEndpoints(currentInstances);
        RemoveStaleEndpoints(currentInstances);
    }

    private void AddMissingEndpoints(ReadOnlyCollection<RoleInstance> currentInstances)
    {
        foreach (var instance in currentInstances)
        {
            if (!IsEndpointRegistered(instance.InstanceEndpoints["WorkerEndpoint"].IPEndpoint))
            {
                //add to the collection of endpoints the LoadBalancer is aware of
            }
        }
    }

    private void RemoveStaleEndpoints(ReadOnlyCollection<RoleInstance> currentInstances)
    {
        //reverse-loop so we can remove from the collection as we iterate
        for (int index = CurrentRouters.Count() - 1; index >= 0; index--)
        {
            bool found = false;
            foreach (var instance in currentInstances)
            {
                //determine if IP address already exists; set found to true
            }
            if (!found)
            {
                //remove from collection of endpoints LoadBalancer is aware of
            }
        }
    }

    private bool IsEndpointRegistered(IPEndpoint ipEndpoint)
    {
        foreach (var routerEndpoint in CurrentRouters)
        {
            if (routerEndpoint.IpAddress == ipEndpoint.ToString())
            {
                return true;
            }
        }
        return false;
    }

    public string GetWorkerIPAddressForContent(string contentId)
    {
        //Custom logic to determine an IP Address from one of the CurrentRouters
        //that the load balancer is aware of
    }
}
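The article deliberately leaves GetWorkerIPAddressForContent as a stub. One plausible implementation – purely illustrative, and assuming CurrentRouters is an indexable list of endpoint records – is a stable hash over the routing table, so a given content id keeps routing to the same live worker:

public string GetWorkerIPAddressForContent(string contentId)
{
    // Map the content id onto one of the known endpoints; while the routing
    // table is unchanged, the same id always lands on the same node
    int bucket = Math.Abs(contentId.GetHashCode()) % CurrentRouters.Count();
    return CurrentRouters[bucket].IpAddress;
}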

Detecting and ensuring that the endpoints are active is half the battle. The other half is determining what partitioning scheme effectively works when filtering requests to the correct endpoint. You may decide to implement some way of consistently ensuring a client's requests are processed by the same back-end component, or route based on message priority.

The LoadBalancer is capable of auto-detecting endpoints, and the remaining work for the router service is WCF. A router, by definition, must be capable of accepting and forwarding any inbound request. The IRouterServiceContract will accept all requests with the base-level message class and handle and reply to all actions. Its interface is as follows:

[ServiceContract(Namespace = "http://www.namespace.com/ns/2/2009", Name = "RouterServiceContract")]
public partial interface IRouterServiceContract
{
    [OperationContract(Action = "*", ReplyAction = "*")]
    Message ProcessMessage(Message requestMessage);
}

The implementation of the IRouterServiceContract will use the MessageBuffer class to create a copy of the request message for further inspection (e.g. who the sender is, or determining if there is a priority associated with it). GetWorkerIPAddressForContent on the LoadBalancer is invoked and a target endpoint is requested. Once the router has an endpoint, a ChannelFactory is initialized to create a connection to the endpoint and the generic ProcessMessage method is invoked.

public partial class RouterService : IRouterServiceContract
{
    private readonly LoadBalancer loadBalancer;

    public RouterService()
    {
        loadBalancer = new LoadBalancer();
    }

    public Message ProcessMessage(Message requestMessage)
    {
        //Create a MessageBuffer to attain a copy of the request message for inspection
        string ipAddress = loadBalancer.GetWorkerIPAddressForContent("content");
        string serviceAddress = String.Format("http://{0}/Endpoint.svc/EndpointBasic", ipAddress);

        using (var factory = new ChannelFactory<IRouterServiceContract>(new BasicHttpBinding("binding")))
        {
            IRouterServiceContract proxy = factory.CreateChannel(new EndpointAddress(serviceAddress));
            using (proxy as IDisposable)
            {
                return proxy.ProcessMessage(requestMessageCopy);
            }
        }
    }
}

Ultimately the endpoint that the router forwards requests to will have a detailed service contract capable of completing the message processing. If one of the back-end components happens to shut down due to a hardware failure, the load balancer implementation will ensure that there is another endpoint available for processing. The approach outlined above also attempts to accommodate for any disaster-related scenarios so that an uninterrupted experience can be provided to the client.

BING MAPS TILE SERVERS USING AZURE BLOB STORAGE
By Steve Towler

Back in early 2009, I was assigned to a project where I was required to build an informational mapping solution for a customer's website. This mapping solution served custom tiles of the UK which were specially commissioned for the project. Although the map only covered the UK and we had restricted the zoom levels between 6 and 11, each set of tiles (and there were twelve sets) had around 4500 tiles and averaged 80 megabytes in size.

Less than 1 gigabyte of tiles may seem like a trivial figure in terms of the vast amounts of storage we have at our disposal nowadays. But what if things had been different? What if the customer wanted to cover Europe or even more zoom levels? What would be the bandwidth implications and the potential costs associated with huge demand for the map? With Windows Azure now "live", had the same project landed on my desk today I would be looking to serve the map tiles differently, as Blob storage is ideally suited to such a task. Storage is infinitely scalable, cheap, and its RESTful interface makes requesting the tiles clean and simple.

Setting up a Bing Maps tile server using Windows Azure Blob storage is surprisingly easy and you can have your own tile server up and running in a few small steps.

First things first, you need to crunch your tiles. This is the process whereby you take your custom map images and cut them up into tiles. There are plenty of tutorials on how to do this out on the web and Microsoft MapCruncher is a preferred tool for carrying this task out.

Now that you have your "crunched tiles" and you have saved them off to a directory on your local machine, the next step is to get your tiles up into the cloud. For ease I am going to use CloudXplorer, one of the many Windows Azure storage management tools available on the web. Using CloudXplorer, create a public container in blob storage called tiles, then copy all of your "crunched tiles" from your local machine up to your newly created container in blob storage.
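If you would rather script that copy than use a GUI tool, a short sketch using the SDK's StorageClient library might look like the following; the connection string and local folder are assumptions:

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient client = account.CreateCloudBlobClient();

// Create the public "tiles" container if it does not already exist
CloudBlobContainer container = client.GetContainerReference("tiles");
container.CreateIfNotExist();
container.SetPermissions(new BlobContainerPermissions
{
    PublicAccess = BlobContainerPublicAccessType.Blob
});

foreach (string file in Directory.GetFiles(@"C:\CrunchedTiles", "*.png"))
{
    CloudBlob blob = container.GetBlobReference(Path.GetFileName(file));
    blob.Properties.ContentType = "image/png"; // so consumers receive an image MIME type
    blob.UploadFile(file);
}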

Once complete, your tiles should be publicly available using a URL like http://myaccount.blob.core.windows.net/tiles/0313131311133.png (or http://127.0.0.1:10000/devstoreaccount1/tiles/0313131311133.png if you are using local development storage).

You will now be able to consume the tiles from your tile server using Bing Maps, ready to be used within your mapping application. MSDN includes a piece of code (select the JScript tab) which shows you how to add your own custom tile layer to a Bing map. You can tweak that code to suit your own requirements, but the important thing to remember is to change the VETileSourceSpecification path to point to your new tile server:

var tileSourceSpec = new VETileSourceSpecification("lidar", "http://myaccount.blob.core.windows.net/tiles/%4.png");

The project I mentioned at the very beginning of this article was a success, and a happy customer is actively informing their potential customers as to their presence in the UK. Had Windows Azure been out of CTP, how differently would the project have turned out? The software consuming the tiles would have been the same, but the infrastructure serving the tiles would most certainly have been in the cloud.

AZURE DRIVE
By Neil Mackenzie

Azure Drive is a feature of Windows Azure providing access to data contained in an NTFS-formatted virtual hard disk (VHD) persisted as a page blob in Azure Storage. A single Azure instance can mount a page blob for read/write access as an Azure Drive; the Azure Storage blob lease facility is used to prevent more than one instance at a time mounting the page blob as an Azure Drive. Similarly, multiple Azure instances can mount a snapshot of a page blob for read-only access as an Azure Drive. It is not possible to mount an Azure Drive in an application not resident in the Azure cloud or development fabric. However, the page blob can be downloaded and attached as a VHD in a local system.

GUEST OS
Azure Drive requires that the osVersion attribute in the Service Configuration file be set to WA-GUEST-OS-1.1_201001-01 or a later version. For example:

<ServiceConfiguration serviceName="CloudDriveExample" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osVersion="WA-GUEST-OS-1.1_201001-01">

VHD
The VHD for an Azure Drive must be a fixed hard disk image formatted as a single NTFS volume. It must be between 16MB and 1TB in size. A VHD is a single file comprising a data portion followed by a 512 byte footer. For example, a nominally 16MB VHD occupies 0x1000200 bytes comprising 0x1000000 bytes of data and 0x200 footer bytes. The Disk Management component of the Windows Server Manager can be used to create and format a VHD.

An appropriately created and formatted VHD can be uploaded into a page blob, from where it can be mounted as an Azure Drive by an instance of an Azure Service. When uploading a VHD it is important to remember to upload the footer. However, since pages of a page blob are initialized to 0, it is not necessary to upload pages in which all the bytes are 0. This could save a significant amount of time when uploading a large VHD.

The Azure SDK provides three classes in the Microsoft.WindowsAzure.StorageClient namespace to support Azure Drives:
• CloudDrive
• CloudDriveException
• CloudStorageAccountCloudDriveExtensions

CloudDrive is a small class providing the core Azure Drive functionality. CloudDriveException allows Azure Drive errors to be caught. CloudStorageAccountCloudDriveExtensions, similar to the CloudStorageAccountStorageClientExtensions class, provides an extension method to CloudStorageAccount allowing a CloudDrive object to be constructed.

CLOUDDRIVE
A CloudDrive object can be created using either a constructor or the CreateCloudDrive extension method to CloudStorageAccount. For example, the following creates a CloudDrive object for the VHD contained in the page blob resource identified by the URI in cloudDriveUri:

CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
CloudDrive cloudDrive = cloudStorageAccount.CreateCloudDrive(cloudDriveUri.AbsoluteUri);

Note that this creates an in-memory representation of the Azure Drive which still needs to be mounted before it can be used. Create() physically creates a VHD of the specified size and stores it as a page blob. Note that Microsoft charges only for initialized pages of a page blob, so there should only be a minimal charge for an empty VHD page blob even when the VHD is nominally of a large size. The Delete() method can be used to delete the VHD page blob from Azure Storage. Snapshot() makes a snapshot of the VHD page blob containing the VHD, while CopyTo() makes a physical copy of it at the specified URL.

Before a VHD page blob can be mounted it is necessary to allocate some read cache space in the local storage of the instance. InitializeCache() must be invoked to initialize the cache with a specific size and location. This is required even if caching is not going to be used. The following shows the Azure Drive cache being initialized to the maximum size of the local storage named CloudDrives:

public static void InitializeCache()
{
    LocalResource localCache = RoleEnvironment.GetLocalResource("CloudDrives");
    Char[] backSlash = { '\\' };
    String localCachePath = localCache.RootPath.TrimEnd(backSlash);
    CloudDrive.InitializeCache(localCachePath, localCache.MaximumSizeInMegabytes);
}

The tweak in which trailing back slashes are removed from the path to the cache is a workaround for a bug in the Storage Client library.

A VHD page blob must be mounted on an Azure instance to make its contents accessible. A VHD page blob can be mounted on only one instance at a time; the Azure Storage Service uses the page blob leasing functionality to guarantee exclusive access to the VHD page blob. However, since a VHD snapshot is read-only, it can be mounted as a read-only drive on an unlimited number of instances simultaneously. A snapshot therefore provides a convenient way to share large amounts of information among several instances. For example, one instance could have write access to a VHD page blob while other instances have read-only access to snapshots of it – including snapshots made periodically to ensure the instances have up-to-date data.

An instance mounts a writeable Azure Drive by invoking Mount() on a VHD page blob. An instance mounts a read-only Azure Drive by invoking Mount() on a VHD snapshot. An instance invokes the Unmount() method to release the Azure Drive and, for VHD page blobs, allow other instances to mount the blob for write access.

The cacheSize parameter to Mount() specifies how much of the cache is dedicated to this Azure Drive; it should be set to 0 if caching is not desired for the drive. Different Azure Drives mounted on the same instance can specify different cache sizes, and care must be taken that the total cache size allocated for the drives does not exceed the amount of cache available in local storage. The options parameter takes a DriveMountOptions flag enumeration that can be used to force the mounting of a drive – for example, when an instance has crashed while holding the lease to a VHD page blob – or to fix the file system. Mount() returns the drive letter – for example, "d:" – which can be used to access any path on the drive.

The following example shows an Azure Drive being mounted from a VHD page blob specified by cloudDriveUri, before being used and then unmounted:

public void WriteToDrive(Uri cloudDriveUri)
{
    CloudStorageAccount cloudStorageAccount =
        CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    CloudDrive cloudDrive = cloudStorageAccount.CreateCloudDrive(cloudDriveUri.AbsoluteUri);
    String driveLetter = cloudDrive.Mount(CacheSizeInMegabytes, DriveMountOptions.None);

    String path = String.Format("{0}\\Pippo.txt", driveLetter);
    FileStream fileStream = new FileStream(path, FileMode.OpenOrCreate);
    StreamWriter streamWriter = new StreamWriter(fileStream);
    streamWriter.Write("that you have but slumbered here");
    streamWriter.Close();

    cloudDrive.Unmount();
}

GetMountedDrives() provides access to a list of drive letters for all Azure Drives mounted in the instance. The DriveInfo class can be used to retrieve information about a mounted Azure Drive. The entry point to this information is the static method DriveInfo.GetDrives(), which returns an array of DriveInfo objects representing all mounted drives on the instance.

Consequently. The workaround is to attach the VHD to an empty folder in the well-known directory from where it can be mounted exactly as it would be in the cloud.g. However. drivecontainername) of a well-known directory: %LOCALAPPDATA%\dftmp\wadd\devstoreaccount1\ The full path to the folder that is the subst representation of the Azure Drive is: %LOCALAPPDATA%\dftmp\wadd\devstoreaccount1\drivecontainername\drivename Invoking CloudDrive.g. It is important to remember that Azure Drives are available only inside the Azure Fabric – cloud or development – and that they are not mountable in an ordinary Windows application. The Azure Drive API can then be used to mount the Azure Drive as if it were backed by a VHD page blob named. drivecontainername) of the well known directory so that the VHD can be mounted precisely as it would be in the cloud. VHD page blobs and VHD snapshots do not appear in blob listings in Development Storage because they are not stored as blobs. a VHD file uploaded to Development Storage cannot be mounted as an Azure Drive. drivename. 54 . Note that subst can be invoked in a command window to view the list of currently mounted Azure Drives. There is no need to invoke the Create() method.Snapshot() causes a folder with the name of the VHD page blob or VHD snapshot to be created in this directory. drivename) in a subdirectory (e.Create() or CloudDrive.Force is not implemented in the Development Environment. CloudDrive.The Windows Azure Platform: Articles from the Trenches The Azure Drive simulation does not mount Azure Drives from VHD page blobs but through the use of subst against a folder (e. located in a container named drivecontainername.g. The Disk Management component of the Windows Server Manager is used to attach a VHD to an empty folder (e.g. drivename) in a subfolder (e. Azure Drives are mounted and unmounted in the Development Environment just as they are in the cloud. DriveMountOptions. There is no entry in Development Storage for this blob.Delete() can be used to delete the VHD page blob or VHD snapshot. although visible in the Azure fabric. Note that.

AZURE TABLE SERVICE AS A NOSQL DATABASE
By Mark Rendle

The Windows Azure SDK is one of the things which sets the Azure platform above other "cloud" platforms. The Table Service SDK, in particular, wraps the massively scalable storage service in an API which is instantly familiar to anyone who has used LINQ-to-SQL or the Entity Framework: CLR property names are used as column names, and class names are (by default) used as table names. But this simplicity enforces the concept of schema over a data store which is innately schema-less. When you create an Azure Table, you do not specify columns. The column names are part of the entities (rows) which are stored in the table, and they can be different for different entities within a single table. This fact opens up a world of interesting possibilities when it comes to planning and designing your persistence layer.

MASTER-DETAIL STRUCTURES
The Table Service does not support relational features, such as primary/foreign keys, joins in queries, or transactions to coordinate modifications across multiple tables. But Table Service entities with the same partition key are held together in the store, can be retrieved very quickly with a single query, and can be modified together inside Entity Group Transactions15. Because rows can have different structures, you can actually store the data from two (or more) different types of object within the same partition key in the same table.

Let's say, for example, you are storing Invoices, with an arbitrary number of line items for each invoice. Using the invoice number for the Partition Key, an empty string value for the Row Key of the Invoice entity, and then sequential numbering for the Row Keys of the LineItem entities, this "master-detail" data can be created in a single transaction, and retrieved incredibly quickly in a single query.

[Figure 1: Invoice and Line Item entities stored within a single Azure Table]

DYNAMIC SCHEMA
It's very common these days for database applications to allow the end user to extend the out-of-the-box data model with their own fields. In a relational database, this is commonly achieved with a complicated system of metadata tables, or a table with spare columns to add data into, and performance when querying against these custom fields is accordingly horrible.

In Azure, where you have complete control over column names at the per-entity level, these fields can be added to each entity just by specifying the extra column names and values in the Insert operation. And then subsequently, querying against these columns can be done in exactly the same way as against the columns that were part of the original application. This is possible because there is absolutely no limit to the number of different column names that can be used within a table. Be aware, though, that this approach requires you to get down and dirty with the REST API. Also be aware that there is a hard limit of 255 properties per entity, including the Partition Key, Row Key and Timestamp system columns.

COLUMN NAMES AS DATA
There are times when you want to store several thousand rows of related data – things like activity logs, or relationships between users in a social-networking database. Azure Tables can handle this volume of data very easily, but because a query operation can only return 1,000 rows per result set, retrieving them all could take several round-trips to the server, increasing the time of the operation and the cost of the transactions. If the data can be stored as a single string or binary blob, you can group 250 "rows" together in a single entity, using the column name as a makeshift sub-key, and retrieve them in one quick operation.

The best way to achieve this is to use an empty Row Key to identify the "active" entity, with an extra column, named "UniqueId", holding a timestamp or Guid value. By running MERGE updates against this entity, you can add new "rows". Subsequently, when the MERGE operation fails with a "too many values" error, you simply create a copy of that entity (Row Keys cannot be updated) with the UniqueId value as the Row Key, and reset the active entity to clear all the values and set a new UniqueId: this prevents two simultaneous operations from creating duplicate copies of the entity.

[Figure 2: Using column names as data to reduce number of rows used for high-volume tables]

TABLE NAMES AS DATA
Another thing which is not limited is the number of tables you can create within an account or project. And because you don't have to specify the schema of each table, creating hundreds of them is not prohibitive. One obvious use for this is where you need short-lived, high-volume tables, perhaps to contain analytics data which gets archived and cleared down after a couple of weeks (to cut down on storage costs). Running hundreds of DELETE operations against a single table comes with scalability issues and incurs a high transaction cost. But if you create multiple tables with the date as part of the name, clearing down a day's data is just a matter of dropping the table.

[Figure 3: Using table names as part of data schema]

SUMMARY
As you can see, the scope for creative schema design in Azure Table storage is massive. This is one of the best things about the NoSQL family of databases: many of the problems with which we have traditionally struggled when using rigidly-structured relational databases have much simpler, more direct solutions in a less-structured paradigm. Whilst the official Microsoft Azure SDK is a great tool for modelling a lot of domains, and provides a very usable interface to the powerful features of the Azure storage stack with its familiar LINQ DataContexts and Query providers, I hope this short article has highlighted a few of the things you can achieve by digging deeper into the SDK, or ignoring it entirely and learning to use the REST API to fully exploit the NoSQL nature of Azure Table Storage.

15 http://msdn.microsoft.com/en-us/library/dd894038.aspx

QUERIES AND AZURE TABLES
By Neil Mackenzie

CREATEQUERY<T>()
There are several classes involved in querying Azure Tables using the Azure Storage Client library. However, a single method is central to the querying process and that is CreateQuery<T>() in the DataServiceContext class. This method is used implicitly or explicitly in every query against Azure Tables using the Storage Client library. CreateQuery<T>() is declared:

public DataServiceQuery<T> CreateQuery<T>(String entitySetName);

The CreateQuery<T>() return type is DataServiceQuery, which implements both the IQueryable<T> and IEnumerable<T> interfaces. LINQ supports the decoration of a query by operators filtering the results of the query. Although a full LINQ implementation has many decoration operators, only the following are implemented for the Storage Client library:
• Where
• Take
• First
• FirstOrDefault

These are implemented as extension methods on the DataServiceQuery<T> class. When a query is executed, these decoration operators are translated into the $filter and $top operators used in the Azure Table Service REST API query string submitted to the Azure Table Service.

The following example demonstrates a trivial use of CreateQuery<T>() and the Take() operator to retrieve ten records from a Songs table:

protected void SimpleQuery(CloudTableClient cloudTableClient)
{
    TableServiceContext tableServiceContext = cloudTableClient.GetDataServiceContext();
    tableServiceContext.ResolveType = (unused) => typeof(Song);
    IQueryable<Song> songs = (from entity in tableServiceContext.CreateQuery<Song>("Songs")
                              select entity).Take(10);
    List<Song> songsList = songs.ToList<Song>();
}

As with other LINQ implementations, the query is not submitted to the Azure Table Service until the query results are enumerated. Note the use of ResolveType to work around a performance issue when the table name differs from the class name.

The MSDN Azure documentation has a page showing several examples of LINQ queries demonstrating filtering on properties with the various datatypes – String, numbers, Boolean and DateTime – so they will not be repeated here. Instead, this article focuses on the various methods provided to invoke queries.

CONTEXTS
The SimpleQuery example used the TableServiceContext.CreateQuery() method as follows:

tableServiceContext.CreateQuery<Song>("Songs")

This syntax can be simplified by deriving a class from TableServiceContext as follows:

public class SongContext : TableServiceContext
{
    internal static String TableName = "Songs";

    public SongContext(String baseAddress, StorageCredentials credentials)
        : base(baseAddress, credentials) { }

    public IQueryable<Song> Songs
    {
        get
        {
            return this.CreateQuery<Song>(TableName);
        }
    }

    public void AddSong(Song song)
    {
        this.AddObject(TableName, song);
        this.SaveChanges();
    }
}

This class is specific to the Song model class representing the entities in the Azure table named Songs. The Songs property can be used as the core of any LINQ query instead of the tableServiceContext.CreateQuery<Song>("Songs") used previously. Doing this simplifies and improves the readability of the LINQ query. For example, the LINQ query:

from entity in tableServiceContext.CreateQuery<Song>("Songs")
select entity

can be rewritten as:

from entity in songContext.Songs
select entity

where songContext is a SongContext object.

QUERYING ON PARTITIONKEY AND ROWKEY
The primary key for an entity in an Azure table comprises PartitionKey and RowKey. The most performant query in the Azure Table Service is one specifying both PartitionKey and RowKey, returning a single entity.

When handling a query specifying PartitionKey and not RowKey, the Azure Table Service scans every entity in the partition, while for a query specifying RowKey and not PartitionKey it must query each partition separately.

CONTINUATION
A query specifying both PartitionKey and RowKey is the only query guaranteed to return its entire result set in a single response. A further limit on query results is that no more than 1,000 results are ever returned in response to a single request – regardless of how many entities satisfy the query filter. The Azure Table Service inserts a continuation token in the response header to indicate there are additional results which can be retrieved through an additional request parameterized by the continuation token.

DATASERVICEQUERY
DataServiceQuery is the WCF Data Services class representing a query to the Azure Table Service. DataServiceQuery provides the following methods to send queries to the Azure Table Service:

public IEnumerable<TElement> Execute();
public IAsyncResult BeginExecute(AsyncCallback callback, Object state);
public IEnumerable<TElement> EndExecute(IAsyncResult asyncResult);

Execute() is a synchronous method which sends the query to the Azure Table Service and blocks until the query returns. In practice, DataServiceQuery.Execute() may not retrieve all the entities requested if there are more than 1,000 of them – or, indeed, if there is a need for continuation tokens, which can happen on any query not specifying both PartitionKey and RowKey.

BeginExecute() and EndExecute() are a matched pair of methods used to implement the AsyncCallback delegate model for asynchronously accessing the Azure Table Service. The asynchronous model is implemented by invoking BeginExecute(), passing it the name of a static callback delegate and, optionally, an object providing some invocation context to the callback delegate; this object must include the DataServiceQuery object on which BeginExecute() was invoked. BeginExecute() initiates query submission and sets up an IO Completion Port to wait for the query to complete. When it completes, the callback delegate is invoked on a worker thread. EndExecute() must be invoked in the callback delegate to access the results; furthermore, a failure to invoke EndExecute() could lead to resource leakage.

The following is an example of Execute():

protected void UsingDataServiceQueryExecute(CloudTableClient cloudTableClient)
{
    TableServiceContext tableServiceContext = cloudTableClient.GetDataServiceContext();
    tableServiceContext.ResolveType = (unused) => typeof(Song);

    DataServiceQuery<Song> dataServiceQuery =
        (from entity in tableServiceContext.CreateQuery<Song>("Songs")
         select entity).Take(10) as DataServiceQuery<Song>;
    IEnumerable<Song> songs = dataServiceQuery.Execute();

    foreach (Song song in songs)
    {
        String singer = song.Singer;
    }
}

Note that the query must be explicitly cast from an IQueryable<Song> to a DataServiceQuery<Song>.

EndExecute() returns an object of type QueryOperationResponse<T>, which implements an IEnumerable<T> interface. QueryOperationResponse<T> exposes information about the query request and response, including the HTTP status of the response.

Note that the version of WCF Data Services currently used in Azure does not support server-side paging, so a DataServiceQuery is not able to process continuation tokens. The next version does, but is not yet released in the Azure environment.

CLOUDTABLEQUERY
The CloudTableQuery<T> class supports continuation tokens. A CloudTableQuery<T> object is created using one of the two constructors:

public CloudTableQuery<TElement>(DataServiceQuery<TElement> query);
public CloudTableQuery<TElement>(DataServiceQuery<TElement> query, RetryPolicy policy);

or the AsTableServiceQuery() extension method of the TableServiceExtensionMethods class:

public static CloudTableQuery<TElement> AsTableServiceQuery<TElement>(IQueryable<TElement> query)

The CloudTableQuery<T> class has the following synchronous methods to handle query submission to the Azure Table Service:

public IEnumerable<TElement> Execute(ResultContinuation continuationToken);
public IEnumerable<TElement> Execute();

Execute() handles continuation automatically and continues to submit queries to the Azure Table Service until all the results have been returned. Execute(ResultContinuation) starts the request with a previously acquired ResultContinuation object encapsulating a continuation token, and continues the query until all results have been retrieved. Note that care should be taken when using either form of Execute(), since large amounts of data might be returned when the query is enumerated.

The following example shows Execute() retrieving all the records from a table:

protected void UsingCloudTableQueryExecute(CloudTableClient cloudTableClient)
{
    TableServiceContext tableServiceContext = cloudTableClient.GetDataServiceContext();
    tableServiceContext.ResolveType = (unused) => typeof(Song);

    CloudTableQuery<Song> cloudTableQuery =
        (from entity in tableServiceContext.CreateQuery<Song>("Songs")
         select entity).AsTableServiceQuery<Song>();

    IEnumerable<Song> songs = cloudTableQuery.Execute();
    foreach (Song song in songs)
    {
        String singer = song.Singer;
    }
}

The CloudTableQuery<T> class has an equivalent set of asynchronous methods declared:

public IAsyncResult BeginExecuteSegmented(ResultContinuation continuationToken, AsyncCallback callback, Object state);
public IAsyncResult BeginExecuteSegmented(AsyncCallback callback, Object state);
public ResultSegment<TElement> EndExecuteSegmented(IAsyncResult asyncResult);

These follow the method-naming style used elsewhere in the Storage Client library, whereby the suffix Segmented indicates that the methods bring data back in batches – in this case from one continuation token to the next. As with the synchronous Execute() methods, the difference between the two BeginExecuteSegmented() methods is that one starts the retrieval at the beginning of the query result set, while the other starts at the entity indicated by the continuation token in the ResultContinuation parameter. This provides a convenient method of paging through results in batches of a size specified by the Take() query decoration operator, or the 1,000 records that is the maximum number of records retrievable in a single request.

The following is an example of BeginExecuteSegmented() and EndExecuteSegmented() paging through the result set of a query in pages of 10 entities at a time:

protected void QuerySongsExecuteSegmentedAsync(CloudTableClient cloudTableClient)
{
    TableServiceContext tableServiceContext = cloudTableClient.GetDataServiceContext();
    tableServiceContext.ResolveType = (unused) => typeof(Song);

    CloudTableQuery<Song> cloudTableQuery =
        (from entity in tableServiceContext.CreateQuery<Song>("Songs").Take(10)
         select entity).AsTableServiceQuery<Song>();

    IAsyncResult iAsyncResult = cloudTableQuery.BeginExecuteSegmented(
        BeginExecuteSegmentedIsDone, cloudTableQuery);
}

static void BeginExecuteSegmentedIsDone(IAsyncResult result)
{
    CloudTableQuery<Song> cloudTableQuery = result.AsyncState as CloudTableQuery<Song>;
    ResultSegment<Song> resultSegment = cloudTableQuery.EndExecuteSegmented(result);
    List<Song> listSongs = resultSegment.Results.ToList<Song>();

    if (resultSegment.HasMoreResults)
    {
        IAsyncResult iAsyncResult = cloudTableQuery.BeginExecuteSegmented(
            resultSegment.ContinuationToken, BeginExecuteSegmentedIsDone, cloudTableQuery);
    }
}

Note that exception handling is even more important in callback delegates than it is in normal code, because they are not invoked from user code and errors cannot be caught outside the method. Consequently, all errors must be caught and handled inside the callback delegate.

It is also possible to iterate through subsequent results using the GetNext() method of the ResultSegment<T> class, rather than using BeginExecuteSegmented() with a ResultContinuation parameter.

It is worth noting the difference made by replacing the cloudTableQuery in the above example with:

CloudTableQuery<Song> cloudTableQuery =
    (from entity in tableServiceContext.CreateQuery<Song>("Songs")
     select entity).Take(10).AsTableServiceQuery<Song>();

Here, the Take(10) is outside the LINQ query definition. This query results in the retrieval of only 10 records, and does not page through the table in pages of 10 entities as in the previous example.
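Since GetNext() is only mentioned in passing above, here is a minimal sketch of driving the paging that way (the method name is illustrative; EndExecuteSegmented() blocks until the first segment arrives, after which GetNext() retrieves each subsequent segment synchronously):

protected void QuerySongsWithGetNext(CloudTableClient cloudTableClient)
{
    TableServiceContext tableServiceContext = cloudTableClient.GetDataServiceContext();
    tableServiceContext.ResolveType = (unused) => typeof(Song);

    CloudTableQuery<Song> cloudTableQuery =
        (from entity in tableServiceContext.CreateQuery<Song>("Songs").Take(10)
         select entity).AsTableServiceQuery<Song>();

    // Retrieve the first segment, blocking in EndExecuteSegmented()
    IAsyncResult iAsyncResult = cloudTableQuery.BeginExecuteSegmented(null, null);
    ResultSegment<Song> resultSegment = cloudTableQuery.EndExecuteSegmented(iAsyncResult);
    List<Song> listSongs = resultSegment.Results.ToList<Song>();

    // Walk the remaining segments synchronously
    while (resultSegment.HasMoreResults)
    {
        resultSegment = resultSegment.GetNext();
        listSongs.AddRange(resultSegment.Results);
    }
}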

TRICKS FOR STORING TIME AND DATE FIELDS IN TABLE STORAGE
By Saksham Gautam

Windows Azure Table Storage supports storing enormous amounts of data in massively scalable tables in the cloud. The tables can store terabytes upon terabytes of data and billions of entities. In order to attain this amount of scalability, Windows Azure Table storage employs a scale-out model to distribute entities across multiple storage nodes. Each application has to decide on the partition scheme by choosing the partition-keys for the entities. Moreover, each entity within a partition is uniquely identified by its row-key. Entity keys, i.e. PartitionKey and RowKey, are strings of up to 1KB in size. As they are strings, all comparisons are purely lexicographic, i.e. "100" < "20" < "9".

Let us assume that we are making a location based service application that lets mobile users send periodic position reports to a Windows Azure Worker Role, which in turn logs the reports to table storage. A 'PositionReport' entity could look something like that shown in Table 1.

Property        DataType
PartitionKey    String
RowKey          String
DeviceId        String
ReportedOn      DateTime
Latitude        Double
Longitude       Double

Table 1: Properties of a PositionReport entity

Also, suppose that the majority of queries would be something like "Get the 100 most recent position reports of device X", so that they could be displayed on a map. At first glance, making a key from time might seem very straightforward. "Just use the 'yyyyMMddHHmmssfffff' pattern for the DateTime", you might say. Using fixed lengths for the different components of the time would indeed ensure that lexical comparisons are equivalent to DateTime comparisons. However, if we used this ascending order model, the entities would be arranged in ascending order within the table, and as many real life applications are interested in fetching the most recent entities first, the queries are inadvertently less efficient using this simple method: to get the 100 most recent reports, our query would first have to fetch all the entities from the table (or partition), then take the last 100 entities. There is a way to fetch just the 100 entities that you need, and the clue lies in the ReportedOn property. In this section, we discuss how to use the two types of entity keys to simulate a descending order based on timestamps, so that queries based on dates are more efficient.

Let's examine this more closely using an example. Let's convert the time the device reported into a 'reverse timestamp' by simply doing the following:

(DateTime.MaxValue.Ticks - reportedOn.Ticks).ToString().PadLeft(19)

By reversing the number of 'ticks' in the time and then making it of fixed length, we create a mechanism for assigning newer entities keys that are lexically less than those of older entities. We could use this reverse timestamp as the RowKey for our entity. With that, we would not need an additional property for storing the time the device sent the position report, because we could easily compute it using the RowKey as shown below. As a result, we save some bandwidth as well.

DateTime reportedOn = new DateTime(DateTime.MaxValue.Ticks - Int64.Parse(RowKey));

The prime candidate for the partition-key would be the ID of the device, so that all entities for a single device go into one partition. However, if a device sends many position reports over time, our partition might grow enormous. Choosing a partition key is an opportunity to load balance the entities across different servers, so we can definitely do better than choosing a fixed partition key. We could use a similar technique to the one we used for our RowKey. But first we have to decide whether the device ID is significant in the queries that we want to perform, or whether the ReportedOn property is more significant. In other words, are most of your queries something like "Give me position reports for device X", or are they more like "Give me devices that have reported within a certain interval"? Based on that, we determine whether our partition key would have the timestamp or the device ID as the prefix of the partition key.

Let us assume that we decided to use the device ID as the prefix. Without loss of generality, let us assume that we could keep all position reports for a device within a month in one partition. Then, constructing the reverse timestamp for our partition key is easy. We could do the following, using the first day of the month:

DateTime temp = new DateTime(reportedOn.Year, reportedOn.Month, 1);

We then create an identifier by concatenating the ID of the device with the reversed timestamp based on temp, and use it as the PartitionKey. Note that creating an entity key by concatenation in this way only works if the device id is of fixed length. Once we have our partition key, we could easily recalculate the device ID:

String deviceId = PartitionKey.Substring(0, PartitionKey.Length - 19);

The PositionReport entity now looks like the one shown in Table 2.

Property           DataType
PartitionKey       String
RowKey             String
Latitude           Double
Longitude          Double
GetReportedOn()    Returns DateTime
GetDeviceId()      Returns String

Table 2: Modified PositionReport entity
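The scheme can be wrapped up in the entity class itself. The following is a minimal sketch under the assumptions above (fixed-length device ids, month-sized partitions); the constructor and helper bodies are illustrative rather than taken from the original application:

public class PositionReport : TableServiceEntity
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }

    public PositionReport() { }

    public PositionReport(string deviceId, DateTime reportedOn)
    {
        // PartitionKey = device id + reverse timestamp of the first day of the month
        DateTime firstOfMonth = new DateTime(reportedOn.Year, reportedOn.Month, 1);
        PartitionKey = deviceId +
            (DateTime.MaxValue.Ticks - firstOfMonth.Ticks).ToString().PadLeft(19);

        // RowKey = reverse timestamp of the report time itself
        RowKey = (DateTime.MaxValue.Ticks - reportedOn.Ticks).ToString().PadLeft(19);
    }

    public string GetDeviceId()
    {
        return PartitionKey.Substring(0, PartitionKey.Length - 19);
    }

    public DateTime GetReportedOn()
    {
        return new DateTime(DateTime.MaxValue.Ticks - Int64.Parse(RowKey));
    }
}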

As for the queries based on time, we can construct them so that we include at least one of the entity keys and preferably (always) the partition key, as illustrated in the following examples. A sketch of the first of these queries is shown after the list.

1. 100 most recent entities for the device within this month
   a. Compute the combined partition key based on the device id and the reversed timestamp using the first day of the current month.
   b. Query the table using the greater than (>) operator on the row key and the equal to (=) operator on the partition key computed in 1.a.

2. 100 most recent entities for the device
   a. Note that all partition keys for entities belonging to a particular device are created by appending a suffix to the device ID. Hence, query the table storage using the greater than (>) operator on the partition key.
   b. Since a device may not have 100 position reports in the current partition, the entities returned by the query may contain entities corresponding to other devices. You should remove them in the data access layer before you use the result in the application.
   c. Note that we did not use the (>) and (<) operators to filter out those results at the table storage itself, but instead chose to filter the results in our code. This is because, as of the time of this writing, if a range query is based on partition keys, or if it contains an 'AND' or 'OR' keyword, it results in a full table scan.

3. 100 most recent entities for the device within a specific period
   a. Construct the combined partition keys for the dates that define the interval, as in 1.a.
   b. Construct the row keys by using reverse timestamps for the dates.
   c. As mentioned in 2.c, it is not efficient to use all the keys in a single query. Hence, create two queries, each using one partition key and one row key.
   d. Perform the two queries and combine (union) the entities before using them in the application.
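As a sketch of the first query above (assuming the PositionReport class from the previous section and a table named "PositionReports"; because the row keys are reverse timestamps, the first 100 entities in the partition are the 100 most recent reports):

public List<PositionReport> GetRecentReportsThisMonth(
    TableServiceContext context, string deviceId)
{
    DateTime now = DateTime.UtcNow;
    DateTime firstOfMonth = new DateTime(now.Year, now.Month, 1);
    string partitionKey = deviceId +
        (DateTime.MaxValue.Ticks - firstOfMonth.Ticks).ToString().PadLeft(19);

    // Reverse timestamps sort newest-first, so Take(100) returns the
    // 100 most recent reports in the partition
    return (from entity in context.CreateQuery<PositionReport>("PositionReports")
            where entity.PartitionKey == partitionKey
            select entity).Take(100)
           .AsTableServiceQuery<PositionReport>()
           .Execute()
           .ToList();
}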

Using reverse ticks for entity keys should be sufficient in most cases. However, there might be scenarios where a lot of data is generated, and when you compute the entity keys using the method described above, more than one entity might try to use the same keys. Take the example of an application in which multiple processes add 'Event' entities to an 'Events' table. An 'Event' entity could look something like that shown in Table 3.

Property            DataType
PartitionKey        String
RowKey              String
EventType           Int
Description         String
GetEventSource()    Returns String
GetEventTime()      Returns DateTime

Table 3: Structure of an Event entity

The partition key is based on the event source, which could be the name of the process that generated the event, and the row key is based on the event time. If there are multiple processes with the same name that write into the table at the exact same time, we would run into problems, because both partition keys and row keys have to be unique. The solution to avoid such entity key collisions is to append a globally unique identifier (GUID) to the end of the row key. Hence, the row keys would be computed like so:

String revTicks = (DateTime.MaxValue.Ticks - eventTime.Ticks).ToString().PadLeft(19);
RowKey = revTicks + Guid.NewGuid().ToString();

One might ask whether this technique can be used on a table that is already using only reverse timestamps as row keys. The short answer is yes! As we discussed earlier, comparisons on entity keys are purely lexicographical. If there are three strings A, B and C, and if lexically A < B < C, adding any suffix to either one or all of them does not affect how they are ordered afterwards. However, care has to be taken while querying. Since the row keys don't correspond directly to ticks anymore, it is not correct to use the <= and >= operators in queries on the row keys alone. To get all events that occurred at time = T, one has to convert T and its neighbouring tick into row keys and use them in the where condition of your query, as sketched below.
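A sketch of such a query, assuming an Event entity class mirroring Table 3 and an "Events" table (note that, because the timestamps are reversed, the value adjacent to revTicks(T) supplies the lexical upper bound for all the GUID-suffixed keys of time T):

public IEnumerable<Event> GetEventsAt(
    TableServiceContext context, string eventSource, DateTime t)
{
    // All row keys for time T start with revTicks(T); revTicks(T) + 1 is
    // the exclusive upper bound for every GUID-suffixed key of that tick
    string lowerRowKey = (DateTime.MaxValue.Ticks - t.Ticks).ToString().PadLeft(19);
    string upperRowKey = (DateTime.MaxValue.Ticks - t.Ticks + 1).ToString().PadLeft(19);

    return (from entity in context.CreateQuery<Event>("Events")
            where entity.PartitionKey == eventSource
               && entity.RowKey.CompareTo(lowerRowKey) >= 0
               && entity.RowKey.CompareTo(upperRowKey) < 0
            select entity).AsTableServiceQuery<Event>().Execute();
}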

USING WORKER ROLES TO IMPLEMENT A DISTRIBUTED CACHE
By Josh Tucholski

One of the most sought after goals of an aspiring application, viral growth, is also one of the quickest routes to failure if the application receives it unexpectedly. Windows Azure addresses the problem of viral growth by supporting a scalable infrastructure that can quickly allocate additional instances of a service on an as-needed basis. However, as traffic and use of an application grow, it is inevitable that its database will suffer without any type of caching layer in place. In smaller environments, it is sufficient to use the built-in cache that a server provides for efficient data retrieval. This is not the case in Windows Azure: Windows Azure provides a transparent load balancer, thereby making the placement of data into specific server caches impractical unless one can guarantee each user continually communicates with the same web server. Armed with a distributed cache and a well-built data access tier, one can address this issue and ensure that all clients that issue similar data requests only go to the database once and use the cached version going forward (pending updates).

CONFIGURING THE CACHE

One of the most popular distributed caching implementations is memcached, used by YouTube, Twitter, Facebook, and Wikipedia. Memcached can run from the command line as an executable, making it a great use case for a Windows Azure worker role. When memcached is active, all of its data is stored in memory, which makes increasing the size of the cache as easy as increasing the worker instance count. The following code snippet demonstrates how a worker role initializes the memcached process and defines the required parameters identifying its unique instance IP address and the maximum size of the cache in MB. Note: you will need to include the memcached executable in your deployment to start the process within the Windows Azure fabric.

//Retrieve the Endpoint information for the current role instance
IPEndPoint endpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["EndpointName"].IPEndpoint;
string cacheSize = RoleEnvironment.GetConfigurationSettingValue(CacheSizeKey);

//memcached arguments
//m = size of the cache in MB
//l = IP address of the cache server
//p = port address of the cache server
string arguments = "-m " + cacheSize + " -l " + endpoint.Address + " -p " + endpoint.Port;

ProcessStartInfo startInfo = new ProcessStartInfo()
{
    CreateNoWindow = true,
    UseShellExecute = false,
    FileName = "memcached.exe",
    Arguments = arguments
};

//The worker role's only purpose is to execute the memcached process and run until shutdown
using (Process exeProcess = Process.Start(startInfo))
{
    exeProcess.WaitForExit();
}

USING THE DISTRIBUTED CACHE

Once the distributed cache instance is active, a client library, such as Enyim, is used to access the contents of the cache. The most challenging part of integrating Enyim with the distributed cache is identifying all of the cache endpoints available for access. Fortunately, when using the Windows Azure API, internal endpoints are discoverable by worker role name. Hooks can be added to determine if at any point in time the client is out of sync with the actual number of worker role instances, causing an automatic refresh. The Web Role found in the Windows Azure Memcached Solution Accelerator has a well written implementation of the Enyim client demonstrating how to detect its configuration. The following code snippet shows how simple it is to retrieve and store data in the cache once the configuration interface is implemented:

//See the Windows Azure Memcached Solution Accelerator for instructions on implementing
//the AzureMemcachedClientConfiguration class
private static AzureMemcachedClientConfiguration _configuration;
//MemcachedClient is provided through Enyim
private static MemcachedClient _client;

private static MemcachedClient Client
{
    get
    {
        EnsureClientUpToDate();
        return _client;
    }
}

private static void EnsureClientUpToDate()
{
    //If a configuration exists, confirm that the endpoints it is
    //aware of match the ones in Windows Azure
    if (_client == null || _configuration == null || _configuration.IsOutOfDate)
    {
        _configuration = new AzureMemcachedClientConfiguration();
        _client = new MemcachedClient(_configuration);
    }
}

public object Get(string key)
{
    //The client serves four key purposes: retrieval, storage, removing,
    //and flushing the cache
    object val = Client.Get(key);
    return val;
}

public void Put(string key, object value)
{
    //Stores the key/value pair in the distributed cache using the client
    //Available StoreMode operations:
    //Add – adds the item to the cache only if it does not exist
    //Replace – replaces an item in the cache only if it does exist
    //Set – will add the item if it does not exist, or replace it if it does
    if (!Client.Store(StoreMode.Set, key, value, DateTime.UtcNow.AddSeconds((double)_expiry)))
    {
        Console.WriteLine("MemcachedCache - could not save key " + key);
    }
}

Once the implementation of the client is in place, any part of the application that has access to the client can integrate with the distributed cache. Certain object-relational mapping tools, such as NHibernate, even contain support for cache providers. From this point, it is simple to construct a new cache provider and integrate it with the Windows Azure distributed cache. I recommend hashing your object keys if any other library integrates with your distributed cache, to avoid any name collisions.

Implementing a distributed cache has proven beneficial in most scenarios, as long as the application controls the data flowing in and out. If external resources are modifying the data by communicating with the database directly, you need to rethink the architecture of your distributed application, or at least invalidate your cache more frequently to ensure that the data within it is not stale. With Windows Azure, your distributed cache will consistently have high availability and never receive interruptions when adding additional instances or recovering from system failures.
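As a minimal sketch of that key-hashing recommendation (SHA-1 and the prefix scheme here are illustrative choices, not part of the accelerator; requires System.Security.Cryptography and System.Text):

public static class CacheKeys
{
    // Hash object keys before handing them to memcached so that different
    // libraries sharing the cache cannot collide on raw key names
    public static string Hash(string libraryPrefix, string key)
    {
        using (SHA1 sha1 = SHA1.Create())
        {
            byte[] digest = sha1.ComputeHash(
                Encoding.UTF8.GetBytes(libraryPrefix + ":" + key));
            return Convert.ToBase64String(digest);
        }
    }
}

// Usage: Client.Store(StoreMode.Set, CacheKeys.Hash("nhibernate", key), value, expiry)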

LOGGING, DIAGNOSTICS AND HEALTH MONITORING OF WINDOWS AZURE APPLICATIONS
By David Gristwood

Monitoring the health of an application is key to being able to keep it up and running and to help resolve problems as and when they arise. Most developers know how to do this to some degree with on-premise applications, covering scenarios such as debugging, troubleshooting, performance, resource usage monitoring, traffic analysis, capacity planning, and auditing. However, an Azure application, running up in the cloud, is very different to a traditional application when it comes to monitoring and performing diagnostics, for many reasons. Firstly, there is no direct, system admin access to the machines running in Windows Azure, in the way you would have if you owned and managed the machines yourself – the Windows Azure fabric handles much of the complexity of deploying and managing roles and machines. Secondly, you can't just attach a debugger to the cloud and step through your code, and it doesn't provide low level access to resources. And finally, the application will typically be running across a whole set of machines, managed by the Windows Azure fabric, and is dynamic and will change over time, so the problem is much more complex than that of a single machine.

Fortunately, Windows Azure has a diagnostic capability that allows you to monitor the health of your application across the different roles that make up your Azure application. It's not about creating new APIs – rather, it's about using the existing logging and tracing capabilities in the Windows platform that many developers are already familiar with.

There are three key stages to using the Windows Azure diagnostics. Firstly, deciding what diagnostic data you wish to collect. Secondly, deciding when and what diagnostic data should be persisted out to Windows Azure storage. And finally, downloading the data from Windows Azure for analysis, and building a monitoring strategy based on it.

COLLECTING DIAGNOSTIC DATA

One of the most common pieces of diagnostic data to collect is the Windows Azure logs, which are where any System.Diagnostics.Trace messages embedded in an application are output to. These Trace messages are the main way to log the flow and status of an application, and are built on the existing Event Tracing for Windows (ETW) capabilities. By default, this trace data, the IIS 7.0 logs and the Windows Diagnostic infrastructure logs are all collected when you switch on the diagnostics. The Windows Diagnostic infrastructure logs help provide general purpose problem detection, troubleshooting, and resolution for Windows components, as part of the general maintenance of the application and server.

Diagnostics are initialized within a role's OnStart() method:

public override bool OnStart()
{
    // Get default initial configuration
    var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

    // any other data sources that need to be tracked are added here

    DiagnosticMonitor.Start("DiagnosticsConnectionString", config);

    return base.OnStart();
}

Additional data sources can be added to the configuration before the Start() method is called. For troubleshooting, the Crash dumps and Windows Event logs can prove invaluable. For fine tuning and capacity planning, the Performance counters (which include CPU, memory, paging, etc.) are essential.

PERSISTING DIAGNOSTIC DATA

All the diagnostic data collected is stored in the local file store of the virtual machine within the Windows Azure fabric. The local file store will not survive machine recycles or rebuilds, and therefore the diagnostic data needs to be transferred to a persistent store, such as Windows Azure storage. These transfers can take place either as regular scheduled events, perhaps every 10 minutes, or on demand. Setting up a scheduled transfer is as easy as setting the ScheduledTransferPeriod property on the appropriate data source before the call to DiagnosticMonitor.Start():

// schedule transfer of basic logs to Azure storage
config.Logs.ScheduledTransferPeriod = System.TimeSpan.FromMinutes(1.0);
DiagnosticMonitor.Start("DiagnosticsConnectionString", config);

An on demand transfer can be initiated from outside a Windows Azure application, which makes it possible to control persisting data from a dashboard or system support application.
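As an example of both steps together, the following sketch (the counter specifier, sample rate and transfer period are illustrative) adds a processor counter as an extra data source and schedules its transfer before starting the monitor:

// Sample total CPU utilisation every 5 seconds...
config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration()
{
    CounterSpecifier = @"\Processor(_Total)\% Processor Time",
    SampleRate = TimeSpan.FromSeconds(5.0)
});

// ...and push the samples to Azure storage once a minute
config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0);

DiagnosticMonitor.Start("DiagnosticsConnectionString", config);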

ANALYSING THE DIAGNOSTIC DATA

The default behaviour of a transfer of diagnostic data is to persist the data to a set of "wad" (Windows Azure Diagnostics) prefixed Windows Azure Tables and Blob containers (Crash dumps go into Blob storage, Windows Azure logs into Tables, etc.). These can then be inspected on line, with tools such as Cerebrata's Azure Diagnostics Manager (see screenshot), or downloaded using the REST-based API for local viewing and analysis.

For analysis, especially to help track flow across multiple machines and roles, storing log and trace information in SQL Server will make it easier to filter the relevant information and detect process flow, tuning, exceptions, etc., or to resolve more complex issues. As with all debugging and monitoring scenarios, the key is to ensure good quality information is embedded within applications.

MORE INFORMATION

You can view Matthew Kerner's excellent PDC09 session at http://microsoftpdc.com/Sessions/SVC15, and the demos from the talk can be downloaded from http://code.msdn.microsoft.com/WADiagnostics. The MSDN documentation can be found at http://msdn.microsoft.com/en-us/library/ee758705.aspx

SERVICE RUNTIME IN WINDOWS AZURE
By Neil Mackenzie

ROLES AND INSTANCES

Windows Azure implements a Platform as a Service model through the concept of roles. There are two types of role: a web role deployed with IIS, and a worker role similar to a Windows service. Azure implements horizontal scaling of a service through the deployment of multiple instances of roles. Each instance of a role is allocated exclusive use of a VM selected from one of several sizes, from a small instance with 1 core to an extra-large instance with 8 cores. Memory and local disk space also increase with instance size.

Individual instances do not have public IP addresses and are not directly addressable from the Internet. Instead, all inbound network traffic to a role passes through a stateless load balancer, which uses an unspecified algorithm to distribute inbound calls to the role among instances of the role.

ENDPOINTS

An Azure role has two types of endpoint: a public-facing input endpoint, and a private internal endpoint for communication among instances. External services make connection requests to the virtual IP address for the role and the input endpoint port specified for the role in the Service Definition file. These connection requests are load balanced and forwarded to an Azure-allocated port on one of the instances of the role. A web role may have only one HTTP input endpoint and one HTTPS input endpoint. A worker role may have an unlimited number of HTTP, HTTPS and TCP input endpoints, as long as each is associated with a different port number.

Instances can connect directly to other instances in the service using TCP and HTTP through internal endpoints. A web role may have only one HTTP internal endpoint. A worker role may have an unlimited number of HTTP and TCP internal endpoints, the only limitation being that each internal endpoint must have a unique name. Input endpoints and internal endpoints are associated with an Azure role through specification in the Service Definition file.

SERVICE UPGRADES

There are two ways to upgrade an Azure service: an in-place upgrade and a Virtual IP (VIP) swap. Azure provides two deployment slots: staging, for testing in a live environment, and production, for the production service. A role with a public endpoint has a permanent URL in the production slot and a temporary URL in the staging slot. Otherwise, there is no real difference between the two slots.

An in-place upgrade replaces the contents of a deployment slot with a new Azure application package and configuration file. Note that it is not possible to do an in-place upgrade where the new application package has a modified Service Definition file. A VIP swap simply swaps the virtual IP addresses associated with the production and staging slots; any existing service in one of the slots must be deleted before the new version is uploaded. A VIP swap does support modifications to the Service Definition file.

The Azure SLA comes into force only when a service uses at least two instances per role. Azure uses upgrade domains and fault domains to facilitate adherence to the SLA.

The Azure fabric deploys instances over several upgrade domains. It implements an in-place upgrade of a role by bringing down all the instances in a single upgrade domain, upgrading them, and then restarting them before moving on to the next upgrade domain. The number of upgrade domains is configurable through the upgradeDomainCount attribute (default 5) on the ServiceDefinition root element in the Service Definition file. The Azure fabric completely controls the allocation of instances to upgrade domains, though an Azure service can view the upgrade domain for each of its instances through the RoleInstance.UpdateDomain property.

When Azure instances are deployed, the Azure fabric spreads them among different fault domains, which means they are deployed so that a single hardware failure does not bring down all the instances. The Azure fabric completely controls the allocation of instances to fault domains, though an Azure service can view the fault domain for each of its instances through the RoleInstance.FaultDomain property.

SERVICE DEFINITION AND SERVICE CONFIGURATION

An Azure service is defined and configured through its Service Definition and Service Configuration files. The Service Definition file specifies the roles contained in the service, along with the following for each role:

- upgradeDomainCount - the number of upgrade domains for the service
- vmsize - the instance size, from Small through ExtraLarge
- ConfigurationSettings - defines the settings used to configure the service
- LocalStorage - specifies the amount and name of disk space on the local VM
- InputEndpoints - defines the external endpoints for a role
- InternalEndpoint - defines the internal endpoints for a role
- Certificates - specifies the name and location of the X.509 certificate store

The Service Configuration file provides the configured values for:

- osVersion - specifies the Azure guest OS version for the deployed service
- Instances - specifies the number of instances of a role
- ConfigurationSettings - specifies the role-specific configuration parameters
- Certificates - specifies the X.509 certificates for the role

The Service Configuration file comprises one of the two distinct parts of the service application package and consequently can be modified through an in-place upgrade. It can also be modified directly on the Azure portal.

ROLEENTRYPOINT

RoleEntryPoint is the base class providing the Azure fabric an entry point to a role. All worker roles must contain a class derived from RoleEntryPoint, but web roles can use the ASP.NET lifecycle management instead. The standard Visual Studio worker role template provides a starter implementation of the necessary derived class. RoleEntryPoint is declared:

public abstract class RoleEntryPoint
{
    protected RoleEntryPoint();

    public virtual Boolean OnStart();
    public virtual void Run();
    public virtual void OnStop();
}

The Azure fabric initializes the role by invoking the overridden OnStart() method. Prior to this call, the status of the role is Busy. Note that a web role can put initialization code in Application_Start instead of OnStart(). The overridden Run() is invoked following the successful completion of OnStart() and provides the primary working thread for the role. An instance recycles automatically when Run() exits, so care should be taken, through use of Thread.Sleep() for example, that the Run() method does not terminate. Azure invokes the overridden OnStop() during a normal suspension of the role. The Azure fabric stops the role automatically if OnStop() does not return within 30 seconds. Note that a web role can put shutdown code in Application_End instead of OnStop().
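Putting the lifecycle together, a minimal worker role along the lines of the Visual Studio template might look like this (the sleep interval and the work itself are placeholders):

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // One-off initialization; returning false aborts the instance start
        return base.OnStart();
    }

    public override void Run()
    {
        // The primary working thread - if this method returns, the instance recycles
        while (true)
        {
            // do the role's work here
            Thread.Sleep(10000);
        }
    }

    public override void OnStop()
    {
        // Controlled shutdown; must complete within 30 seconds
        base.OnStop();
    }
}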

ROLE

The Role class represents a role in an Azure service. It exposes the Name of the role and a collection of deployed Instances for it.

ROLEENVIRONMENT

The RoleEnvironment class provides functionality allowing an instance to interact with the Azure fabric, as well as functionality providing access to the Service Configuration file and limited access to the Service Definition file. RoleEnvironment is declared:

public sealed class RoleEnvironment
{
    public static event EventHandler<RoleEnvironmentChangedEventArgs> Changed;
    public static event EventHandler<RoleEnvironmentChangingEventArgs> Changing;
    public static event EventHandler<RoleInstanceStatusCheckEventArgs> StatusCheck;
    public static event EventHandler<RoleEnvironmentStoppingEventArgs> Stopping;

    public static RoleInstance CurrentRoleInstance { get; }
    public static String DeploymentId { get; }
    public static Boolean IsAvailable { get; }
    public static IDictionary<String,Role> Roles { get; }

    public static String GetConfigurationSettingValue(String configurationSettingName);
    public static LocalResource GetLocalResource(String localResourceName);
    public static void RequestRecycle();
}

The IsAvailable property specifies whether or not the Azure environment is available. DeploymentId identifies the current deployment. Roles specifies the roles contained in the current service, and CurrentRoleInstance is a RoleInstance object representing the current instance. Note that Roles reports all Instances as being of zero size except the current instance and any instance with an internal endpoint.

GetConfigurationSettingValue() retrieves a configuration setting for the current role from the Service Configuration file. GetLocalResource() returns a LocalResource object specifying the root path of any local storage for the current role defined in the Service Definition file. RequestRecycle() initiates a recycle, i.e. a stop and start, of the current instance.

The RoleEnvironment class also provides four events with which a role can register a callback method to be notified about various changes to the Azure environment. A role typically registers callback methods with these events in its OnStart() method. The Changing event is raised before, and the Changed event after, a configuration change is applied to the role. The callback method for the Changing event has access to the old value of the configuration setting and can be used to control whether or not the instance should be restarted in response to the configuration change. The callback method for the Changed event has access to the new value of the configuration setting and can be used to reconfigure the instance in response to the change. The Changing and Changed callback methods are also used to handle topology changes to the service, in which the number of instances of a role is changed. The StatusCheck event is raised every 15 seconds; an instance can use the SetBusy() method of the RoleInstanceStatusCheckEventArgs class to indicate that it is busy and should be taken out of the load-balancer rotation. The Stopping event is raised when an instance is undergoing a controlled shutdown, although there is no guarantee it will be raised when an instance is shutting down due to an unhandled error. Note that the Stopping event is raised before the overridden OnStop() method is invoked.
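A sketch of registering two of these callbacks in OnStart() (the handler bodies are illustrative):

public override bool OnStart()
{
    RoleEnvironment.Changing += (sender, e) =>
    {
        // Recycle the instance if any configuration setting is about to change
        if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
        {
            e.Cancel = true;
        }
    };

    RoleEnvironment.StatusCheck += (sender, e) =>
    {
        // Call e.SetBusy() here to drop the instance out of the
        // load-balancer rotation for the next status-check interval
    };

    return base.OnStart();
}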

ROLEINSTANCE

The RoleInstance class represents an instance of a role. It is declared:

public abstract class RoleInstance
{
    public abstract Int32 FaultDomain { get; }
    public abstract String Id { get; }
    public abstract Role Role { get; }
    public abstract Int32 UpdateDomain { get; }
    public abstract IDictionary<String,RoleInstanceEndpoint> InstanceEndpoints { get; }
}

FaultDomain and UpdateDomain specify respectively the fault domain and the upgrade domain for the instance. Role identifies the role, and Id uniquely identifies the instance of the role. InstanceEndpoints is an IDictionary<> linking the name of each instance endpoint specified in the Service Definition file with the actual definition of the RoleInstanceEndpoint.

ROLEINSTANCEENDPOINT

The RoleInstanceEndpoint class represents an input endpoint or internal endpoint associated with an instance. It has two properties: RoleInstance, identifying the instance associated with the endpoint, and IPEndpoint, containing the local IP address of the instance and the port number for the endpoint. Note that each instance of a role has a distinct actual RoleInstanceEndpoint for each specific instance endpoint defined in the Service Definition file.

LOCALRESOURCE

LocalResource represents the local storage, on the file system of the instance, defined for the role in the Service Definition file. Each instance has its own local storage that is not accessible from other instances. LocalResource exposes three read-only properties: Name, uniquely identifying the local storage; MaximumSizeInMegabytes, specifying the maximum amount of space available; and RootPath, specifying the root path of the local storage in the local file system.
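A short sketch pulling these classes together; the "scratch" local storage, the "CacheWorker" role and its "CacheEndpoint" internal endpoint are assumed to be declared in the Service Definition file:

// Resolve the root path of this instance's "scratch" local storage
LocalResource scratch = RoleEnvironment.GetLocalResource("scratch");
string workingFile = Path.Combine(scratch.RootPath, "work.tmp");

// Enumerate the internal endpoint of every instance of the CacheWorker role
foreach (RoleInstance instance in RoleEnvironment.Roles["CacheWorker"].Instances)
{
    RoleInstanceEndpoint endpoint = instance.InstanceEndpoints["CacheEndpoint"];
    IPEndPoint address = endpoint.IPEndpoint;   // local IP address and port
}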

CHAPTER 4: SQL AZURE

CONNECTING TO SQL AZURE IN 5 MINUTES
By Juliën Hanssens

"Put your data in the cloud!" Think about it… no more client side database deployment, no more configuring of servers, yet with your data mirrored and still accessible using comfortable familiarities for SQL Server developers. That's SQL Azure. In this article we will quickly boost you up to speed on how to get started with your own SQL Azure instance in less than five minutes!

PREREQUISITE – GET A SQL AZURE ACCOUNT

Let's assume you already have a SQL Azure account. If not, you're free to try one of the special offers that Microsoft has available on Azure.com[1], like the free-of-charge Introductory Special or the offer that is available for MSDN Premium subscribers.

WORKING WITH THE SQL AZURE PORTAL

With a SQL Azure account at your disposal, you first need to login to the SQL Azure Portal[2]. This is your dashboard for managing your own server instances. The first time you login to the SQL Azure Portal, and after first accepting the Terms of Use, you will be asked to create a server instance for SQL Azure, like the screenshot below illustrates:

1: Create a server through the SQL Azure Portal

Providing a username and password is pretty straightforward. Do notice that these credentials will be the equivalent of your "sa" SQL Server account, for which logically strong password rules apply, and certain user names are not allowed for security reasons. With the location option you can select the physical location of the datacenter at which your server instance will be hosted. It is advisable to select the geographical location nearest to your – or your users' – needs.

Once you press the Create Server button, it takes a second or two to initialize your fresh, new server, and you'll be redirected to the Server Administration subsection. Congratulations, you've just performed a "SQL Server installation in the cloud"!

CREATE A DATABASE THROUGH THE SERVER ADMINISTRATION

Whilst still in the SQL Azure Portal[2] Server Administration section, our server details are listed, like the name used for the connection string, along with a list of databases. The latter is, by default, only populated with a 'master' database. Exactly like SQL Server, this specific database contains the system-level information, such as system configuration settings and logon accounts. We are going to leave the master database untouched and create a new database by pressing the Create Database button.

2: Create a database through the SQL Azure Portal

On confirmation the database will be created in the "blink of an eye". But for those who find this too convenient, you can achieve the same result using a slim script like:

CREATE DATABASE SqlAzureSandbox
GO

However, in order to be able to feed our database some scripts, we need to set security and get our hands on a management tool. And for the latter, why not use the tool we have always used to connect to our "regular" SQL Server instances: SQL Server Management Studio 2008 R2 (SSMS).

CONFIGURING THE FIREWALL

By default you initially cannot connect to SQL Azure with tools like SSMS. At least, not until you explicitly tell your SQL Azure instance that you want a specific IP address to allow connectivity, with pretty much all administrative privileges.

To enable connectivity, add a rule by entering your public IP address in the Firewall Settings tab on your SQL Azure Portal.

3: Add a firewall rule through the SQL Azure Portal's Server Administration

Do notice the "Allow Microsoft Services access to this server" checkbox. By enabling this you allow other Windows Azure services to access your server instance.

CONNECTING USING SQL SERVER MANAGEMENT STUDIO

Having set up everything required for enabling SSMS to manage the database, let's start using it. If you haven't done so already, install the latest R2 release of SSMS[3] first. Older versions will just bore you with annoying error messages, so don't waste time on them. Once in place, boot up the SSMS application, enter the full server name and authenticate using the provided credentials. Optionally, you can provide a specific database instance to connect to in the Options section (more on that later).

4: Connecting SQL Server Management Studio to your SQL Azure instance

No rocket science there either. Once connected, you have a pretty similar environment in SSMS on SQL Azure as you have on a 'regular' SQL Server instance. Although keep in mind that with the current installment you have to do without the comfortable dialog boxes. This means you need to brush up your skills with T-SQL. SQL Azure offers a subset, albeit a significant subset, of the familiar T-SQL features and commands you are used to using with SQL Server. This is due to the fact that SQL Azure is designed natively for the Windows Azure platform. In a nutshell, this means that the creation of tables, views, logins, stored procedures etc. by using scripts is roughly the same in T-SQL syntax, but lacks certain (optional) parameters.

Let's demonstrate this by creating an arbitrary table. In SSMS, right click on the Tables section of our SqlAzureSandbox database and select "New Table". The result will be no dialog box with fancy fields, but a basic SQL script for us to edit. Once modified, it doesn't really differ from your average SQL Server script. For example:

-- =========================================
-- Create table template SQL Azure Database
-- =========================================
IF OBJECT_ID('[dbo].[Beer]', 'U') IS NOT NULL
  DROP TABLE [dbo].[Beer]
GO

CREATE TABLE [dbo].[Beer]
(
    [Id] int NOT NULL,
    [BeerName] nvarchar(50) NULL,
    [CountryOfOrigin] nvarchar(50) NULL,
    [AlcoholPercentage] int NULL,
    [DateAdded] datetime NOT NULL,
    CONSTRAINT [PK_Beer] PRIMARY KEY CLUSTERED
    (
        [Id] ASC
    )
)
GO

Once executed, the table is generated. This is one thing to take notice of: tables have to be created through SSMS by default. But once they're available, you can simply boot up Visual Studio and use the Server Explorer to access them in your project. In fact, you can even use familiar tools with design-time support, like LINQ to SQL, ADO.NET DataSets or Entity Framework, for even more productivity.

APPLICATION CREDENTIALS

Last but not least, a recommendation on security. Up until now we have used our godlike master credentials for managing our database. We really don't want these credentials to be included in our application, so let's create a lightweight custom user/login for our application to use:

-- 1. Create a login
CREATE LOGIN [ApplicationLogin] WITH PASSWORD = 'I@mR00tB33r'
GO

-- 2. Create a user
CREATE USER [MyBeerApplication] FOR LOGIN [ApplicationLogin]
WITH DEFAULT_SCHEMA = [db_datareader]
GO

-- 3. And grant it access permissions
GRANT CONNECT TO [MyBeerApplication]
GO

KEEP IN MIND – THE TARGET DATABASE

As you may have noticed, all the samples lack the USE statement, i.e. "USE [SqlAzureSandbox]". This is because the USE <database> command is not supported. With SQL Azure you should keep in mind that each database can be on a different server, and therefore requires a separate connection. With SSMS you can easily achieve this in the options of the Connect to Server dialog box:

5: Connect to a specific database using SQL Server Management Studio

Take notice of this when you are frequently switching between databases. And with that in mind, the sky is the limit. Even in the cloud.

1. Microsoft Windows Azure Platform – http://www.azure.com
2. SQL Azure Portal – http://sql.azure.com
3. SQL Server 2008 R2 Management Studio Express (SSMSE) – http://tinyurl.com/ssmsr2rtm
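To round the article off, a sketch of connecting from ADO.NET with the application login created earlier ("yourserver" is a placeholder for the server name assigned in the portal; the target database goes into the connection string precisely because USE is not supported):

string connectionString =
    "Server=tcp:yourserver.database.windows.net;" +
    "Database=SqlAzureSandbox;" +
    "User ID=ApplicationLogin@yourserver;" +
    "Password=I@mR00tB33r;" +
    "Trusted_Connection=False;Encrypt=True;";

using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (SqlCommand command =
        new SqlCommand("SELECT COUNT(*) FROM [dbo].[Beer]", connection))
    {
        int numberOfBeers = (int)command.ExecuteScalar();
    }
}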

CHAPTER 5: WINDOWS AZURE PLATFORM APPFABRIC

REAL TIME TRACING OF AZURE ROLES FROM YOUR DESKTOP
By Richard Prodger

One of the big challenges with a deployed Azure hosted role is how to get access to tracing information. Well, you can use the Azure Diagnostics to collect data in table storage, but this is far from ideal, as you have to read the data out, and that doesn't give you real time information. There is a better way! The .NET Framework already provides the TraceListener that most of you will be familiar with. By creating your own custom TraceListener, you can push trace messages anywhere you like. Then, by using the magic provided by the service bus for traversing firewalls, you can pick up these trace messages in an application running on your desktop.

CUSTOM TRACE LISTENER

We need a client to send the messages and a server to receive them. Let's start with the Azure client. The first thing to do is implement the custom TraceListener:

public class AzureTraceListener : TraceListener
{
    ITrace traceChannel;

    public AzureTraceListener(string serviceNamespace, string servicePath,
                              string issuerName, string issuerSecret)
    {
        // Create the endpoint address for the service bus
        Uri serviceUri = ServiceBusEnvironment.CreateServiceUri("sb",
            serviceNamespace, servicePath);
        EndpointAddress endPoint = new EndpointAddress(serviceUri);

        // Setup the authentication
        TransportClientEndpointBehavior credentials = new TransportClientEndpointBehavior();
        credentials.CredentialType = TransportClientCredentialType.SharedSecret;
        credentials.Credentials.SharedSecret.IssuerName = issuerName;
        credentials.Credentials.SharedSecret.IssuerSecret = issuerSecret;

        // Create the channel and open it
        ChannelFactory<ITrace> channelFactory =
            new ChannelFactory<ITrace>(new NetEventRelayBinding(), endPoint);
        channelFactory.Endpoint.Behaviors.Add(credentials);
        traceChannel = channelFactory.CreateChannel();
    }

    public override void Write(string message)
    {
        traceChannel.Write(message);
    }

    public override void WriteLine(string message)
    {
        traceChannel.WriteLine(message);
    }
}

As you can see, there is some setup stuff for WCF and the service bus, but basically all you have to do is override the Write and WriteLine methods. The ITrace interface is simple as well:

[ServiceContract]
public interface ITrace
{
    [OperationContract(IsOneWay = true)]
    void WriteLine(string text);

    [OperationContract(IsOneWay = true)]
    void Write(string text);
}

SEND MESSAGE CONSOLE APPLICATION

Now we need an app to send the messages. For the purposes of this article, I have created a simple console app, but this could be any Azure role. I've also added Console.Out as another listener so you can see what's being sent.

static void Main(string[] args)
{
    string issuerName = "yourissuerName";
    string issuerSecret = "yoursecret";
    string serviceNamespace = "yourNamespace";
    string servicePath = "tracer";

    TraceListener traceListener = new AzureTraceListener(serviceNamespace,
        servicePath, issuerName, issuerSecret);
    Trace.Listeners.Add(traceListener);
    Trace.Listeners.Add(new TextWriterTraceListener(Console.Out));

    while (true)
    {
        Trace.WriteLine("Hello world at " + DateTime.Now.ToString());
        Thread.Sleep(1000);
    }
}

This simple app creates a new custom TraceListener, adds it to the TraceListeners collection, and then pushes out a timestamp every second.

TRACE SERVICE

So that's the Azure end done; what about the desktop end? The first thing you have to do is implement the TraceService that the custom listener will call:

public class TraceService : ITrace
{
    public static event ReceivedMessageEventHandler RecievedMessageEvent;

    void ITrace.WriteLine(string text)
    {
        RecievedMessageEvent(this, text);
    }

    void ITrace.Write(string text)
    {
        RecievedMessageEvent(this, text);
    }
}

public delegate void ReceivedMessageEventHandler(object sender, string message);

The event delegate is there to push out the messages to the app hosting this class.

SERVICE HOST CLASS

Next, we have to create the class that will host the service:

public class AzureTraceReceiver
{
    ServiceHost serviceHost;

    public AzureTraceReceiver(string serviceNamespace, string servicePath,
                              string issuerName, string issuerSecret)
    {
        // Create the endpoint address for the service bus
        Uri serviceUri = ServiceBusEnvironment.CreateServiceUri("sb",
            serviceNamespace, servicePath);
        EndpointAddress endPoint = new EndpointAddress(serviceUri);

        // Setup the authentication
        TransportClientEndpointBehavior credentials = new TransportClientEndpointBehavior();
        credentials.CredentialType = TransportClientCredentialType.SharedSecret;
        credentials.Credentials.SharedSecret.IssuerName = issuerName;
        credentials.Credentials.SharedSecret.IssuerSecret = issuerSecret;

        serviceHost = new ServiceHost(typeof(TraceService));
        ServiceEndpoint endpoint = serviceHost.AddServiceEndpoint(typeof(ITrace),
            new NetEventRelayBinding(), serviceUri);
        endpoint.Behaviors.Add(credentials);
    }

    public void Start()
    {
        serviceHost.Open();
    }

    public void Stop()
    {
        serviceHost.Close();
    }
}

This is basic WCF code; nothing special here. All we do is create some credentials for authenticating with the service bus, create an endpoint, add the credentials and start up the service host.

SERVICE

Now all we have to do is implement the desktop app. Again, for simplicity, I am creating a simple console app:

static void Main(string[] args)
{
    Console.Write("AZURE Trace Listener Sample started.\nRegistering with Service Bus...");

    string issuerName = "yourissuerName";
    string issuerSecret = "yoursecret";
    string serviceNamespace = "yourNamespace";
    string servicePath = "tracer";

    // Start up the receiver
    AzureTraceReceiver receiver = new AzureTraceReceiver(serviceNamespace,
        servicePath, issuerName, issuerSecret);
    receiver.Start();

    // Hook up the event handler for incoming messages
    TraceService.RecievedMessageEvent +=
        new ReceivedMessageEventHandler(TraceService_myEvent);

    // Now, just hang around and wait!
    Console.WriteLine("DONE\nWaiting for trace messages...");
    string input = Console.ReadLine();
    receiver.Stop();
}

static void TraceService_myEvent(object sender, string message)
{
    Console.WriteLine(message);
}

This app simply instantiates the receiver class and starts the service host. An event handler is registered, and then the app just waits for messages. When the client sends a trace message, the event handler fires and the message is written to the console.

You may have noticed that I have used the NetEventRelayBinding for the service bus. This was deliberate, as it allows you to hook up multiple server ends to receive the messages in a classic pub/sub pattern. This means you can run multiple instances of this server on multiple machines and they will all receive the same messages. You can use other bindings if required. Another advantage of this binding is that you don't have to have any apps listening, but bear in mind you will be charged for the connection whether you are listening or not, although you won't have to pay for the outbound bandwidth.

I put all the WCF and service bus setup into the code, but this could easily be placed into a configuration file. I prefer it this way, as I have a blind spot when it comes to reading WCF config in XML and I always get it wrong, but it does mean you can't change the bindings without recompiling.

SUMMARY

There is more that could be done in the TraceListener class to improve thread safety and error handling, and to ensure that the service bus channel is available when you want to use it, but I'll leave that up to you. This code was first put together whilst the AppFabric Service Bus was in private beta; Microsoft has now included a version of this code in their SDK samples, so take a look there.

So that's it. You now have the ability to monitor your Azure roles from anywhere.


MEET THE AUTHORS

ERIC NELSON
After many years of developing on UNIX/RDBMS (and being able to get mortgages), Eric joined Microsoft in 1996 as a Technical Evangelist (and stopped being able to get mortgages due to his new 'unusual job title', in the words of his bank manager). He has spent most of his time working with ISVs to help them architect solutions which make use of the latest Microsoft technologies – from the beta of ASP 1.0 through to ASP.NET, from MTS to WCF/WF, and from the beta of SQL Server 6.5 through to SQL Server 2008. Along the way he has met lots of smart and fun developers – and been completely stumped by many of their questions! In July 2008 he switched role from an Application Architect to a Developer Evangelist in the Developer and Platform Group.

Developer Evangelist, Microsoft UK
Website: http://www.ericnelson.co.uk
Email: eric.nelson@microsoft.com
Blog: http://geekswithblogs.net/iupdateable
Twitter: http://twitter.com/ericnel

MARCUS TILLETT

Marcus Tillett is currently the Head of Technology at Dot Net Solutions, where he heads the technical team of architects and developers. Having been building solutions with Microsoft technologies for more than 10 years, his expertise is in software architecture and application development. He is passionate about understanding and using the latest cutting-edge technology. He is the author of "Thinking of... Delivering Solutions on the Windows Azure Platform?" (http://www.bit.ly/a0P02n).

Head of Technology at Dot Net Solutions
Twitter: @drmarcustillett
Blog: http://www.dotnetsolutions.co.uk/blog


RICHARD PRODGER

Richard Prodger is a founding Technical Director of Active Web Solutions with more than 25 years experience in the R&D and computing sectors. Prior to joining AWS, Richard managed BT's Web Services unit. At BT, Richard was responsible for implementing large scale e-commerce and web based systems and for translating emerging technology into practical business solutions. Richard was the principal architect and technical design authority for the multi-award winning RNLI Sea Safety system. More recently, Richard has been working closely with Microsoft on their cloud services platform. Richard is the Director responsible for the AWS Technology Centre, and his primary responsibilities are technical strategy and systems development.

Technical Director of Active Web Solutions
www.aws.net

SAKSHAM GAUTAM

Saksham Gautam started working with Windows Azure right from the early stages of the development of the platform. Since then, he has been working as a Software Developer for AWS. Saksham is one of the architects and the lead developer for porting the existing on-premise sea safety system to Windows Azure. He presented his work on interoperability in Windows Azure at the Architect Insight Conference 2010. He is an MCTS on WCF and .NET, and he graduated with a Bachelors in Computer Science in 2007. He is currently based in Prague and builds interesting software, particularly using C#. Apart from Azure and .NET, he is interested in distributed systems composed of heterogeneous components.

Software Developer/Architect, Active Web Solutions
Twitter: @sakshamgautam
Blog: http://sakshamgautam.blogspot.com

STEVE TOWLER

Steve Towler is a Senior Software Developer for Active Web Solutions in Ipswich and has been working with Windows Azure since April 2009. In that time he has helped develop a number of applications hosted in Windows Azure, including a CAD drawing collaboration tool and a location based services application. Steve has also been conducting a number of Azure Assessment Days in conjunction with Microsoft and promoting the benefits of cloud computing.

Senior Software Developer, Active Web Solutions
Blog: http://www.stevetowler.co.uk

ROB BLACKWELL

Rob Blackwell is R&D Director at Ipswich based Active Web Solutions. He was part of a team that won an unprecedented three British Computer Society awards in 2006 and was a Microsoft Most Valuable Professional (MVP) in 2007 and 2008. Rob is a self-confessed language nerd and freely admits that the real reason he's interested in running Java on Azure is so that he can host his spare-time Clojure Lisp experiments.

R&D Director, Active Web Solutions
Blog: http://www.robblackwell.org.uk
Twitter: http://twitter.com/RobBlackwell

JULIËN HANSSENS

Juliën Hanssens is a Software Engineer and Technical Consultant in software technologies at Securancy Intelligence, a Dutch IT company. He can be contacted at j.hanssens@securancy.com.

Software Engineer at Securancy Intelligence
Email: j.hanssens@securancy.com

Steven is a . he still has a deeprooted need to write production code every day.Net consultant who likes diving deep into the technologies he is passionate about.name/blog/ Twitter: snagy 93 . He dwells at Pune. ASP. and has been learning.Net.Net Framework 2. He has been coding for food and gadgets for the past 8 years around all things Microsoft including ATL/COM.snagy.0 onwards.NET Consultant Blog: http://azure. and presenting on Azure since its first public release at PDC 08.The Windows Azure Platform: Articles from the Trenches SIMON MUNRO Simon Munro. .net/iunknown Email: sarangbk@gmail. WIF and now Windows Azure. VB6. has been designing and developing commercial applications for two decades. teaching. Winforms. India with daughter Saee and wife Prajakta. . His current endeavors include assisting developers and customers understand the underlying architectural concepts around cloud computing.com STEVEN NAGY By day. a senior consultant at London-based EMC Consulting. Simon enjoys stirring things up and pushing conformity by challenging acceptable norms and asking difficult questions. Analyst Programmer with Accenture-Avanade Blog: http://geekswithblogs. WCF.com Twitter: @simonmunro SARANG KULKARNI Sarang is an Analyst Programmer with Accenture-Avanade during work hours and a technology nomad after that. By night he cackles gleefully basking in the glow of his laptop screen as thousands of Azure worker roles carry out his evil bidding. targeting varied assignments ranging from run off the mill enterprise LOB applications to Astrometry APIs and media transcoding solutions in the cloud. Branded as a thought-leader. Despite this. Senior consultant at EMC Consulting Blog: http://simonmunro.

GRACE MOLLISON
Grace's role as a Platform Architect at EMC bridges the gap between Infrastructure and Development. Activities range from supporting the development teams throughout the development life cycle to advising on and architecting the platform, liaising between the client, 3rd parties (e.g. hosting partners) and EMC Consulting as required. Grace joined EMC in 2008 from Hogg Robinson, where she was responsible for the design, implementation and ongoing maintenance, support and evolution of their eCommerce platform, which has BizTalk at its core. Grace is a CISSP (Certified Information Systems Security Professional). She has a lot of enthusiasm for public cloud solutions and has been dabbling with Azure from early betas. Grace was part of the team that developed the 'See The Difference' solution, which was built using Windows Azure and SQL Azure.
Platform Architect at EMC
Blog: http://consultingblogs.emc.com/gracemollison/

JASON NAPPI
Jason is a Software Architect at SmartPak, where he advances their eCommerce engine and line-of-business applications. He has 14 years experience as a developer, mainly on the Microsoft stack, developing applications beginning with VB 6 and MTS/COM+ and through each successive version of the .NET framework, and even picked up a certification along the way. He's held roles in a variety of industries in the Boston area including web hosting, health care, financial services, and eCommerce. Most recently Jason's been struggling to keep pace with the Entity Framework, ASP.NET MVC, Silverlight, Azure, WCF Rest, WCF Data Services, and the myriad of other technologies pouring out of Redmond and elsewhere.
Software Architect, SmartPak
Blog: http://blog.nappisite.com

JOSH TUCHOLSKI
Josh works for Rosetta as a Senior Technology Associate in the Microsoft Solution Center, where he takes part in helping Rosetta deliver interactive marketing solutions to clients in the financial, ecommerce, and healthcare sectors. He has experience working with small teams to large enterprise environments and focuses on WCF service development, B2B, and middle-tier component integration. He strives to produce simple solutions that create sound technical architectures and can tell a great story at the end of the day. Outside of development, he enjoys meeting with students in computer science and software engineering to pick their brain and help them prepare for their professional careers. Josh lives in Ohio with his wife Andrea.
Senior Technology Associate
LinkedIn: http://www.linkedin.com/in/joshtucholski
Twitter: http://www.twitter.com/jtucholski
Blog: http://www.dontforgetyourtodos.com/

DAVID GRISTWOOD
Ever since he wrote his first '10 ? "hello world" : goto 10' program on a PET computer in the late 70s he has been hooked, and has worked with computers ever since. During his career, David has secured a Distinction in Computing Science at Newcastle University, worked as a freelance computer journalist, been a director of a software company and a visiting lecturer in Computer Science, as well as having designed and developed a wide range of software and computer systems, from smart clients and RIA apps to web applications. For the last 15 years David has worked at Microsoft, firstly in its fledgling consultant service section, then in EMEA as a technical evangelist. Since Microsoft's launch of .NET, he has been focused on the .NET platform and, more recently, cloud computing with the Windows Azure platform. He currently works mainly with partners and startups, helping design and build a wide range of systems, and runs and delivers regular technical briefings around the Microsoft platform, including TechEd Europe, TechDays, BizSpark Camp, etc.
Developer Evangelist, Microsoft UK
Twitter: @scroffthebad

NEIL MACKENZIE
Neil Mackenzie has been programming since the late Bronze Age. He learned C++ when the only book available was written by Bjarne Stroustrup. However, he is only a recent convert to the joys of the .Net Framework and C#. He has been using SQL Server since v4.2.1 on Windows NT, and has been using Windows Azure since PDC 2008 and regrets the demise of the Live Framework API. Neil spent many years working in healthcare software and is currently involved in a stealth data-analytics startup. Neil lives in San Francisco, CA, having noticed the weather there is somewhat better than it was in Scotland.
Blog: http://nmackenzie.spaces.live.com/blog/
Twitter: @mknz

MARK RENDLE
Mark is currently employed as a Senior Software Architect by Dot Net Solutions Ltd, creating all manner of internet-centric applications on the Microsoft stack, including ASP.NET MVC, Windows Azure, WPF and Silverlight. His career in software design and development spans three decades and more programming languages than he can remember, back when you had to write the code in a text editor and compile it on the command line. Those were the days. You kids today, with your IntelliSense and your ReSharpers, don't know you're born. C# has been his favourite language pretty much since the first public beta. Things vying for Mark's attention lately include functional programming, the Azure cloud platform and NoSQL data stores.
Senior Software Architect, Dot Net Solutions Ltd
Blog: http://www.dotnetsolutions.co.uk/blogs/markrendle
