
Mastering System Center Data Protection Manager 2007

Devin L. Ganger
Ryan Femling

Wiley Publishing, Inc.

Acquisitions Editor: Tom Cirtin
Development Editors: Pete Gaughan and Janet Chang
Technical Editor: Brad Price
Production Editor: Rachel McConlogue
Copy Editor: Kathy Carlyle
Production Manager: Tim Tate
Vice President and Executive Group Publisher: Richard Swadley
Vice President and Executive Publisher: Joseph B. Wikert
Vice President and Publisher: Neil Edde
Book Designers: Maureen Forys and Judy Fung
Compositor: Craig Woods, Happenstance Type-O-Rama
Proofreader: Nancy Bell
Indexer: Nancy Guenther
Cover Designer: Ryan Sneed
Cover Image: Pete Gardner / Digital Vision / gettyimages

Copyright 2008 Wiley Publishing, Inc., Indianapolis, Indiana

ISBN: 978-0-470-18152-2

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Legal Department, Wiley Publishing, Inc., 10475 Crosspoint Blvd., Indianapolis, IN 46256, (317) 572-3447, fax (317) 572-4355, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation.
This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the

publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or Website is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or Website may provide or recommendations it may make. Further, readers should be aware that Internet Websites listed in this work may have changed or disappeared between when this work was written and when it is read.

For general information on our other products and services or to obtain technical support, please contact our Customer Care Department within the U.S. at (800) 762-2974, outside the U.S. at (317) 572-3993 or fax (317) 572-4002. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books. Library of Congress Cataloging-in-Publication Data is available from the publisher.

TRADEMARKS: Wiley, the Wiley logo, and the Sybex logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates, in the United States and other countries, and may not be used without written permission. All other trademarks are the property of their respective owners. Wiley Publishing, Inc., is not associated with any product or vendor mentioned in this book.

10 9 8 7 6 5 4 3 2 1

Dear Reader,

Thank you for choosing Mastering System Center Data Protection Manager 2007. This book is part of a family of premium quality Sybex books, all written by outstanding authors who combine practical experience with a gift for teaching. Sybex was founded in 1976. More than 30 years later, we're still committed to producing consistently exceptional books. With each of our titles we're working hard to set a new standard for the industry. From the paper we print on, to the authors we work with, our goal is to bring you the best books available. I hope you see all that reflected in these pages.
I'd be very interested to hear your comments and get your feedback on how we're doing. Feel free to let me know what you think about this or any other Sybex book by sending me an email at nedde@wiley.com, or if you think you've found a technical error in this book, please visit http://sybex.custhelp.com. Customer feedback is critical to our efforts at Sybex.

Best regards,

Neil Edde
Vice President and Publisher

It takes a lot of work and many sacrifices to write a technical book. We dedicate this book to the following people:

To my kids for all the stolen evenings and weekends when I should have been playing with you; to my wife for putting up with a preoccupied and busy no-fun lump of a husband; to Nick for the game breaks that saved my sanity.

Devin

To my wife, for everything she does; to the children in our household, for providing amusing anecdotes; and finally, to whoever manufactured my couch. It provided a great location to work on the book.

Ryan

Acknowledgments

A book like this is in no way solely the product of the authors. There are far too many people involved in all stages of the book, from before the authors type out the first outline to after the last revisions have been turned in. We would like to thank everyone at Sybex who helped make this book a success. Our heartfelt thanks must go out to:

Tom Cirtin, for being the best acquisitions editor we could have hoped for. From our initial pitch, to lining up the team, to the constant reminders about deadlines we forced him to give, we caused him a lot of trouble. His patience and support have helped make this book as solid as it is.

Pete Gaughan, our editorial manager, for helping Tom keep the ship sailing. He answered a lot of process questions, reinforced the schedule issues, and helped find workarounds to any problems that cropped up. Without Pete and the great leadership he gave us, this book wouldn't have been completed.

Brad Price, for his invaluable (although sometimes exasperating) technical editing. He definitely stretched us and pointed out places where we could do a better job covering the DPM product, the underlying technology, and its implications.

Kathy Carlyle, for doing an excellent job of copyediting our manuscript and making our words the best they can be. The copyeditor's job isn't just to be a human grammar checker and spellchecker; Kathy caught many inconsistencies that we introduced over the months it took to write this book.

Rachel McConlogue, our production editor, for keeping things on track as we moved through copyediting and the last revisions. She and the production team took the hundreds of files (Word documents, screenshots, and more) and laboriously stitched them together into this good-looking tome you now hold.

We'd like to thank everyone else at Sybex who was involved in the production of this book; even though we haven't learned your names, we know what a great debt we owe you in getting everything put together in a cohesive, impressive manner. We were lucky to have worked with such a great group. Additionally, we'd like to thank Jason Buffington, Deb Lewy, and Harshwardhan Mittal of the DPM product team at Microsoft. They helped us understand DPM 2007's potential and capabilities as a product and answered technical questions. Any remaining errors in the book belong solely to us.

We offer special thanks to Kevin Miller, fellow 3Sharpie, once and future Exchange MVP, who helped us pull together the material for Chapter 4, "Using the DPM Shell," on a short timeframe once we realized that the PowerShell material needed to be in its own chapter. We also particularly thank Ed Crowley, Exchange MVP extraordinaire, for graciously permitting us to use his words as the quote for Chapter 7, "Protecting Exchange Servers."

Finally, we would like to thank everyone at 3Sharp, but we would like to especially call out the finest management team anyone could hope to have: Jeff Bean, Paul Flynn, David Gerhardt, Peter Kelly, John Peltonen, and Paul Robichaux. While we worked on this book after business hours, they made sure to stay aware of our progress on the book and give us advice, encouragement, and support. It was through 3Sharp that we first got introduced to DPM and were able to learn enough about it to make the writing of this book a reality.

About the Authors

Devin L. Ganger, Redmond, Wash., is a Messaging Architect, technical writer, and speaker for 3Sharp. He has nearly 15 years of experience managing Windows and Unix systems, with a focus on messaging, DNS, and security. Devin is co-author of the Exchange Server Cookbook (O'Reilly and Associates, 2005), wrote the Email Discovery and Compliance ebook for Windows IT Pro Magazine, and has contributed to several other books and magazine articles. Devin was recognized as an Exchange MVP by Microsoft in January 2007 and 2008. He is a regular speaker at the popular Exchange Connections conferences and other industry events and actively maintains his blog, (e)Mail Insecurity, at http://blogs.3sharp.com/blog/deving.

Ryan Femling, Redmond, Wash., a Windows Server 2003 MCSE, is an Office Systems solution specialist for 3Sharp, where he uses his extensive knowledge of networking, storage, and clustering technologies to provide critical guidance on a variety of successful projects.
He began his career in networking in 1999 and has focused on implementing high-availability solutions using Microsoft technologies. Ryan has worked at a variety of large and small organizations.

Introduction
Overview
Welcome to our book on System Center Data Protection Manager (DPM) 2007. We assume that you fit one of three profiles:

- You are in some way evaluating DPM and its capabilities.
- You've just purchased DPM for use in your network.
- You're idly browsing through the shelves at the bookstore (or a friend's shelf) and are wondering what exactly "Data Protection Manager" is.

Whichever type of reader you are, we hope that we will not only answer the questions you have, but find a way to entertain and amaze you at some point between these covers. If you have already worked with the previous version of DPM, Data Protection Manager 2006, you will find that DPM has added a number of vital new features to better support the needs of today's enterprise-class organizations. The increased native support for the vital Microsoft workloads alone should make your head spin, but the integrated tape handling, not to mention the support for management through Windows PowerShell, makes this release practically a whole new product.

With all of these changes and new features, you are probably looking for a resource that will help you navigate; you are not wrong for wanting some guidance. That's where this book comes in. We've assembled just about everything we know about DPM into the chapters and appendices that follow. We designed this book so that you can use it however you need to. If you're the cover-to-cover type, you can read each chapter in order to gain a methodical picture of how DPM works, how to integrate it into your current organization, and how best to use it. You can also flip around the book and use it as a reference when you are not sure how something works or is supposed to work. Our goal is to enable you to gain familiarity, proficiency, and above all comfort when you're using DPM. The more comfortable you are with it, the more you will be able to do with it.

Included at the end of each chapter is a Master It section. Each Master It section has questions to help reinforce the material included within the chapter. Quiz yourself to see how you're mastering the material. Most of all, have fun as you go through these pages. Once you find out how much power this product has, we think that you will be amazed at some of the things you can do with it.
Just looking at the surface and knowing you can protect your servers and services may be impressive enough, but the additional features are what blew our socks off.

Who Should Read This Book?


The answer to this question, should we answer it the way Sybex sales would like, would be "everyone." That covers a lot of ground, though, so let's narrow it down a bit. You are the ideal reader for this book if you are:

- A systems administrator who is responsible for your company's backup and restore processes
- A technical professional who wants to keep up on the latest Microsoft applications and technology
- A decision maker who wants to find out if DPM is the right answer for your data protection needs

Whatever your reasons may be, everyone wants to protect their information in an efficient manner. That's why this book includes a comprehensive look at all aspects of working with DPM: deployment, management, troubleshooting, and the new DPM Management Shell.

As complex as products are becoming, no one can be an expert on all of them. If you are like most administrators, you only have time to learn enough about a product to fit it into your environment and manage it effectively the way you need to use it. This book is meant to get you up to speed quickly and then give you guidance through some of the more advanced topics.

Not every administrator works with the same type of infrastructure. What works well in a large corporation does not always work for smaller companies. What works well for smaller companies often does not scale well for medium and large organizations. Microsoft has attempted to address these size and scalability differences and deliver a product that can be implemented quickly for a small company, yet will still scale well for larger organizations. No matter which scenario fits you, DPM will work for you, and this book will help you learn how.

The Mastering Series


The Mastering series from Sybex provides outstanding instruction for readers with intermediate and advanced skills, in the form of top-notch training and development for those already working in their field and clear, serious education for those aspiring to become pros. Every Mastering book features:

- The Sybex "by professionals for professionals" commitment. Mastering authors are themselves practitioners, with plenty of credentials in their areas of specialty.
- A practical perspective for a reader who already knows the basics: someone who needs solutions, not a primer.
- Real-World Scenarios, ranging from case studies to interviews, that show how the tool, technique, or information presented is applied in actual practice.
- Skill-based instruction, with chapters organized around real tasks rather than abstract concepts or subjects.
- Self-review test "Master It" problems and questions, so you can be certain you're equipped to do the job right.

What Is Covered in This Book


This book is made up of 12 chapters. They are intended to take you from a novice administrator, with limited knowledge of backup and restore procedures, to an experienced Data Protection Manager administrator who understands how to safeguard the systems under your protection. However, if you already understand how backup and restore theories tie together, you don't have to start at Chapter 1 and read sequentially through the book. You can jump right to the chapter that meets your immediate needs. The following section briefly describes each chapter.

Chapter 1: Data Protection Concepts. Wondering what the terms differential and incremental mean? How about recovery points and synchronization? This chapter is designed to help you understand basic backup and recovery concepts, both for DPM as well as the products it replaces.

Chapter 2: Installing DPM. What hardware and software do you need to deploy DPM in your environment? What changes do you need to make to Active Directory? This chapter identifies the prerequisites and details the installation options that DPM offers.

Chapter 3: Using the DPM Administration Console. In this chapter, we detail the graphical user interface included with DPM, the DPM Administration Console. Installing the server is only the first step; this chapter helps you understand how the DPM Administration Console is laid out and gives you a map of the management and control options it gives you.

Chapter 4: Using the DPM Management Shell. A good command-line interface provides a number of benefits to any application, including the ability to write customized management scripts. DPM includes integration with the new Windows PowerShell environment, and this chapter provides coverage of this important new technology.

Chapter 5: End-User Recovery. With System Center Data Protection Manager 2007, you can allow end users to restore their files and folders. This takes some of the administrative burden off the shoulders of the data recovery team, but it also introduces the need to properly train your users. This chapter covers the settings and methodology behind end-user recovery.

Chapter 6: Protecting File Servers. Nearly every organization has file servers holding documents and other data. While Data Protection Manager 2006 focused exclusively on this workload, DPM extends this coverage. In this chapter, you will see how to protect file system data more easily than ever before.

Chapter 7: Protecting Exchange Servers. Email has become a mission-critical system; try taking the Exchange server offline for a few hours and see how many phone calls you receive. DPM provides some of the best protection available for Exchange servers, allowing for quick and efficient restoration of data in case of a problem or disaster.

Chapter 8: Protecting SQL Servers. SQL Server hosts play a vital role within a Microsoft-centric organization, hosting key databases for a range of applications and storage needs. This chapter covers how to use DPM to protect SQL Server, and explains the options you have to restore the vital data.

Chapter 9: Protecting SharePoint Servers. In the past few years, SharePoint has gone from an interesting concept to a key component of many IT deployments. This chapter explains how SharePoint data is stored and distributed and gives you detailed coverage for protecting and restoring your SharePoint data with DPM.

Chapter 10: Protecting Virtual Servers. Virtualization is a hot new technology that many organizations are either implementing or evaluating. DPM now includes the ability to protect virtual machines hosted by Microsoft Virtual Server. This chapter shows you how virtual machines can be protected and restored.

Chapter 11: Protecting Workstations. No matter how you deploy and manage your organization's desktops and laptops, sometimes they contain data or applications that need to be protected at their current location. This chapter shows you how to use DPM to protect these critical workstations.

Chapter 12: Advanced DPM. This chapter takes you even deeper into the configuration and management of System Center Data Protection Manager 2007. This is also where we step aside from talking about DPM by itself and explore how DPM interacts with the rest of your IT environment.

How to Contact the Authors


We welcome feedback from you about this book or about books you'd like to see from us in the future.

You can reach Devin via email by writing to deving@3sharp.com. You can reach Ryan via email by writing to ryanf@3sharp.com.

Devin's blog, (e)Mail Insecurity, focuses on Exchange, data protection, Windows, and security. You can find it at http://blogs.3sharp.com/Blog/deving/. Ryan writes about a variety of topics from a systems administration point of view. His blog is at http://blogs.3sharp.com/blog/ryanf/.

The authors have established a website just for the book. This site will include any DPM-related content they post to their blogs, updated essays or discussions, and user-contributed material (such as scripts) that will complement the book. Find it at http://www.masteringdpm.com/.

Sybex strives to keep you supplied with the latest tools and information you need for your work. Please check the website at http://www.sybex.com, where additional content and updates that supplement this book, if the need arises, will be posted. Enter "data protection manager" in the Search box (or type the book's ISBN, 978-0-470-18152-2), and click Go to get to the book's Update page.

Chapter 1: Data Protection Concepts


Overview
He who laughs last probably made a backup.
Murphy's Laws of Computing

As a civilization, we humans tend to get attached to our stuff: all flavors, shapes, and sizes of it. A lot of this stuff is probably not worthy of the amount of time and attention we devote to it. When something happens to part us from our stuff, we get in a snit for a while, and then an amazing thing happens: we gradually realize that most (if not all) of our stuff is just clutter, and that our lives are still going on just as nicely as they were before. True, there are some tangible objects that are important and necessary, but in the aftermath of disasters or other life-changing events, we find that suddenly a lot of our stuff just doesn't seem as important as it used to seem. How many PEZ dispensers or fast-food sports-team 32-oz. cups does one person need, anyway?

When we start dealing with the intangible type of stuff we call "data," however, losses can quickly become more catastrophic. Leaving aside threats and dangers such as identity theft, data and information are critical commodities for many businesses. Many workers don't spend a lot of time dealing with data as part of their duties; a barista doesn't need to have a computer to create and serve a 20-oz. triple-shot mocha with whipped cream, and a carpenter spends more time cutting, planing, and hammering than sending email (at least, we hope they do!). For those of us who are information technology (IT) pros or information workers, however, data is the lifeblood of the information with which we deal.

As with physical objects, not all data is of equal value. Consider the relative value of the following types of information. Be sure to think about the impact on your organization if you were to lose access to this data, and the amount of effort required to reconstruct this data if it were missing:

- All of the accounts, passwords, and settings for all users in your organization
- The contents of your mailbox, calendar, and contacts
- The databases supporting your CRM deployment
- The databases supporting your ERM deployment
- Accounting spreadsheets and other financial files on your internal servers

Depending on your organization, some of these types of information will be more critical to your needs than others. As an example, at 3Sharp we would have a minor amount of discomfort if we lost our user accounts; re-creating the list of active user accounts would represent a few minutes' worth of work, and we would collectively spend another handful of hours dealing with issues such as fixing access permissions. However, the loss of our Exchange mailboxes (and the years of contact information, documents, and knowledge stored within the hundreds of thousands of messages) would be catastrophic for us.

When we think about protecting our physical assets, we often spend a lot of time and money to do the job. If you don't believe me, just spend a few minutes thinking about how much time we spend both personally and corporately on such tasks as drawing up and paying for insurance policies and premiums, generating and reconciling various types of inventory, or installing and maintaining access control mechanisms such as burglar alarms, deadbolts, and antitheft systems.

All too often, we don't take the same amount of time to adequately protect our data assets. Backup systems have been a key part of IT infrastructures for decades now, but hardly a week goes by without a new story about a backup failure. The concept of disaster recovery has been pushed by vendors, consultants, authors, and speakers for years, yet few organizations have a complete, tested, trustworthy plan for rebuilding critical IT resources and infrastructure from the ground up. If our data is so important to us (and is harder to replace than physical objects, which at least can be covered by insurance policies), shouldn't we as IT professionals take a corresponding amount of effort to protect this information and data, beyond slapping a tape drive onto a server, loading up some backup software, and calling it good?

Microsoft's System Center line of products is designed to give IT pros better tools for managing their IT infrastructure, and the System Center Data Protection Manager (DPM) 2007 product is an important part of this lineup. If you're interested in finding out how to fully protect your critical IT resources (not just performing backups that you're not certain are really worth their time and expense), then this is the book for you. We're going to dig into DPM and reveal all its secrets for you, and give you practical guidance on putting it to work for you and getting the most out of it.

Before we introduce you to DPM in detail, get together over drinks, and make it your new best friend, we need to ensure that we all understand what we're talking about. Let's take some time (we're not in a hurry, after all) and go over some basic concepts that relate to data protection. Do you know the difference between data protection and backup, for instance?
If not, no fear: you will after you're done with this chapter. Once we've done that, we'll go on to explore some foundation concepts for DPM that you'll need before we move on to other chapters. In this chapter, you will learn to:

- Understand general data protection concepts
- Identify new concepts introduced by DPM
- Identify the components in the DPM architecture

General Concepts
In order to get really excited about all of the benefits that DPM provides (an essential part of making the decision to deploy it), you need to understand how it changes the playing field from the previous generation of products. Because we don't have any way to know what your level of experience in this field is, we're going to start with the basics; we want to ensure that we're all on the same page before diving into the new material. In this section, we're going to have a brief discussion about the following topics:

- An exploration of backups and restores: what they are, how they work, what purpose they serve, what benefits they provide, and the weaknesses they have
- An overview of tape-based backup: why it was originally used, why it's still used, and why it may no longer be suitable for all backup operations

For Experienced Backup Administrators

At the very least, give this section a quick read-through, so that if we're coming at the topic from a different angle, you won't be surprised down the road when things don't line up the way you might expect. If you're already familiar with these introductory concepts, we beg your indulgence; we know we run the risk of boring you. Even if you've got good backup and restore practices down cold, you may discover a few implications or questions that you might not have fully considered.

- A discussion of disk-based backup: what advantages it offers over tape, the potential drawbacks, and how it enables new modes of data protection
- A handy comparison table of the benefits and drawbacks of tape-based backup versus disk-based backup, suitable for showing skeptical coworkers and managers
- An overview of the Volume Shadow Copy Service, one of the key technologies used by DPM as well as by other modern backup suites
- A discussion of data replication: its advantages and disadvantages, as well as a description of common replication mechanisms
- An introduction to data protection: how it's more than just a good backup plan and what additional areas of concern it includes

We've got a lot of ground to cover, so let's get to it!


Backups and Restores

In the abstract sense, "backup and restore" is a simple concept that is nothing more or less than the most basic of common sense: your data is valuable, so make sure you have regular backup copies. This is probably why so many people have spent so much combined time and money over the years (an impressively large amount of money and an even more impressive number of man-hours) to put this simple idea into practice. For many years, backups meant using tape drives (see the "Tape Backup" section for an in-depth discussion of tape drive technology); this strategy worked well when data processing systems were centralized mainframes that held everyone's data (as shown in Figure 1.1), on into the early years of the PC and networking revolution.

Figure 1.1: Centralized backups on mainframes

Yet, what starts as a simple idea time and again ends up eating resources and being a continual pain point in our networks. Once data processing moved onto an ever-increasing number of servers and workstations, a centralized tape backup strategy started to be less than optimal. In all the authors' combined years of systems administration, we think it's fair to say that probably the most hated duty is that of overseeing the backup and restore infrastructure. It's the source of a lot of stress and angst, because inevitably the goal is "make sure everything we care about is backed up all the time, but don't spend any money" (or so it seems). The reality is that backing up everything takes too long and costs too much, so you have to start making compromises in your design, and it's all too easy to be criticized for your compromises. Here are two scenarios we've seen that illustrate the types of design questions and compromises backup administrators must face:

A start-up company has a single file server (shown in Figure 1.2). In the event of some sort of hardware failure, management wants as little work lost as possible. In an ideal world, this would mean some sort of continuous backup strategy, but this has been ruled out as too expensive. At the same time, only a small amount of data on the server changes. A daily backup strategy is designed around a single tape drive, with the following characteristics: a full backup of the file server once a week, written to a separate tape, and a capture each night of the files that have changed since the previous day, written to a new tape each week. A manager takes last week's full backup tape to a safety deposit box. In the event of a simple hardware failure, such as the motherboard or a hard drive, the previous night's version of the data can be restored by retrieving the offsite full backup and then restoring each day's backup on top of it. If something happens to the entire site, such as a fire, the offsite tape copy can be used to restore the data up to the preceding weekend. Tapes are rotated on a three-month basis to limit the cost of new tapes.

Figure 1.2: Backup scenario 1: a single file server
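The weekly-full-plus-nightly-changed-files scheme in this first scenario boils down to two operations: selecting the files that changed since the last capture, and replaying the full backup plus each nightly capture, in order, during a restore. The following Python sketch illustrates that file-level logic only; the function names are our own, and this is not how DPM itself works (DPM synchronizes changed data through the Volume Shadow Copy Service, as we cover later).

```python
from pathlib import Path

def files_changed_since(root, cutoff):
    """Select files under `root` modified after `cutoff` (seconds since
    the epoch): the nightly 'changed files since the previous day' capture."""
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_mtime > cutoff]

def apply_restore_chain(full_backup, nightly_captures):
    """Rebuild a file set from the last full backup plus each nightly
    capture, applied oldest-first; a later capture wins for any file
    that appears in more than one."""
    state = dict(full_backup)  # maps filename -> contents
    for capture in nightly_captures:
        state.update(capture)
    return state
```

The restore chain is why losing a single nightly tape matters: every capture between the full backup and the failure must be applied, in order, to reach the most recent state.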

A utility company (shown in Figure 1.3) has a large farm of database servers that store a complete set of detailed three-dimensional mapping data for their service area: over a terabyte of data. This database is constantly referred to around the clock by service people, who must not only be able to read the locations of the utility's equipment, but must also be able to make changes to entries based on the status of their work orders. Because the data is located in a database, it is difficult to capture just the data that has changed from day to day. While specialized backup applications exist for this type of database server, they conflict with the mapping and work order applications; a native backup interface is used to ensure a nightly backup can be performed without taking the databases offline. However, this interface is slow, so a full dump of the database is written to spare disk storage every night; this copy is then backed up to two tape libraries that each contain two tape drives, for a total of four simultaneous tape streams. Each morning, the previous night's tapes (anywhere from six to ten in a normal day) are picked up by an offsite storage courier, who also returns the tapes created three weeks previously. Archived tapes are not rotated, but are kept in a locked room to comply with retention directives established by the legal department.

Figure 1.3: Backup scenario 2: a database server farm

The previous examples show just a small selection of the difficulties you face when designing the right backup strategy. Rather than spend the entire book talking about the rest of them, we'll summarize these issues for you in Table 1.1.

Table 1.1: Common Issues Affecting Backup Design

Bandwidth. If you're backing up a single server, this isn't much of an issue. However, when you start to consolidate backup operations among multiple servers, the amount of available network bandwidth between the machine that holds the data being backed up and the machine doing the backup can become a bottleneck.

Capacity. Each backup tape (or disk volume, if you're using a disk-to-disk strategy as shown in the second example) has a finite amount of space for data. Once the amount of data that you need to back up is larger than this capacity, you need to use multiple backup volumes (tape or disk), multiple backup devices, or both. Using multiple volumes in turn means that either someone must manually load new volumes into the backup device or a more expensive loader device must be used. Multiple devices and volumes complicate both the backup software and the restore process, as data may now be spread across multiple volumes, and the backup software must have some sort of indexing capability (which in turn becomes sensitive data that must be backed up).

Cost. Backup software, devices, and media are not inexpensive. Generally, the more flexibility or capacity you need, the more you can expect to pay. While you can find bargains, they're generally lower in capacity or quality.

Location. Most people think only of network bandwidth here, but there's a lot more to consider. When the data to be backed up lives on another machine, you need to answer a lot of questions. What user account is the backup system using, and does it have access to the data? If the destination is a user workstation, is the user logged off, or are critical files (such as financial data) being held open? If the destination is a server, can a single user lock data so that it cannot be accessed by the backup process, or are backups performed with a technology that can bypass locks? Early backup designs at 3Sharp were notorious for failing to capture a specific set of highly important files that were often held open by a user application.

Metadata. This is data that is not technically part of the data to be backed up, but is related to it in some important way. Examples include access control lists (ACLs) on Windows NTFS volumes, 8.3 filename mappings on shared folders, the backup database indexes used by high-end backup solutions (as mentioned in the Capacity entry), and system state configuration. If you're backing up database or mailbox data, this can also include information from relevant directory services; while it's not directly related to the primary data being backed up, the primary data is useless without this secondary data.

Reliability. Different backup technologies have varying rates of reliability, and there is not always a direct correlation with price; older devices and technologies that survive the market usually do so because they've proven to be trustworthy. The reliability of both the devices and the media must be established separately; when multiple vendors provide the same technology, there may be a marked difference in reliability between their offerings. Ultimately, the only defense against unreliable media is a regular program of testing the fresh backup copy during the backup process, which in turn reduces the time available in the backup window. Media can also go bad during storage or thanks to rough handling, which can diminish the chances of a successful restore.

Security. As mentioned in the Location entry, controlling which user accounts are used (and the access granted to them) is an issue. However, Windows provides specialized access rights for backup and restore operations that bypass many of the typical access restrictions; accounts with these rights are valuable targets and deserve increased scrutiny. Another issue is whether the backup software encrypts protected data or otherwise performs some sort of access control, or whether it allows anyone listening on the network (or anyone who can get their hands on an archived backup volume) to retrieve sensitive data.

Service-level agreements. While not a technical issue, this is often one of the key driving factors in the backup system design. A service-level agreement (SLA) is essentially a commitment to perform a given backup or restore operation within a certain timeframe; it allows users to know that in the event of a data outage they will have to wait no longer than a well-defined period before getting access to their data again. Quicker SLAs make users and managers happier, but demand more of the backup system and administrators; they must be established with a realistic eye toward the limitations of the technologies used and take into consideration the amount of stress the backup administrators will be under during an emergency. Good SLAs also define the priority level for each type of data and give extra time allowances for the restoration of lower-priority data when higher-priority operations are waiting.

Speed. Backup technologies operate at varying speeds. Often, backup operations must take place within a limited time window during off-peak hours, so backups must be performed as quickly as possible within the budget. Faster technologies and devices tend to cost more, depending on the underlying media.
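Several of these issues (bandwidth, capacity, speed) come together in one simple feasibility question: does the nightly job fit the window? A back-of-the-envelope sketch (our own; the throughput figures in the example are assumptions, not vendor specifications):

```python
def fits_window(data_gb, streams, mb_per_sec_per_stream, window_hours):
    """Estimate whether a backup job fits its window, ignoring
    compression, media changes, and setup time."""
    # Aggregate throughput across all simultaneous streams, in GB/hour.
    throughput_gb_per_hour = streams * mb_per_sec_per_stream * 3600 / 1024
    hours_needed = data_gb / throughput_gb_per_hour
    return hours_needed <= window_hours, round(hours_needed, 1)

# Scenario 2's four tape streams against a terabyte of database dumps,
# assuming 60 MB/s per stream and an 8-hour nightly window.
ok, hours = fits_window(1024, 4, 60, 8)
```

A single slower stream against ten times the data fails the same check by a wide margin, which is exactly when multiple devices, a bigger window, or disk staging enters the design.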

Beginning in Windows NT and moving forward through Windows Server 2003, Windows XP, and Windows Vista, Windows Backup offers the following capabilities:

The ability to back up and restore to both tape and disk, including removable devices that present themselves as disk volumes at the operating system level

The ability to perform a System State backup to capture specialized server-level data such as the Registry, Active Directory databases (if the server is a domain controller), the IIS metabase, and other critical system data repositories

The ability to create Automated System Restore backup sets, which allow a bare-metal recovery capability if a server must be completely rebuilt

The ability to handle basic backup and restore operations across the network

While Windows Backup goes away in the upcoming Windows Server 2008 (formerly codenamed "Longhorn" Server), it will be replaced by the new Windows Server Backup feature, which offers most of the same functionality and adds a number of nice new features.
Using Windows Backup

Don't automatically turn your nose up at Windows Backup. True, it lacks a lot of the bells and whistles found in more sophisticated and costly packages, such as the ability to schedule operations within the program; you must use the Scheduled Tasks tool provided with Windows or some other third-party utility. However, it covers the basics and can be used to surprising effect. In fact, Microsoft has often used Windows Backup within its own IT infrastructure to help perform daily backup operations on mission-critical datasets, usually in conjunction with some enterprise backup package. If you're upgrading to Windows Server 2008, or have existing Windows Backup archives among the data you're protecting, you may be interested to know that the Windows Server Backup replacement will not read the Windows Backup .bkf format. Instead, you'll need to download a free add-on utility, the Windows NT Backup-Restore Utility, from Microsoft Download.

Tape Backup

We all know the routine with tape backups:


Identify the data that we need to protect by backing it up.

Configure the necessary backup filters, schedules, and media.

Rotate the tapes so that we reuse tapes containing older data we no longer need.

Archive the old tapes we still need for retrieval or historical purposes.

Order more tapes to replace the ones that have worn out.

No matter how boring the daily backup routine is, tapes have been with us for a long time; we know the technology, and we know the associated routines. We do our backups, validate the media, test the consistency of our restore processes (at least we hope we all do), and we grumble the whole time. Most of the administrators we know agree that protecting data is one of the most important parts of their jobs. These same administrators also all say it is one of the least appealing parts of their jobs. Tape backups have long been the primary choice for enterprises wanting to protect their data. Until recently, no other method offered the flexibility or affordability of tape backups. This affordability, coupled with a thirty-plus-year history as a known and trusted solution, has kept tape entrenched, while significant advances in tape technology and backup software have kept pace with the demands for data protection. Tape backups have historically offered some important benefits to the typical enterprise environment:

Reliability. Tapes are a known and trusted technology; you've probably already got some sort of tape backup solution in your organization. Tape backup has been around for a long time (since the era of mainframes, computer technology's own age of dinosaurs), and most companies have their tape backup and archival routines firmly in place.

Portability. Tapes are easily transportable, making it easy to ensure that a copy of critical data is available in the event of a catastrophe. As mentioned in our second backup example, a wide variety of data storage courier services will come to your location to pick up and drop off tapes.

Scalability. The scalability of a tape backup system is limited by only two factors: the number of tapes you have and the number of drives you have. With the addition of tape libraries and robots, all of the cumbersome tape handling and labeling tasks can be automated when required in larger organizations.

Cost. Historically, tape backup was an inexpensive method of protecting data. However, with the cost per gigabyte of disk space dropping rapidly, tape technologies have been hard-pressed to keep up with storage costs, even when factoring in features such as compression support.

Of course, no technology is perfect; tape backup solutions have their share of headaches and drawbacks:

Speed. Tape backups for larger enterprises can take up a significant amount of time, possibly affecting necessary services on the machines being backed up.

Management. In many larger environments, compensating for the length of time required to perform backups has led to purchasing multiple tape devices. This leads to very complex backup and restore scenarios, due to the additional overhead of managing multiple devices and, in the case of restores, of collecting all of the correct media.

Retention. In the long term, tape media may degrade, rendering blocks of data corrupt or missing and making the tape unsuitable for a restore operation. The risk of tape failure increases over the lifetime of the cartridge; all tape models have a limited number of read/write cycles before they must be replaced.

Testing. Testing the restore operation of a tape backup system can be complicated and, due to time restrictions in some environments, may require the purchase of additional tape devices, which can be expensive.

Incompatibility. Tape technology has changed significantly over the years. In enterprises that have existed for a significant length of time, archives on several different types of backup media require keeping devices capable of reading each format.

Tape backups have incorporated several different methods for rotating media. Typically, a company will do one full backup of its data each week and a differential backup for daily backup needs. Once a month, the company may archive that month's data offsite, retiring that media from rotation to keep a long-term copy for regulatory or corporate policy reasons. This scheme is commonly known as the Grandfather-Father-Son method, shown in Figure 1.4.

Figure 1.4: A common tape rotation
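As a sketch of how the rotation in Figure 1.4 might pick a tape for a given night (the calendar conventions here, Friday fulls and a first-Friday monthly, are our own assumptions rather than a fixed standard):

```python
from datetime import date

WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def gfs_tape(day: date) -> str:
    """Pick the tape set for a given night's backup. Assumed conventions:
    fulls run on Fridays; the month's first Friday produces the monthly
    'grandfather' tape that retires to the archive; other Fridays use
    rotating weekly 'father' tapes; weeknights reuse daily 'son' tapes."""
    if day.weekday() == 4:                        # a Friday: full backup
        if day.day <= 7:                          # first Friday of the month
            return "grandfather-" + day.strftime("%Y-%m")
        return "father-week%d" % ((day.day - 1) // 7 + 1)
    return "son-" + WEEKDAYS[day.weekday()]       # daily tape, reused weekly
```

The son tapes are overwritten every week, the father tapes every month, and the grandfather tapes never, which is what limits media cost while preserving long-term copies.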

There are several other common tape rotation schemes, but discussing them is outside the scope of this book.
Labeling Your Backup Media

It never ceases to amaze me how many organizations fail to adequately label their backup media. Imagine going to a library and finding that only every third book had the title and author on the cover and spine. Be sure to label and track all of your media.

Disk Backup

In the past, the thought of not having to worry about tape rotation (or the long restore times) has been known to make many administrators weep from sheer joy. Their jubilation, however, was short-lived, crushed under the heel of cost. These sad, deflated administrators went back to their normal tape backup routines, never realizing that the promised land of their proposed disk solution would almost certainly have presented its own difficulties.

Disk-based backups have historically been available for very high-end systems that demanded extreme levels of data availability. These solutions, used when the data was extremely critical, replicate the data from one disk system to another using a block-level copy strategy that makes real-time, synchronous updates. This technology is functionally equivalent to a RAID-1 mirror, but was often used to ensure that a live copy of the data was stored in another data center. These solutions were incredibly expensive, above and beyond the direct cost of the disk media, placing them out of the grasp of even most enterprise-size organizations.

The price of hard disk technology has dropped rapidly in the last several years, and in response disk-based backups have started to appear in more organizations. One key difference between these solutions and the block-level mirroring described previously is that there is no need for the copying to be synchronous; an asynchronous process is completely sufficient, because the organization's own backup process is the primary consumer of the copied data, rather than user requests or high-demand production applications.

Disk-based backups have become increasingly popular when the amount of data to back up overwhelms the available backup window, or when restoration service-level agreements (SLAs) demand a faster restore time than is possible with tape. Because tape is a serial medium, all of the data written to the tape before the desired set of data must be spooled through; disk, on the other hand, is a random-access medium that gives the restore process direct access to any data that needs to be restored. Disk-based backups provide a number of benefits to an organization:

Performance. Disk-based backup solutions tend to offer faster read and write times than their tape counterparts, thanks both to inherently faster read/write speeds and to random access. Although most disk systems are focused on increasing I/O performance, disk-based backup solutions tend to have modest performance requirements, allowing the use of less expensive drives with lower power and cooling specifications.

Availability. In the event a restore operation needs to be performed, there is no need to physically locate the correct media; instead, you simply specify the data to be restored. Where the archive data lives is determined by your organization's own management policies; it can be located on direct-attached storage (DAS), network-attached storage (NAS), or even some sort of storage area network (SAN) technology, making it easy to locate the corresponding backup disk volume.

Lifetime. Disk, like tape, has advanced significantly over the years; today's high-end server-level drive interfaces offer important performance, feature, and reliability increases (and even workstation-level drives have gotten faster, smarter, and more trustworthy). However, disk interfaces and architectures are generally backward compatible with previous drive generations. Even when a new interface technology is in use, older interfaces can be supported in parallel with little cost or effort. This level of support for older standards makes it considerably easier to transfer archives to newer drives when needed.

Familiarity. Disk technology, also like tape, has been around for a long time and is well understood by the IT community. Every computer has a hard drive; the principles and techniques of hard drive management are readily available and widely mastered. Even the care and feeding of advanced disk configurations, such as RAID arrays, has become a commonly available skill in the wake of inexpensive RAID controller solutions intended for small and medium workgroup-level servers.
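The serial-versus-random-access difference mentioned earlier is easy to quantify with a rough sketch (the speeds are illustrative assumptions of ours, not vendor figures):

```python
def restore_start_delay(offset_gb, tape_mb_per_sec=120, disk_seek_ms=10):
    """Time before the first byte of wanted data is reachable: tape must
    spool past everything written before it; disk seeks straight there."""
    tape_seconds = offset_gb * 1024 / tape_mb_per_sec
    disk_seconds = disk_seek_ms / 1000
    return tape_seconds, disk_seconds

# Data sitting 100 GB into a tape, at an assumed 120 MB/s spool rate:
tape_s, disk_s = restore_start_delay(100)
```

Under these assumptions the tape drive spends roughly fourteen minutes just reaching the data, while the disk is there in milliseconds; the gap only widens as archives grow.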

However, disks have their own drawbacks; they aren't right for all situations, and there are definitely indicators that they may not be right for you:

Lack of portability. Although tapes are easily moved or shipped from one locale to another, trying to do the same thing with disks can present a difficult challenge. It's easy to take a disk, place it in a padded envelope, and ship it via your favorite overnight service, but we don't recommend that you do this; antistatic and shock precautions are vital to ensuring the survival of the data at the other end. Simply matching drive interface technologies is often not enough; the lower-level formats produced by controllers of different makes and models can be incompatible. If the disks are part of a RAID array, this problem becomes even more pronounced; there may be difficulties in getting another RAID controller to recognize the array.

High initial cost. The initial cost of disk as media is higher than the cost of an equivalent amount of tape. Over time, disk's higher reliability and support for multiple overwrites make it the clear winner in the dollar-per-gigabyte comparison, but the up-front costs of disk controllers, enclosures, and hard drives can be harder for companies on a strict budget to justify.

Increased power and cooling consumption. Power and waste-heat management is a critical part of modern data center operations. Disks are one of the biggest power consumers in a computer, and they contribute a significant amount of the system's total waste heat. Because most of our servers and computers are always powered up, the addition of more drives to the backup solution (or to associated NAS and SAN devices) can have a big impact on the total power and cooling budget.

Tape Versus Disk

Traditionally, enterprises wishing to protect their data from loss relied solely on backup to magnetic tape. In larger environments, the backup process can take eight or more hours. Additionally, the tapes can be quite expensive, take up storage space, and must be changed often. On the upside, tape media is portable, allowing for offsite archival of data. So how do you know which media is best for you to use in your backup deployment? In Table 1.2 we compare the relative advantages and disadvantages of tape and disk.
Table 1.2: Comparing Tape and Disk Backups

Tape advantages:

Tape cartridges are portable and require relatively little storage space, which lends itself well to offsite archival.

Tape drives and cartridges are available in a number of formats, capacities, and capabilities; it is easy to find a combination that is right for your organization.

Tape is a well-known technology with an established history and record of trust; most administrators have ample experience with it.

Tape is probably already present in your environment; it represents a significant investment in materials, experience, and archived data.

Tape formats allow for decent levels of compression and storage capacity.

Tape disadvantages:

Tape data must be refreshed or moved from one format of tape to another over long periods of time, and tapes in constant use should be replaced regularly.

Tape is an expensive storage medium when the total cost (dollars per gigabyte) is considered; drives, cartridges, and replacement drives and media must all be factored together.

Tape best practices are not always followed because they increase the time and cost of backup efforts, so many administrators don't know how to minimize and handle media failures.

Tape drives usually require an additional interface such as SCSI or SATA, as well as specialized software applications and agents.

Tape is serial storage; backup, verification, and retrieval are all slow.

Disk advantages:

Disk data storage can be reliable for long periods of time, especially if the drives are held to a low duty cycle.

Disk provides both random-access storage and higher data transfer rates and throughput; backup and restore operations are significantly faster.

Disk is a very familiar technology for all administrators, so using it for backups doesn't require any additional skills to be learned.

Disk cost has dropped significantly in recent years, making it a clear winner in the dollars-per-gigabyte metric.

Reading archived disks doesn't require special hardware or software, as disk interfaces and formats are generally supported for many years.

Disk disadvantages:

Disks include integrated electronics and take more storage space than tape cartridges.

Bringing additional drive capacity online usually requires increasingly expensive infrastructure such as RAID controllers and arrays.

Because disk is so familiar, it may be harder for administrators to develop the proper habits and procedures for volumes used for backup.

Disk systems require more electricity and generate more heat than tape systems do.

Disk restores require all volumes of the relevant disk array to be online at the same time, which may require a compatible array or server configuration to be available.

The recent trend in most enterprise environments is to use a combination of disk and tape to provide full protection:

Disk offers several advantages that make it a superior choice of media for immediate and short-term backups; its speed allows the backup routines to run more quickly and restore servers to production in less time, while permitting quick, random-access restoration of data when it is immediately required.

Tape's portability makes it ideal for archival (onsite or offsite) for longer periods of time, such as years; the data can be transferred from disk to tape on the backup server, where it won't impact production use and can be scheduled when administrators are present to oversee operations.

There are several ways that disk and tape can be combined into a single backup solution. The most common deployment option is known as disk to disk to tape (D2D2T), which is shown in Figure 1.5.

Figure 1.5: Disk to disk to tape
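One way to picture the staging flow in Figure 1.5 (a toy model of our own, not DPM's or any product's actual behavior): each nightly backup lands on the disk stage, and generations that age off the stage either move to tape or are discarded:

```python
def d2d2t_step(disk_stage, new_backup, tape_archive,
               keep_on_disk=3, archive_fulls_only=True):
    """Run one nightly D2D2T cycle: stage the new backup on disk for
    fast restores, then move anything past the disk retention count
    to tape (fulls only, if requested)."""
    disk_stage.append(new_backup)
    while len(disk_stage) > keep_on_disk:
        aged = disk_stage.pop(0)                 # oldest generation ages off
        if not archive_fulls_only or aged["type"] == "full":
            tape_archive.append(aged)            # long-term copy on tape
        # otherwise the incremental/differential is simply discarded
    return disk_stage, tape_archive
```

The most recent generations stay on disk for immediate restores, while only fulls accumulate on tape, which matches the common practice of letting incrementals and differentials expire with the disk stage.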

D2D2T uses one or more disk volumes to perform the initial backup of the live data from the production resource. This backup copy can be generated using a multitude of tools, including tools native to the operating system, such as Windows Backup, or the same software that handles the tape archival. The disk backup can be on a local volume on the protected server or be located across the network on a central machine. Once the disk copy has been created, it is then written to tape at some point; until the next disk-based backup is created, the previous backup set is available for immediate use in restore activities. In some configurations, multiple generations of disk backups are held on the archive volume and are written to tape only after a suitable time has passed; perhaps only a portion of the data, such as full backups, is archived to tape, while incremental and differential backups are simply removed from the backup volume.

One of our favorite real-world examples of a D2D2T solution can be found in Microsoft's own internal deployment of Exchange Server 2003. Microsoft's Information Technology Group (ITG) faces a lot of unique challenges not seen by many other IT departments of equivalent size; two are particularly relevant for their backup solution:

Dogfooding. The general Microsoft product development strategy dictates that ITG must use prerelease builds of all major Microsoft applications in their production network as part of the final testing and acceptance trials, a process known as "eating their own dog food." Because they trust production-level data to preproduction software builds, having a reliable backup solution is an absolute necessity.

Heavy usage. Microsoft's users rely on their email to a degree you have to see to believe. As a result, while there are definite peak times on their mailbox servers, there is no good time for downtime such as that caused by typical backup-related outages.

Combined with their aggressive mailbox restore SLAs, these challenges led ITG to deploy a D2D2T solution for backing up their Exchange 2003 mailbox clusters. What's most surprising is the software they used to produce the disk-based backup: Windows Backup. Windows Backup, when run on an Exchange server, provides support for backing up and restoring storage groups as well as individual mailbox databases. ITG's solution, shown in Figure 1.6, uses a Windows Backup instance on each mailbox cluster to back up the relevant mailbox databases to a SAN-based disk volume. This volume is then mounted on a separate server, which is loaded with the backup agent used by their tape archival solution; the backup files created by Windows Backup are copied to tape on a daily basis.

Figure 1.6: ITG's Exchange 2003 backup solution

If ITG needs to perform an immediate restore of a mailbox database, mailbox, or storage group, they can use Exchange's Recovery Storage Group feature to quickly recover the selected data to a live server from the local disk copy of the backup files. From there, they can then move the relevant data back to the online resources. At the same time, they have the long-term data protection afforded by tape, combined with offsite archival. You can read all of the details of their solution at: http://www.microsoft.com/technet/itshowcase/content/exchbkup.mspx.
The Volume Shadow Copy Service

The Volume Shadow Copy Service (VSS) is a feature first introduced into the Windows server operating system in Windows Server 2003. VSS is designed to create multiple shadow copies, more commonly known as snapshots, of one or more volumes. A snapshot is a copy of a set of files and directories as they were at a specific point in time.

By exploiting the ability for the operating system and VSS-aware applications to create multiple snapshots, administrators can produce point-in-time images of critical data as a complete set, ensuring a consistent picture of the data at the time the snapshot was created. These snapshots can then be read by backup applications, allowing this same consistent view of the data to be transferred to long-term storage media while full access continues on the production filesystem. VSS nicely sidesteps one of the common irritations of conventional backup programs, which can often be negatively affected by users or applications that open files with an exclusive lock. These files can be accessed only by the process that opened them and cannot be read or manipulated in any form by other processes, including backup systems. Exclusive file locks are a constant headache for backup administrators; not only are they a nuisance, they can jeopardize the viability of the entire backup if the locked files (which are skipped by the backup process) are part of a larger set of data. Imagine the havoc that would be caused by the two following scenarios:

An Exchange mailbox database backup captures the .STM database file but not the matching .EDB database file. While the loss of the .STM file can be compensated for, the primary database structure is held in the .EDB file.

A SQL Server database backup captures the transaction log file but not the actual database file.

We realize that these examples are not common: if you're doing Exchange or SQL Server backups, you're almost certainly not trying to do them from the filesystem level against live targets. At the very least, you've taken the protected resource offline before doing this sort of offline backup. However, we have seen examples of just these types of mishaps. For backups against live production targets, you're almost certainly using supported backup interfaces so that the backup software handles all relevant locks for you, skipping the unpleasantness of this file-based approach. Nevertheless, we brought up these scenarios to make the point that file locks can be more than just a nuisance; they can cause real data loss in your applications. VSS is the answer; a VSS snapshot works at a lower operating system level than the typical filesystem access request, and it creates point-in-time copies of all files on the protected resource. This in turn permits backup applications to use the snapshot to ensure they have a complete, consistent copy of all relevant files in the dataset, whether or not the application and backup system share a common custom API. Table 1.3 provides an overview of the various components of VSS.
Table 1.3: Volume Shadow Copy Service Components

Volume Shadow Copy Service. The service containing the components necessary to create consistent snapshots of one or more volumes.

Requestor. An application, such as a backup application, that requests that a volume shadow copy be taken.

Writer. A component of an application, such as SQL Server or Exchange Server, that stores persistent information on one or more volumes participating in shadow copy synchronization. System services like Active Directory can also be VSS writers.

Provider. The component that creates and maintains the shadow copies.

Storage volume. The volume on which shadow copy storage files are placed by the system copy-on-write provider. (Note that the storage volume does not need to be the same as the source volume.)

A shadow copy snapshot is created by the following process, illustrated in Figure 1.7:

1. The requestor queries the Volume Shadow Copy Service for a list of the writers and gathers the metadata to prepare for shadow copy creation.

2. The writer creates an XML description of the backup components for the Volume Shadow Copy Service and defines the restore method; the service then notifies the writer to make its data ready for a shadow copy.

3. The writer prepares the data via different methods depending on the data type: completing open transactions, rolling transaction logs, and flushing caches, for example. The writer notifies the Volume Shadow Copy Service when the data is prepared.

4. The "commit" shadow copy phase is initiated by the Volume Shadow Copy Service.

5. The Volume Shadow Copy Service halts I/O write actions on the volume by telling the writers to quiesce their data and freeze requestor writes for the duration required by VSS to create the snapshot. During this time, I/O read requests are still allowed; they will not affect the consistency of the data. The application freeze is not allowed to exceed 60 seconds. VSS also flushes the file system buffer to ensure file system metadata consistency.

6. VSS tells the provider to create a shadow copy. The maximum time limit on this is 10 seconds.

7. After the shadow copy is created, VSS releases the writers from their frozen state, and all queued write I/Os are completed.

8. The writers are queried by VSS to confirm that the write I/Os were successfully held.

9. If a writer reports that the write I/Os were not successfully held, the shadow copy is deleted and a notification is sent to the requestor.

10. If the I/Os were not successfully held, the requestor can restart the process from the beginning or notify an administrator.

11. In the event of a successful copy, VSS gives the location information for the shadow copy back to the requestor.

Figure 1.7: The VSS snapshot process

Although the underlying VSS architecture may take a little bit of work to understand, rest assured that Windows and your applications are doing the hard part. The benefits of VSS-aware backups are clear; more and more application and backup vendors are modifying their products to support the use of VSS. A well-written application will hide this complexity from you but make the job of successfully protecting your data (not to mention being able to restore it) much easier.
More about VSS

VSS was first included with Windows XP. The primary difference between Windows XP's VSS implementation and the Windows Server VSS implementation is that Windows XP can support only non-persistent snapshots, where only one snapshot or shadow copy can exist at a time. Persistent snapshots, on the other hand, permit multiple snapshots to exist simultaneously, giving servers the ability to store multiple point-in-time copies which can then be individually accessed by applications and end users (see Chapter 5, "End User Recovery," for more details). Windows Server 2003 allows VSS-aware applications to create up to 64 simultaneous snapshots per volume.

Replication

In the discussion of disk-based backup, we briefly mentioned the concept of data mirroring, which is an example of replication. Unlike a backup, which takes a copy of the data as it exists at a specific point, replication is an ongoing process that keeps a copy of the data synchronized with the original data source.

Of course, it's not that simple (what is?); there are multiple variants and options:

Synchronous or continuous replicas (shown in Figure 1.8) write updates and changes to the copy at the same time as they are written to the primary data source. This type of replication requires ample bandwidth between the two replicas and is usually quite a bit more expensive than the alternatives, but both replicas of the data are always up to date; no writes or changes are ever lost.

Figure 1.8: Synchronous replicas

Asynchronous replicas (shown in Figure 1.9) are created at specified intervals, such as every 15 minutes; during each interval, changes to the source are queued up for transmission to the replica at the appropriate time. These replicas are a trade-off between the expense of continuous replication and the potential loss of data; the replication interval is set at a value that represents an acceptable compromise.

Figure 1.9: Asynchronous replicas

Byte-level replicas (shown in Figure 1.10) track all changes in the source at the level of the individual byte. This type of replica requires specialized hardware and software to allow capture of changes at this level of granularity but produces the least amount of replication traffic. They are pretty much unheard of in typical implementations and are usually reserved for very expensive or very important storage installations, such as those used to store critical military data.

Figure 1.10: Byte-level replicas

File-level replicas (shown in Figure 1.11) operate at the file-system level, tracking changes at a file level. At this level, changes are easy to track and can be performed by unsophisticated software, but it's also fairly inefficient (or even impossible) for certain types of data such as Exchange and SQL Server. The change of a single byte within a file will result in the entire file being recopied to the target replica.

Figure 1.11: File-level replicas

Block-level replicas (shown in Figure 1.12) represent a compromise between file-level and byte-level. Almost all forms of computer data storage, whether on an NTFS filesystem or inside an Exchange or SQL Server database, are organized into discrete units known as pages or blocks (usually between 512 and 2,048 bytes). By using a special filter, replication programs can track which blocks have been updated and transmit only those blocks to the target replica. Although the entire block will be transmitted even if only a single byte is updated, most on-disk files consist of hundreds or even thousands of blocks, making this a more than acceptable compromise for almost all situations.
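A minimal sketch can illustrate why block-level replication wins over file-level for small changes: chunk the data into fixed-size blocks, hash each block, and ship only the blocks whose hashes differ. (This is an assumption-laden illustration, not how DPM's actual volume filter is implemented; real filters intercept writes in the storage driver rather than rescanning data.)

```python
# Sketch of block-level change detection: compare 512-byte blocks by
# hash and return only the blocks that changed. Illustrative only.
import hashlib

BLOCK_SIZE = 512

def split_blocks(data):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old, new):
    """Return (block index, block bytes) pairs that differ between versions."""
    old_hashes = [hashlib.sha256(b).digest() for b in split_blocks(old)]
    changed = []
    for i, block in enumerate(split_blocks(new)):
        if i >= len(old_hashes) or hashlib.sha256(block).digest() != old_hashes[i]:
            changed.append((i, block))
    return changed

# Flip one byte in a 2,048-byte (four-block) file: file-level replication
# would resend all 2,048 bytes; block-level resends one 512-byte block.
old = bytes(2048)
new = old[:1000] + b"\x01" + old[1001:]
delta = changed_blocks(old, new)
```

Here only block 1 (bytes 512 through 1023) differs, so only that block would cross the wire.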

Figure 1.12: Block-level replicas

Replication is only tangentially useful for traditional backup and restore processes, and it is commonly seen more in availability solutions such as Windows Distributed File System (DFS).
Data Protection

We've now set the stage to have a meaningful discussion about the concept of data protection, which is a merging of technologies between mere backups and high-end disk mirroring solutions. Data protection is more than just taking the occasional (or regular) copy of your data "just in case." All of the capabilities we've just discussed have their place in a full Windows-based data protection solution:

Backup and restore: These capabilities are important for ensuring business continuation.

Tape backup: Tape archives help ensure long-term storage capability balanced with portability.

Disk backup: Disk archives provide rapid short-term restoration capabilities and ensure shorter backup windows.

Disk to tape: D2D2T provides a reliable transition between short-term and long-term archival media.

VSS: VSS provides an underlying mechanism to enhance data consistency without compromising service or data availability.

Replication: Replication allows automatic capture of critical data sources balanced by only a small amount of data loss if a server or site is lost.

These benefits are a good starting point, but by themselves, they don't offer much beyond a modern backup solution. A true data protection solution, however, provides some key benefits that these capabilities alone do not:

A set of consistent, repeatable management capabilities, usually implemented through a policy-based configuration engine

The ability to consistently define and apply protection schedules across multiple data sources

A single, centralized interface to protect multiple types of data sources in a consistent manner

Happily, DPM gives us these benefits and more, as we'll see in the next section.

DPM Concepts
Now that we've discussed data protection in the abstract, let us move on to an examination of how these concepts are implemented in DPM. DPM provides a combination of data replication and archival functionality. It incorporates many features commonly seen in advanced backup applications, such as D2D2T and centralized management for multiple data payloads such as Microsoft SQL Server, Exchange Server, Microsoft Virtual Server, and Microsoft SharePoint Services. Unlike traditional backup solutions, however, it combines long-term tape archival with seamless short-term disk replication and storage management, as shown in Figure 1.13.

Figure 1.13: A typical DPM solution

Before you can design and implement a DPM solution, there are several building blocks you must understand:

The creation of replicas The DPM storage pool The use of protection groups The creation of recovery points The use of end-user recovery

We will examine these concepts in further detail.

Replicas

The feature that most distinguishes DPM from a typical backup application is its integrated replication engine. When installed into an organization, DPM creates replicas of protected data sources and performs regular asynchronous replication with these sources. All further protection operations take place on these server-side replicas, allowing the original data sources to enjoy continuous uptime. DPM can use several replication strategies, depending on circumstances and the type of data being protected. Each of these methods has implications on the use of available storage space, which we will cover in more detail in later sections.
Storage Pool

In some data backup setups, the tape-rotation scheme is enough to make you cringe at the amount of space that goes unused during the daily backups. At some point, you might even develop a complex scheme of doing daily backups to disk and then archiving the results to tape. This is usually more trouble than it's worth. Your backups may not even be completely reliable because you're backing up the backups. You could move from that system to a large robotic tape library, with a third-party backup solution that offers some nice features, but that solution is expensive and slow. This is a typical example of the kind of balancing act IT professionals have to perform time and again.
Table 1.4: DPM Replication Strategies

Express full backup (all data types): This method is used both to create the initial replicas when a new data source is first added to DPM protection and to provide regular updates to an existing replica, usually corresponding with the creation of a new VSS snapshot. When this is performed against an existing replica, DPM uses block-level replication to minimize the replication traffic.

Replication (file data, databases, virtual machines, system state): The Protection Agent monitors file writes via a volume filter and performs block-level replication at defined intervals to capture the changes in protected files and folders.

Replication (Exchange): The Exchange storage group transaction logs are captured at regular intervals and copied to the DPM server. With these logs, the DPM server can perform log replay to any specified time and perform data recovery.

Replication (SQL Server): SQL Server transaction logs are captured at regular intervals and copied to the DPM server. With these logs, the DPM server can perform log replay to any specified time and perform data recovery.

Fortunately, DPM removes these problems, allowing us to focus on the more important matters at hand, such as which of our servers has enough spare resource overhead to take on the task of functioning as our Quake server.

The storage pool is a key DPM concept. It is a collection of the available disk volumes on which DPM stores all of the data associated with protected resources, such as shadow copies, replicas, and transfer logs. A DPM server requires at least two physical disk volumes: one for the operating system (OS) and program files, and one (or more) for the disk pools. DPM will not add any disk that contains operating system or DPM files to the pool.

The main advantage the DPM storage pool provides is reduced administration. By default, DPM will manage the storage pool for you, taking care to reserve space for protected resources as required. When you get low on space, all you have to do to expand the storage pool is add another volume to the pool; DPM will automatically allocate the new space for protection. If you like to fine-tune things manually, it is possible to change the allocation yourself. Generally, you want to change the storage recommendations only if your storage pool space is mostly utilized and you can't quickly add new volumes after the fact.
What Is a Physical Disk Volume, Anyway?

When we refer to a physical disk volume, we mean any disk volume that shows up as a separate device in Disk Management. This includes direct-attached disks, SAN LUNs, iSCSI LUNs, and other disk types.

DPM recommends and allocates storage pool space for a protection group based on the amount of data to be protected. The replica, snapshots, and transfer logs are all stored in allocated space within the storage pool, and Microsoft recommends not changing the default allocations unless they do not meet the needs of your organization. These recommended values are not static, however, and may be modified according to the following limitations of the three protection components:

Replica and shadow copies. Allocation for the replica and shadow copies may be increased, but not decreased. If you need to increase this allocation, we recommend that you first verify that the increase is going to be a consistent and ongoing change. If not, try to find a way to reduce the amount of data being protected before reallocating the space.

Synchronization log. Space for the synchronization log can be increased or decreased, but bear in mind that it resides on a disk volume on the protected server. Changing this value may negatively impact storage space on the server you're trying to protect.

Transfer log. The allocated space for the transfer log cannot be directly modified. DPM, however, does adjust the allocation based upon the other allocation settings. Due to this, the changes you make to the other components will affect the transfer log allocation.

DPM also includes tools to monitor the utilization of the storage pool and generate reports against the usage data. These tools are important aids for capacity planning.
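The pool rules just described — volumes holding OS or DPM files are excluded, space is reserved out of the pool as resources are protected, and the pool grows by adding volumes — can be captured in a small, hypothetical model (the class and method names are invented for illustration, not part of any DPM API):

```python
# Hypothetical model of the DPM storage-pool rules described above.

class StoragePool:
    def __init__(self):
        self.volumes = []        # (name, size_gb) tuples in the pool
        self.allocated_gb = 0    # space reserved for protected resources

    def add_volume(self, name, size_gb, has_os_or_dpm_files=False):
        # DPM will not add any disk containing OS or DPM files.
        if has_os_or_dpm_files:
            return False
        self.volumes.append((name, size_gb))
        return True

    @property
    def free_gb(self):
        return sum(size for _, size in self.volumes) - self.allocated_gb

    def allocate(self, size_gb):
        """Reserve pool space for replicas, shadow copies, and logs."""
        if size_gb > self.free_gb:
            return False         # pool exhausted; add another volume
        self.allocated_gb += size_gb
        return True
```

When `allocate` fails, the remedy matches the text above: add another eligible volume to the pool rather than shrinking existing allocations.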
Protection Groups

As Windows administrators, we should all be familiar with the concept of groups. We see groups and containers all over the place in a Windows network:

Standalone Windows servers use local groups to hold collections of users; Active Directory uses security groups and distribution groups to hold collections of users, computers, contacts, or other types of objects. You can use these groups as placeholders when assigning permissions, determining recipients of an email, and in many other situations.

Active Directory also uses a special type of LDAP object called an organizational unit (OU) that is tied to a specific point in the LDAP hierarchy. Unlike a group, OUs cannot be used to assign permissions, but they can be used as administrative boundaries to delegate management permissions and apply Group Policy Objects.

Exchange Server 2003 defines the administrative group (AG), which acts in a similar fashion to an Active Directory OU. When you delegate permissions to the AG, or create a policy and apply it to the AG, all of the Exchange servers within that AG are affected by the action.

Exchange Server 2003 also defines the routing group (RG), which defines a collection of Exchange servers that have sufficient local bandwidth between them that they can always send messages to each other directly. Two Exchange servers in different RGs will always use connectors and bridgeheads to route messages and never contact each other directly (unless, of course, they happen to be the bridgeheads for their respective RGs).

SQL Server 2000 and SQL Server 2005 define the user group, which allows SQL administrators to easily assign permissions on databases and other SQL objects to multiple users at the same time, just like Windows/AD groups.

By looking at all of these examples (and others too numerous to list here), we see three useful concepts taking form:

Groups act as a multiplier of effort. We take a single action, such as granting permission or defining a policy, on a group; that action is then replicated to all members of the group. This can drastically reduce the time we must put into management.

Groups act to guarantee consistency. Once we know how one member of the group will behave under conditions relating to the group's definition, we know how they all will act. This can radically reduce the effort we must put into troubleshooting.

Groups act to simplify expansion. Once the group is defined, bringing new resources into congruence with existing resources is simply a matter of adding the new resource to the right group. This gives us enhanced protection from configuration errors.

DPM uses these basic precepts of taking several objects and putting them together in a container object so that the same policies may be applied to them. A protection group is an administratively defined group of data sources that share the same protection configuration and schedule. The elements of a protection group are shown in Table 1.5:

Table 1.5: Elements That Define a DPM Protection Group

Members: A member is a data source you want to protect. A single server may have more than one data source. A file server has one or more volumes, each of which has one or more shares. A SQL server has one or more instances, each of which has one or more databases. An Exchange server has one or more storage groups. A SharePoint farm has one or more servers, which can be spread out over multiple tiers. A Virtual Server host has one or more virtual machines.

Data protection method: The method used by the protection group to protect your data. The method can be disk and/or tape.

Short-term objectives: The parameters that control how often recovery points are created, how long they're retained, and when express full backups occur.

Disk allocation: The amount of storage pool space allocated to the protection group.

Replica creation method: This specifies when replication of the protected data to the DPM server takes place. The default option is to have it happen automatically. Additionally, you can schedule it or specify manual replication using removable media.

The biggest key to comprehending why DPM protection groups are so neat is a clear understanding of the term "data sources":

A single file server in your organization may have multiple data sources on it to protect. For example, if you have a file server with multiple volumes, and one volume sees much more activity than the other, you may want the creation of recovery points to occur more frequently on that volume. In that case, the volumes would be members of separate protection groups.

An Exchange storage group would be considered a single data source. If you have multiple storage groups, they could each be members of separate protection groups. Even though storage groups can contain multiple databases, you cannot choose to protect individual databases because each storage group shares a single set of transaction logs.

An individual SQL database also qualifies as a data source. If you have a single SQL server hosting multiple databases with differing protection requirements, then it would make sense to house them in different protection groups.

A SharePoint farm is a single data source, no matter how many servers you have as part of that farm or how many content databases are included. You cannot split the databases or servers within a farm into multiple protection groups.

On a Microsoft Virtual Server host, each separate virtual machine image is a separate data source. You can place virtual machines that are on the same host in separate protection groups to best match the protection needs of the data and applications they host.

Workstations running Windows XP and Windows Vista may have multiple data sources, just like file servers. However, unlike file servers, the various data sources on a workstation must all be protected in the same protection group; you can't split them into multiple protection groups.

One natural application of protection groups is to define a separate group for each type of data source that you are protecting. This is an instinctive choice for many organizations and administrators; mailbox data often has different requirements than database data, and both are to be treated differently than file server or SharePoint data. For ease of administration, though, you may find it simpler to group data by purpose and protection requirements, rather than trying to lump many disparate types together. This technique permits the application of a single protection policy to multiple types of data sources when their protection characteristics and priorities are the same, in turn simplifying the creation and management of your protection policies. For example, if you have a SQL server with multiple databases or instances (as in a hosting environment), your data protection requirements may differ between databases. In this case, narrowing down the requirements to a few groups and putting the databases into the group that represents the best fit for their requirements keeps administration simple. It all comes down to your environment. There are a few restrictions to keep in mind when planning your protection groups:

Data sources can be members of only one protection group.

File shares residing on the same volume cannot be members of separate protection groups.

When you select a folder or share for protection, its children are automatically selected and cannot be deselected.

When you select a location that contains a reparse point (mount points and junction points are two examples of reparse points), DPM will prompt you to protect the target location, but will not replicate the reparse point itself. After recovering the data, you must manually recreate the reparse point.

If you select system volumes or program folders, DPM will not be able to protect the system state of the machine as a separate data source.
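The first restriction above — a data source can belong to only one protection group — is the kind of invariant a management engine has to enforce at configuration time. A tiny, hypothetical sketch (these class names are invented, not DPM's) shows the check:

```python
# Hypothetical sketch enforcing "one protection group per data source."

class ProtectionGroupCatalog:
    def __init__(self):
        self.membership = {}     # data source -> owning protection group

    def add_member(self, group, data_source):
        owner = self.membership.get(data_source)
        if owner is not None and owner != group:
            # A data source cannot be protected by two groups at once.
            raise ValueError(
                f"{data_source} is already protected by group {owner!r}")
        self.membership[data_source] = group
```

Re-adding a source to its existing group is a no-op, but moving it to a second group is rejected; in DPM you would first remove it from its current group.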

The general rule of thumb when designing your protection groups is to use as few as you need; unless you have a specific reason why you shouldn't include a resource in an existing protection group, don't create a new protection group.
Recovery Points

There comes a point in every administrator's life when data recovery needs to occur. It could be due to any number of causes: hardware failure, unrecoverable software issue, a security problem, a natural disaster, or even Godzilla attacking your data center. Whatever the cause, you need to be able to get your data back; otherwise, why are you even bothering with those tedious backups in the first place?

A recovery point is a snapshot that represents the state of data at a point in time. The use of persistent snapshots by VSS and DPM means that there can be more than one version of the data available for restore. Note that recovery points are not tied directly to the underlying VSS snapshots, depending on the payload being protected; Exchange data, for example, creates a recovery point every 15 minutes even though it doesn't perform an express full backup that frequently. Instead, DPM captures the transaction logs and can replay them to duplicate the state of the protected database. Recovery points are specific to a protection group. During the creation of a protection group, you will be asked to specify a protection policy. This means that you will be asked to set the following parameters:

Retention range. This parameter determines how long a snapshot should exist on the DPM server's storage pool. When the age of the data exceeds this value, DPM will transfer it to tape (if the policy permits).

Synchronization frequency. This parameter specifies the synchronization schedule for the replica and controls how often the protection agent will send the block-level updates to the DPM server, up to every 15 minutes.

Recovery points. This parameter specifies how often recovery points are created, up to every 15 minutes. There is always a "Latest" recovery point, representing the last synchronization performed that does not correspond to a defined recovery point.
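Gathered together, the three parameters form the protection policy. A hypothetical sketch (the class and field names are ours, not DPM's) makes the 15-minute floor on both schedules explicit:

```python
# Hypothetical container for the three protection-policy parameters,
# with DPM's 15-minute minimum interval enforced.
from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    retention_days: int                 # how long data stays in the pool
    sync_minutes: int = 15              # synchronization frequency
    recovery_point_minutes: int = 240   # how often recovery points occur

    def __post_init__(self):
        if self.sync_minutes < 15 or self.recovery_point_minutes < 15:
            raise ValueError("schedules cannot be finer than 15 minutes")
```

A policy of `ProtectionPolicy(retention_days=14)` would keep two weeks of recovery points with 15-minute synchronization; asking for a 5-minute schedule is rejected.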

When you create a new protection group, DPM by default creates three daily recovery points: 8:00 AM, noon, and 6:00 PM. Depending on how often data changes in your environment and what level of data loss you deem acceptable, you may want to change this schedule. For example, in a business that keeps mostly to an 8 AM to 5 PM, Monday through Friday schedule, the default values probably wouldn't make much sense; such an organization might want two or three restore points spread through the day on working days.

While protection groups associate settings and schedules with protected data and locations, the recovery process remains blissfully ignorant of this arrangement. Microsoft has made data recovery simple with DPM, while at the same time giving it flexibility found in few traditional backup applications. While in the recovery section of the administration console, data is organized by server. When you choose some protected data to recover, you don't have to know the ins and outs of the protection group it was a member of; you just need to know what data you need and where it should be. DPM automatically populates the recovery points for the data you are trying to recover, so you can select the appropriate point in time to recover.
Why Do I Need Both Synchronization Frequency and Recovery Points?

At first, it seems as though specifying both your synchronization frequency and explicit recovery points is redundant. After all, if you're replicating your data every 15 or 30 minutes, isn't that good enough? Depending on your organization, the answer may very well be, "Yes." For many, though, that's not the case.

When you go to restore data from a DPM protection group member, you will be asked to pick which recovery point you want to use. No matter what schedule you've set, you will always see the "Latest" entry. This entry represents the last replication of the protected member that does not correspond to a recovery point. This is important, as DPM will not permanently store intervening synchronizations. That is, a DPM recovery point is roughly comparable to a traditional full backup; when you restore from a recovery point, DPM doesn't need to take any other replica or synchronization data into account. Extending this metaphor, a synchronization is analogous to a traditional differential backup; you first restore from the full backup (recovery point), then you restore the latest differential backup (synchronization) to get the latest version of the protected data. This will be a lot easier to understand with an example, so let's examine the case of a protection group configured with a 15-minute synchronization schedule and the default 8:00 AM/12:00 PM/6:00 PM recovery point schedule:

At 8:00 AM, the protection agent replicates the changed data to the DPM server. Because this also happens to be a recovery point, DPM creates a new replica to write the data to. For the next 15 minutes, "Latest" and "8:00 AM" will both point to the same replica.

At 8:15 AM, the protection agent replicates the next batch of changed data. "8:00 AM" is still a discrete recovery point, so DPM allocates a new chunk of storage to hold the next replica of the data. This replica can be restored by choosing "Latest."

At 8:30 AM, the protection agent replicates the next batch of changed data. The last set of data is from 8:15 AM, which is not a configured recovery point, so DPM updates that replica with these changes. The "8:00 AM" recovery point is still available, but "Latest" now points to the data from the 8:30 AM replication. The data from 8:15 AM can no longer be selected, even indirectly, as a recovery point.

This continues every 15 minutes up through 11:45 AM, with DPM applying the updates to the latest replica.

At 12:00 PM, the protection agent replicates the changed data to the DPM server. Because this also happens to be a recovery point, DPM creates a new replica to write the data to. For the next 15 minutes, "Latest" and "12:00 PM" will both point to the same replica.

At 12:15 PM, the protection agent replicates the next batch of changed data. "12:00 PM" is still a discrete recovery point, so DPM allocates a new chunk of storage to hold the next replica of the data. This replica can be restored by choosing "Latest." All further replications through 5:45 PM will update this replica.

At 6:00 PM, the protection agent replicates the changed data to the DPM server. Because this also happens to be a recovery point, DPM creates a new replica to which to write the data. For the next 15 minutes, "Latest" and "6:00 PM" will both point to the same replica.

At 6:15 PM, the protection agent replicates the next batch of changed data. "6:00 PM" is still a discrete recovery point, so DPM allocates a new chunk of storage to hold the next replica of the data. This replica can be restored by choosing "Latest." All further replications through 7:45 AM the next morning will update this replica.

Clear? Put in very simple terms, separating the replication and recovery point schedules permits you to define how much data you're willing to lose in the event of an outage, right up to the limit of DPM's 15-minute granularity, while at the same time giving you explicit control over how much storage to use for your recovery points. Having explained that, we don't think that it makes a lot of sense to create a recovery point both at the end of the day and the beginning of the next day. If nobody's changing the data during those hours, what is the point of using up storage space for another replica? We'll talk later in Chapter 12, "Advanced DPM," about the specific considerations you'll want to review when picking appropriate times for recovery points.
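The day-long walkthrough condenses into a small simulation. Under the assumptions above (15-minute synchronizations between 8:00 AM and 6:00 PM, recovery points at 8:00, noon, and 6:00), only three of the day's 41 synchronizations survive as durable recovery points; the rest are folded into "Latest":

```python
# Toy model of the schedule in the walkthrough: which synchronizations
# become durable recovery points, and how many happen in total.

RECOVERY_POINTS = {8 * 60, 12 * 60, 18 * 60}    # minutes since midnight

def run_day(sync_interval=15):
    """Return (durable recovery point labels, total synchronizations)."""
    preserved, syncs = [], 0
    for minute in range(8 * 60, 18 * 60 + 1, sync_interval):
        syncs += 1                     # the agent ships changed blocks
        if minute in RECOVERY_POINTS:
            # This synchronization becomes a durable, named recovery point.
            preserved.append(f"{minute // 60:02d}:{minute % 60:02d}")
        # Otherwise the data folds into "Latest", and the previous
        # intervening synchronization can no longer be restored.
    return preserved, syncs
```

Running `run_day()` preserves only the 08:00, 12:00, and 18:00 points out of 41 synchronizations, which is exactly the storage-versus-granularity trade-off described above.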

End-User Recovery

Most of us have had to deal with non-disaster-related recoveries of user data. We get the request, say a few less-than-polite words about the user under our breath, find the media, read it into the drive, and recover the data. Things have gotten a bit better with the ability to make shadow copies via the VSS functionality in Windows Server 2003. This enables us to deploy the VSS recovery client to end users, give them a little education on its use, and let them recover from accidental loss themselves.

Data Protection Manager can be enabled for end-user recovery of data that exists in protected locations. Using the VSS client application, users can browse through previous versions of files and folders by the corresponding restore point. This functionality is similar to what you get when you turn on VSS for a volume on a server. The main difference here is that the snapshots are not taking up space on your production file volumes. End-user recovery brings many benefits that are not present with traditional backup and restore scenarios, including:

Self-service. Users don't have to contact IT to recover their data. It's a simple process that they can accomplish themselves.

Instant recovery. The recovery of files happens when users initiate it, not after submitting a request to IT and waiting for an administrator to find the time to retrieve the media, load it, and find the data the user wanted.

Improved efficiency. Because the IT department does not have to be contacted, their time is spent more efficiently.

However, bear in mind that end-user recovery does have some potential drawbacks:

Only SharePoint documents and files and folders on protected file shares can be enabled for end-user recovery. Other data types such as SQL Server and Exchange cannot be enabled for end-user recovery.

End-user recovery relies on the VSS client tool. This means that you have to deploy it in your environment, as well as train your users how to use it and make sure they understand its limitations.

Users may inadvertently overwrite the current version of a file with an older version.

We will discuss end-user recovery in detail in Chapter 5, "End User Recovery."

DPM Architecture
With a better understanding of how DPM works, we can finish the chapter with a look at how the pieces of the DPM solution fit together. DPM uses the following tiers:

The protection agent is a service that resides on each protected server, performing replication and restore operations on behalf of DPM.

The DPM server application runs on one or more dedicated servers, providing centralized scheduling, policy creation, and management, as well as serving as the repository for replicas of protected data and the location of tape operations.

Optionally, DPM can interact with third-party backup software, providing an additional level of protection.

Figure 1.14 shows a typical single domain DPM deployment, and Figure 1.15 depicts a more complicated enterprise installation.

Figure 1.14: A single domain DPM deployment

Figure 1.15: A complex DPM deployment

Let's examine each tier in more detail.


The Protection Agent

Chances are, if you're reading this, you're familiar with traditional backup methods and more than one third-party backup application. Many third-party applications include "agents." Agents, in these cases, are small software applications that reside on client machines and target specific data types. Most backup solutions require a separate agent to cover Exchange, SQL Server, and any data other than flat files. In DPM there is only one agent, the protection agent, which handles all of the protection responsibilities on protected servers.

The protection agent is a small client application installed on servers being protected by DPM. It uses a special disk volume filter that hooks into the Windows Server storage drivers, allowing it to track the block-level changes to resources it has been configured to protect. A client with the protection agent installed may be managed by only one DPM server and cannot be protected by multiple DPM servers. The protection agent performs the following functions:

Maintains the synchronization logs and records changes to selected resources on the protected server. The protection agent maintains a separate synchronization log for each protected volume. The synchronization log is located in a hidden folder on the volume to which it pertains.

Copies the synchronization log to the DPM server according to the configured schedule. Once DPM has a copy of the synchronization log, the data can be synchronized with the DPM server's replica and appropriate recovery points and VSS snapshots created.

Performs express full backup synchronizations when scheduled or requested, including when a new data source on the protected server is first included in a protection group.

Handles communications with DPM to allow DPM to browse the shares, volumes, and folders on the protected server during recovery operations.
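The first two functions above can be pictured with a small sketch. This is purely a conceptual illustration in Python, not DPM's actual implementation; the function names and data structures are our own. It models a synchronization log as a list of changed blocks that is shipped to the server and applied to the replica at each synchronization.

```python
# Conceptual sketch only: how a per-volume synchronization log of changed
# blocks might be applied to a replica at synchronization time.

def record_change(sync_log, block_id, data):
    """The volume filter notices a block-level change and appends it
    to the volume's synchronization log."""
    sync_log.append((block_id, data))

def synchronize(replica, sync_log):
    """At the scheduled synchronization, apply the shipped log to the
    DPM server's replica, then clear the log for the next interval."""
    for block_id, data in sync_log:
        replica[block_id] = data
    sync_log.clear()
    return replica

# A trivial run: one block changes between synchronizations.
replica = {0: b"old0", 1: b"old1"}
log = []
record_change(log, 1, b"new1")
synchronize(replica, log)
```

After the run, only the changed block differs between the replica and its previous state, which is the whole point of block-level tracking: unchanged data never crosses the wire.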

There are two components to the protection agent: the agent itself and the Agent Coordinator. The Agent Coordinator is temporary software that is used during installation, upgrade, or uninstallation of the protection agent.
DPM Server

If the protection agent tier is the eyes and ears, the DPM server tier is the brains. The server tier performs the following functions:

Hosts the DPM application. This application provides the policy engine, scheduler, and data repository.

Hosts the SQL Server instance that provides the index of all data sources, replicas, protection groups, and other DPM configuration information.

Manages the storage pool, which must be presented to the server tier as some sort of locally attached storage.

Creates and updates the replicas with the data received by the protection agents.

Captures the VSS snapshots on each replica according to the configured schedule of the appropriate protection group.

Moves replica data to tape according to the long-term retention policies defined by the appropriate protection group.

Retrieves replica data from tape and disk in response to recovery operations and sends it to the appropriate protection agent.

As mentioned previously, DPM should be installed on its own server. As you can tell from the previous list, it is responsible for performing a large number of tasks even in a simple environment; when you are protecting multiple sources, the DPM server will become very busy. The major bottleneck for most DPM operations is the disk subsystem, but RAM and CPU are important as well, especially for the underlying SQL Server instance DPM uses to track protected data. In larger environments, you can configure DPM to use a separate SQL Server instance. We'll cover the particulars of installing DPM in all its glory in Chapter 2, "Installing DPM."
Third-Party Backup Software

The final tier in a DPM system is completely optional. Out of the box, DPM provides native support for reading and writing protected data to tape devices ranging from single tape drives to large, expensive libraries. It will even do its best to create an appropriate tape rotation schedule for you, relieving you of the burden of having to decide the best way to handle your tape volumes. However, DPM does suffer from a serious limitation: it protects only Windows servers. If your organization is all Windows all the time, this probably doesn't concern you that much; DPM offers enough functionality that it makes a compelling choice to protect all of your Windows servers and data (and Microsoft certainly hopes that's how you'll use DPM). Many companies, though, have all sorts of other servers and operating systems, ranging from legacy mainframes to Unix and Linux servers and more. Many of the bigger backup solutions offer a high degree of cross-platform capability; they can support backup and restore operations from a large number of operating systems, hardware architectures, and third-party applications. For example, a single backup application might handle the following workloads:

Windows desktops and laptops running a variety of versions of Windows, including Windows 95, Windows 98, Windows ME, and Windows 2000

Windows NT4 and 2000 domain controllers

Windows NT4 and 2000 member servers

Sun Solaris servers running Oracle databases

UNIX or Linux workstations running various user applications

Novell eDirectory servers providing directory, file, and print services

Novell GroupWise messaging servers

If your organization uses one of these applications, you've likely got a large amount of time and effort (not to mention money) invested in it. DPM is designed to provide value to you even if you've already got a backup solution you want to keep. Use DPM to protect your Windows assets; use your existing application to back up the data from the DPM replicas. This integration provides several benefits:

Your centralized tape-handling procedures need no modification; all tape operations are still done the same way you're currently doing them. However, your Windows machines still get all the benefits of DPM, including the ability to offer end-user recovery for appropriate payloads.

You can reduce the number (and type) of expensive backup agents for your tape backup solution. Instead of having to buy a license to back up each SQL Server machine or Exchange server (let alone Windows file servers), you simply need a file agent to back up each of your DPM servers.

Windows administrators can perform their own local restores from the DPM replicas without having to drag the central backup administrators into it. This is an especially lovely prospect for Exchange and SQL Server administrators; it makes recovering deleted executive mailboxes much less painful for all parties concerned.

One caveat you need to keep in mind when using DPM with a third-party tape solution: your tape agent must support the use of VSS. DPM replicas rely heavily on VSS capabilities, and you will not be able to perform reliable backups without it. This shouldn't be a problem; almost every backup solution out there with Windows support now offers VSS compatibility.
Protecting Non-Windows Servers with DPM?

When we say DPM protects only Windows servers, we're not being completely accurate. We feel bad about that, so let's set the record straight: DPM can protect any server that runs in a Microsoft Virtual Server virtual machine (VM). So, you can protect VMs running any supported operating system such as Red Hat Linux or Novell's SuSE Linux. However, not many people run all of their non-Windows servers in Virtual Server VMs, and if you're one of the odd ones who do, there are still some caveats we'll explore further in Chapter 10, "Protecting Virtual Servers."

The Bottom Line


Understand general data protection concepts. Understanding the concepts that apply to any data-protection scenario makes it easier for you to identify the challenges you face in your environment.

Master It

1. Name the common factors affecting the design of traditional backup and restore solutions.
2. What are the two common storage technologies used for backup and restore? What are two advantages and disadvantages of each technology?
3. Describe how D2D2T (disk-to-disk-to-tape) works.
4. Name the two replication strategies and explain how they differ.
5. Describe the three levels of replication.

Distinguish new concepts introduced by DPM. DPM presents a whole new way of thinking about data protection, and it introduces several new concepts to master.

Master It

1. List which of the following members can be included in the same protection group: a shared volume on a file server, a virtual machine on Virtual Server, a SQL database, a SharePoint farm, and an Exchange storage group.
2. Missy, an Exchange administrator, has two mailbox databases for which she needs to design separate protection policies. To do this, she must put them into separate protection groups. What must she first do in order to permit this configuration?
3. Tom, a SQL Server administrator, has two SQL databases that he needs to protect with DPM. How many protection groups does he need to protect them?
4. You are protecting your department's file server and have it as a member of a protection group defined to synchronize every 30 minutes and create recovery points at 7:00 AM, 3:00 PM, and 11:00 PM. At 3:07 PM, your manager saves changes to an important spreadsheet on the file server. At 3:31 PM, his secretary makes changes to the spreadsheet but the file is corrupted. Up until what time will you be able to recover his saved version before it is overwritten? (Hint: reread the "Why Do I Need Both Synchronization Frequency and Recovery Points?" sidebar.)

Identify the components in the DPM architecture. While DPM attempts to mask the complexity of its protection operations, you still need to know the underlying components of your DPM deployment.

Master It

1. Name the tiers of the DPM application.
2. Does DPM require the use of a separate tape backup solution?

Chapter 2: Installing DPM


Overview
"I know how to install software. You put the disk in, and click Next until it says you're finished."

A child in Ryan's household

In Andrew's defense, he said this when he was eight; at the ripe old age of twelve, he now knows better. He also knows about the following advanced systems administration practices:

Placing the operating system and installed program files on different disks for better performance Choosing an application's installation path so that similar applications can be more easily grouped together

If you think you caught us with tongue in cheek when we called these advanced practices, you'd be correct; but the sad fact is that while these may seem so simple that a twelve-year-old boy can master them, how many IT professionals actually know (and follow) these best practices for installing software? How many places have you worked where new hardware was bought with purchase price as the sole consideration, where new software was installed without consideration or planning, and where systems management becomes an exercise in configuration by trial and error? Sadly, we see IT people commit "plug-n-pray" administration all of the time.

There are, however, other ways to manage systems. For example, there is an unmistakable tendency for a large number of administrators to seek a cookbook approach, to try to reduce every problem to a discrete set of steps. When you're first dealing with a new piece of software, starting from a clickstream is a great way to get a working deployment installed in your lab so you can start learning it, and if you're testing unreleased beta software with no documentation, you're lucky to have it. The danger of trying to take this approach and apply it to your production environment, though, is that every set of clickstream guidance embeds specific assumptions and simplifications. These assumptions might work fine for a lab environment where the important thing is to get a working installation so you can get your hands dirty, but they seldom map to the real world where your users live. Even if you can manage to jury-rig things so they work, we want you to feel comfortable not just with which buttons to click, but with the knowledge of why you're clicking on them and why you'll choose a different set of clicks next August after your company has doubled in size.
Note: A clickstream derives its name from the stream of clicks you must follow.

Even though we've spent a lot of time producing the clickstreams you're going to read in this book, we sincerely hope that you only use them as a starting point; we may be starry-eyed idealists, but we think that you're not going to jump straight to the step-by-step guidance alone; you're going to read the rest of our text (and maybe even the DPM product documentation too). Use our steps to get DPM installed in your test lab; we assume you have one because it gives you a chance to get familiar with DPM before your production data is on the line. When it comes time to work out your production deployment, plan carefully and research the relevant installation options. Getting it right the first time in your production environment is far more important (and ultimately simpler) than trying to make changes to a less-than-optimal installation. In this chapter, you will learn to:

Determine the prerequisites for installing the DPM server components

Determine the prerequisites for installing the DPM protection agent

Add disk volumes to the DPM storage pool

Deploy the DPM protection agent to protected servers

Configure a DPM protection group

DPM Prerequisites
Before introducing Data Protection Manager 2007 to your environment, you must first ensure that you will be able to install DPM without any problems. In our years of combined experience with software installations, we find that actually installing and configuring a given package of software is inevitably the easiest part of the process. We also find that the installation itself is one of the last steps to take place. Before we get there, we have a variety of other issues to take care of: fixing any blocking issues on the existing systems in our network; reading, planning, and playing with the software in our lab to gain some mastery of the product; and getting everything ready before we can put the new software in place. You'll need to consider the following issues before beginning the DPM installation process:

The importance of reading documentation

Addressing your licensing requirements

Meeting the DPM server prerequisites

Identifying any protected server prerequisites

Let's examine these in more detail.


Documentation

We cannot stress strongly enough the importance of reading the DPM documentation. Granted, we've yet to see any set of product documentation that could be described as "a real page-turner." If the official documentation were perfect, there'd be no need for books like this one (which we think would be a shame). However, books like this don't serve the same purpose as documentation. We can help you understand the more impenetrable parts of the documentation and give you an objective view of how to really use the product, but we're not meant to replace the manual. Having said that, don't feel that you have to read the documentation end-to-end and understand every word in it before using DPM. By its nature, product documentation is a reference that covers a lot of material. Not every topic will apply to your organization, but if you're familiar with the topics the documentation covers (at the very least), you'll have a better idea of what the product can do, and how you might be able to make better use of DPM at a later time. Browse the topics and note the ones that seem to be interesting; read them and try to make sense of them. Go play with the product in your lab, and then come back and read them again. You'll probably find that they make a lot more sense, even if they still don't directly relate to your configuration.
The Release Notes

Should you decide to ignore the documentation for now, there is still one file you need to read end-to-end: the release notes. These are required reading for any Microsoft product; they give you a roadmap of last-minute problems you may encounter and workarounds you are likely to need during your deployment.

Licensing

As with all commercial software, understanding DPM's license model is a key part of planning your deployment. Microsoft offers three types of licenses for DPM 2007:

The DPM 2007 Server license is required for each DPM server you have in your organization. It allows you to install the DPM server software on a single machine; if this configuration includes the integrated SQL Server instance, that software is covered as well but can only be used for DPM.

The DPM 2007 Data Protection Manager Standard License (S-DPML to its friends) is required for protected standalone file servers and workstations. It allows you to install the DPM protection agent on the protected machine and perform basic folder, share, and volume protection. For those of you who are already running DPM 2006, the S-DPML is the equivalent of the DPM 2006 client license (DPML).

The DPM 2007 Data Protection Manager Enterprise License (E-DPML) is required on protected Exchange, SharePoint, SQL Server, or Virtual Server servers, whether in standalone or clustered configurations. It is also required on clustered file server configurations. The E-DPML allows you to install the DPM protection agent on the protected machine and protect the application data. This license also provides folder, share, and volume protection and enables full cluster awareness from DPM, permitting automated continuation of protection in the event of a cluster failover. This license is required if you want to enable software encryption or to use DPM in conjunction with System Center Virtual Machine Manager (VMM) to protect and restore physical servers to virtual machines using the advanced Physical to Virtual (P2V) functionality of VMM.

If you have a Software Assurance license plan for your organization, you can upgrade your existing DPM 2006 server licenses and DPMLs to the DPM 2007 server license and S-DPML without purchasing additional DPM 2007 licenses. Remember, though, DPM 2006 only natively protects file servers or servers acting as file servers; if you're currently protecting an Exchange or SQL Server machine with DPM 2006 by having a scheduled task dump the databases to disk files which DPM 2006 then protects, you'll have to upgrade those DPMLs to E-DPMLs to get DPM 2007's native protection capabilities for those workloads.

DPM Servers

When you're assessing your intended DPM servers to find out if they meet the requirements, there are several categories you need to look at:

Hardware

Software

Storage

Database

Let's look at them in more detail.


HARDWARE

The requirements for server hardware shown here are suggested by Microsoft. Bear in mind, however, that your server hardware will affect all aspects of your data protection scheme, from the amount of time required to create replicas and recovery points, to how quickly recoveries occur. It should go without saying that you're not going to be able to get maximum performance in a busy enterprise network out of the minimum required configuration. At the same time, you need to be sizing the servers not just for their present level of use, but also to accommodate future growth. In many organizations, the folks who do backup and data protection are separate from the groups running the applications being protected. If you're working in one of those organizations, make sure that you talk to the Exchange or SQL Server (or other protected application) administrators to find out their projected sizing requirements for the next couple of years. Table 2.1 shows Microsoft's DPM server hardware requirements and recommendations.
Table 2.1: DPM Server Hardware Requirements

Component                  Required Configuration                           Recommended Configuration
Processor                  550MHz                                           1GHz or faster
Memory                     512MB RAM                                        1GB RAM or more
Program disk space         System drive: 2150MB above system               The volume containing the DPM
                           requirements. Program files: 150MB.             executables has at least 2-3GB
                           Database: 900MB.                                 of free space.
Storage pool disk space    1.5 x (size of the protected data)              2-3 x (size of the protected data)
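Table 2.1's storage pool multipliers lend themselves to quick back-of-the-envelope sizing. Here is a small Python sketch (our own helper, not a Microsoft tool) that applies them to an estimate of the total protected data:

```python
def storage_pool_sizing(protected_gb):
    """Estimate DPM storage pool sizing from Table 2.1's multipliers:
    required = 1.5 x protected data, recommended = 2-3 x protected data."""
    return {
        "required_gb": 1.5 * protected_gb,
        "recommended_low_gb": 2 * protected_gb,
        "recommended_high_gb": 3 * protected_gb,
    }

# For example, 400GB of protected data needs at least 600GB of pool space,
# with 800-1200GB recommended.
sizing = storage_pool_sizing(400)
```

Remember that the protected-data figure should include the growth projections you gathered from the application administrators, not just today's numbers.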

If you are not purchasing new equipment for DPM, we suggest that you at least try to use hardware that can most easily have its available storage expanded on the fly. We recognize the importance of having standard server configurations and base operating system images in large environments; it's the only way to stay sane. In one of these environments, you may be better off having multiple DPM servers that fit into your existing management regime rather than dropping the money on a one-off, high-end monster server configuration that doesn't use the same spare components you have available for other servers.
SOFTWARE

In addition to VSS, DPM uses a number of Microsoft technologies provided by the operating system. In order to ensure that these technologies are present, the DPM server must be running specific versions and editions of the Windows Server operating system. Table 2.2 shows Microsoft's DPM server software requirements.

Although Windows Server 2003 SP1 is listed as the minimum, you may want to consider pre-installing SP2 before beginning the DPM installation; if you do install DPM on Windows Server 2003 SP2, you should first read the sidebar "The Effect of the Windows Server 2003 SP2 IP Stack Changes" in Chapter 5, "Advanced DPM." Also note that while the x86 and x64 architectures are supported, the ia64 (Itanium) architecture is not.
Table 2.2: DPM Server Software Requirements

Software Component    Description
Operating System      Windows Server 2003 x86 Standard or Enterprise Edition with at least SP1.
                      Windows Server 2003 x64 Standard or Enterprise Edition with at least SP1.
                      Windows Server 2003 R2 x86 Standard or Enterprise Edition.
                      Windows Server 2003 R2 x64 Standard or Enterprise Edition.
                      Windows Storage Server 2003 x86 with at least SP1.
                      Windows Storage Server 2003 x64 with at least SP1.
                      Windows Storage Server 2003 R2 x86.
                      Windows Storage Server 2003 R2 x64.
                      Windows Server 2008 x86 Standard or Enterprise Edition.
                      Windows Server 2008 x64 Standard or Enterprise Edition.
                      Windows Storage Server 2008 x86.
                      Windows Storage Server 2008 x64.
Hotfixes              Microsoft Management Console 3.0. Hotfix 940349. Hotfix 891957.
                      Windows PowerShell 1.0.
Windows Components    DPM installs a number of required Windows components, including Microsoft .NET
                      Framework 2.0, ASP.NET, IIS, and Network COM+ access. If you must install the
                      components manually, see the topic "Manually Installing Prerequisite Software"
                      in the DPM documentation.
Active Directory      Joined to a domain in the same Active Directory forest as the servers to be
                      protected. Alternatively, the domains are joined by a two-way trust.
Security              Installation will be performed by an account that is a member of the local
                      Administrators group.
Sharing DPM with Other Applications

If you're in a smaller organization, or planning on putting together a DPM server that is going to be underutilized, you're probably going to be tempted to put another application on the machine. We understand this urge, but we caution you to resist it. Granted, there are a handful of other applications that not only can successfully share the same hardware with DPM but actually make sense in that configuration for specific purposes, and we'll mention those in a moment. In general, though, this is a bad idea. In particular, you should avoid installing DPM on a machine with any of the following functions:

An Active Directory domain controller of any type. Don't try to sneak around it by running dcpromo after installing DPM; this is generally a very bad idea with any Microsoft software, and it's even more so when you've chosen to use an integrated SQL Server installation on your DPM server.

Microsoft Operations Manager or Microsoft System Center Operations Manager.

Microsoft Exchange Server.

Microsoft Internet Information Server or other web applications.

Microsoft SQL Server of any version or edition, including the MSDE or SQL Server Express Edition. Yes, we know this seems funny considering the fact that DPM relies on a SQL Server instance that can be collocated on the DPM server, but when DPM installs its instance in this fashion it won't play well with any other SQL Server instance. Even if you're using an external SQL Server instance to store the DPM database, you'll only buy trouble by putting SQL Server and DPM together yourself.

Don't take this list as exhaustive; just take it as a strong indication that we really mean what we say when we tell you not to share DPM with another application on the same hardware. So now that we've given you the rule, let's give you the exception: disaster recovery. In particular, you can enable some exciting advanced site recovery scenarios by using DPM in conjunction with Microsoft Virtual Server 2005 R2 and Microsoft System Center Virtual Machine Manager 2007 to recover physical protected servers into virtual machines in the event of the loss of a site. In this configuration, DPM, Virtual Server, and VMM are placed on the same machine in the recovery site; it is intended only to allow the recovery of vital services for a limited time until the physical servers can be rebuilt. It is not intended for everyday operation. If you want more details on this configuration, Chapter 5, "Advanced DPM" is for you.

STORAGE

DPM requires the presence of at least one unformatted disk volume that can be allocated to the replica storage pool. Any volume that appears in Disk Manager as a physical volume can be used for this purpose. This includes:

Direct Attached Storage (DAS). A disk volume is considered to be direct attached if it is connected to a controller residing on the local server. The type of bus doesn't matter as long as it's supported by Windows; you can use SAS, SATA, SCSI, or even IDE drives. DAS drives can be either internal or external to the system; a box of external disks that connects to an internal SCSI controller counts as DAS. You can also use disk arrays running on either hardware RAID controllers or Windows software RAID, although you'll need to carefully watch the CPU and disk performance if you choose the latter.

Storage Area Networks (SAN). SAN systems are a specialized RAID array. The primary difference (other than the traditionally hefty price tag) between a SAN and a DAS RAID array is that SANs are meant to be accessed by multiple servers over a specialized network connection (hence the N in SAN), typically some sort of fiber connection, where DAS arrays are only accessed by a single server. As a result, you need a host bus adapter (HBA) in each server that uses the SAN, while the array itself contains the actual RAID controller hardware. Your servers don't see the disk volumes in the array directly; the SAN administrator partitions the available storage up into logical unit numbers (LUNs) and assigns them to the various servers. These LUNs then appear in Disk Manager as a new volume.

Internet SCSI (iSCSI). An iSCSI array is really just a variant form of a SAN. Instead of using expensive fiber connections, however, iSCSI devices use a form of the SCSI protocol that has been specifically adapted to run over TCP/IP networks, usually Gigabit Ethernet running over Cat5e copper wiring (or Cat6 for those who just can't give up the habit of spending money needlessly). Instead of an HBA to connect to the separate storage network, the server runs a special piece of software or hardware known as an iSCSI initiator, which translates the various I/O requests and replies into network packets. If you are planning on using iSCSI-based storage with DPM and are planning to use the Microsoft iSCSI Initiator package, you should read the "Dynamic Disks and iSCSI" sidebar in Chapter 5, "Advanced DPM." Although iSCSI devices can work over Ethernet or Fast Ethernet, you typically want the connection between them and the servers to use Gigabit Ethernet, and most designs use a separate network for the storage requests to ensure decent bandwidth and latency, as well as security. These networks also tend to be configured to support jumbo frames: Ethernet packets that are larger than the usual 1,500-byte maximum limit. Jumbo frames can be up to 9,000 bytes. We talk more about jumbo frames in Chapter 5, "Advanced DPM."
What About Network Attached Storage?

Network Attached Storage (NAS) devices are a popular storage alternative in many small-to-medium-sized businesses, or even as department- or workgroup-sized solutions in larger enterprises. Superficially, they resemble a SAN: a box full of disks in some sort of RAID configuration, meant to be shared among multiple computers over a network. Unlike a SAN, though, NAS devices use the normal Ethernet network, not a special storage network. NAS devices differ from iSCSI devices (which are also SAN-like arrays that talk over an Ethernet network) because of how they appear to the network: they implement the SMB/CIFS Windows file-sharing protocol and show up as if they were a file server, with one or more accessible file shares. There are NAS devices that support file-sharing protocols other than Windows file sharing (or even support multiple protocols), but we're talking about connecting to Windows machines, so we'll ignore those. To make matters even more confusing, a growing number of NAS devices support iSCSI access. Figuring out whether your device supports iSCSI is important:

iSCSI presents the arrays to your server over a block-level protocol, just like a direct attached device with an Ethernet wire in between. The device shows up in Disk Manager.

NAS presents the arrays as file shares (\\NAShost\sharename), just like a Windows file server. The device does not show up in Disk Manager.

In order to use a NAS device with DPM:

1. The device must support iSCSI.
2. You must configure the device to share the array over iSCSI.
3. You must have set the matching iSCSI initiator configuration on your DPM server.

Delving into more details about iSCSI is outside the scope of this book, but Microsoft has a site with a wealth of material about iSCSI support in Windows, including whitepapers and their freely downloadable software initiator. You can find this site at http://www.microsoft.com/windowsserver2003/technologies/storage/iscsi/default.mspx.

Because of how DPM manages replica storage (it creates volumes on the fly and resizes them as necessary), it requires that any disk used in the storage pool be a dynamic disk. You don't have to create the disk as a dynamic disk, though; DPM will automatically convert a basic disk to a dynamic disk, as long as the disk can be converted. If it can't be converted, you can't use it with the DPM storage pool.
DATABASE

DPM requires a Microsoft SQL 2005 database to store information on protected resources. You can either use an existing SQL Server instance to store the necessary database, or you can choose for the DPM installer to run an embedded SQL Server setup on the DPM server. If you choose the integrated SQL Server installation, you don't require any extra power for your DPM server; the requirements and recommendations already account for the overhead of the SQL instance. The DPM installer will install the necessary SQL components, including the base SQL Server 2005 Workstation Components, SQL Server 2005 with Reporting Services, and SQL Server 2005 SP1. If you plan on using an existing SQL Server instance, the following caveats must be heeded:

Install the SQL Server 2005 Workstation Components and SQL Server 2005 SP1 on the DPM server.

The remote database must be SQL Server 2005 Standard Edition, SQL Server 2005 Enterprise Edition, SQL Server 2005 Workgroup Edition, or SQL Server 2005 Express Edition. You cannot use an earlier version of SQL Server.

DPM relies on the Reporting Services feature; ensure it is installed on the remote server. ASP.NET 2.0 and IIS are required on the SQL Server machine in order to use Reporting Services.

Install SQL Server 2005 SP1 on the SQL Server machine.

Microsoft recommends using the default failure audit setting in SQL Server.

To enable communication with DPM, the default Windows Authentication mode should be enabled.

Microsoft recommends that only the database and reporting services be installed on the remote database machine.

For best security, Microsoft recommends that a least-privileged user account be used for SQL Server.

Before proceeding with installation in the remote SQL Server configuration, be sure to review the DPM documentation and follow the procedures it outlines.
Protected Servers

When you're assessing the servers you're going to protect with DPM, you have a simpler job.

Hardware. Does the server meet the hardware requirements for its configuration? The DPM agent does not place any additional hardware requirements on the server.

Storage. Protected volumes must be formatted as NTFS; DPM will not protect FAT or FAT32 volumes. Additionally, each protected volume requires a minimum of 300MB of free space for its change journal, which stores the changed blocks to be transmitted by the agent.

Software. See Table 2.3.


Table 2.3: Protected Server Software Requirements

Exchange Server
Exchange Server 2003 Standard Edition with at least SP2. Exchange Server 2003 Enterprise Edition with at least SP2. Exchange Server 2007 Standard Edition. Exchange Server 2007 Enterprise Edition. Clustered configurations are supported with the E-DPML. VSS hotfix 940349 on Windows Server 2003.

File server
Windows Server 2003 x86 Standard or Enterprise Edition with at least SP1. Windows Server 2003 x64 Standard or Enterprise Edition with at least SP1. Windows Server 2003 R2 x86 Standard or Enterprise Edition. Windows Server 2003 R2 x64 Standard or Enterprise Edition. Windows Storage Server 2003 x86 with at least SP1. Windows Storage Server 2003 x64 with at least SP1. Windows Storage Server 2003 R2 x86. Windows Storage Server 2003 R2 x64. Windows Server 2008 x86 Standard or Enterprise Edition. Windows Server 2008 x64 Standard or Enterprise Edition. Windows Storage Server 2008 x86. Windows Storage Server 2008 x64. Clustered configurations are supported with the E-DPML. VSS hotfix 940349 on Windows Server 2003.

SharePoint Server
Windows SharePoint Services 3.0. Microsoft Office SharePoint Server 2007. Windows SharePoint Services 2.0 (using the steps in KB 915181). Microsoft SharePoint Portal Server 2003 (using the steps in KB 915181). VSS hotfix 940349 on Windows Server 2003.

SQL Server
SQL Server 2000 Standard Edition with at least SP4. SQL Server 2000 Enterprise Edition with at least SP4. Microsoft SQL Server 2000 Data Engine (MSDE) with at least SP4. SQL Server 2005 Standard Edition with at least SP1. SQL Server 2005 Enterprise Edition with at least SP1. SQL Server 2005 Workgroup Edition with at least SP1. SQL Server 2005 Express Edition with at least SP1. VSS hotfix 940349 on Windows Server 2003.

Virtual Server
Microsoft Virtual Server 2005 R2 x86 with at least SP1. Microsoft Virtual Server 2005 R2 x64 with at least SP1. VSS hotfix 940349 on Windows Server 2003.

Workstations
Windows Vista Business Edition. Windows Vista Enterprise Edition. Windows Vista Ultimate Edition. Windows XP Professional Edition with at least SP2.
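The storage prerequisites for protected volumes (NTFS only, with at least 300MB free for the change journal) are easy to encode in a quick inventory script. Here's a minimal sketch of that check; the `Volume` record and function name are our own illustration, not part of any DPM tooling:

```python
# Sketch: validate the DPM protected-volume storage prerequisites described
# above. The Volume record and helper names are illustrative, not DPM APIs.
from dataclasses import dataclass

MIN_CHANGE_JOURNAL_BYTES = 300 * 1024 * 1024  # 300MB per protected volume

@dataclass
class Volume:
    filesystem: str   # e.g., "NTFS", "FAT32"
    free_bytes: int

def can_protect(volume: Volume) -> bool:
    """DPM protects only NTFS volumes with room for the change journal."""
    return (volume.filesystem.upper() == "NTFS"
            and volume.free_bytes >= MIN_CHANGE_JOURNAL_BYTES)

print(can_protect(Volume("NTFS", 500 * 1024 * 1024)))   # True
print(can_protect(Volume("FAT32", 500 * 1024 * 1024)))  # False: not NTFS
print(can_protect(Volume("NTFS", 100 * 1024 * 1024)))   # False: no journal room
```

Running a check like this against your server inventory before rolling out the agent saves you from discovering an unsupported FAT32 data volume halfway through deployment.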

Installing the DPM Server


You've done your planning and assessment and corrected any problems you found. Here's the payoff: the part where you pop in the media and click Next a lot.
Installing DPM

To install DPM, you can either run setup from the media at the server or copy the media to a local drive or network share. The DPM installation media is in DVD-ROM format, not the typical CD-ROM format Microsoft has used for many years; if you choose to install from the media, ensure that the server has a DVD-compatible drive.

To install the DPM server:

1. On the opening screen, shown in Figure 2.1, click Install Data Protection Manager.

Figure 2.1: The DPM installation screen

2. Check the box to agree to the licensing agreement, as shown in Figure 2.2, and click OK.

Figure 2.2: The license agreement

3. In the Welcome screen, click Next, as shown in Figure 2.3.

Figure 2.3: The Welcome screen

4. The next screen will show the progress of the prerequisites check, as shown in Figure 2.4.

Figure 2.4: Prerequisites check in progress

5. When the check has completed, you'll be presented with the results as shown in Figure 2.5. Click Next to continue. If the check fails, you'll be presented with the results and a list of the changes that need to be made; the installation will then abort.

Figure 2.5: Summary of the prerequisites check

6. Enter a user and company name, as shown in Figure 2.6, and click Next.

Figure 2.6: Enter the user and company information

7. Choose the SQL instance for DPM, as shown in Figure 2.7. If you choose the default instance, DPM will install SQL Server on the local machine; click Next and skip to step 9. To use an existing instance of SQL Server on a remote server, select the appropriate option and click Next.

Figure 2.7: Select a SQL server instance

8. Enter the remote SQL Server host or instance name and appropriate credentials, as shown in Figure 2.8, and click Next.

Figure 2.8: Choose an existing SQL instance

9. Enter the password that DPM will use for the service accounts it creates, as shown in Figure 2.9.

Figure 2.9: Provide the SQL service account password

10. Choose whether to enable Microsoft Update, as shown in Figure 2.10. The default option allows your server to automatically pull updates for DPM and other installed applications, as well as the Windows Server operating system. Click Next.

Figure 2.10: Choose whether to use Microsoft Update

11. Choose whether to participate in the Customer Experience Improvement Program, as shown in Figure 2.11, and click Next.

Figure 2.11: Customer Experience Improvement Program preferences

12. You will be presented with a summary of your installation choices, as shown in Figure 2.12. Review them, then click Install if they are correct or Back to make changes.

Figure 2.12: Installation summary

13. Your installation progress will be displayed as shown in Figure 2.13. At the end, you will receive a message indicating either success or failure.

Figure 2.13: Installation progress

If everything has gone well at this point, you have installed your DPM server. Congratulations! Take a break and have a beer or play a game of foosball (just don't mix them; foosball is dangerous enough, especially if you play it the way Ryan does). Before you can put DPM into production use, however, you have several configuration tasks to complete.

Configuring DPM
Now that you have DPM installed, you're ready to go. If you're like us, you're now looking at your wonderful deployment and saying the first thing that comes to mind: "Now what?"

Well, now it's time to protect your data. To get DPM protecting your data, you first have to perform the following steps:

Add empty volumes to the DPM storage pool. DPM requires storage space for the replicas of your production data. Instead of making you manually manage files and folders and replicas, DPM does the dirty work for you; all you need to do is tell it which disk volumes it can use.

Install the protection agent on your production servers. Without the agent, DPM won't know what data has changed.

Create protection groups. These groups tell DPM what policies and schedules to use for the resources it is protecting.

Ready? Here we go; we're not going to make you do it alone.


Add Disks to the Storage Pool

After you've completed installing DPM, you need to add blank disk volumes to the DPM storage pool. Makes sense, right? If you want disk-based protection for your data, you have to tell DPM what storage space it can use for the replicas of your production data. If you start to run out of room, you can add more disks to the DPM server, then come back and allocate them to the pool.

To add disk space to your DPM storage pool, follow these steps:

1. Open the DPM Administrator Console, navigate to the Management tab, and select the Disks subtab.
2. Click Add in the Actions pane.
3. Select the disk or disks that you want to add in the left pane, as shown in Figure 2.14, and click Add. When you have added all of the disks you want to include in the storage pool, click OK.

Figure 2.14: Add disks to the storage pool

Once you have added the disks, they will appear in the Disks subtab, along with information about their total capacity and unallocated space, as shown in Figure 2.15.

Figure 2.15: Status of storage pool disks

If you're done adding disks to your DPM server, DPM is physically ready to store replicas. The next major tasks are to select servers and resources to protect, which means installing the DPM protection agent on your production servers.
Install the Protection Agent on Protected Servers

The next step is to specify the servers you will protect using DPM. You do this by pushing out the DPM protection agent from the DPM server to the protected servers. DPM accomplishes several tasks during this procedure, including ensuring that it has the proper credentials.

To install the DPM protection agent:

1. Open the DPM Administrator Console, navigate to the Management tab, and select the Agents subtab.
2. Click Install in the Actions pane.
3. From the left pane, select the servers you want to protect, as shown in Figure 2.16, and click Add.

Figure 2.16: Choose the servers on which to install the agent

4. When all of the servers you want to protect are in the right pane, click Next.
5. Enter the credentials for a user with administrative rights on the selected servers, as shown in Figure 2.17, and click Next.

Figure 2.17: Enter the credentials to install the agent

6. After the agent install has completed, you will not be able to begin protecting your servers until they have been restarted. Choose whether you want the servers to reboot now or later, as shown in Figure 2.18, and click Next.

Figure 2.18: Choose the server restart method

7. A Summary screen will appear, as shown in Figure 2.19, showing the choices you have made. Click Install to proceed with the agent install, or click Back to change your options.

Figure 2.19: Agent installation summary

8. The final screen, as shown in Figure 2.20, will display the agent install progress. You can click Close; the current status and progress will be displayed in the Agents subtab.

Figure 2.20: Agent installation progress

Once the protected server reboots and DPM verifies the connection with the agent, you will see the list of resources on those servers that you can use DPM to protect. Before you can select them, though, you must create at least one protection group.
Create Protection Groups

Creating protection groups is one of the most crucial steps in protecting your data. Protection groups govern the schedule and protection policies for your data. To create a new protection group:

1. Open the DPM Administrator Console, navigate to the Protection tab, and click Create Protection Group in the Actions pane.
2. In the Welcome screen, shown in Figure 2.21, click Next.

Figure 2.21: The Create Protection Group Welcome screen

3. In the Select New Group Members screen, expand the servers you want to protect, and select the data sources on those servers to include in the protection group by checking the boxes next to them, as shown in Figure 2.22. When you have selected all of the data sources for the protection group, click Next.

Figure 2.22: Select the protection group members

4. Choose whether this group will use short-term protection and the associated method, as well as whether to use long-term protection (if you have a tape drive or library attached to your DPM server), as shown in Figure 2.23. If you don't have a tape device of some sort (a standalone drive or a library) installed on the server, or if DPM doesn't recognize your drive (perhaps because of a configuration problem), you won't see the option for long-term protection. Once you have chosen your protection methods, click Next.

Figure 2.23: Select the data protection method

5. Unless you have chosen not to provide short-term protection for your protection group, the next screen is where you decide how long short-term data is retained in DPM, as well as the synchronization frequency and the recovery point schedule, as shown in Figure 2.24.

Figure 2.24: Select the short-term protection details

6. To change the schedule for either the recovery points or the express full backup, click the Modify button next to it. You can change the frequency by adding times and checking days of the week for the selected operation to occur, as shown in Figure 2.25. When you are finished, click OK.

Figure 2.25: Change the recovery point schedules

7. Back in the Short-Term Goals screen, click Next.
8. In the Review Disk Allocation screen, you'll see that DPM has already recommended a default allocation from the storage pool based on the amount of data being protected as well as the short-term goals you specified, as shown in Figure 2.26.

Figure 2.26: The disk allocation recommendation

9. To change the amount of storage pool space allocated for your protection group, click Modify. You will be presented with a breakdown of all of the data sources in your protection group, the storage type, and the amount of space reserved for each. To change the amount allocated, as shown in Figure 2.27, enter the desired values in the fields and click OK.

Figure 2.27: Modify the disk allocation

10. Back in the Review Disk Allocation screen, click Next.
11. In the Choose Replica Creation Method screen, select the method by which replicas will be created, as well as when the first one should be created, as shown in Figure 2.28. Click Next.

Figure 2.28: Choose the replica creation method

12. In the Summary screen shown in Figure 2.29, you will be presented with a summary of all of the settings you have selected for the protection group. If everything looks good, click Create Group; otherwise, click Back to make any necessary changes.

Figure 2.29: Protection group summary

After the wizard completes, progress will be displayed in the window. If you choose to close the window, the progress can still be followed from the Protection tab.
Overriding the Default Storage Allocation

Let's face it: as administrators, we like to tinker with software. Most of us have never seen a default value that we don't secretly think is completely, totally, and in all other ways inconceivable (or at least unsuitable for use in our networks). In this case, however, we recommend that you use the default values suggested by DPM. Microsoft has done a lot of work to come up with a sizing algorithm that makes sense for most configurations, and the chances are pretty good that your deployment will be more than adequately served by these default values, much as we all might hate to admit it.

So when should you override these values? One situation is when you expect a sudden and imminent increase in the size of a given data source. For example, if you're upgrading to Exchange 2007 and you'll be moving a large batch of mailboxes over soon, a new protection group for your Exchange 2007 mailbox servers might not provide enough space if left at the default values.

Just remember: you can always go back in later and tweak these numbers. DPM treats all of the volumes in the storage pool as one big happy set of storage. If you come back and change the numbers later, DPM will be able to do the right thing; you don't have to worry about moving replicas around or any of the other inconvenient busywork you might have to indulge in if you were managing the storage yourself.
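If you do override the defaults, a back-of-the-envelope estimate keeps your numbers honest. The sketch below is our own illustrative arithmetic, not DPM's actual sizing algorithm; the growth factor and daily churn figures are assumptions you'd replace with measurements from your environment:

```python
# Illustrative replica-sizing arithmetic -- NOT DPM's internal algorithm.
# growth_factor and churn_per_day_gb are assumptions to tune per environment.

def estimate_allocation_gb(data_gb: float,
                           growth_factor: float = 1.5,
                           churn_per_day_gb: float = 0.0,
                           retention_days: int = 0) -> float:
    """Replica space for current data plus expected growth, plus room
    for retained recovery-point churn over the retention window."""
    replica = data_gb * growth_factor
    recovery_points = churn_per_day_gb * retention_days
    return replica + recovery_points

# 100GB of mailbox data, 5GB daily churn, 7-day retention:
print(estimate_allocation_gb(100, 1.5, 5, 7))  # 185.0
```

Comparing an estimate like this against the wizard's recommendation tells you quickly whether the default allocation leaves headroom for that incoming batch of mailboxes.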

The Bottom Line


Determine the prerequisites for installing the DPM server components. The first step of installing DPM into your organization is to ensure that your DPM server is running the necessary versions of the Windows operating system, service packs, and hotfixes.

Master It Perform a survey of your Windows environment to ensure that you have the necessary hardware and software to install DPM:

What version of Windows Server and service pack will you be running on the DPM server?
Does your DPM server meet the hardware requirements, including storage configuration?
What Active Directory forest and domain is the DPM server a member of? Is it in the same forest as the servers it will protect?

Determine the prerequisites for installing the DPM protection agent. The next step for installing DPM is to ensure that your protected servers are running the necessary versions of the Windows operating system and service packs.

Master It Perform a survey of your Windows environment to ensure that your protected servers are compatible with the DPM protection agent:

What version of Windows Server and service pack are you running on the protected servers?
Do the workload version and architecture meet the requirements?
What Active Directory forest and domain is the server a member of?

Add disk volumes to the DPM storage pool. Storage on your DPM server is a critical part of your protection strategy. Although DPM's block-based replication and use of VSS help reduce the amount of space it requires, you still need to give DPM an adequate amount of disk space to ensure that you can create the number of recovery points and synchronization schedules you need to protect your data.

Master It
1. Of the following forms of storage, which ones can DPM use and which ones can it not use?
Direct attached storage
iSCSI volumes
Network attached storage volumes
Storage area network volumes
2. How does DPM require volumes for the storage pool to be configured in Disk Manager?

Deploy the DPM protection agent to protected servers. Once the DPM server is configured, you must ensure that the DPM protection agent is deployed to the servers whose data you wish to protect. This agent ensures that the server resources are seen by DPM and can be protected.

Master It
1. Where do you deploy the DPM protection agent?

2. Is a reboot required to install the DPM protection agent on a protected server?

Configure a DPM protection group. The final part of preparing DPM to protect data is to create the protection groups. A protection group allows you to specify one set of protection policies and apply them to multiple protected resources. You should use as few protection groups as you need, but enough to ensure that all of your policy requirements are met.

Master It
1. What two protection methods does DPM provide in a protection group?
2. What options does DPM give you for creating an initial replica of your protected data?

Chapter 3: Using the DPM Administration Console


Overview
There was a time when nails were high-tech. There was a time when people had to be told how to use a telephone. Technology is just a tool. People use tools to improve their lives.
Tom Clancy

When you're learning a new application, there's a tendency to forget one of the most important rules for dealing with computers: don't assume! You may think you know how it works, but you may find that you don't. In this chapter, we're going to take you for a walk around the GUI administrative interfaces of DPM. This may seem boring, but it never hurts to know where everything is.

We've all done it. We get our hands on a new piece of hardware or software and can't wait to start playing with it. We know better; we know that we should read the manual, but our excitement overtakes us and we barge ahead. Here's Ryan to wax nostalgic on this topic (cue the shimmery screen fade):

It was 1999, and I was a new Windows NT 4.0 admin. I was tasked with building out a base install for a large number of new servers. Being a hotshot new admin-type, I immediately shoved all of the documentation, media, and everything else that came packaged with these servers aside (it's just extra fluff; real administrators don't use the OEM defaults). These servers were top of the line for the day: dual 450MHz PIII CPUs, complete with 128MB of RAM and SCSI hard disks. I was in hog heaven.

So, you can imagine my frustration when I inserted the install media into the first server and got a message from the installer stating that no hard disks could be found. I figured that the problem must be in the BIOS, or maybe the SCSI controller was configured improperly. After checking the BIOS and discovering nothing obviously amiss, I proceeded to check the BIOS on the controller itself. Again, I found only that all of the disks were visible, had the correct IDs, and that the disk I wanted was selected as the bootable device.
My next flash of brilliance was that there must have been some problem with the install media, so I got another disk. This option, too, led me nowhere. But I wasn't about to let that stop me, so I got two more disks and tried the install two more times. Needless to say, they didn't help any better than the first one did. Feeling moderately dejected, I finally decided to check for some source of wisdom on the topic and began searching online. Oddly enough, there were no hits for any of the descriptions of the problem I was having. Determined not to have to call the hardware vendor or Microsoft, I finally did the sensible thing (RTFM) and opened the documentation to see if anything in it applied.

The OEM of the equipment included an incredibly helpful installation guide, including the instructions: "When Windows Setup begins, press the F6 key to be prompted later on to install the necessary drivers for the SCSI controller." (In my defense, this was before there was an onscreen prompt for the F6 key.) My jaw dropped, and I sat there staring at the page. I knew that from school. Why did I forget? While I don't know exactly why, I believe that I let my excitement over new toys and my ego run the show, causing my unfortunate oversight and subsequent wasted time. Deservedly humbled, I proceeded to dig the driver disks out of the pile I'd shoved them into and read every last word of the manual.

The moral of the story is that I had something I knew well (Windows NT 4.0) and something new (the hardware). I wanted to play with the new hardware so badly that I ended up costing myself about four hours of time and a considerable amount of frustration, not to mention a bruised ego.

We want you to know where everything is, even if you don't know how it works yet; we especially want you to know where it is if you think you can figure it out on your own. Although DPM follows the UI principles common to all Microsoft MMC-based applications, you may find, as Ryan did, that some surprises are hidden in the things you think you already know. In this chapter, you will learn to:

Navigate the DPM GUI Name the major sections of the DPM GUI Describe the purpose of the Actions pane

Navigating the GUI


Most of us who've been administering Windows environments for a while are used to GUI-based admin tools. Since Windows 2000, the main interface for our administrative tools has been the Microsoft Management Console (MMC). In this respect, DPM is no different. The advantage of the MMC is that it provides a consistent interface that is customizable via snap-ins, such as the DPM Management console.

The DPM Management console is where you'll perform the vast majority of tasks related to DPM 2007. It is installed by default on the DPM server, and it can be installed by itself on any machine from which you choose to perform administrative tasks (provided the machine meets the system requirements; see Chapter 2, "Installing DPM"). Once DPM is installed, you can access the DPM Management console by clicking Start > All Programs > Microsoft System Center Data Protection Manager 2007 (Figure 3.1). If you chose to create a desktop shortcut during setup, you can double-click that shortcut instead.

Figure 3.1: Opening DPM Administration console

On launching the DPM Management console, you'll notice that, like other MMC consoles, it has two panes. The left pane is for navigating the different functions of DPM, while the right pane shows the actions available. Some actions are global and, therefore, available no matter which tab you have selected; other actions are directly related to the tab at hand, making them available only on that tab.

There are some consistencies to the Actions pane. You'll always see the View and Help links, as well as the Options link. The Options link in the Actions pane handles global options for the DPM server. On clicking the link, you'll be presented with the Options window, as shown in Figure 3.2.

Figure 3.2: The Options window

The first tab displayed is the End-User Recovery tab, which is discussed in detail in Chapter 5, "End-User Recovery." The Auto Discovery tab, shown in Figure 3.3, allows you to specify a time of day for DPM to look for new machines on your network. The Auto Discovery feature allows DPM to maintain a list of machines that have been added to the Active Directory domain and are available for agent installation. Even if the autodiscovery process hasn't yet found a machine you want to protect (it runs only once every 24 hours), you can manually enter the information for a new host.

Figure 3.3: The Auto Discovery tab

The SMTP Server tab, shown in Figure 3.4, allows you to specify a mail server so that DPM can send job and alert notifications over email.

Figure 3.4: The SMTP Server tab

You must enter the server host name, the TCP port number, and the sender email address from which the notification messages will appear to be sent. If your SMTP server requires authentication, enter a username and password to use. Once you have entered the information, you can use the Send Test E-mail button to test your configuration. The Notifications tab allows you to specify the types of alerts to be sent, as well as the recipients as shown in Figure 3.5. Just like the SMTP Server tab in Figure 3.4, there is a button to test your configuration.

Figure 3.5: The Notifications tab

REAL WORLD SCENARIO: Which Port Do I Use?

DPM uses the standard Simple Mail Transfer Protocol (SMTP) to submit messages to your chosen mail server. By default, SMTP uses TCP port 25. However, there are several cases in which you may be using a different port:

If your mail server is using SMTP with Secure Sockets Layer (SSL), also known as SMTPS, there is a different default port. SMTPS typically uses TCP port 465 rather than port 25; SSL requires a different port for protected connections than for unprotected connections.

If your mail server is using SMTP with Transport Layer Security (TLS), you should use the default setting of port 25. While TLS works in much the same fashion as SSL, it can be started over an existing unprotected connection; therefore, it doesn't require a separate port.

If you're using Exchange 2007, you should consider using the client submission port, which is enabled by default on Exchange 2007 Hub Transport servers. This emerging standard is specifically designed to allow standard SMTP port 25 to be locked down within the network, and it offers TLS-protected, authenticated SMTP sessions on TCP port 587. If you're using this option, you will need to configure and enable SMTP authentication for your DPM server.

Regardless of which port option you use, best practice is either to create a separate account for the DPM notifications to come from or to use an account that is associated with your DPM administrators. We prefer to use a single mail-enabled distribution group named Server Administrators, coupled with a standard SMTP service account. This account is used to permit SMTP authentication for all of our servers (DPM, SharePoint, and more), but the messages are configured to come from the Server Administrators email address. This way, if anyone ever replies to the notification message, it's going to be read by our administrative team.
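The port choices in this sidebar reduce to a simple lookup. Here's a small sketch of that logic; the mode labels are our own shorthand (DPM itself just asks you for a host name and port number):

```python
# Map the SMTP security modes discussed above to their conventional TCP ports.
# Mode labels are our own shorthand, not settings DPM exposes by those names.
SMTP_PORTS = {
    "plain":      25,   # unprotected SMTP
    "starttls":   25,   # TLS upgrades the existing port-25 connection
    "smtps":      465,  # SSL needs its own listener, separate from port 25
    "submission": 587,  # authenticated client submission (Exchange 2007 Hub Transport)
}

def smtp_port(mode: str) -> int:
    try:
        return SMTP_PORTS[mode.lower()]
    except KeyError:
        raise ValueError(f"unknown SMTP mode: {mode!r}")

print(smtp_port("smtps"))       # 465
print(smtp_port("submission"))  # 587
```

Note the asymmetry the sidebar describes: SSL demands its own port, while STARTTLS and plain SMTP share port 25 because TLS negotiation happens after the connection is established.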

The Alert Publishing tab, shown in Figure 3.6, is for those of you who are using Microsoft Operations Manager (MOM) or Microsoft System Center Operations Manager (SCOM) to monitor and manage your network operations.

Figure 3.6: The Alert Publishing tab

In order for MOM or SCOM to properly pick up the alerts that DPM generates, you must first tell DPM to publish them. To do this, click the Publish Active Alerts button.

The Customer Feedback tab, shown in Figure 3.7, is where you choose whether or not to send information to Microsoft's Customer Experience Improvement Program. Choose either the Yes or No button, depending on whether you want to submit this anonymous information to Microsoft.

Figure 3.7: The Customer Feedback tab

The Actions pane is divided into sections. When an object is selected in the left pane, the available actions for that object appear in the Selected Item section of the Actions pane. In the Navigation pane, you will find some objects that can be interacted with via a right-click to pop up the appropriate contextual menu. The Actions pane, however, has none of these contextual right-click menus, so all activity there is done with simple left-clicks.

The Monitoring Tab


The left-most tab in the DPM Management console is the Monitoring tab, shown in Figure 3.8.

Figure 3.8: The Monitoring tab

The Monitoring tab contains status messages and alerts in two subtabs: the Alerts subtab and the Jobs subtab. As you can see in Figure 3.9, the Alerts subtab displays active alerts grouped by severity by default.

Figure 3.9: Grouping in the Alerts subtab

Sorting and displaying all of the alerts is done via the dropdown and checkbox. In the Alerts subtab, you can also subscribe to notifications for alerts, change an alert to inactive status, run a synchronization job, and view the items affected by the alert by clicking in the Actions pane or right-clicking the alert. You can select any alert in the Alerts subtab and see its details in the Details pane below; the Details pane includes links for possible actions for the alert, as well as an option for a highly detailed description of the error.

The Jobs subtab contains information about all of the jobs on your DPM server, along with a Details pane that allows a deeper dive into any scheduled, running, or failed job. The various types of jobs are shown in Table 3.1.
Table 3.1: Types of Jobs in DPM

Replica creation: Happens when the initial replica of a protected volume is created.
Consistency check: Happens when the replica of a protected piece of data is being compared to the source on the protected server.
Synchronization: Happens when DPM receives changes to protected data from the source.
Recovery point: Happens when a recovery point of a replica is created.
Disk recovery: Happens when protected data is in the process of being recovered.
Tape erase data: Happens when DPM is erasing data on a tape.
Drive cleaning: Happens when DPM is cleaning a tape drive.
Detailed inventory: Happens when an administrator initiates a detailed inventory of a tape.
Fast inventory: Happens when an administrator initiates a fast inventory of a tape.
Tape verification: Happens when a tape is verified.
Dataset copy: Happens when copying a dataset.
Tape backup: Happens when a job that protects data via tape is running.
Tape recovery: Happens when recovering data from a tape.
Copy data tape: Happens when copying data from a tape, whether to another tape or a disk location.
Recoverable items recatalog: Happens when recataloging a tape from another DPM server.
Tape recatalog: Happens when recataloging a tape.
Library rescan: Happens when rescanning a library.

The Jobs subtab, shown in Figure 3.10, allows you to group and view all of your protection jobs by Protection Group, Server, Status, or Type. You can also create and apply filters to further limit the results to the details you feel are relevant. Note also that the different filter options are available in the Actions pane on the Jobs subtab.

Figure 3.10: The Jobs subtab

The Actions pane of the Jobs subtab has a link that allows you to create a custom filter. To create a custom filter, click the link and the Filter screen will appear as shown in Figure 3.11.

Figure 3.11: The Filter screen

At the top of the Filter screen, you can enter a name for your filter and specify a time period to which it will apply. Also in the Filter screen, you'll see three subtabs: Jobs, Protection, and Other. The Jobs tab allows you to filter by job type and/or status as shown in Figure 3.11. The Protection tab allows you to filter by Protection Group, protected server, or data source as shown in Figure 3.12.

Figure 3.12: The Protection tab of the Filter screen

The Other tab allows you to filter by data transferred, time elapsed, or tape device as shown in Figure 3.13.

Figure 3.13: The Other tab of the Filter screen

Once your filter has been created, click the Save button, and it will appear in the list of filters in the Jobs subtab. You can also modify or delete any custom filter you have created using the link in the Actions pane. The tasks and actions available in the Monitoring tab are unavailable in the DPM Management Shell. This hinders creating and using automated monitoring scripts, and is a limitation that we hope Microsoft will remedy in the next release of DPM.

The Protection Tab


The Protection tab, shown in Figure 3.14, is where all of your protection groups are defined and modified. It's also the place where you can see all of the different protection options available in DPM; you can drill down and see the protection details for each data source that you've added to a protection group. The viewing options in the Protection tab are limited to grouping by protection group or server.

Figure 3.14: The Protection tab

Notice the abundance of available actions in the Actions pane. Table 3.2 lists the available actions, along with a brief description of each.
Table 3.2: Actions in the Protection Tab

- Create Protection Group: Starts the Create Protection Group Wizard.
- Modify Protection Group: Opens the Modify Protection Group Wizard.
- Stop Protection of Group: Stops protection of the group and gives you the option of deleting any existing replicas.
- View Tape List: Shows a list of tape devices used by the protection group.
- Specify Tape Catalog Retention: Allows you to determine when tape catalogs should be pruned.
- Optimize Performance: Allows you to change settings associated with protection jobs to help performance.
- Modify Disk Allocation: Allows you to change the amount of space in the disk pool allocated to a protection group.
- Perform Consistency Check: Starts a consistency check on the selected data source.
- Create Recovery Point - Disk: Creates a recovery point on disk for the selected data source.
- Create Recovery Point - Tape: Creates a recovery point on tape for the selected data source.
- Stop Protection of Member: Stops protection of a member of a protection group and presents an option to delete any replica data.
- Remove Inactive Protection: Removes inactive protection groups and members.

Although you don't have a lot of tasks to accomplish here, DPM also allows you to perform most of them using the DPM Management Shell (DMS). To learn more about the DMS, see Chapter 4, "Using the DPM Management Shell."
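For a taste of what the DMS offers here, the following sketch lists the protection groups on a DPM server. Get-ProtectionGroup is a DPM 2007 cmdlet; the server name DPM01 is a placeholder for your own DPM server, and the snap-in name in the comment is our assumption about a default DMS session rather than something you normally need to type.

```powershell
# In the DPM Management Shell, the DPM snap-in is already loaded for you.

# List every protection group on the DPM server named DPM01 (placeholder)
$groups = Get-ProtectionGroup -DPMServerName "DPM01"

# Print each group's display name
foreach ($pg in $groups) {
    Write-Host $pg.FriendlyName
}
```

From here, other actions in Table 3.2 have cmdlet counterparts as well, which Chapter 4 covers in detail.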

The Recovery Tab


The Recovery tab, shown in Figure 3.15, is where the real magic of DPM takes place. Well, technically it takes place in the code, but the Recovery tab is where you get to see and interact with the magic.

Figure 3.15: The Recovery tab

The Browse subtab contains a tree view of protected data sources organized by domain and server. To the right of the tree, there is a calendar view for selecting the day of the recovery point you want, and a dropdown to choose which recovery point from that day you wish to use. The details pane in the lower right of the subtab lists the recoverable items for the selected recovery point. Right-clicking an item here displays a contextual menu with all of the available options; the Actions pane also contains these options.

The Search subtab, shown in Figure 3.16, allows you to search protected data from the source or replica.

Figure 3.16: The Search subtab

DPM's search capability is incredibly flexible: it allows you to do a standard file search, search Exchange mailboxes and SharePoint data, and more. You can narrow your search by Contains, Exact Match, StartsWith, and EndsWith, as well as by date ranges organized by when the recovery point was taken. Finally, you choose the original location to search within and whether to include subfolders. The Search feature is extremely helpful in cases where you only need to recover a small number of items from a large data source. It is also helpful in cases where you need to recover from an older archive: once you have inventoried the data, simply go to the Search subtab, enter the search terms and the source, and you can find it easily.

The Reporting Tab


Although the Recovery tab is where the magic happens and the Protection tab is where you define your protection, the Reporting tab may be the most important tab of all. You're probably asking yourself if we're joking at this point, but bear with us. In our combined years as systems administrators, there has almost always been a shortage of budget, whether real or perceived. Companies lately are under more pressure to cut costs, and because IT infrastructure tends to be expensive, it shows up as a large line-item target for those who want to have a quick and significant effect on the bottom line. The downside to this approach is that it's roughly analogous to a racecar team owner deciding to cut costs by not buying tools for the pit crew, reducing training time for the driver, and canceling the insurance policy for the car. Regardless of the long-term effects, this type of short-term focus happens all too frequently; the Reporting tab is one of the best tools in your arsenal to defend against this kind of thinking.

The Reporting tab, shown in Figure 3.17, shows timely reports on all areas of your data protection configuration.

Figure 3.17: The Reporting tab

Double-clicking a report brings up the available filtering options, as shown in Figure 3.18.

Figure 3.18: Report filtering options

You can group by server or protection group, set your time granularity by weeks, months, quarters, or years, and display content from the current week to the previous five weeks. The History tab displays a list of any reports you've scheduled to be produced. The Disk Utilization Report, shown in Figure 3.19, displays information about the storage pool, its capacity, allocated space, and used space. It not only displays information for the active DPM server, but for all other DPM servers you have.

Figure 3.19: A Disk Utilization Report

The Protection Report, shown in Figure 3.20, provides statistics on recovery point availability. This data can be collected on a per-server or per-protection group basis and aggregated for all protection groups.

Figure 3.20: A Protection Report

The Recovery Report, shown in Figure 3.21, provides details about recovery jobs and their statistics for performance tracking.

Figure 3.21: A Recovery Report

The Status Report, shown in Figure 3.22, provides a status of all recovery points for a time period. It also lists recovery jobs, showing successes and failures by recovery point and creation of disk or tape-based recovery points. Trends in error frequency and alerts are also shown in this report.

Figure 3.22: A Status Report

The Tape Management Report, shown in Figure 3.23, provides details on your tape rotation strategies. It lists all libraries below the free media threshold. Data is collected on a per-library basis, and is aggregated for all of your libraries.

Figure 3.23: A Tape Management Report

The Tape Utilization Report, shown in Figure 3.24, shows trends in tape usage. This is particularly important in capacity planning.

Figure 3.24: A Tape Utilization Report

You can change the generation schedule for any report by clicking it in the left pane and clicking the Schedule button in the Actions pane, as shown in Figure 3.25.

Figure 3.25: Changing a report generation schedule

The Management Tab


The Management tab, shown in Figure 3.26, is the central location for managing the objects used by DPM. This includes agents, storage pool disks, and libraries.

Figure 3.26: The Management tab

The Agents subtab displays all installed agents, organized by protection status. Also displayed are agent licenses purchased, license status, and licenses in use. This subtab also handles agent installation and removal, agent disabling, and server bandwidth throttling via the links in the Actions pane, or by right-clicking the object. The Disks subtab, shown in Figure 3.27, is used to manage disks in the storage pool. From this subtab you can add, remove, or rescan for disks. Information is displayed here for the number of disks in the storage pool, total capacity, and allocation.

Figure 3.27: The Disks subtab

The Libraries subtab, shown in Figure 3.28, handles all aspects of tape device management, and includes many actions specific to the subtab.

Figure 3.28: The Libraries subtab

Table 3.3 shows the tape device management actions and their descriptions.
Table 3.3: Tape Management Actions

- Inventory library: Allows an inventory, either fast or detailed, of the selected library.
- Rescan: Rescans for attached tape devices.
- Unlock library door: For tape devices without an IE port, allows media to be physically added and removed. If media is added, the door must be locked again.
- Rename library: Changes the name of a tape device.
- Disable library: Disables the selected tape device, rendering it unusable by DPM.
- Clean drive: Used to run a cleaning tape in the drive to clean it.
- Disable drive: Disables the selected tape drive, making it unusable by DPM.
- Add tape (IE port): Enables IE port slot population.
- Remove tape (IE port): Removes a tape from a populated IE slot.
- Identify unknown tape: Reads the tape header from an unknown tape to identify it.
- View tape contents: Reads and displays the contents of a tape.
- Erase tape: Erases the contents of a tape.
- Mark as cleaning tape: Marks a cleaning tape so DPM can identify it as such.
- Mark tape as free: Marks a tape as available to be written to.
- Recatalog imported tape: Enables data from another DPM server to be recovered.

The Bottom Line


Navigate the DPM GUI. Before you can master DPM, you need to be familiar with its primary administrative interface. Although DPM offers both a graphical interface and a command-line interface, the primary interface that most administrators will use and be comfortable with is the GUI. You should know the different components of the GUI.

Master It

1. What standard Windows technology does the DPM Administrator console use?
2. What are the major areas of the DPM Administrator console?
3. What is the function of the Navigation pane?

Name the major areas of functionality in the DPM GUI. The DPM Administrator console allows you to perform a variety of management tasks and functions for your DPM deployment.

Master It

1. How many main tabs or functionality groups are there in the DPM Administrator console?
2. In which section would you see the status of any ongoing protection jobs?
3. In which section would you discover new servers on which to deploy the DPM protection agent?

Describe the purpose of the Actions pane. The Actions pane is a key part of the DPM Administrator console.

Master It

1. What is the function of the Actions pane?
2. What options are always available in the Actions pane?

Chapter 4: Using the DPM Management Shell


Overview
"Technology: No Place for Wimps!" (Scott Adams, Dilbert)

Windows has a reputation for being an operating system that is hostile to the command line and only allows you to do things in the GUI. Over the years, this perception has had an on-again, off-again relationship with the facts. Early versions of Windows (and by "early" we're talking Windows 3.x) offered practically no way to perform tasks from the command line, short of launching programs from a DOS prompt or editing various .INI text files to tweak advanced options; modern versions of Windows offer a nearly complete array of tools, technologies, and utilities to control virtually every aspect of your computer's configuration and operation. In fact, the Server Core configuration, a stripped-down installation option that offers only a command-line interface (CLI), has been one of the most eagerly anticipated new features offered in Windows Server 2008.

This continued drive for support for the command line amazes and confuses a lot of people. We blame Hollywood for perpetuating the myth that "typing stuff on computers to make them do things can only be done by really smart people who will use this power for evil purposes such as crashing airliners, transferring fractional pennies into bank accounts, or taking food out of the mouths of starving widows and orphans by downloading the latest movies." This myth is helped along by a certain breed of computer professionals who want to appear wise and mysterious. Like all myths, though, this one contains a kernel of truth (blame Devin for the pun); the original command-line interfaces, for good historical reasons, were full of cryptic commands and confusing abbreviations:

- The original command-line interfaces were designed for operating systems that ran in very small amounts of memory. As a result, programmers got in the habit of using short command names to make them fit into the limited memory they had available.
- The first devices for using command-line interfaces weren't the CRT screens and keyboards we're used to today; they were big teletype devices that ran over very slow serial communications links. When computers transmit data to teletype devices at the blazing fast rate of 110 characters per second, longer commands slow things down unnecessarily. As technology continued to improve, programmers kept the habit.

Whatever the source of the bad reputation, the command line is still a valuable tool for the modern IT professional. Microsoft has received a lot of feedback over the years; this feedback has led it to research and develop better, more consistent technologies for managing and configuring its applications and operating systems. DPM uses Windows PowerShell, the latest fruit of this research; the resulting DPM Management Shell (DMS) gives you a command-line interface that is actually easy (not to mention fun) to use.

In this chapter, you will learn to:


- Explain the relationship between Windows PowerShell and the DPM Management Shell
- Describe the main benefits that PowerShell offers over regular scripting technologies

The DPM Management Shell: Your New Best Friend


The DMS isn't a technology that the DPM team came up with by themselves. Over the years, Microsoft has produced a variety of technologies for scripting and shell environments, such as VBScript. All of these environments, however, have various drawbacks that make them less than suitable for DPM 2007 and the current generation of Microsoft software. What the DPM team did instead was use a new technological building block that was just starting to come into play: Windows PowerShell. By using PowerShell as the basis of the DMS, they were able to gain some amazing capabilities and benefits:

- A management interface that is consistent with other Microsoft applications. Windows Server 2008 offers PowerShell as a native management technology; Exchange 2007 is built entirely on PowerShell, and many other upcoming applications will feature varying levels of PowerShell support.
- Built-in integration with the .NET framework. PowerShell is built entirely in .NET, meaning that .NET applications can call PowerShell scripts and cmdlets, and PowerShell scripts can in turn easily reference .NET objects. This allows more flexible application design; administrators can write scripts for common tasks, making use of the full power of the .NET framework, and have those scripts be easily used by other custom applications. As the scripts are edited and changed to conform to the operational environment, the applications automatically receive the benefits of those changes without requiring a lot of expensive development and testing time.
- Simplified scripting and bulk operations. Unlike other command-line environments, PowerShell is designed to use a consistent grammar for all of the commands you can use. Having this kind of consistency means that you have a much easier time learning how to use PowerShell-based environments. Again, since PowerShell is based on the .NET framework, this also means that if you know how to use a .NET class, you can leverage that knowledge when creating PowerShell scripts.
A Historical Perspective

Ryan offers the following story to give you a bit of perspective on the whole GUI vs. command line argument: "A while ago, I ran across my old 80286 while cleaning out my attic. Just for kicks I decided to see if it would still fire up. My 10-year-old stepdaughter was watching while I hooked up the old dinosaur and flipped the heavy-duty power switch on the side. When it finished booting up and landed me at the command prompt, she said, 'I thought this was supposed to be a computer; where's Windows?' Suffice it to say that although I'm only 31 years old, I felt I should be checking on my Social Security benefits.

"I proceeded to explain to her the concept of an operating system, RAM, and the limitations on system resources in those days. Despite having been a die-hard Windows admin for many years, I found myself extolling the benefits of the command line and reminiscing about the 'good old days' when systems were simpler. She then asked why Windows was even invented. After lengthy explanations of multitasking and usability, I came full circle and had to admit that things these days aren't that bad after all."

There are those of us, including the authors, who do prefer the ability to perform certain tasks via some sort of command line. Thankfully, with DPM 2007 we have the choice of using the DPM Management Shell for these tasks.

Let's take a closer look at PowerShell and how it's integrated with DPM.
Introduction to Windows PowerShell

Windows PowerShell (formerly called Monad) is a new command-line interface (CLI) from Microsoft. Designed originally for system administrators, PowerShell provides a flexible foundation; Microsoft's goal is to build all future GUI tools on top of PowerShell, ensuring that the core administrative functionality of PowerShell-enabled products remains scriptable. For most products, it will take a version or two to fully realize this goal.

Each product team has a limited pool of resources that must be balanced among adding new functionality, rewriting old functionality, and fixing current functionality. Deciding how much of those resources to invest in each area can be a difficult process for a product group, and can leave some customers scratching their heads trying to understand the results of those decisions. For the DPM group, the majority of the resource investment in DPM 2007 was aimed at adding new functionality and allowing the protection of more core Microsoft workloads such as Exchange Server, SQL Server, and Virtual Server. They chose to reuse as much of the existing code base as possible. Most of the existing DPM code base was not ready to work with PowerShell and would have required a lot of existing functionality to be rewritten. It was simpler and less expensive in resources to keep the existing code and add to the product than it would have been to rewrite the product and try to add functionality to it at the same time.

This is the first version of DPM that supports PowerShell, and currently the PowerShell capabilities may feel like an add-on to the product rather than a core technology, as it is in other products such as Exchange 2007. The entire administrative interface for Exchange 2007 is implemented through PowerShell; the GUI management console for Exchange actually runs PowerShell cmdlets to accomplish what it is told to do. In DPM 2007, the shell can only perform a subset of the GUI operations. We trust that eventually DPM will be fully scriptable.
In traditional command shells, the commands are executable programs whose parameters and input methods may vary widely. Take, for example, the DOS commands PING and DIR. All of the parameters for the PING command start with a minus sign (-), and all of the parameters for the DIR command start with a slash (/). Each of these commands is actually a different application, written in a different language, added to the Windows CLI at a different time, and developed by a different Microsoft group with different methodologies. This might seem like a tiny difference, but when you have to deal with hundreds of commands, this is just one more thing that you need to remember. Does this command use a minus or a slash? Does this command output delimited text, comma-separated text, or something entirely different?

Another key difference between PowerShell and other shells is that PowerShell cmdlets pass data as objects that any other PowerShell cmdlet can use, instead of passing data as formatted text that often needs to be manipulated before it can be used by another command. This eliminates the need for the many text-processing utilities that are common in UNIX such as GREP and AWK. This makes piping the output of one cmdlet to the next very simple, and it lets you use very powerful single-line commands. In PowerShell these are called one-liners. Some product groups have held internal and external competitions to see who can come up with the best PowerShell one-liners for their products. A very simple example of a one-line piped DPM PowerShell command is as follows:
Get-Tape -ProtectionGroup <Group_Name> | Erase-Tape

This command will get the names of all of the tapes in a protection group, and pass that data to the Erase-Tape cmdlet, which will then take the list of tapes and perform the erase action on them. In the past you would have done this in the GUI a number of ways, such as:

- You could hold down the Ctrl key and select all of the tapes, and then somehow tell the GUI to erase all of the tapes.
- If the application was designed well, there was a cool right-click option to erase all of the tapes in a group. If it was not written well, you had to select one tape at a time and then erase the data.

You can see that this might have been simple, or it might have been complicated, in the GUI. In PowerShell, our example should help you see that it is going to be pretty simple and consistent. That consistency not only persists across the tasks you might perform in one product, but across all products as well. The core PowerShell application is owned by a single group at Microsoft, the PowerShell group. The PowerShell group improves the core PowerShell functionality and works with each group at Microsoft that implements PowerShell in their applications. This ensures that PowerShell is implemented consistently across groups. When a vendor adds PowerShell support to their product, they do so by creating a PowerShell snap-in. The snap-in is the subset of cmdlets for the product that can be loaded on top of the basic PowerShell shell.

PowerShell is an object-oriented CLI, providing an interactive prompt and extensive scripting capabilities. All operating systems have a CLI of some sort; sometimes it's not directly accessible from the default OS interface, but it's there. Even the Mac OS has a CLI, now that it is based on Unix. Traditionally, a CLI processes text passed to it in arguments or scripts. Windows PowerShell, however, does not. Instead, it processes objects from the .NET platform. All shell commands in Windows PowerShell use the same command parser, instead of different parsers for each tool. This gives a common framework for any tools written for it to draw from, which makes learning PowerShell easier. For example, when the Get-Help cmdlet is used together with a target cmdlet, it shows help information about that target: Get-Help Get-Command (see Figure 4.1).

Figure 4.1: The DPM Management Shell

The DPM Management Shell can process all of the same commands that cmd.exe can, in addition to the core PowerShell cmdlets and all of the cmdlets in the loaded DPM snap-in. Although it is based fully on the .NET framework (version 2.0, if you're interested), PowerShell is designed to provide a consistent interface for all of the activities and data stores an administrator might use. It does this through the use of providers, which give PowerShell a plug-in architecture that expands its base functionality while allowing existing cmdlets to work against new data stores.
A Matter of Style?

The section of the "Microsoft Style Guide" that dictates how writers write about PowerShell says that all PowerShell cmdlets and parameters will follow the Pascal case style convention (sometimes called UpperCamelCase). This is applied to PowerShell by capitalizing the verb-noun combinations as well as all of the words in the cmdlet parameters, especially if the parameter is a concatenated word, as in Get-DatasourceProtectionOption. It would be against the style, for example, to write get-help -basic; that should be written as Get-Help -Basic, with proper word casing. The editors at Microsoft chose to do this to make sure that everyone was consistent in how they wrote commands in documentation. Pascal casing is applied to all programming and scripting languages at Microsoft and, in most cases, industry wide.

You may think that this is trivial, not something that you need to know as a reader, and that this is just for writers, but there is a reason for this distinction. When we first started playing with PowerShell and read about cmdlets, we thought that we had to type them out exactly the way they were written or they would not work. We later learned that this was not the case; while PowerShell cmdlets and parameters are not case sensitive, the contents of variables are. If you type get-help, it will work the same way that Get-Help does. However, if you try to match a text string, you need to either use a case-insensitive match or be prepared for PowerShell to literally match the case you've provided.

While many PowerShell enthusiasts will tell you differently, we recommend that you get into the habit of properly capitalizing your cmdlets. Being aware of the case sensitivity may just save you from problems down the road.
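A quick demonstration of the distinction: cmdlet names are case-insensitive, while string comparisons give you both case-insensitive and case-sensitive operators (-eq versus -ceq, both standard PowerShell).

```powershell
# Both of these run the same cmdlet; cmdlet names are not case sensitive
get-date
Get-Date

# String contents are another matter: -eq ignores case, -ceq does not
"DPM" -eq "dpm"      # True
"DPM" -ceq "dpm"     # False
```

Knowing which comparison operator you are using matters whenever you filter objects by a text property in a script.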

The base PowerShell 1.0 package provides interfaces for the following data stores and technologies:

- The filesystem
- The system Registry
- The system event logs
- Active Directory Service Interfaces (ADSI)
- Windows Management Instrumentation (WMI)

Because of this modular provider-based design, once you learn how to enumerate files in folders, you already know how to enumerate other types of objects, whether they're Registry values, entries in an Active Directory organizational unit, or some other type of data. As long as PowerShell has a provider, you can reference and modify objects of that type.
CMDLETS

In PowerShell, shell commands are called cmdlets (pronounced "command-let"). Cmdlets are simple, small, and designed to be used with other cmdlets. Cmdlets exist as verb-noun combinations separated by a hyphen; for example, the Get-Help cmdlet shows help information. The Get-Help cmdlet is made up of two parts, as follows:

- The verb, Get: The verb designates the action that the command is going to take; in this example, to get the value of an object.
- The noun, Help: The noun designates the setting, function, part of the system, or object on which the cmdlet is going to perform an action. In this case, the cmdlet is working with the help function.
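You can explore the verb-noun pattern with the standard Get-Command cmdlet, which can filter on either half of a cmdlet's name. (The DPM cmdlets mentioned in the comments appear only when the DPM snap-in is loaded, as it is in the DMS.)

```powershell
# All cmdlets whose verb is Get
Get-Command -Verb Get

# All cmdlets whose noun is Tape; in the DMS this includes
# cmdlets such as Get-Tape and Erase-Tape
Get-Command -Noun Tape
```

This is often the fastest way to discover what a snap-in can do before reaching for the documentation.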

A friend once flew to California to take the Cisco CCIE test to become certified as the ultimate Cisco geek. Things did not go so well for him; he was sent home the first day. They asked him to leave because his method of getting to the answer was the wrong way according to the Cisco book. Although he got the right answer, he skipped a number of steps that were listed in the official Cisco fix-it manual. Cisco, at the time, believed that you had to do things their way or you could not do them as a certified individual. They wanted everyone who was certified to have the same way of thinking about and troubleshooting problems. They did not allow for any custom style or methodology of troubleshooting.

Aren't you glad that Microsoft doesn't feel the same way about how you use PowerShell? Rather than force you to do things their way, Microsoft has worked hard to build in the ability to easily customize their products. Applications that use the MMC framework gain the ability to create and manage interface customizations as part of the core MMC capabilities; they also usually offer a number of different ways to perform any given task. The PowerShell team was happy to follow this lead and create an impressive capability for customization in PowerShell.

One of the components of PowerShell customization is a feature called aliases. Aliases facilitate the creation of custom commands to replace PowerShell verb-noun combinations. For example, if you come from a Unix background, you are used to using the command man (short for manual) to view help in the CLI. If you wanted to keep things the way that they were, you could create an alias for the Get-Help cmdlet. With the alias in place, you would type man Get-Tape to view help for the Get-Tape cmdlet. Setting up this alias is very simple to accomplish; it is in fact a one-line command. To set the man alias, use the Set-Alias cmdlet followed by the alias name that you want to create, and then the cmdlet to which you want that alias to be mapped. The result looks something like this:
Set-Alias man Get-Help

If you're like we were the first time we saw this, you might be thinking, "Wow, this is amazing! I can totally customize PowerShell if I want to." From there, it's a short step to, "I bet I could make my PowerShell environments look like a completely different OS/shell/utility that I am already used to, if I wanted to take the time to set it up." If you are thinking this way, you are correct; yes, you can totally customize your PowerShell to this extent. In fact, you can go far beyond aliases and customize a number of aspects, such as:

- Colors, including the various types of foreground and background colors
- The default folders PowerShell uses
- Custom functions and variables that you can access from session to session
- Objects, libraries, and PowerShell snap-ins that are loaded within your sessions

Just because you can do something doesn't mean it's wise to do so. Devin knows a number of Unix users who were more comfortable in DOS and spent a large amount of time developing a set of Unix command-line aliases that made them feel as if they were at the DOS command-line prompt. These users, however, promptly ran into problems the day that their customizations got wiped out; they didn't know how to use the native system commands and were dead in the water. You can easily paint yourself into the same kind of corner with PowerShell aliases. Overusing aliases can also make it more difficult for others to help you troubleshoot your scripts; they first have to figure out where a problem is coming from. To apply customizations and have them load every time you start PowerShell, you need to create a custom PowerShell profile. Doing so is outside the scope of this book; consult the PowerShell documentation.
POWERSHELL EFFICIENCY TIPS

When working with PowerShell, you can do some things that will make your work more efficient and save you some time. Besides using an alias, you can use a few different techniques to type fewer characters or even quickly find out what options you have available.

PowerShell has a nifty feature called tab completion that will complete commands and parameters for you when you press the Tab key. This can be nice when you cannot quite remember the cmdlet: you can type the first letter of the cmdlet, press Tab, and then scroll through all of the cmdlets that start with that letter. You can also use this when you want to be lazy and don't want to type a long word.

You can also use wildcard characters when specifying parameter values. For example, when you're trying to find all of the files starting with the letter "n" in a folder, you can simply type Get-ChildItem n* and PowerShell will automatically limit the results of the command to the matching files. Because this is a core capability of PowerShell, it works no matter what provider or data source you're using; you can do this with Registry entries if you want.
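Because wildcards are handled by PowerShell itself rather than by each individual command, the same pattern works against any provider. A small sketch (the Registry path is just an illustrative key):

```powershell
# Files and folders in the current location whose names start with "n"
Get-ChildItem n*

# Registry keys under HKLM:\Software whose names start with "Micro"
Get-ChildItem HKLM:\Software\Micro*
```

The only thing that changes between the two commands is the provider path; the wildcard behavior is identical.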
GETTING HELP

We've already talked briefly about Get-Help and how it will help you get all the information you need to learn about new cmdlets and topics when you're working with PowerShell. However, the help system is more powerful than we've been able to get into so far; enough so that we've got a few more tricks to describe.

PowerShell documentation is generally found in three different places: at the command line, in the online help on the Web, and in the files installed alongside the shell on the computer. The documentation in all three locations is compiled from the same source document; it is just displayed differently.

One of our co-workers, Kevin Miller, has a brilliant Welsh friend who once complained that in a prerelease version of PowerShell, he found it difficult to view and utilize the massive amount of text that the Get-Help cmdlet produced. Even when he piped Get-Help to the More command to view the help output one page at a time, he found the density of information to be overwhelming. His complaints led the PowerShell team to reimplement how the Get-Help cmdlet displays information:

The default view displays a short summary of the command: the command name, a brief description of the syntax, and a few paragraphs that tell you about the basic operation of the cmdlet.

The detailed view, triggered by using the -Detailed parameter, provides you with more details. You get an expanded syntax, along with a listing of the parameters.

The full view is just that: total information overload. By using the -Full parameter, you get more detail than you ever wanted to know about the cmdlet. However, this view is the only one that includes useful examples.
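The three views are easy to try for yourself; here's a quick sketch using Get-Process as the target:

```powershell
Get-Help Get-Process             # default view: summary and basic syntax
Get-Help Get-Process -Detailed   # adds expanded syntax and parameter descriptions
Get-Help Get-Process -Full       # everything, including the examples
```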

VARIABLES

We've already stated that PowerShell is a .NET application. One of the implications of this is that variables in PowerShell are strongly typed, just as they are in the .NET framework, and you can use variables of any type provided in .NET assemblies, regardless of language. However, PowerShell is a scripting language; it will hide as much of the complexity behind variable types from you for as long as it can. You don't have to define variable types before you use them; the types of cmdlets and objects you use them with will determine the variable type. PowerShell will automatically try to perform conversions from one variable type to another when appropriate, and it provides a host of functions to enable you to do so manually if needed.

Unlike many other scripting languages, PowerShell variables are actually collections of objects or values; there's no difference between a collection of only one object and a collection of multiple objects (other than the obvious difference in the number of objects in the collection). We can't really go into the implications of this here, but as you begin writing scripts, you'll see how easy it is to concentrate on the information you want without having to jump through the hoops a lot of other scripting languages will make you jump through.
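Both behaviors are easy to see at the prompt; a quick sketch:

```powershell
$x = 5              # PowerShell infers the type System.Int32
$x = "five"         # reassignment quietly switches it to System.String
$x.GetType().Name   # displays: String

# A "single" object and a collection of objects behave the same way:
$one  = Get-Process -Id $PID    # one process (the shell itself)
$many = Get-Process             # all processes
$one  | Format-List ProcessName
$many | Format-List ProcessName
```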
THE PIPELINE

Those of you who have been exposed to Unix shell scripting environments in your past (Devin used to be a Unix administrator, so he's forever tainted) already understand the concept of the pipeline. If you haven't used Unix but are comfortable on the DOS command line, you may also know what the pipeline is, albeit in a limited fashion.

Both Unix and PowerShell favor the "building block" approach: instead of building one big kick-ass tool that can slice, dice, and make julienne fries, you build a lot of single-purpose tools that each do one task well. In PowerShell, these tools are what we've previously called cmdlets. The pipeline is the way that you can take the output from one cmdlet and pass it as input to the next cmdlet. You construct a pipeline in a very simple fashion: simply use the pipe character | between two cmdlets. If you want, you can string multiple cmdlets together in this way, forming a data-processing daisy chain of epic proportions.

Unix shells have made effective use of the pipeline for years. The pipeline has even been used in the DOS command line, although not nearly as effectively as in Unix, mainly because DOS lacks the glue utilities that are common in Unix.

So what are these glue utilities of which we speak? In traditional command-line environments, the pipeline passes text output from one command to the next command as text input. Usually, these commands produce extraneous text that you don't want and that can confuse the next command in the pipeline or even produce an outright error. The glue commands (arcane utilities with names like grep, sed, col, and awk) allow the scripter to manipulate, trim, fold, spindle, and mutilate the text into a format the next command in the pipeline will accept. Sometimes, a simple inline text manipulation isn't enough; in these cases, traditional scripting techniques resort to temporary files.
The scripter could often end up having to create a series of scripts and temporary files in order to get a particular task done; often, the hardest part of this was ensuring that the glue code worked properly. Figuring out the proper pipeline of commands to use was the easy part, but getting all the text interchange properly lashed up could eat up a fair amount of time. Wouldn't it be nice if a scripting language could be designed in such a fashion as to remove the need to create and maintain all of this glue code and just allow you to concentrate on getting the actual task done?

And now we come to what is the truly innovative characteristic of PowerShell: its pipeline passes objects instead of text strings. As an example, let's consider the task of stopping a running process: Outlook, for the sake of argument (yes, that's another of Devin's bad puns):

1. Working with what you already know, you can probably guess that there's a simple command for listing all of the processes that are currently running on the system. In this case, you'd be correct; it's called Get-Process:

Handles  NPM(K)   PM(K)   WS(K) VM(M)   CPU(s)     Id ProcessName
-------  ------   -----   ----- -----   ------     -- -----------
   4182      48   69100   66676   405    86.00   3484 OUTLOOK

2. You also need a cmdlet to stop a process. Sure enough, PowerShell gives you one: Stop-Process. This cmdlet must be told which process to act upon; it can take either a process ID or a name. Use the pipeline to string these two cmdlets together:

Get-Process out* | Stop-Process

3. Profit!

If we were doing this in Unix, it would be slightly more complex. We'd run the ps command that gives us the list of processes, pipe the text output to the glue utility grep to search through the text and find any line containing the desired process, pipe those text line results to the awk utility to strip out the field containing the process ID, and then pipe the process ID to the kill command. If you know the formats of the various utilities, it's not that hard to do; but if you don't remember them exactly, you have to mess around with a couple of iterations of your pipeline until you get it right.

Contrast that with PowerShell: remember two cmdlets (and you've already got a head start by knowing the likely verbs and nouns) and smoosh 'em together with a pipe character. No muss, no fuss, no glue; just scripting goodness. We think, however, you'll find that it gets much better when you see how this new and improved pipeline functionality works with multiple objects.
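To make the contrast concrete, here's a hedged sketch of both approaches; the Unix line is one plausible variant, and exact ps output fields vary by system:

```powershell
# Unix: text all the way down -- find the PID by pattern-matching text,
# strip out the right field, then feed it to kill:
#   ps -e | grep OUTLOOK | awk '{ print $1 }' | xargs kill
#
# PowerShell: whole process objects flow down the pipeline, so Stop-Process
# receives them directly, with no text surgery required:
Get-Process out* | Stop-Process -WhatIf   # -WhatIf previews without stopping anything
```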
WORKING WITH COLLECTIONS

We mentioned previously that there's no difference in PowerShell between a variable that contains a single object and a variable that contains a collection of objects, because the former is really a collection containing one object. When you're dealing with a collection, you often want to do some sort of sorting or filtering:

The Format-List cmdlet takes a collection of objects and displays them on the screen with each parameter and value on a separate line. By default, when you display an object, PowerShell only shows you a few parameters.

The Format-Table cmdlet takes a collection of objects and a list of parameters, and displays the selected values in a tabular format. By default, this cmdlet will display a selected set of parameters.

The Group-Object cmdlet allows you to create subcollections of objects, grouped together; the objects in your original collection that have the same value for the specified parameter will be placed in the same subcollection.

The Sort-Object cmdlet allows you to sort a collection of objects by the value of a specified property. By default, Get-* cmdlets don't guarantee the order in which they return objects to the collection.

The Where-Object cmdlet allows you to perform filtering using a variety of comparison operators. You can filter on a single parameter or perform sophisticated Boolean and regular expression matches.
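Here's a hedged sketch chaining several of these cmdlets together; the 50 MB threshold and the chosen properties are just our example:

```powershell
# Show the busiest processes: filter, sort, then format as a two-column table
Get-Process |
    Where-Object { $_.WorkingSet -gt 50MB } |
    Sort-Object WorkingSet -Descending |
    Format-Table ProcessName, WorkingSet
```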

In addition to the above cmdlets, PowerShell provides a variety of cmdlets designed to help you import and export objects and data between multiple file formats. The formats include CSV, for interfacing PowerShell data with external applications, and the PowerShell-specific CLI XML, which allows you to convert objects into XML strings and pass them between scripts with their full values intact.
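A quick sketch of the round trip; the file paths are ours:

```powershell
Get-Process | Export-Csv C:\temp\procs.csv      # flatten objects to CSV for other tools
Get-Process | Export-Clixml C:\temp\procs.xml   # serialize objects, types and values intact
$saved = Import-Clixml C:\temp\procs.xml        # rehydrate the saved objects later
```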
WHAT WE DON'T COVER

As we're sure you understand, there is an impressive array of features and cmdlets that, for reasons of scope, we just don't have room to cover in this book. You won't need to know any of the following topics in order to use the DPM Management Shell cmdlets:

Creating PowerShell scripts
Defining functions to reuse in your scripts
Advanced formatting options
Calling objects in the .NET Framework
Calling PowerShell objects from .NET Framework applications
Using and creating other PowerShell providers
A whole metric ton of additional built-in PowerShell cmdlets

However, if you find that you really like using PowerShell, you'll want to get a good book on PowerShell and spend some time learning about these topics. Which book, you ask? We recommend the following:

Microsoft Windows PowerShell: TFM by Don Jones and Jeffery Hicks (SAPIEN Press, 2007) is a no-nonsense, uncluttered introductory primer on getting into PowerShell. This book will help you begin using PowerShell effectively and teach you what you need to know to move beyond being a beginner.

Windows PowerShell in Action by Bruce Payette (Manning Publications, 2007) provides a wonderful primer on mastering PowerShell. Although it's not the best book for PowerShell beginners, it's a great resource for helping intermediate PowerShell scripters become advanced PowerShell scripters.

This isn't a book on Windows PowerShell, so we'll move back to DPM. Let's see how to start using the DPM Management Shell.
Navigating DPM Functions in the DPM Management Shell

It's hard to make sense of the DMS if you don't know everything that it can do, and in order to get the most out of it, you need to be comfortable using PowerShell. While PowerShell (and thus, by extension, the DMS) includes a good help system, the documentation can be hard to grasp if you don't already know exactly which cmdlet you're looking for, or whether the task you're trying to do can even be done in the DMS. The PowerShell Verb-Noun cmdlet syntax helps you figure it all out, especially once you know which verbs and nouns are at your disposal.

If you've already been exposed to PowerShell, you know that several common verbs and nouns are provided by the basic technology. As you might imagine, a specialized application like DPM adds several custom verbs and nouns to the basic PowerShell offerings. Table 4.1 lists all of the verbs that are available in the DMS, and Table 4.2 lists all of the objects.

Table 4.1: DMS Cmdlet Verbs

Verb        Description
Add         Adds a new instance of this object to DPM's database.
Clean       Cleans the selected physical object.
Connect     Creates a session to the selected object; the opposite is Disconnect.
Copy        Creates a copy of this object.
Disable     Marks this object as disabled; the opposite is Enable.
Disconnect  Closes an open session to the selected object; the opposite is Connect.
Dispose     Disposes of the selected object.
Enable      Marks this object as enabled; the opposite is Disable.
Erase       Erases the selected physical object.
Get         Views the object's properties; to modify the object, use Set.
Inventory   Performs an inventory of the selected object.
Lock        Locks a physical object such as a tape drive; the opposite is Unlock.
Modify      Makes modifications to the selected object. Unlike Set, these modifications are not automatically saved; you must use the corresponding Save cmdlet.
New         Creates a new instance of this object; the opposite is Remove.
Offset      Offsets the selected object.
Recatalog   Initiates a recatalog operation on the selected object.
Recover     Performs a recovery operation on the selected object.
Remove      Removes this instance of the object; the opposite is New.
Rename      Renames the object.
Rescan      Initiates a rescan operation on the selected object.
Save        Saves any pending changes to the selected object.
Set         Modifies the object's properties and saves them in one operation; to view the properties only, use Get.
Start       Starts a pending operation represented by the selected object.
Test        Performs a test of the selected object.
Unlock      Unlocks a locked physical object such as a tape drive; the opposite is Lock.

Table 4.2: DMS Cmdlet Objects

Object
ArchiveSchedule
BackupLibraryOption
ChildDatasource
DPMLibrary
DPMLibraryDoor
DPMLibraryIEPort
DPMObject
DPMTapeData
DPMVolume
DatasetStatus
Datasource
DatasourceConsistencyCheck
DatasourceDiskAllocation
DatasourceProtectionOption
DatasourceReplica
DefaultSchedule
Exclusion
FullDeltaReplicationSchedule
HeadlessDataset
MaintenanceJobStartTime
OnlineRecatalog
PolicyObjective
PolicySchedule
ProductionCluster
ProductionServer
ProductionVirtualName
ProtectionGroup
ProtectionJobStartTime
ProtectionType
RecoverableItem
RecoveryNotification
RecoveryOption
RecoveryPoint
RecoveryPointLocation
ReplicaCreationMethod
ReplicationSchedule
Schedule
SearchOption
Server
ShadowCopySchedule
SynchronizationSchedule
Tape
TapeBackupOption
TapeDrive
TapeSlot
VerificationSchedule

Once you've got the right cmdlet, it can still be tough to get into the PowerShell frame of mind the first couple of times you use it. Don't worry; just keep practicing. The more you use it, the easier it gets. If you've got experience with the Exchange 2007 Management Shell (EMS), you're already pretty PowerShell-spoiled; using the EMS, you can perform literally every possible management and configuration task there is to perform in Exchange 2007. Unfortunately, the DMS isn't nearly as comprehensive as the EMS in this release of DPM, and we hope that Microsoft will remedy that in future releases. For now, though, the types of tasks that you can do in the DMS all focus on day-to-day management such as configuring protection groups, managing storage and devices, and dealing with protected data.

Into the DMS


So you've installed DPM and want to take a look at what you can do in the DPM Management Shell. To get to the DPM Management Shell, click Start > All Programs > Microsoft System Center Data Protection Manager 2007 > DPM Management Shell. If you installed a desktop icon during setup, you can use that instead. The DPM Management Shell looks like a standard command prompt at first, as shown in Figure 4.2. Notice the PS at the far left, designating that you are running Windows PowerShell.

Figure 4.2: The DPM Management Shell Initial prompt

To get a list of the available commands related to DPM, enter Get-DpmCommand. You'll see a long list of available commands scroll past while you try in vain to get a handle on them all, as shown in Figure 4.3.

Figure 4.3: The Get-DpmCommand cmdlet

To see all of the available cmdlets in a readable context, type Get-DpmCommand | More. The More command (a DOS utility that still works here) is a pager: it formats the output from the Get-DpmCommand cmdlet with page breaks, according to the current size of the active window, and displays the output one page at a time. After you read through a screen, you can press the spacebar to proceed to the next screen, press the Enter key to display the next single line of content, or press Q to quit if you've found what you needed. Figure 4.4 shows the Get-DpmCommand/More combination in action.

Figure 4.4: Piping the Get-DpmCommand to more

There are currently 81 different DPM cmdlets; all of them are listed in Table 4.3. The remaining sections of this chapter go into more detail on these cmdlets.
Table 4.3: DMS Cmdlets

DPM Cmdlet Name                    DPM Cmdlet Description                                         Corresponding GUI Tab
Add-Tape                           Adds a tape device                                             Management
Clean-TapeDrive                    Cleans a tape drive                                            Management
Copy-DPMTapeData                   Copies recovery point data                                     Management
Disable-DPMLibrary                 Disables a library                                             Management
Disable-TapeDrive                  Disables a tape drive                                          Management
Enable-DPMLibrary                  Enables a library                                              Management
Enable-TapeDrive                   Enables a tape drive                                           Management
Erase-Tape                         Erases a tape                                                  Management
Get-BackupLibraryOption            Retrieves library properties                                   Management
Get-DPMLibrary                     Returns the attached libraries                                 Management
Get-ProductionCluster              Shows clusters with DPM agent installed                        Management
Get-ProductionServer               Shows servers with DPM agent installed                         Management
Get-ProductionVirtualName          Shows names of cluster nodes                                   Management
Get-Tape                           Returns the available tape media in a library                  Management
Get-TapeDrive                      Returns the available drives in a library                      Management
Get-TapeSlot                       Returns the available slots in a library                       Management
Lock-DPMLibraryDoor                Locks library door                                             Management
Lock-DPMLibraryIEPort              Locks and loads media present in the IE port                   Management
Recatalog-Tape                     Recatalogs a tape                                              Management
Rename-DPMLibrary                  Renames a library                                              Management
Rescan-DPMLibrary                  Rescans a library                                              Management
Set-BackupLibraryOption            Stores library properties for the Create New Protection Group wizard  Management
Set-DatasourceDiskAllocation       Modifies disk allocation                                       Management
Set-Tape                           Marks a tape according to the specified status                 Management
Start-OnlineRecatalog              Starts a recatalog                                             Management
Test-DPMTapeData                   Verifies a data set                                            Management
Unlock-DPMLibraryDoor              Unlocks a tape library door                                    Management
Unlock-DPMLibraryIEPort            Ejects media from a slot to the insert/eject port              Management
Get-MaintenanceJobStartTime        Shows maintenance job start time                               Monitoring
Get-ProtectionJobStartTime         Shows start time of a protection job                           Monitoring
Set-MaintenanceJobStartTime        Sets or removes maintenance job start time                     Protection
Add-ChildDatasource                Adds a datasource to a protection group                        Protection
Get-ChildDatasource                Returns protectable objects                                    Protection
Get-DatasourceDiskAllocation       Shows amount of allocated disk space                           Protection
Get-PolicyObjective                Shows policy for protection group                              Protection
Get-PolicySchedule                 Shows recovery point creation frequency                        Protection
Get-ProtectionGroup                Shows protection groups                                        Protection
Get-ReplicaCreationMethod          Shows the replica creation method for a protection group       Protection
Modify-ProtectionGroup             Modifies a protection group                                    Protection
New-ProtectionGroup                Creates a new protection group                                 Protection
New-RecoveryPoint                  Creates a new recovery point                                   Protection
Remove-ChildDatasource             Removes a child data source                                    Protection
Rename-ProtectionGroup             Renames a protection group                                     Protection
Save-ProtectionGroup               Saves all actions performed on a protection group              Protection
Set-PolicyObjective                Sets policy objective for protection group                     Protection
Set-PolicySchedule                 Sets recovery point creation schedule                          Protection
Set-ProtectionJobStartTime         Sets start time for a protection job                           Protection
Set-ReplicaCreationMethod          Sets replica creation method for a protection group            Protection
Start-DatasourceConsistencyCheck   Starts a consistency check                                     Protection
Get-RecoveryPointLocation          Shows the location of a recovery point                         Recovery
New-RecoveryNotification           Adds a notification to a recovery job                          Recovery
New-RecoveryOption                 Adds an option to a recovery job                               Recovery
New-SearchOption                   Searches protected data                                        Recovery
Recover-RecoverableItem            Recovers a data source                                         Recovery
Remove-DatasourceReplica           Removes data source replica                                    Recovery
Remove-RecoveryPoint               Removes a recovery point                                       Recovery
Connect-DPMServer                  Connects to a DPM server                                       None
Disconnect-DPMServer               Disconnects from a DPM server                                  None
Dispose-DPMObject                  Releases the memory used by a DPM object                       None
Get-ArchiveSchedule                Retrieves tape archival schedules                              None
Get-DatasetStatus                  Returns the status of a dataset                                None
Get-Datasource                     Returns the available datasources                              None
Get-DatasourceProtectionOption     Returns the available protection options for datasources       None
Get-DPMVolume                      Returns the volumes in the DPM pool                            None
Get-FullDeltaReplicationSchedule   Returns the replication schedule                               None
Get-HeadlessDataset                Returns the headless datasets                                  None
Get-RecoverableItem                Returns the items that can be recovered                        None
Get-RecoveryPoint                  Returns the available recovery points                          None
Get-ReplicationSchedule            Returns the replication schedules                              None
Get-ShadowCopySchedule             Returns the shadow copy schedules                              None
Get-TapeBackupOption               Returns the available tape backup options                      None
Get-VerificationSchedule           Returns the verification schedule                              None
Inventory-DPMLibrary               Provides an inventory of the libraries                         None
Offset-SynchronizationSchedule     Sets an offset on the synchronization schedule                 None
Remove-Schedule                    Removes a schedule                                             None
Remove-Tape                        Removes a tape                                                 None
Set-DatasourceProtectionOption     Modifies a datasource's protection options                     None
Set-DefaultSchedule                Modifies a default schedule                                    None
Set-Exclusion                      Modifies a protection exclusion                                None
Set-ProtectionType                 Modifies the type of protection used                           None
Set-TapeBackupOption               Modifies the tape backup options                               None

Get-DPMCommand

The Get-DPMCommand cmdlet doesn't do much. It displays a list of all the DPM-related cmdlets provided by the DMS. But trust us, that is a useful starting point when you're trying to remember a cmdlet or find a new one to do something you need.

We would be horribly remiss if we didn't take this opportunity to remind you about the Get-Command cmdlet native to Windows PowerShell. This cmdlet tells you all of the cmdlets that are available in your shell session, both those included natively in PowerShell as well as any that have been added by snap-ins. The beauty of PowerShell is that as more applications and companies begin providing snap-in support, you can start to build custom profiles to add all of these snap-ins into the same shell session, giving yourself unified control over your environment.

Without further ado, here are the DMS cmdlets. Some of the examples that you see will be split over multiple lines of text in the book. PowerShell uses the backtick character (`) as a line continuation character; when PowerShell sees that character at the end of a line, it knows to take the next line of input and treat the two lines as if they had been entered as a single line.
Add-BackupNetworkAddress

The Add-BackupNetworkAddress cmdlet adds a backup network for DPM to use. A backup network is a network dedicated to DPM traffic. The parameters are

-Address         This is a required parameter. Enter an IP address or subnet address for the backup network.
-DPMServerName   This is a required parameter. Enter the name of the desired DPM server.
-SequenceNumber  This is a required parameter. Enter the priority for the address specified.

Example:

Add-BackupNetworkAddress -DPMServerName DPM-SRV01 `
-Address 192.168.150.0/24 -SequenceNumber 1

Add-ChildDatasource

The Add-ChildDatasource cmdlet adds a data source or a child datasource to a protection group. Child datasources are folders on a protected volume. The parameters are

-ProtectionGroup  This is a required parameter. Enter the name of the protection group to which you want to add the datasource.
-ChildDatasource  This is a required parameter. Enter the name of the datasource on a protected server.
-PassThru         This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

Example:

$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$mpg = Get-ModifiableProtectionGroup $pg[0]
$po = Get-Datasource -ProtectionGroup $pg
Add-ChildDatasource -ProtectionGroup $mpg -ChildDatasource $po[8]

Add-DPMDisk

The Add-DPMDisk cmdlet adds a new disk to the storage pool. The parameter is
-DPMDisk

This is a required parameter. Specify the disk to add.

Example:
$DPMDisk = Get-DPMDisk -DPMServerName DPM-SRV01
Add-DPMDisk -DPMDisk $DPMDisk

Add-Tape

The Add-Tape cmdlet adds a tape to a DPM library. The parameters are

-DPMLibrary                  This is a required parameter. Specify a DPM tape library.
-Async                       This is not a required parameter. It allows the cmdlet to be run asynchronously. This means that the user will regain control of the DMS command prompt before the cmdlet has finished running. Progress is communicated to the user periodically.
-JobStateChangeEventHandler  This is not a required parameter. It is used with the -Async parameter to inform a user when a job has completed.

Example:

$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
Add-Tape -DPMLibrary $DPMLib

Connect-DPMServer

The Connect-DPMServer cmdlet allows an administrator to connect to a remote DPM server via the shell. The parameters are

-DPMServerName   This is a required parameter. Enter the name of the DPM server to which you want to connect.
-AsyncOperation  This is not a required parameter.

Example:

Connect-DPMServer -DPMServerName "DPM-SRV01.contoso.dpm"

Copy-DPMTapeData

The Copy-DPMTapeData cmdlet copies recovery point data from a tape for a specified recovery point. The recovery point can be on the local or a remote DPM server. The parameters are

-RecoveryPoint               This is a required parameter. Enter the name of the recovery point to use.
-DPMServerName               This is a required parameter. Enter the name of the DPM server that owns the recovery point.
-IncompleteDataset           This is not a required parameter. Use this parameter to specify a data set that spans multiple tapes; this means that the operation will be carried out only on the portion of data that is on the current tape.
-JobStateChangeEventHandler  This is not a required parameter. It is used with the -Async parameter to inform a user when a job has completed.
-OverwriteType               This is not a required parameter. Define the behavior you want when data exists on the destination. Enter CreateCopy, Skip, or Overwrite.
-RecoveryNotification        This is not a required parameter. Use this parameter to be notified when the recovery completes.
-RecoveryPointLocation       This is not a required parameter. Enter the current location of the recovery point.
-RecreateReparsePoint        This is not a required parameter. Use this parameter to indicate whether the reparse point has to be re-created.
-Restore                     This is not a required parameter. Use this parameter to indicate that the operation is a restore operation.
-RestoreSecurity             This is not a required parameter. Use this parameter to specify that you want to use the security settings of the recovery point.
-SourceLibrary               This is not a required parameter. Enter the location of the dataset you want to copy.
-Tape                        This is not a required parameter. Use this parameter to specify that the operation must be performed on a tape.
-TapeLabel                   This is not a required parameter. Enter a label for the tape.
-TapeOption                  This is not a required parameter. Specify what encryption or compression options you want. Enter 0 for compression, 1 for encryption, or 2 for neither.
-TargetLibrary               This is not a required parameter. Enter the library you want to copy the data to.
-TargetPath                  This is not a required parameter. Enter the path to the target.
-TargetServer                This is not a required parameter. Enter the name of the server to which the recovery is made.

Disable-DPMLibrary

The Disable-DPMLibrary cmdlet disables the specified library. The parameters are

-DPMLibrary  This is a required parameter. Enter the name of the library you want to disable.
-PassThru    This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

Example:

$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
Disable-DPMLibrary -DPMLibrary $DPMLib

Disable-TapeDrive

The Disable-TapeDrive cmdlet disables a specified tape drive. The parameters are

-TapeDrive  This is a required parameter. Enter the tape drive to be disabled.
-PassThru   This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

Example:
Disable-TapeDrive -TapeDrive drive1

Disconnect-DPMServer

The Disconnect-DPMServer cmdlet closes and releases all connections and objects for a DPM connection session. The parameter is

-DPMServerName  This is not a required parameter. Specify a DPM server from which to disconnect.

Example:
Disconnect-DPMServer -DPMServerName "DPM-SRV01"

Enable-DPMLibrary

The Enable-DPMLibrary cmdlet enables a specified library. The parameters are

-DPMLibrary  This is a required parameter. Enter the library you want to enable.
-PassThru    This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
Enable-DPMLibrary -DPMLibrary $DPMLib

Enable-TapeDrive

The Enable-TapeDrive cmdlet enables a specified tape drive. The parameters are

-TapeDrive  This is a required parameter. Enter the tape drive to be enabled.
-PassThru   This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
$TapeDrive = Get-TapeDrive -DPMLibrary $DPMLib
Enable-TapeDrive -TapeDrive $TapeDrive

Get-BackupNetworkAddress

The Get-BackupNetworkAddress cmdlet displays the backup network specified for the server. The parameter is

-DPMServerName  This is a required parameter. Enter the name of the DPM server from which to get the information.

Example:
Get-BackupNetworkAddress -DPMServerName "DPM-SRV01"

Get-ChildDatasource

The Get-ChildDatasource cmdlet displays the protectable filesystem objects of a data source. The parameters are

-ProtectionGroup  This is not a required parameter. Enter the desired protection group.
-ChildDatasource  This is a required parameter. Enter the name of a datasource.
-Async            This is not a required parameter. It allows the cmdlet to be run asynchronously. This means that the user will regain control of the DMS command prompt before the cmdlet has finished running. Progress is communicated to the user periodically.
-Inquire          This is not a required parameter. This parameter shows all the available data sources.
-Tag              This is not a required parameter. It distinguishes between replies to each asynchronous call.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$ds = Get-Datasource -ProtectionGroup $pg
$cds = Get-ChildDatasource -ChildDatasource $ds[1] -Inquire

Get-DatasetStatus

The Get-DatasetStatus cmdlet displays the dataset state of a specified archive tape. The parameter is

-Tape

This is a required parameter. Specify a tape.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$pt = Get-Tape -ProtectionGroup $pg
Get-DatasetStatus -Tape $pt

Get-Datasource

The Get-Datasource cmdlet shows a list of data (protected and unprotected) from a server or protection group. The parameters are

-SearchPath            This is a required parameter. Enter the path to search for the data source.
-ProductionServerName  This is not a required parameter. Enter the name of the server to be protected.
-DPMServerName         This is a required parameter. Enter the name of a DPM server.
-ProductionServer      This is a required parameter. Enter the name of a server with the DPM agent installed.
-ProtectionGroup       This is a required parameter. Enter the name of a protection group.
-Async                 This is not a required parameter. It allows the cmdlet to be run asynchronously. This means that the user will regain control of the DMS command prompt before the cmdlet has finished running. Progress is communicated to the user periodically.
-Inactive              This is not a required parameter. This will return inactive data sources (data sources that used to be protected, but are no longer).
-Inquire               This is not a required parameter. This parameter shows all available data sources.
-Replica               This is not a required parameter. Use this parameter to indicate that the operation is being performed on a replica.
-Tag                   This is not a required parameter. It distinguishes between replies to each asynchronous call.

Example:
$ps = Get-ProductionServer -DPMServerName "DPM-SRV01"
Get-Datasource -ProductionServer $ps[1] -Inquire

Get-DatasourceDiskAllocation

The Get-DatasourceDiskAllocation cmdlet shows the amount of disk space allocated to protected data. The parameters are
-Datasource

This is a required parameter. Enter a share, volume, database, storage group, system state, or other protected data source that is a member of a protection group.

-Async

This is not a required parameter. It allows the cmdlet to be run asynchronously, which means that the user regains control of the DPM Management Shell prompt before the cmdlet has finished running. Progress is communicated to the user periodically.

-CalculateSize

This is not a required parameter. This parameter calculates the space allocated on a disk.

-Tag

This is not a required parameter. It distinguishes between replies to each asynchronous call.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$ds = Get-Datasource -ProtectionGroup $pg
Get-DatasourceDiskAllocation -Datasource $ds[1] -CalculateSize

Get-DatasourceProtectionOption

The Get-DatasourceProtectionOption cmdlet displays protection options for all datasources of a specified type within a protection group. The parameters are
-ProtectionGroup

This is a required parameter. Specify a protection group.

-ExchangeOptions

This is a required parameter. It indicates that the options that follow affect only a Microsoft Exchange data source.

-FileSystem

This is a required parameter. Use this parameter to indicate that the operation will be performed on a file system resource.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
Get-DatasourceProtectionOption -ProtectionGroup $pg -FileSystem

Get-DPMDisk

The Get-DPMDisk cmdlet returns a list of disks found during the last scan. The parameter is
-DPMServerName

This is a required parameter. Enter the name of a DPM server.

Example:
Get-DPMDisk -DPMServerName "DPM-SRV01"

Get-DPMLibrary

The Get-DPMLibrary cmdlet returns a list of libraries and their status for a specified DPM server. The parameter is
-DPMServerName

This is a required parameter. Enter the name of a DPM server.

Example:

Get-DPMLibrary -DPMServerName "DPM-SRV01"

Get-DPMVolume

The Get-DPMVolume cmdlet returns a list of volumes for a specified DPM server. The parameter is
-DPMServerName

This is a required parameter. Enter the name of a DPM server.

Example:
Get-DPMVolume -DPMServerName "DPM-SRV01"

Get-HeadlessDataset

The Get-HeadlessDataset cmdlet returns the incomplete datasets on a tape. The parameter is


-Tape

This is a required parameter. Use this parameter to specify a tape.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
$Tape = Get-Tape -DPMLibrary $DPMLib
Get-HeadlessDataset -Tape $Tape[2]

Get-MaintenanceJobStartTime

The Get-MaintenanceJobStartTime cmdlet shows the start time of a specified maintenance job. The parameters are

-DPMServerName

This is a required parameter. Enter the name of a DPM server.

-MaintenanceJob

This is a required parameter. Enter a maintenance job to get its start time.

Example:
Get-MaintenanceJobStartTime -DPMServerName "DPM-SRV01" `
-MaintenanceJob CatalogPruning

Get-ModifiableProtectionGroup

The Get-ModifiableProtectionGroup cmdlet retrieves a protection group in an editable mode. The parameter is
-ProtectionGroup

This is a required parameter. Use this parameter to specify a protection group.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
Get-ModifiableProtectionGroup -ProtectionGroup $pg

Get-PolicyObjective

The Get-PolicyObjective shows the protection policy for a specified protection group. The parameters are
-LongTerm

This is a required parameter. Use this parameter to specify that the protection group is set to long-term protection.

-ProtectionGroup

This is a required parameter. Specify a protection group.

-ShortTerm

This is a required parameter. Use this parameter to specify that the protection group will use either disk, tape, or none.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
Get-PolicyObjective -ProtectionGroup $pg -ShortTerm

Get-PolicySchedule

The Get-PolicySchedule cmdlet displays the recovery point creation intervals for a specified protection group. The parameters are
-ProtectionGroup

This is a required parameter. Enter the desired protection group.

-LongTerm

This is not a required parameter. This indicates that the specified protection group is set for long-term protection.

-OffsetSchedule

This is not a required parameter. Specify the interval in minutes by which the synchronization will be offset.

-ShortTerm

This is not a required parameter. This indicates that the specified protection group is set for short-term protection.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
Get-PolicySchedule -ProtectionGroup $pg -LongTerm

Get-ProductionCluster

The Get-ProductionCluster cmdlet shows a list of all clusters with the DPM agent installed. The parameter is
-DPMServerName

This is a required parameter. Enter the name of a DPM server.

Example:
Get-ProductionCluster -DPMServerName "DPM-SRV01"

Get-ProductionServer

The Get-ProductionServer cmdlet shows a list of all servers with the DPM agent installed. The parameter is
-DPMServerName

This is a required parameter. Enter the name of a DPM server.

Example:
Get-ProductionServer -DPMServerName dpmsrv01

Get-ProductionVirtualName

The Get-ProductionVirtualName cmdlet lists the physical names of all cluster nodes with the protection agent installed. The parameters are

-Async

This is not a required parameter. It allows the cmdlet to be run asynchronously, which means that the user regains control of the DPM Management Shell prompt before the cmdlet has finished running. Progress is communicated to the user periodically.

-Handler

This is not a required parameter. This is called when an event is received.

-ProductionCluster

This is not a required parameter. Enter a cluster to return the names of its nodes.

-Tag

This is not a required parameter. It distinguishes between replies to each asynchronous call.

Example:
$pc = Get-ProductionCluster -DPMServerName "DPM-SRV01"
Get-ProductionVirtualName -ProductionCluster $pc

Get-ProtectionGroup

The Get-ProtectionGroup cmdlet shows a list of protection groups for a specified DPM server. The parameter is
-DPMServerName

This is a required parameter. Enter the name of a DPM server.

Example:
Get-ProtectionGroup -DPMServerName dpmsrv01

Get-ProtectionJobStartTime

The Get-ProtectionJobStartTime cmdlet displays the start time for a specified protection job. The parameters are
-ProtectionGroup

This is a required parameter. Enter a protection group.

-JobType

This is a required parameter. Enter a job type.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
Get-ProtectionJobStartTime -ProtectionGroup $pg -JobType ConsistencyCheck

Get-RecoverableItem

The Get-RecoverableItem cmdlet returns the recoverable items for a specified recovery point. The parameters are
-SearchOption

This is a required parameter. It sets the search options defined by New-SearchOption.

-BrowseType

This is a required parameter. Specify Parent or Child to choose the browse depth of the request.

-Datasource

This is a required parameter. Enter a share, volume, database, storage group, system state, or other protected data source that is a member of a protection group.

-RecoverableItem

This is a required parameter. Specify a data source to recover.

-RecoveryPoint

This is a required parameter. Specify the recovery point to use.

-Async

This is not a required parameter. It allows the cmdlet to be run asynchronously, which means that the user regains control of the DPM Management Shell prompt before the cmdlet has finished running. Progress is communicated to the user periodically.

-Tag

This is not a required parameter. It distinguishes between replies to each asynchronous call.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$ds = Get-Datasource -ProtectionGroup $pg
$rp = Get-RecoveryPoint -Datasource $ds
Get-RecoverableItem -RecoverableItem $rp -BrowseType Child

Get-RecoveryPoint

The Get-RecoveryPoint cmdlet lists all of the available recovery points for a data source. The parameters are
-Datasource

This is a required parameter. Enter a share, volume, database, storage group, system state, or other protected data source that is a member of a protection group.

-Tape

This is a required parameter. It indicates that the operation will be performed on a tape.

-Async

This is not a required parameter. It allows the cmdlet to be run asynchronously, which means that the user regains control of the DPM Management Shell prompt before the cmdlet has finished running. Progress is communicated to the user periodically.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$ds = Get-Datasource -ProtectionGroup $pg
Get-RecoveryPoint -Datasource $ds
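Because Get-RecoveryPoint returns every available recovery point for the data source, it is often combined with standard PowerShell sorting to pick out the newest one. The following sketch assumes the recovery point objects expose a RepresentedPointInTime property (a hypothetical name for illustration); the server name and index are examples.

```powershell
# Retrieve all recovery points for the first data source (names are examples).
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$ds = Get-Datasource -ProtectionGroup $pg
$rps = Get-RecoveryPoint -Datasource $ds[0]

# Sort by the point-in-time property (assumed name) and keep the most
# recent recovery point for later use with Recover-RecoverableItem.
$latest = $rps | Sort-Object -Property RepresentedPointInTime | Select-Object -Last 1
```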

Get-RecoveryPointLocation

The Get-RecoveryPointLocation cmdlet shows the location of recovery points. The parameter is
-RecoveryPoint

This is a required parameter. Enter a recovery point.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$ds = Get-Datasource -ProtectionGroup $pg
$rp = Get-RecoveryPoint -Datasource $ds
Get-RecoveryPointLocation -RecoveryPoint $rp

Get-ReplicaCreationMethod

The Get-ReplicaCreationMethod cmdlet shows the replica creation method for a protection group. The parameter is
-ProtectionGroup

This is a required parameter. Enter a protection group.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
Get-ReplicaCreationMethod -ProtectionGroup $pg

Get-Tape

The Get-Tape cmdlet displays a list of media in a specified library across drives and slots. The parameters are

-ProtectionGroup

This is a required parameter. Enter a protection group.

-DPMLibrary

This is not a required parameter. Enter a DPM tape library.

-RecoveryPointLocation

This is not a required parameter. Enter the current location of the recovery point.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
Get-Tape -DPMLibrary $DPMLib

Get-TapeBackupOption

The Get-TapeBackupOption cmdlet returns the library, drive and other backup options for a specified protection group. The parameter is
-ProtectionGroup

This is a required parameter. Specify a protection group.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
Get-TapeBackupOption -ProtectionGroup $pg

Get-TapeDrive

The Get-TapeDrive cmdlet displays a list of drives for a specified library on a DPM server. The parameter is
-DPMLibrary

This is a required parameter. Enter a library to return a list of drives.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
Get-TapeDrive -DPMLibrary $DPMLib

Get-TapeSlot

The Get-TapeSlot cmdlet displays a list of slots for a specified library. The parameter is
-DPMLibrary

This is a required parameter. Enter a library to return a list of slots.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
Get-TapeSlot -DPMLibrary $DPMLib

Lock-DPMLibraryDoor

The Lock-DPMLibraryDoor cmdlet locks the door of a specified library. The parameters are

-DPMLibrary

This is a required parameter. Enter the library whose door is to be locked.

-Async

This is not a required parameter. It allows the cmdlet to be run asynchronously, which means that the user regains control of the DPM Management Shell prompt before the cmdlet has finished running. Progress is communicated to the user periodically.

-DoorAccessJobStateChangeEventHandler

This is not a required parameter. This is a callback method for the -Async parameter.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
Lock-DPMLibraryDoor -DPMLibrary $DPMLib[0]

Lock-DPMLibraryIEPort

The Lock-DPMLibraryIEPort cmdlet locks and loads the media in the insert/eject (IE) port, which is a special port used on some tape libraries. Instead of requiring the administrator to open the library door and manually mount and dismount tapes into caddies, libraries with an IE port provide a high degree of control over how tapes are manipulated inside the library. The parameters are

-DPMLibrary

This is a required parameter. Enter the library whose IE port is to be locked.

-Async

This is not a required parameter. It allows the cmdlet to be run asynchronously, which means that the user regains control of the DPM Management Shell prompt before the cmdlet has finished running. Progress is communicated to the user periodically.

-JobChangedEventHandler

This is not a required parameter. Use this parameter with the -Async parameter to be informed of the job status.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
Lock-DPMLibraryIEPort -DPMLibrary $DPMLib

New-ProtectionGroup

The New-ProtectionGroup cmdlet creates a new protection group on a DPM server. The parameters are
-DPMServerName

This is a required parameter. Enter the name of a DPM server.

-Name

This is not a required parameter. Enter a name for the new protection group.

Example:
New-ProtectionGroup -DPMServerName DPM-SRV01 -Name NewPG
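Note that New-ProtectionGroup only creates the group object in the shell session; the new group is not saved on the DPM server until the pending changes are committed with Set-ProtectionGroup. A minimal sketch of that workflow (server and group names are examples):

```powershell
# Create a new, empty protection group (server and group names are examples).
$npg = New-ProtectionGroup -DPMServerName "DPM-SRV01" -Name "FileServersPG"

# ...add data sources, policy objectives, and schedules to $npg here...

# Commit the pending configuration to the DPM server.
Set-ProtectionGroup -ProtectionGroup $npg
```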

New-RecoveryNotification

The New-RecoveryNotification cmdlet creates a new notification object for an event. The parameters are
-NotificationType

This is a required parameter. Enter the type of notification.

-NotificationIDList

This is a required parameter. Enter a list of IDs to receive notifications.

Example:
New-RecoveryNotification -NotificationType email `
-NotificationIDList dpmadmins@contoso.com

New-RecoveryOption

The New-RecoveryOption cmdlet sets recovery options for servers. The parameters are

-AlternateDatabase

This is not a required parameter. Enter an alternative database.

-AlternateDatasourceName

This is not a required parameter. Enter the name of an alternative data source.

-AlternateLocation

This is not a required parameter. Enter an alternative location for the recovery point.

-AlternateStorageGroup

This is not a required parameter. Enter an alternative storage group for a recovery process (for Exchange restores only).

-CopyLogFiles

This is not a required parameter. Use this parameter to indicate that log files must be copied.

-DatabaseFileTempLocation

This is not a required parameter. Enter a temporary location for database files.

-DatabaseName

This is not a required parameter. Enter the name of a database.

-DPMLibrary

This is not a required parameter. Enter a DPM library object.

-Exchange

This is not a required parameter. Use this parameter to specify that the current operation is being performed on a Microsoft Exchange data source.

-ExchangeOperationType

This is not a required parameter. Enter NoOperation, MailBoxLevelRecovery, or NeedCleanShutdown.

-ExportFileTempLocation

This is not a required parameter. Specify the location of the export file.

-FileSystem

This is not a required parameter. Use this parameter to indicate the file system in use.

-GenericDatasource

This is not a required parameter. It indicates that the operation is being performed on a generic data source, such as Microsoft Virtual Server.

-IntermediateSharepointServer

This is not a required parameter. Use this parameter to specify a SharePoint recovery farm.

-IntermediateSqlInstance

This is not a required parameter. Use this parameter to specify an intermediate SQL Server instance for use during a SharePoint site recovery.

-IsRecoveryStorageGroup

This is not a required parameter. Use this parameter to specify that the target storage group is a recovery storage group.

-LeaveDBInRestoringState

This is not a required parameter. Use this parameter to leave a database in a nonoperational but restorable state (rather than an operational state).

-LogFileCopyLocation

This is not a required parameter. Specify a location for copying the log files.

-MailboxDisplayName

This is not a required parameter. Enter the name to be displayed on the mailbox.

-MountDatabaseAfterRestore

This is not a required parameter. Use this parameter to automatically mount the database after job completion.

-OverwriteType

This is not a required parameter. Define the behavior you want when data exists on the destination. Enter CreateCopy, Skip, or Overwrite.

-PrimaryDpmServer

This is not a required parameter. Use this parameter to recover to a DPM server.

-RecoverToReplicaFromTape

This is not a required parameter. Use this parameter to recover to a replica from tape.

-RecoveryLocation

This is not a required parameter. Enter OriginalServer, CopyToFolder, OriginalServerWithDBRename, AlternateExchangeServer, or ExchangeServerDatabase.

-RecoveryType

This is not a required parameter. Use this parameter to specify Recover or Restore.

-RestoreSecurity

This is not a required parameter. When you use this parameter, the security settings from the recovery point will be used. When you omit this parameter, the security settings of the destination will be used.

-RollForwardRecovery

This is not a required parameter. It recovers from the most recent recovery point and applies all logs since.

-SANRecovery

This is not a required parameter. It indicates a SAN-based recovery.

-SharePoint

This is not a required parameter. Use this parameter to specify that the current operation is being performed on a SharePoint data source.

-SharePointSite

This is not a required parameter. Enter a new name for the SharePoint site.

-SQL

This is not a required parameter. Use this parameter to specify that the current operation is being performed on a SQL data source.

-StorageGroupName

This is not a required parameter. Enter the name of a storage group.

-TargetLocation

This is not a required parameter. Enter the location where the replica is to be stored.

-TargetServer

This is not a required parameter. Enter the server to which to recover the data source.

-TargetSiteURL

This is not a required parameter. Enter the URL to which the recovery is to be made.

Example:
New-RecoveryOption -TargetServer server01 `
-RecoveryLocation CopyToFolder -FileSystem `
-AlternateLocation "d:\recovery" -OverwriteType Overwrite `
-RestoreSecurity

New-RecoveryPoint

The New-RecoveryPoint cmdlet creates a new recovery point for a specified data source. The parameters are

-Datasource

This is a required parameter. Enter a share, volume, database, storage group, system state, or other protected data source that is a member of a protection group.

-Disk

This is a required parameter. Use this parameter to specify that the operation must be performed on disk.

-ProtectionType

This is a required parameter. Enter D2D (for disk to disk), D2T (for disk to tape), or D2D2T (for disk to disk to tape).

-Tape

This is a required parameter. It indicates that the operation will be performed on a tape.

-BackupType

This is not a required parameter. Enter ExpressFull or Incremental.

-DiskRecoveryPointOption

This is a required parameter. Enter WithSynchronize, WithoutSynchronize, or OnlySynchronize.

-JobStateChangeEventHandler

This is not a required parameter. It is used with the -Async common parameter to inform a user when a job has completed.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$ds = Get-Datasource -ProtectionGroup $pg
New-RecoveryPoint -Datasource $ds -Disk -DiskRecoveryPointOption WithSynchronize

New-SearchOption

The New-SearchOption cmdlet creates an object with search options to be used to search within a recovery point. The parameters are
-ToRecoveryPoint

This is a required parameter. Enter the last date/time for your search.

-FromRecoveryPoint

This is a required parameter. Enter the first date/time for your search.

-SearchType

This is a required parameter. Enter startsWith, contains, endsWith, or exactMatch.

-SearchDetail

This is a required parameter. Enter FilesFolders, MailboxByAlias, MailboxByDisplayName, WssSite, or WssDocuments.

-SearchString

This is a required parameter. Enter a string to search for.

-Location

This is not a required parameter. It is the location of the recovery point.

-Recursive

This is not a required parameter. Use this parameter to indicate a recursive search.

Example:
New-SearchOption -FromRecoveryPoint "07 July 2007" `
-ToRecoveryPoint "08 August 2008" -SearchDetail FilesFolders `
-SearchType contains -SearchString "sales" -Recursive
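The options object built by New-SearchOption does nothing on its own; it is meant to be passed to Get-RecoverableItem through that cmdlet's -SearchOption parameter. A minimal sketch combining the two (server name, dates, search terms, and index are examples):

```powershell
# Build a search for file names containing "budget" across recovery points
# in a date range (dates and search terms are examples).
$so = New-SearchOption -FromRecoveryPoint "01 January 2008" `
    -ToRecoveryPoint "01 February 2008" -SearchDetail FilesFolders `
    -SearchType contains -SearchString "budget" -Recursive

$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$ds = Get-Datasource -ProtectionGroup $pg
# Search within the first protected data source using the options object.
Get-RecoverableItem -Datasource $ds[0] -SearchOption $so
```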

Recover-RecoverableItem

The Recover-RecoverableItem cmdlet recovers a version of a data source to a target location. The parameters are
-RecoveryOption

This is a required parameter. This is built using the New-RecoveryOption cmdlet.

-RecoverableItem

This is a required parameter. Specify a data source to recover.

-JobStateChangeEventHandler

This is not a required parameter. It is used with the -Async common parameter to inform a user when a job has completed.

-RecoveryNotification

This is not a required parameter. Use this parameter to be notified when a recovery completes.

-RecoveryPointLocation

This is not a required parameter. Specify the location of the recovery point.

Example:
$rp = Get-RecoveryPoint -Datasource $ds
$rop = New-RecoveryOption -TargetServer dpm.contoso.dpm `
-RecoveryLocation CopyToFolder -FileSystem `
-AlternateLocation "d:\restore" -OverwriteType Overwrite `
-RestoreSecurity -RecoveryType Restore
Recover-RecoverableItem -RecoverableItem $rp -RecoveryOption $rop

Remove-BackupNetworkAddress

The Remove-BackupNetworkAddress cmdlet stops a DPM server from using a specified network. The parameters are
-DPMServerName

This is a required parameter. Enter the name of a DPM server.

-Address

This is a required parameter. Enter an IP address or subnet address for the backup network.

Example:
Remove-BackupNetworkAddress -DPMServerName "DPM-SRV01" -Address 192.168.150.0/24

Remove-ChildDatasource

The Remove-ChildDatasource cmdlet removes a data source or child data source from a protection group. The parameters are

-ChildDatasource

This is a required parameter. Specify the data source.

-ProtectionGroup

This is a required parameter. Specify the protection group.

-KeepDiskData

This is not a required parameter. Use this parameter to keep the disk replica for the data source after it has been removed from the protection group.

-KeepTapeData

This is not a required parameter. Use this parameter to keep the tape data for the data source after it has been removed from the protection group.

-PassThru

This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$mpg = Get-ModifiableProtectionGroup $pg[0]
$po = Get-Datasource -ProtectionGroup $pg
Remove-ChildDatasource -ProtectionGroup $mpg -ChildDatasource $po[0]

Remove-DatasourceReplica

The Remove-DatasourceReplica cmdlet removes an inactive replica of a data source. The parameters are
-Datasource

This is a required parameter. Specify the data source.

-Disk

This is a required parameter. Use this parameter to specify that the operation must be performed on disk.

-Tape

This is a required parameter. It indicates that the operation will be performed on a tape.

-PassThru

This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$ds = Get-Datasource -ProtectionGroup $pg
Remove-DatasourceReplica -Datasource $ds -Disk

Remove-DPMDisk

The Remove-DPMDisk cmdlet removes a disk from the storage pool. The parameter is
-DPMDisk

This is a required parameter. It is a disk that is part of the storage pool.

Example:
$DPMDisk = Get-DPMDisk -DPMServerName "DPM-SRV01"
Remove-DPMDisk -DPMDisk $DPMDisk

Remove-RecoveryPoint

The Remove-RecoveryPoint cmdlet removes a recovery point from disk or tape. The parameters are
-RecoveryPoint

This is a required parameter. Enter the name of the recovery point to use.

-Confirm

This is not a required parameter. Use this parameter to ask the user to confirm the action.

-ForceDeletion

This is not a required parameter. Use this parameter to delete without user confirmation.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$ds = Get-Datasource -ProtectionGroup $pg
$rp = Get-RecoveryPoint -Datasource $ds
Remove-RecoveryPoint -RecoveryPoint $rp

Remove-Tape

The Remove-Tape cmdlet removes a tape from a DPM library. The parameters are

-Tape

This is a required parameter. Use this parameter to specify a tape.

-DPMLibrary

This is a required parameter. Use this parameter to specify a library.

-Async

This is not a required parameter. It allows the cmdlet to be run asynchronously, which means that the user regains control of the DPM Management Shell prompt before the cmdlet has finished running. Progress is communicated to the user periodically.

-JobStateChangeEventHandler

This is not a required parameter. It is used with the -Async common parameter to inform a user when a job has completed.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
$Tape = Get-Tape -DPMLibrary $DPMLib
Remove-Tape -DPMLibrary $DPMLib -Tape $Tape[2]

Rename-DPMLibrary

The Rename-DPMLibrary cmdlet renames a specified library. The parameters are


-NewName

This is a required parameter. Enter the new name for the library.

-DPMLibrary

This is a required parameter. Enter the name of the library you want to rename.

-PassThru

This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
Rename-DPMLibrary -DPMLibrary $DPMLib -NewName "NewDPMLib"

Rename-ProtectionGroup

The Rename-ProtectionGroup cmdlet renames an existing protection group. The parameters are
-ProtectionGroup

This is a required parameter. Enter the desired protection group.

-NewName

This is a required parameter. Enter the new name for the protection group.

-PassThru

This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$mpg = Get-ModifiableProtectionGroup $pg[0]
Rename-ProtectionGroup -ProtectionGroup $mpg -NewName "RenamedPG"
Set-ProtectionGroup $mpg

Set-DatasourceDiskAllocation

The Set-DatasourceDiskAllocation cmdlet sets the disk allocation for protected data. The parameters are

-Datasource

This is a required parameter. Enter a share, volume, database, storage group, system state, or other protected data source that is a member of a protection group.

-ProtectionGroup

This is a required parameter. Enter the desired protection group.

-Manual

This is a required parameter. Use this parameter to specify that the disk allocation will be set manually.

-CustomRequirements

This is not a required parameter. Use this parameter to manually specify replica and shadow copy volumes.

-FormatVolumes

This is not a required parameter. Use this parameter to specify that the volumes should be formatted.

-PassThru

This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

-ProductionServerJournalSize

This is not a required parameter. Specify the journal size.

-ReplicaArea

This is not a required parameter. Use this parameter to set the disk allocation for the replica area for the data source.

-ReplicaVolume

This is not a required parameter. Specify the replica volume.

-ShadowCopyArea

This is not a required parameter. Use this parameter to set the disk allocation for the shadow copy area for the data source.

-ShadowCopyVolume

This is not a required parameter. Specify the volume with the shadow copy.

-USNJournalSize

This is not a required parameter. Specify the USN journal size.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$mpg = Get-ModifiableProtectionGroup $pg
$ds = Get-Datasource -ProtectionGroup $pg
Get-DatasourceDiskAllocation -Datasource $ds[1]
Set-DatasourceDiskAllocation -Datasource $ds[1] -ProtectionGroup $mpg
Set-ProtectionGroup $mpg

Set-DatasourceProtectionOption

The Set-DatasourceProtectionOption cmdlet sets protection options for a specified data source. The parameters are

-Add

This is a required parameter. Use this parameter to add a file exclusion.

-ExchangeOptions

This is a required parameter. Use this parameter to indicate that the options that follow will affect only Exchange data sources.

-FileType

This is a required parameter. Specify the file type to include or exclude.

-PassThru

This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

-PreferredPhysicalNode

This is not a required parameter. This applies to Exchange 2007 CCR clusters.

-ProtectionGroup

This is a required parameter. Enter the desired protection group.

-Remove

This is a required parameter. Use this parameter to specify that the operation is a remove operation.

-RunEseUtilConsistencyCheck

This is not a required parameter. Use this parameter to specify that a consistency check should be performed.

-TopologyType

This is not a required parameter. This applies to Exchange 2007 CCR clusters. Enter Active, Passive, Active if Passive, or Not Available.

Set-MaintenanceJobStartTime

The Set-MaintenanceJobStartTime cmdlet sets or removes the start time of a maintenance job. The parameters are

-DPMServerName

This is a required parameter. Enter the name of the DPM server to which you want to connect.

-MaintenanceJob

This is a required parameter. Enter CatalogPruning or DetailedInventory.

-Remove

This is not a required parameter. Use this parameter to remove the start time.

-StartTime

This is not a required parameter. Enter a start time for the operation.

Example:
Set-MaintenanceJobStartTime -DPMServerName dpmsrv01 `
-MaintenanceJob CatalogPruning -StartTime 02:00

Set-PerformanceOptimization

The Set-PerformanceOptimization cmdlet enables or disables on-wire compression. The parameters are

-ProtectionGroup

This is a required parameter. Specify a protection group.

-DisableCompression

This is a required parameter. Use this parameter to disable compression.

-EnableCompression

This is a required parameter. Use this parameter to enable compression.

-PassThru

This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$mpg = Get-ModifiableProtectionGroup $pg[0]
Set-PerformanceOptimization -ProtectionGroup $mpg -EnableCompression

Set-PolicyObjective

The Set-PolicyObjective cmdlet sets the policy objective for a protection group. The parameters are
-RetentionRangeInWeeks

This is a required parameter. Enter the number of weeks to retain the replica.

-RetentionRangeInDays

This is a required parameter. Enter the number of days to retain the replica.

-RetentionRange

This is a required parameter. The amount of time data will be retained on tape.

-FrequencyList

This is a required parameter. This is the list of backup frequencies for the objectives.

-RetentionRangeList

This is a required parameter. This is the list of retention periods defined in the objectives.

-SynchronizationFrequency

This is not a required parameter. The number of times synchronization should occur.

-LongTermBackupFrequency

This is a required parameter. Enter Daily, Weekly, BiWeekly, Monthly, Quarterly, HalfYearly, or Yearly.

-ShortTermBackupFrequency

This is a required parameter. The frequency for short-term backups.

-GenerationList

This is a required parameter. Enter the list of generations for the objectives.

-ProtectionGroup

This is a required parameter. Enter the protection group.

-BeforeRecoveryPoint

This is not a required parameter. It specifies that synchronization should occur before recovery point creation.

-CreateIncrementals

This is not a required parameter. Use this parameter to create incremental backups.

-PassThru

This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$mpg = Get-ModifiableProtectionGroup $pg[0]
Set-PolicyObjective -RetentionRangeInWeeks 12 `
-ShortTermBackupFrequency Daily $mpg

Set-PolicySchedule

The Set-PolicySchedule cmdlet specifies the intervals for recovery point creation for a protection group. The parameters are

-OffsetInMinutes  This is a required parameter. Enter the time in minutes to offset the start time of a job.

-ProtectionGroup  This is a required parameter. Enter the desired protection group.

-Schedule  This is a required parameter. Use Get-PolicySchedule to pass the schedule.

-PassThru  This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$ShadowCopysch = Get-PolicySchedule $pg -ShortTerm
Set-PolicySchedule $pg $ShadowCopysch -DaysOfWeek mo -TimesOfDay 02:00

Set-ProtectionGroup

The Set-ProtectionGroup cmdlet commits all actions performed on the protection group. The parameters are

-ProtectionGroup  This is a required parameter. Enter the desired protection group.

-Async  This is not a required parameter. It allows the cmdlet to be run asynchronously. This means that the user will regain control of the DPM Management Shell command prompt before the cmdlet has finished running. Progress is communicated to the user periodically.

-TranslateDSList  This is not a required parameter. This is a list of data sources that need to be force translated.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
Set-ProtectionGroup -ProtectionGroup $pg

Set-ProtectionJobStartTime

The Set-ProtectionJobStartTime cmdlet sets or removes the start time of a protection job. The parameters are

-MaximumDurationInHours  This is a required parameter. Enter the maximum number of hours the job should be allowed to run.

-JobType  This is a required parameter. Enter ConsistencyCheck.

-ProtectionGroup  This is a required parameter. Enter the desired protection group.

-Remove  This is a required parameter. Use this parameter to remove the job start time.

-PassThru  This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

-StartTime  This is not a required parameter. Enter the start time for the operation.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$mpg = Get-ModifiableProtectionGroup $pg
Set-ProtectionJobStartTime -ProtectionGroup $mpg -JobType ConsistencyCheck -StartTime 06:00
Set-ProtectionGroup $mpg

Set-ProtectionType

The Set-ProtectionType cmdlet associates a protection type with a protection group. The parameters are

-ProtectionGroup  This is a required parameter. Specify a protection group.

-LongTerm  This is not a required parameter. Use this parameter to use long-term protection.

-PassThru  This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

-ShortTerm  This is not a required parameter. Enter Disk, Tape, or None.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
Set-ProtectionType -ProtectionGroup $pg -ShortTerm Disk -LongTerm

Set-ReplicaCreationMethod

The Set-ReplicaCreationMethod cmdlet sets the replica creation method for a protection group. The parameters are

-ProtectionGroup  This is a required parameter. Enter the desired protection group.

-Later  This is not a required parameter. Enter the time at which the operation should be performed.

-Manual  This is not a required parameter. Use this parameter to specify that the schedule will be set manually.

-Now  This is not a required parameter. Use this parameter to start replica creation immediately.

-PassThru  This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$mpg = Get-ModifiableProtectionGroup $pg[0]
Set-ReplicaCreationMethod -ProtectionGroup $mpg -Now
Set-ProtectionGroup $mpg

Set-Tape

The Set-Tape cmdlet marks the specified tape as Archive, Cleaner, Free, or Unfree. The parameters are

-Cleaner  This is a required parameter. Use this parameter to designate the selected tape as a cleaner.

-Free  This is a required parameter. Use this parameter to designate the selected tape as free.

-Unfree  This is a required parameter. Use this parameter to designate the selected tape as unfree.

-Archive  This is a required parameter. Use this parameter to designate the selected tape as archive.

-Tape  This is a required parameter. Indicates that the operation will be performed on a tape.

-PassThru  This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
$Tape = Get-Tape -DPMLibrary $DPMLib
Set-Tape -Tape $Tape[1] -Free

Set-TapeBackupOption

The Set-TapeBackupOption cmdlet sets the backup and library options for a specified protection group. The parameters are

-BackupLibrary  This is a required parameter. Specify the library to use.

-DrivesAllocated  This is a required parameter. It indicates the number of drives allocated to protection.

-LongTerm  This is a required parameter. Use this parameter to indicate long-term protection for the group.

-ProtectionGroup  This is a required parameter. Specify a protection group.

-ShortTerm  This is not a required parameter. Enter Disk, Tape, or None.

-CompressData  This is not a required parameter. Use this parameter to compress data on the wire.

-EncryptData  This is not a required parameter. Use this parameter to encrypt protected data.

-PassThru  This is not a required parameter. It can be used to return related objects. It allows cmdlets to be part of a pipeline.

-PerformIntegrityCheck  This is not a required parameter. Use this parameter to check data integrity.

-TapeCopyLibrary  This is not a required parameter. It specifies a secondary library used for making copies of a tape.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
Set-TapeBackupOption -ProtectionGroup $pg -ShortTerm -EncryptData

Start-DatasourceConsistencyCheck

The Start-DatasourceConsistencyCheck cmdlet starts a consistency check on a specified data source. The parameters are

-Datasource  This is a required parameter. Enter a share, volume, database, storage group, system state, or other protected data source that is a member of a protection group.

-HeavyWeight  This is not a required parameter. Use this parameter to checksum the contents of each file.

-JobStateChangeEventHandler  This is not a required parameter. It is used with the -Async common parameter to inform a user when a job has completed.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$ds = Get-Datasource -ProtectionGroup $pg
Start-DatasourceConsistencyCheck -Datasource $ds

Start-DPMDiskRescan

The Start-DPMDiskRescan cmdlet scans the specified DPM server for new or changed disks. The parameter is

-DPMServerName  This is a required parameter. Enter the name of a DPM server.

Example:
Start-DPMDiskRescan -DPMServerName "DPM-SRV01"

Start-DPMLibraryInventory

The Start-DPMLibraryInventory cmdlet inventories the tape(s) in a specified library. The parameters are

-DetailedInventory  This is a required parameter. Use this parameter to perform a detailed inventory.

-DPMLibrary  This is a required parameter. Specify a library.

-FastInventory  This is not a required parameter. Use this parameter to perform a fast inventory.

-JobStateChangeEventHandler  This is not a required parameter. It is used with the -Async common parameter to inform a user when a job has completed.

-Tape  This is not a required parameter. Use this to specify a single tape in a library.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
Start-DPMLibraryInventory -DPMLibrary $DPMLib -FastInventory

Start-DPMLibraryRescan

The Start-DPMLibraryRescan cmdlet starts a rescan job to identify new or changed libraries. The parameters are

-DPMServerName  This is a required parameter. Enter the name of a DPM server.

-JobStateChangeEventHandler  This is not a required parameter. It is used with the -Async common parameter to inform a user when a job has completed.

Example:
Start-DPMLibraryRescan -DPMServerName "DPM-SRV01"

Start-OnlineRecatalog

The Start-OnlineRecatalog cmdlet starts a recatalog job. The parameters are

-RecoveryPoint  This is a required parameter. Enter the name of the recovery point to use.

-JobStateChangeEventHandler  This is not a required parameter. It is used with the -Async common parameter to inform a user when a job has completed.

-RecoveryPointLocation  This is not a required parameter. Enter the current location of the recovery point.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$ds = Get-Datasource -ProtectionGroup $pg
$rp = Get-RecoveryPoint -Datasource $ds
$rsl = Get-RecoveryPointLocation -RecoveryPoint $rp[1]
Start-OnlineRecatalog -RecoveryPoint $rp[1] -RecoveryPointLocation $rsl

Start-ProductionServerSwitchProtection

The Start-ProductionServerSwitchProtection cmdlet switches protection of a datasource to another DPM server. The parameters are

-Password  This is a required parameter. The password for the user account. It is best not to pass this parameter through the command; you will be prompted for it.

-ProtectionType  This is a required parameter. Indicate the protection type.

-UserName  This is a required parameter. Enter a user account to use.

-DomainName  This is a required parameter. Enter the domain to which the user belongs.

-ProductionServer  This is a required parameter. It specifies a server with the DPM agent installed.
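
The text supplies no example for this cmdlet. The following sketch is illustrative only, assuming the parameters behave as described above; the server, account, and domain names are invented, the -ProtectionType value is a placeholder for whatever type your environment requires, and -Password is deliberately omitted so that the shell prompts for it:

Start-ProductionServerSwitchProtection -ProductionServer "FILESRV01" -UserName "DPMAdmin" -DomainName "CONTOSO" -ProtectionType <protection type>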

Start-TapeDriveCleaning

The Start-TapeDriveCleaning cmdlet starts a drive cleaning job. The parameters are

-TapeDrive  This is a required parameter. Specify the drive to be cleaned.

-JobStateChangeEventHandler  This is not a required parameter. It is used with the -Async common parameter to inform a user when a job has completed.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
$td = Get-TapeDrive -DPMLibrary $DPMLib

Start-TapeDriveCleaning -TapeDrive $td

Start-TapeErase

The Start-TapeErase cmdlet starts a tape erase job. The parameters are

-Tape  This is a required parameter. Specify a tape to erase.

-JobStateChangeEventHandler  This is not a required parameter. It is used with the -Async common parameter to inform a user when a job has completed.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
$Tape = Get-Tape -DPMLibrary $DPMLib
Start-TapeErase -Tape $Tape[2]

Start-TapeRecatalog

The Start-TapeRecatalog cmdlet returns information about the data on a tape. The parameters are

-Tape  This is a required parameter. Specify a tape to recatalog.

-JobStateChangeEventHandler  This is not a required parameter. It is used with the -Async common parameter to inform a user when a job has completed.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
$Tape = Get-Tape -DPMLibrary $DPMLib
Start-TapeRecatalog -Tape $Tape[2]

Test-DPMTapeData

The Test-DPMTapeData cmdlet verifies the data for a recovery point. The parameters are

-RecoveryPoint  This is a required parameter. Enter the name of the recovery point to use.

-JobStateChangeEventHandler  This is not a required parameter. It is used with the -Async common parameter to inform a user when a job has completed.

-RecoveryPointLocation  This is not a required parameter. Enter the current location of the recovery point.

Example:
$pg = Get-ProtectionGroup -DPMServerName "DPM-SRV01"
$ds = Get-Datasource -ProtectionGroup $pg
$rp = Get-RecoveryPoint -Datasource $ds
$rsl = Get-RecoveryPointLocation -RecoveryPoint $rp[1]
Test-DPMTapeData -RecoveryPoint $rp[1] -RecoveryPointLocation $rsl

Unlock-DPMLibraryDoor

The Unlock-DPMLibraryDoor cmdlet unlocks the door of a specified library. The parameters are

-DPMLibrary  This is a required parameter. Enter a library to return a list of drives.

-Async  This is not a required parameter. It allows the cmdlet to be run asynchronously. This means that the user will regain control of the DPM Management Shell command prompt before the cmdlet has finished running. Progress is communicated to the user periodically.

-DoorAccessJobStateChangeEventHandler  This is not a required parameter. This is a callback method for the -Async parameter.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
Unlock-DPMLibraryDoor -DPMLibrary $DPMLib[0]

Unlock-DPMLibraryIEPort
The Unlock-DPMLibraryIEPort cmdlet unlocks and unloads the media in the insert/eject port. The parameters are

-DPMLibrary  This is a required parameter. Enter a library to return a list of drives.

-Async  This is not a required parameter. It allows the cmdlet to be run asynchronously. This means that the user will regain control of the DPM Management Shell command prompt before the cmdlet has finished running. Progress is communicated to the user periodically.

-JobChangedEventHandler  This is not a required parameter. Use this parameter with the -Async parameter to be informed of the job status.

Example:
$DPMLib = Get-DPMLibrary -DPMServerName "DPM-SRV01"
Unlock-DPMLibraryIEPort -DPMLibrary $DPMLib

The Bottom Line


Explain the relationship between Windows PowerShell and the DPM Management Shell. Windows PowerShell is a new technology that is just starting to be seen in the 2007 wave of Microsoft products. Understanding how PowerShell relates to the DPM Management Shell will help you learn the underlying technology and master the DPM command-line management interface more quickly, as well as let you leverage your experience with DPM and other PowerShell-enabled products.

Master It

1. What version of Windows PowerShell does DPM 2007 use?
2. How is the DPM Management Shell implemented?
3. How many cmdlets are included in the DPM Management Shell? Do these replace the cmdlets offered in Windows PowerShell?

Describe the main benefits that PowerShell offers over regular scripting technologies. Microsoft already provides a wide variety of scripting technologies, such as the Windows Scripting Host. Knowing the advantages that PowerShell provides will help you get the most benefit from the DPM Management Shell.

Master It

1. How does PowerShell integrate with the .NET Framework?
2. Describe the PowerShell pipeline. How does it differ from the pipeline capabilities offered by traditional scripting environments?

Chapter 5: End-User Recovery


Overview
"Oops."
Any computer user

Life as an IT professional has its interesting moments. That's "interesting" in the same sense as one would use in a curse such as, "May you live in interesting times." In our experience, few mishaps seem to generate the same feeling of panic that occurs when a user deletes an important file. Administrators may be tempted to sit back and quietly laugh when this happens because the distraught user's emotional reaction usually seems to be out of proportion with the actual value of the deleted file. However, the panic is easy to understand when you take a moment to think of things from the user's point of view:

If the user is computer savvy, then they've just done something that is considered to be the sole province of the rank tyro or the completely clueless. Nobody likes to be thought of as stupid, especially when the mistake forces you to ask for help from people who are likely to be amused by or contemptuous of your mistake. These people are often correct; too many administrators take great visible delight in the misfortunes of their users.

If, on the other hand, the user is a novice or unsophisticated user, they may be frustrated by the graphic proof of their inability to master a tool that their peers are all using without this level of difficulty.

For most users, their computers are not their job; they are merely tools to help them get their real work done. IT professionals tend to lose sight of the fact that our work, the care and feeding of these fascinating and often treacherous beasties known as computers, exists only to help our users do whatever work it is they do. If you are a construction worker who needs to dig a ditch, having your backhoe break down (even if you know how to deal with it) is more than a minor irritation.

Luckily, as responsible IT professionals who pay attention to best practices and the dictates of simple prudence, we have a safety net in place to handle just this kind of situation. In the days before data protection (when backup systems ruled the Earth), this safety net was provided by our backup systems. Any data the user lost, as long as it resided in a protected location, was faithfully collected by the backup agent and written to one of the hordes of tapes produced by tape libraries. If we were using a modern backup system that included disk-to-disk-to-tape (D2D2T) features, it made a brief stop on a drive volume in the corresponding storage system first. This backup copy at best was taken during the previous night (and at worst during the last full backup the preceding weekend), but it still represented the user's last, best hope for some sort of recovery. DPM widens this safety net by providing centralized end-user recovery capabilities that work hand-in-hand with the central VSS and replication technologies to produce a recovery capability that is far more powerful than other solutions and simultaneously gives users the ability to recover their own data.

In this chapter, you will learn to:


Prepare Active Directory for end-user recovery

Deploy the VSS client and hotfix to your users

Introducing End-User Recovery


For years, the end-user recovery model has been unchanged; in practice, it works much like the following story. Our hapless user (we'll call him Bob) will illustrate one of the many ways this story plays out:

1. At 1:15 PM on Thursday afternoon, Bob is using his desktop computer to work on an important quarterly budget spreadsheet. Just as he's saving it to his home directory on the network file server, he receives a cryptic error message that seems to indicate the file is missing. He's been working on this spreadsheet since last Friday; it's a critical piece of his presentation for tomorrow's 9 AM department meeting.

2. Bob frantically scrambles to see if there's another copy of the data on his laptop machine. There is, but Bob hasn't worked on that copy since Saturday afternoon. He hasn't copied the spreadsheet to any CDs, floppies, USB drives, or any other removable media.

3. After a moment of thought, Bob thinks he might have saved a copy of the document to a different folder and checks to see if there's another copy of the data on any other network shares he can access. Not finding any, he quickly checks his email client to see if he sent an earlier version to anyone for review.

4. At this point, Bob gives up hope of having a reasonably current copy of the spreadsheet, contemplates a list of likely suspects on whom to pass the blame, and resigns himself to pulling an all-nighter.

5. Co-workers tell Bob that there was a momentary glitch on the file server. While relieved that he didn't do anything to directly cause the file to disappear, he's still quite annoyed that his file was chosen as the ritual sacrifice to the network gods. He's pretty sure his manager won't be impressed with the "It just disappeared; there was a problem with the server!" defense.

6. Bob bows to the inevitable and calls his manager, Ann, to tell her about the file loss and the impact on his schedule. He manages to work in some complaints about the file server and the IT group along the way.

7. Ann is properly sympathetic and even attempts to console Bob with her own story of the server "mysteriously" eating critical files. She promises Bob she'll see what she can do.

8. Ann calls Charlie, the IT manager, spends several minutes complaining about how unreliable the network is, and explains how today's glitch cost Bob an incredibly important spreadsheet that needs to be presented to the CFO first thing tomorrow morning, and by the way, isn't there some way IT can recover his file from the backup tapes? Charlie agrees this might be the case and promises Ann to do what he can.

9. Once Ann hangs up, Charlie flags down Denise, the first backup administrator he sees. He then explains Bob and Ann's problem and asks her to perform the recovery ASAP. Denise rolls her eyes, but reminds herself that she is in fact paid to make users' accidents her emergencies.

10. Denise logs on to the management console of the backup software and first checks to see if there's a version of Bob's file in the backups. She knows by checking the backup schedule that full backups of that server are made every Saturday night, and incremental backups are made every Tuesday and Thursday night. She determines that Bob is in luck; there is, in fact, a copy of the missing file on Tuesday night's incremental backup tape.

11. Denise sets up and executes the recovery job, which will take approximately 45 minutes, most of which is spent finding the appropriate tape and waiting for the tape to be properly positioned. Denise has a pointed conversation with Charlie about how the current two-day incremental backup procedures are inadequate. Charlie tells her to be thankful and reminds her that she's working on a support call that will have a (relatively) happy ending.

12. When the complaining is done and the backup system has disgorged Tuesday's version of the file back into Bob's home directory, Denise informs Charlie the recovery is done. Charlie calls Ann to give her the good news; Ann, in turn, lets Bob know that his file is now available.

13. Bob takes a disbelieving look at his file and nearly has a stroke when he realizes how much work is missing. Once Ann calms him down, he wades back into his spreadsheet and hopes that if nothing more goes wrong he'll at least be able to get a couple of hours of sleep.

We truly hope you don't work in an environment where this is your recovery method; although the story is not real, it is based on a mixture of scenarios from our all-too-real personal experiences. However, this story makes an important point: the sad truth is that most user data is not sufficiently protected even when a backup solution is in place and used faithfully. In many environments, administrators try to make sure that their users' data is protected by using the large array of defense mechanisms that Microsoft makes available in the Windows operating system:

Mapped drives to network file shares

Folder redirection to automatically redirect data on the local computer to a network file server without user intervention

The Windows NT4 My Briefcase feature to attempt to synchronize data between multiple systems

All of these technologies (and more) have been used to mitigate the impact of lost data. But none of them addresses the question of what happens when a user deletes an important file from the server and the backed-up version is old enough to be nearly useless. To solve this problem, Microsoft developed a nifty little feature called End-User Recovery (EUR). Let's take a closer look at it.
Windows End-User Recovery

The concept behind EUR is very simple: extend the user's desktop so that they can directly interact with the data protection solution and request copies of their own protected data. By itself, this isn't very exciting. Not only that, but it is hard to achieve because there is no way to tell which data protection solution is going to be in use. Even if you can tell, you still require administrator intervention if the solution uses a traditional tape backup. Here's the absolute genius part: instead of making EUR depend on some end-state backup solution,

Microsoft instead tied it to a technology that offered advantages for backup vendors: the Volume Shadow Copy Service (VSS). When VSS is enabled on a Windows Server 2003 file server, administrators can configure regular data snapshots and allow a greater number of useful recovery points to be available. Almost all of the modern backup solutions now take advantage of VSS technology to ensure consistent backups of protected data, and Microsoft freely provides the Shadow Copy Client, a small Windows Explorer add-on that knows how to interact with VSS. Once this client is installed, end users can directly browse the available VSS snapshots on the file servers they connect to, see the different versions of their files, and recover any version that's still available, all without involving an administrator or backup operator. As an added advantage, Microsoft Office 2003 and Office 2007 support the same EUR functionality from within Office applications.

Best yet, EUR with VSS is completely optional. If you decide it's not right for your organization, don't deploy the EUR client. (This can be a slightly bigger problem if you use Windows Vista; both the client and the hotfix are built into Vista, so you will need to disable it using Active Directory Group Policy.) You can deploy the EUR client to administrator workstations and key technical support personnel so they can perform restorations on behalf of your users; this is still a huge win over traditional tape restore procedures because it is quicker and doesn't need a separate tape backup application. In our experience, most file recovery requests are for files that were recently backed up; by avoiding the need to go to tape, everybody is happier.

You should be aware, however, that using EUR and VSS in this fashion requires you to store the snapshots on each server directly; the EUR client can't pull them from a central backup application, even if that application uses VSS. That's how EUR works in Windows without DPM.
Next we'll see how it looks once DPM is in the picture.
DPM End-User Recovery

From an end-user perspective, there is no noticeable difference between EUR from a VSS-configured file server and EUR from DPM. Users of both variants can use Windows Explorer or the relevant Office application to browse the available replicas of their files and immediately recover the precise point-in-time copy they need. However, because of the way DPM creates its copies, users will see more recovery points. For the administrator, there are two major differences:

1. DPM uses client-server replication as well as VSS-based snapshots to create recovery points, but those points are created on a separate server (the DPM server) from the data that is being protected. As a result, the EUR client must know how to deal with replica volumes on the DPM servers. This requires the installation of a hotfix to the EUR client before it can be used with DPM.

2. EUR under DPM requires the modification of the Active Directory schema in the forest. This schema modification is not performed during DPM installation, as it's not appropriate for every DPM installation. So that administrators can avoid an unnecessary bout of Active Directory schema updates, this schema extension is left as an optional task for the administrator after DPM is installed.

Given these changes, EUR is not for everyone. It does, however, offer some powerful advantages over traditional tape-based restores and native EUR with VSS:

DPM provides many more recovery points than the native VSS functionality in Windows Server 2003; Windows is limited to a maximum of 64 snapshots. By using replication technology to create additional recovery points, DPM offers a much higher number of individual replicas, as often as one every 15 minutes. This helps minimize the effects of data loss.

DPM stores the VSS snapshots and synchronized replicas in the DPM server storage pool, not on the protected file server. Your capacity planning for your file servers only needs to account for the data, not the additional overhead required by server-side shadow copies.

EUR in DPM eliminates the need to interact with the tape backup system for the vast majority of recovery requests.

However, EUR in DPM is not a panacea; it does not provide perfect protection and restoration all the time. In particular, you need to consider the following disadvantages:

EUR in DPM requires additional user education. If your users have trouble understanding the difference between local storage and network file storage, they may struggle with the concept of multiple point-in-time views of their data. You may want to consider offering training sessions within working groups so that more savvy users can help their co-workers master the concepts; you may also want to offer training during new employee orientation.

EUR in DPM puts the power to recover files into the users' hands. This is a double-edged sword; your users can recover data when they need to, but they can also overwrite more current versions of their data. Using 15-minute replication schedules helps reduce the impact of these types of mistakes; however, they can still be extremely frustrating for the user and require support personnel to untangle the mess. See the "Why Do I Need Both Synchronization Frequency and Recovery Points?" sidebar in Chapter 1 for a more detailed explanation of the implications of synchronization and recovery point schedules.

EUR in DPM is ineffective when users are working with offline file caches. When a mobile user creates or modifies a file in a location on a laptop that corresponds to a DPM-protected file share, it won't automatically be protected by DPM until the user's laptop is reconnected to the network, the changes are synchronized with the server again, and the DPM protection agent can synchronize it with the DPM server. If the data is lost before that synchronization happens, there is no way of recovering it and the user will need to restore from an earlier recovery point.

Despite these disadvantages, EUR combined with DPM offers a lot more benefits than it does drawbacks. Let's examine how to enable EUR in your DPM environment.

Enabling DPM End-User Recovery


Before you can unleash your users on EUR with DPM, you must perform the following necessary deployment tasks:

Prepare Active Directory for EUR by extending the schema with the DPM extensions. You can do this via the GUI or the command line.

Deploy the VSS client to your users. You must install the VSS client as well as the DPM-specific VSS client hotfix.

Let's examine these tasks in more detail.


Preparing Active Directory

For DPM end-user recovery to work in your environment, you must extend your Active Directory schema. The schema extension process is simple; however, there are a few prerequisites you need to bear in mind:

Your domain controllers must be running Windows Server 2003, or Windows Server 2000 with Service Pack 4 or later with schema modifications enabled. To enable schema modifications on Windows 2000 domain controllers, see Knowledge Base article 285172, "Schema Updates Require Write Access to Schema in Active Directory" (http://support.microsoft.com/kb/285172).

You must perform the upgrade as a user who is a member of both the Domain Admins and Schema Admins built-in security groups. Most Domain Admin accounts aren't members of the Schema Admins group, and they shouldn't be! If you must have someone else with the appropriate permissions perform the upgrade, DPM includes a separate binary that they can use to do the job on a separate server.

If the protected server and the DPM server reside in different domains, you must perform the schema extension from the domain of which the DPM server is a member.

You should always perform schema extensions on a machine that is as close as possible (network-wise) to the domain controller that holds the Schema Master FSMO role; ideally, this means from the Schema Master domain controller itself or from a machine on the same network segment. By default, this role is held by the first domain controller in the forest. If it's not possible to run the upgrade from a machine within the same site as the Schema Master domain controller, you should perform it as close, network-wise, as possible. This reduces network latency during the upgrade and reduces the risk of any possible negative effects.

Just for the record, this isn't a DPM-specific recommendation; it's something the Active Directory folks at Microsoft have always recommended and that is finally being enforced through all the product groups. The health of your Active Directory forest depends on all of your domain controllers and Global Catalog servers being able to download a valid schema; if something happens during this process and corrupts your schema (as is more likely to happen over a low-bandwidth or high-latency connection), you could break your entire Active Directory deployment.
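
For the command-line route, the separate schema-extension binary that ships with DPM 2007 is DPMADSchemaExtension.exe. The following sketch is illustrative only; the path shown is the default DPM installation location and may differ in your environment, and the tool will prompt for the information it needs (such as the DPM server name) when run:

cd "C:\Program Files\Microsoft DPM\DPM\End User Recovery"
DPMADSchemaExtension.exe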

Enough with the preparation; let's update the Active Directory schema.
Upgrading Your Active Directory Schema

Most Active Directory administrators don't much care for the idea of upgrading their production Active Directory schema. Not only do upgrades cause a temporary but real increase in the level of directory replication throughout the organization (schema upgrades must be replicated to every domain controller in the forest), but most schema upgrades cannot be easily reversed or removed.

As a result, it is (as always) important for you to thoroughly test any upgrades in your lab environment before performing them on your production Active Directory forest. You should always follow all best practices as part of the procedure, such as ensuring you have a current backup of the Active Directory database before you begin the upgrade.

Having said that, we should point out that there is an unnecessarily large amount of fear and hype around the process of upgrading the Active Directory schema, at least when it comes to Microsoft products. Part of the reason Microsoft applications don't come out very often (compared to open source applications) is that each product must be extensively tested against a variety of configurations to ensure that there won't be unintentional breakage. If you follow their best practices and exercise a reasonable amount of care, schema updates are no sweat, especially if you follow the advice to practice the process in your test lab until you feel comfortable with it.

Ironically, we've found that the biggest source of resistance to schema upgrades usually comes from whatever change management process is in place at your organization. We heartily endorse the concept and practice of change management; it's never a good idea to be making undocumented, random changes to your production servers. Unfortunately, the change management process often swings to the other extreme and becomes a force for stagnation. If you have to fight an unnecessarily resistant change management process in order to get these schema upgrades approved, we feel your pain. Again, thorough testing (and documentation) is the best way to fight this type of battle.

EXTENDING ACTIVE DIRECTORY WITH THE GUI

If you are using an account that has the requisite permissions, the process of enabling the DPM schema extensions is as follows:

1. Open the DPM Administrator console (Start  All Programs  Microsoft System Center Data Protection Manager v2  Microsoft System Center Data Protection Manager v2).
2. Click Options in the Actions pane, as shown in Figure 5.1.

Figure 5.1: The Options pane

3. In the dialog box, select the End-User Recovery tab if it is not already selected.
4. Click the Configure Active Directory button, as shown in Figure 5.2.

Figure 5.2: The Configure Active Directory box

5. In the Configure Active Directory box, select Use Current Credentials if you are already logged in as a user with both Schema Admin and Domain Admin permissions. If you are logged in without these permissions, enter the credentials of an authorized user.
6. Click Yes to confirm the changes to Active Directory, as shown in Figure 5.3.

Figure 5.3: Confirm the changes

7. A dialog box opens informing you that you will be notified when the process is complete, as shown in Figure 5.4.

Figure 5.4: Change notification

8. Click OK in the Confirmation dialog, as shown in Figure 5.5.

Figure 5.5: Update confirmation

9. A dialog appears informing you that the changes will not take effect until after the next synchronization, as shown in Figure 5.6. Click OK.

Figure 5.6: Synchronization notice

You've successfully extended your Active Directory schema and can move on to the VSS client installation.
EXTENDING ACTIVE DIRECTORY FROM THE COMMAND LINE

If you need to extend the Active Directory schema from the command line (for example, when you need a member of the Schema Admins group to perform the upgrade for you), follow these steps:

1. From a Windows Server 2003 computer in the domain in which you want to enable end-user recovery, open a command prompt and navigate to the following directory: C:\Program Files\Microsoft Data Protection Manager\DPM\End User Recovery\.
2. Run DPMADSchemaExtension.exe. The warning shown in Figure 5.7 will appear. Click Yes in the dialog box.

Figure 5.7: Confirmation warning for schema extension

3. Enter the machine name of the DPM server. This is the equivalent of the NetBIOS name of the machine, as shown in Figure 5.8. Click OK.

Figure 5.8: Enter the machine name

4. Enter the fully qualified domain name of the DPM Server, as shown in Figure 5.9. Click OK.

Figure 5.9: Enter the FQDN

5. Enter the full DNS name of the domain in which you would like to enable end-user recovery, as shown in Figure 5.10.

Figure 5.10: Enter the DNS domain name

6. As shown in Figure 5.11, a dialog box will appear, letting you know the changes are in progress. Click OK.

Figure 5.11: The schema update in progress

7. When the update is complete, you will be presented with a confirmation dialog. Click OK to close this window.
8. Close the Command window.

You've successfully extended your Active Directory schema and can move on to the VSS client installation.
Deploying the Client

Deploying the end-user recovery client is actually a two-step process. The initial step is the deployment of the Volume Shadow Copy client for Windows. The client has not been changed since its release in 2003. The caveat, however, is that for end-user recovery in DPM, the VSS client must be patched with the DPM-specific hotfix.

The VSS client is freely available for Windows XP SP2, Windows Server 2003 RTM, and Windows Server 2003 SP1 (it is built into Windows Vista and Windows Server 2003 SP2). You can download it from the Microsoft Downloads website at: http://www.microsoft.com/downloads/details.aspx?FamilyID=e382358f-33c3-4de7-acd8-a33ac92d295e&DisplayLang=en. The VSS client hotfix enables the client machine to communicate with the DPM server to retrieve the snapshots for a volume. Table 5.1 shows the supported operating systems and hotfix download locations for each.
Table 5.1: EUR in DPM Supported Operating Systems and Patch Locations

Operating System                     VSS Client Patch Location
Windows XP SP2                       http://support.microsoft.com/default.aspx?scid=kb;en-us;895536
Windows XP SP2 (64-bit)              http://go.microsoft.com/fwlink/?LinkId=50683
Windows Vista                        Included in the operating system
Windows Server 2003                  http://go.microsoft.com/fwlink/?LinkId=46065
Windows Server 2003 SP1              http://go.microsoft.com/fwlink/?LinkId=46067
Windows Server 2003 SP1 (64-bit)     http://go.microsoft.com/fwlink/?LinkId=46068
Windows Server 2003 SP2              Included in the service pack
Windows Server 2008                  Included in the operating system

Once you've got the VSS client and hotfix downloaded, you must deploy them to your clients. You have three basic methods: a systems management solution, logon scripts, or manual installation. In a large enterprise, you probably already have some sort of centralized systems management solution such as Microsoft's Systems Management Server (SMS), System Center Configuration Manager 2007 (SCCM), or some equivalent third-party package. To deploy the client and hotfix via SMS or SCCM, you must create the necessary packages and advertise them to your clients. This method offers some significant advantages:

It gives you the ability to deploy both the client and the hotfix simultaneously. If you create two packages, one for the client and one for the hotfix, you only need to advertise the hotfix package, as long as you specify that it runs the client package as a prerequisite.

It requires less effort in larger environments. Even in smaller environments, if you've made the investment in SMS or SCCM, the process of configuring the packages once and then deploying them to collections minimizes the amount of time you spend enabling EUR.

It permits a greater degree of control. Depending upon how your collections are defined in SMS or SCCM, you can decide exactly which machines or users will have the client installed on them. You can also ensure that when a user's machine is rebuilt, it automatically reinstalls the VSS client and hotfix.

It provides reporting. Using the built-in analysis and reporting tools in SMS or SCCM, you can tell exactly which machines in your environment have the VSS client and hotfix installed.

Scripted installs are another tried-and-true method of application deployment for many organizations. Like package management solutions, scripting offers a great degree of control combined with one-time configuration of the installation package. It also offers a great degree of flexibility; there are a wide variety of scripting languages to choose from, such as:

Windows batch or command scripting

VBScript, JavaScript, or PerlScript via the Windows Scripting Host

The new Windows PowerShell, if it is deployed on your desktops

The main downside to a scripted deployment is that although it is supported with special switches in the VSS client and hotfix installers, it is more complicated and difficult to configure and test.

Finally, if your organization is small or if other considerations mandate, you can manually set up the client and hotfix on client machines in your environment. This might be an option for you if your company uses some sort of disk-imaging solution for its operating system and application installation; you can perform the manual install on the reference image.
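As a rough illustration, a batch-script deployment might look like the following sketch. The share path and hotfix filename are hypothetical, and we're assuming the standard Windows Installer quiet-mode switches and the usual quiet-mode switches for hotfix packages; verify the switches your particular hotfix build supports before you roll anything out.

```
@echo off
rem Hypothetical deployment script for the VSS EUR client and DPM hotfix.
rem \\server\deploy and VSSHotfix.exe are placeholders; substitute your
rem own distribution share and the actual hotfix filename.

rem Install the VSS client silently via Windows Installer.
msiexec /i "\\server\deploy\ShadowCopyClient.msi" /qn /norestart

rem Apply the DPM-specific VSS hotfix silently, suppressing the reboot.
"\\server\deploy\VSSHotfix.exe" /quiet /norestart
```

A script along these lines can be called from a logon script or wrapped in an SMS/SCCM package; either way, test it on a representative machine before deploying it broadly.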
Deploying the EUR Client Using Group Policy

Because Microsoft included a software distribution mechanism within Active Directory Group Policy, you might think it natural to deploy the EUR client and VSS hotfix in this fashion. However, we do not recommend using Group Policy to install the EUR client software. The DPM VSS hotfix is not in the .MSI format, which is required to use Group Policy for software deployment. You can deploy the EUR client itself using Group Policy, as it is packaged as an .MSI, but that will leave you without the hotfix, which means you'll have only half of the solution. However, if VSS is enabled on your Windows file servers and you have already deployed the VSS EUR client using Group Policy, you don't have to change a thing. You only need to use one of the other recommended methods to push the hotfix out to your user machines.

INSTALLING THE VSS EUR CLIENT

To manually install the VSS client, follow these steps:

1. Download both the client and patch. Make sure they are in a location that is accessible by the target machine and user. You might need to store the files locally or place them on a network share.

2. Double-click ShadowCopyClient.msi. If you receive a security warning, click Run to continue.
3. In the Welcome screen shown in Figure 5.12, click Next.

Figure 5.12: The VSS Client Welcome screen

4. Select the bullet to accept the license agreement, as shown in Figure 5.13, and click Next.

Figure 5.13: The VSS Client EULA

5. The Installation Progress screen will be displayed. Once the installation has completed, you will see a Confirmation screen, as shown in Figure 5.14. Click Finish to complete the client installation.

Figure 5.14: The VSS Install Confirmation screen

Now you can install the VSS client hotfix for DPM support.
INSTALLING THE VSS CLIENT DPM HOTFIX

To manually install the VSS client DPM hotfix, follow these steps:

1. Double-click the hotfix file (the name will vary depending on the version of Windows to which you are installing the hotfix). If you receive a security warning, click Run to continue.
2. In the Hotfix Installation screen shown in Figure 5.15, click Next.

Figure 5.15: The Hotfix Installation screen

3. In the License Agreement screen shown in Figure 5.16, select the I Agree bullet and click Next.

Figure 5.16: The Hotfix EULA

4. The Installation Progress screen will be displayed. Once the installation has completed, you will see a Confirmation screen as shown in Figure 5.17. Click Finish to complete the hotfix installation.

Figure 5.17: The Hotfix installation confirmation

Now you've installed both the client and the hotfix, as well as extended the Active Directory schema. It's time to look at how to use the client to recover data protected by DPM.

Recovering Protected Data


For users, the actual process of recovering DPM-protected data is relatively simple, as EUR is integrated into Windows Explorer. It is also available from within Microsoft Office 2003 and Microsoft Office 2007. Honestly, the hardest part is determining which recovery point you want to use; once DPM is configured and running, it can produce an astonishing number of recovery points. Let's take a look at using the EUR client.

End-User Recovery Limitations

Unfortunately, you can't perform EUR on every type of data that DPM protects. In particular, you can only perform direct EUR for file server data. You can also recover certain types of SharePoint data such as sites, documents, and lists, but recovery becomes more complicated to set up because of the need for a separate restore server or farm. See Chapter 9, "Protecting SharePoint Servers," for more information. Additionally, your users will need to have the appropriate rights to perform EUR of file data:

To browse and view earlier versions of files they want to recover, your users will need to have the appropriate Read permissions on the file.

To actually recover a file to its original location, your users will need to have the appropriate Read and Write permissions on the original location.

To recover a file to an alternative location, your users will need to have Read permission on the original file location and have Write permission on the alternative save location.

As long as your users have been granted the appropriate permissions, their recovery attempts will work on all protected files, even through firewalls, as long as they can access the protected volume, share, or folder.
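If you want to spot-check these permissions from the command line before walking a user through EUR, the built-in cacls tool will dump the ACL on a file. A quick sketch (the server, share, and filename are hypothetical):

```
rem Show the ACL on the file the user wants to recover. The user needs
rem Read (R) to browse previous versions and Change (C) or Full (F) on
rem the target location to restore the file there.
cacls "\\FS01\Projects\budget.xlsx"
```

This is purely a diagnostic step; DPM and VSS enforce the same NTFS permissions that Windows Explorer does.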

Recovering File Data from Windows Explorer

If your users are comfortable in Windows Explorer, it's the natural tool to use for EUR. They can recover individual files or even entire folder hierarchies from DPM-protected file-server resources.
RECOVERING FILES

The process for file recovery is identical regardless of which version of Windows you are using:

1. Right-click on the file you want to revert to an earlier version and click Properties, as shown in Figure 5.18.

Figure 5.18: Select the file to recover

2. In the Properties window, select the Previous Versions tab as shown in Figure 5.19. Select the version of the file you want to recover and click the Restore button.

Figure 5.19: The Previous Versions tab

3. A dialog box will appear asking you to confirm your choice, as shown in Figure 5.20. Click Yes.

Figure 5.20: Recovery choice confirmation

4. A message will appear stating that the file has been successfully recovered, as shown in Figure 5.21. Click OK to close the window.

Figure 5.21: A successful recovery

5. Close the file property sheet by clicking OK.

Now let's look at how to recover an entire folder.
RECOVERING FOLDERS

In addition to recovering individual files, you can use the VSS client to recover entire folders:

1. Navigate to the folder you want to recover, right-click on an empty portion, and click Properties, as shown in Figure 5.22.

Figure 5.22: Select a folder to recover

2. In the Properties window, select the Previous Versions tab. Select the version of the folder you want to recover, as shown in Figure 5.23, and click Restore.

Figure 5.23: Select a folder to restore

3. A dialog box will appear asking you to confirm your choice, as shown in Figure 5.24. Click Yes.

Figure 5.24: Confirm the recovery choice

4. A dialog box will appear stating that the folder has been successfully restored, as shown in Figure 5.25. Click OK to close the window.

Figure 5.25: The folder recovery is successful

5. Close the folder property sheet by clicking OK.

Recovering files and folders from Windows Explorer is easy. Note that when you restore an earlier version of a folder to its location, you will be affecting the files within that folder:

If the file exists both in the current version and the restored version, the restored version overwrites the current version.

If the file exists only in the current version, it will not be touched.

If the file exists only in the restored version, it will be added to the folder.

Now, let's look at how to recover Office documents from within the Office applications.

Recovering File Data from Microsoft Office

EUR within Microsoft Office is supported in Microsoft Office 2003 and Microsoft Office 2007. Let's look at Microsoft Office 2003 first.
RECOVERING DOCUMENTS FROM OFFICE 2003

1. To recover a document from within an Office 2003 application, open the corresponding application (Start  Microsoft Office).
2. Select File  Open.
3. Navigate to the folder containing the file and select the file you want to restore.
4. Click the Tools button and click Recover Previous Version, as shown in Figure 5.26.

Figure 5.26: Select the file to recover

5. In the Properties window that opens, select the Previous Versions tab, and choose the version of the file that you want to recover, as shown in Figure 5.27. Click Restore.

Figure 5.27: Select the file version to recover

6. A dialog box will ask you to confirm your choice, as shown in Figure 5.28. Click Yes.

Figure 5.28: Confirm the recovery choice

7. A message will appear stating that the recovery has been successful, as shown in Figure 5.29. Click OK to close the window.

Figure 5.29: The file recovery is successful

8. Close the property sheet by clicking OK.

The process for recovering documents from within Office 2007 is almost the same.
RECOVERING DOCUMENTS FROM OFFICE 2007

Recovering documents via Office 2007 is slightly different from Office 2003:

1. To recover a document from within an Office 2007 application, open the corresponding application (Start  Microsoft Office).
2. Select Office Button  Open.
3. Navigate to the document you want to recover, right-click on it, and click Properties as shown in Figure 5.30.

Figure 5.30: Select the document to recover

4. In the Properties dialog box, click the Previous Versions tab and select the version you want to recover, as shown in Figure 5.31. Click Restore.

Figure 5.31: Select the version to recover

5. You will be prompted to confirm your choice, as shown in Figure 5.32. Click Restore.

Figure 5.32: Confirm the recovery choice

6. A message will appear stating that the recovery was successful, as shown in Figure 5.33. Click OK to close the window.

Figure 5.33: The file recovery is successful

7. Close the property sheet by clicking OK.

That's it! Recovering documents within Office is just that simple.
Recovering SharePoint Data

Although DPM offers the ability to recover SharePoint data all the way from the SharePoint farm level down to the individual list item or document, this ability tends to be restricted to users with administrative access and isn't generally available to all users. Unlike file-based recovery, you don't need to make any modifications to your Active Directory schema; however, you do need to provide a recovery SharePoint instance for DPM to use as a staging area. See Chapter 9, "Protecting SharePoint Servers," for more details.

The Bottom Line


Prepare Active Directory for end-user recovery. The first step of enabling end-user recovery is to prepare Active Directory by upgrading the directory schema with the necessary extensions. Because the potential ramifications of schema extensions are serious, you must ensure that you handle this process appropriately.

Master It Perform a survey of your Active Directory deployment with an eye toward deploying the EUR schema extensions:

What version of Windows Server are your domain controllers running? What service pack level is applied on them?

Which domain controller holds the Schema Master FSMO role? If it is Windows 2000, have schema upgrades been enabled?

Is your DPM server in the same domain and site as the Schema Master domain controller, or will you need to run the process from a separate machine?

Is your administrative account a member of the Schema Admins group? If not, which accounts are?

Deploy the VSS client and hotfix to your users. You have a variety of options for pushing the VSS EUR client and DPM VSS hotfix to your users' computers.

Master It Review the EUR VSS client and DPM VSS hotfix deployment options. Which methods can you use in your environment? Weigh the merits of each option for your environment.

Chapter 6: Protecting File Servers


Overview
The Internet has no such organization; files are made available at random locations. To search through this chaos, we need smart tools, programs that find resources for us.

Clifford Stoll

In Chapter 1, "Data Protection Concepts," we talked about the fact that data is at the core of our jobs as information technology professionals. So why is data protection one of the activities administrators neglect most? We've worked at a variety of jobs in our combined careers; it's safe to say that in nearly every one of those jobs, proper data protection got more lip service than actual priority. Oh, tape drives (or libraries) and software are purchased and installed, backup jobs are configured, and the occasional media test may even be performed. More often, though, the first real test of the data protection measures comes during an emergency. We've even worked in places where time and money are spent ensuring that server hardware has redundancy features in lieu of proper data protection measures, a practice we call magical thinking. Administrators (and their managers) who engage in magical thinking about data protection usually follow a chain of logic that looks something like this:

We've heard that an ounce of prevention is worth a pound of cure.

Backups are expensive, time-consuming, and inconvenient; they take away resources that could be better used on real work. If we can prevent our servers from having downtime, we don't have to worry about backups!

Our hardware vendors keep telling us about the models they offer that have redundancy features such as multiple power supplies and dual-channel RAID controllers and drive bays. New hardware is sexy!

If we spend more money up-front on redundant hardware configurations, our servers won't go down when we lose a hard drive or when someone kicks the power cable out of the wall.

Because we've got this fancy, sexy hardware with redundancy, our backups are even less important than usual.

In our experience, magical thinking merely ensures that the little bumps and scrapes get saved up into a much bigger catastrophe; when the inevitable disaster happens, it's much more painful to recover from (if recovery is even possible). This type of magical thinking makes an assumption that just isn't true, in the process ignoring a very important truth: data loss events cannot be prevented by hardware. The assumption that hardware can prevent all data loss is quickly and graphically disproved the first time one of your users deletes the wrong documents or a software bug corrupts an important spreadsheet. There are many ways that you can mitigate the impact of data loss events (including change management schemes, proper permissions, VSS replicas, RAID arrays, redundant hardware, and battery-backed write caches), but in the end, Murphy's Law and entropy trump vendor promises. We're not saying that these measures are bad; however, they are only a part of a complete data protection regime and are certainly not a substitute for backing up your data.

Another common flaw of many data protection strategies is that they tend to focus on "business-critical" data repositories such as database-enabled applications, Exchange mail servers, and other highly visible services. Other data repositories, such as file servers and workstations, are not seen as important and are relegated to a second-tier data protection regime. The problem with this approach is that while the first-tier sources may in fact contain the most vital data for your business, most data recovery tasks center around documents stored on file servers. The number of times we have had to restore these mission-critical payloads is very low when compared to the number of times we've had to recover files that were corrupted or deleted from file servers. We invite you to consider the following three scenarios:

Legal issues arise that require you to find and restore files from backup tapes that are several years old. These days, such discovery requests will almost certainly be focused around messaging technologies. We've seen few enterprises that take steps to ensure that they can recover historical files with the same ease that they can recover historical messaging data.

Physical catastrophes such as a fire, earthquake, or flood may result in the destruction of some of your file servers. Once the more critical services and databases have been restored, you must at some point rebuild your file servers and recover their data from backups to a new set of equipment in order to finish fully recovering your IT infrastructure.

One common scenario is a file server with a large amount of data on a RAID array that suffers a catastrophic RAID controller failure. If you have a spare controller of the same type, you can simply replace the controller with the spare and reattach the drives. If you're working with older or obsolete equipment, however, you are faced with the task of moving the array to new hardware. Under these conditions, there is a real risk that the new controller will not be able to recognize the old array, and without a good data protection strategy your data is lost.

With DPM 2006, Microsoft focused on providing replication and data protection for file server data. Clearly, they believed there was a market for providing these features, formerly offered only for SQL and messaging databases and other large mission-critical data repositories, for file servers. Prior to this first release of DPM, most data protection vendors offered only conventional backup and restore technologies for file server targets. These features are now becoming core features for any competitive offering in today's market; we can't say whether DPM is directly responsible, but we believe it is. DPM 2007, of course, offers much more than the ability to provide continuous data protection for file servers. Don't overlook your file servers, however; they are still an important part of your overall protection strategy. Because DPM is kind enough to make protecting file servers just as easy as protecting the rest of your servers, why make them second-tier protection targets? In this chapter, you will learn to:

Determine the prerequisites for installing the DPM protection agent on file servers

Configure DPM protection for standalone and clustered file servers

Recover protected file server data

Considerations
When most of us think of protecting file server data, we usually think of some type of simple copy of the data. If we're really fancy (and conversant with storage costs), we think of this data being kept in a compressed form, whether on a backup server or some sort of tape media. In its most basic form, this is exactly what file backups are; it's what we've been used to. A good backup program helps manage the multiple data copies it creates by providing a searchable index that correlates protected files with the tapes they're stored on; it will even tell you which sequence of tapes you need to use when recovering data, if the data is being recovered from across one or more sets of full, differential, or incremental backup tapes. DPM, however, allows for a much more robust approach to file server backups by providing support for a variety of features not seen in traditional backup systems. These features and benefits are outlined in Table 6.1.
Table 6.1: Advanced File Server Technologies Supported by DPM

File versioning: By using VSS snapshots and continuous data protection technologies, DPM allows you to create multiple recovery points each day. You can recover the version of your data that is closest to the point in time you need, rather than from the traditional once-per-day backup. Traditional backup software may use VSS to capture backup data, but it rarely takes advantage of VSS's versioning capabilities.

NTFS reparse points: Support for advanced NTFS volume features, such as reparse points, permits protection of volumes mounted to a folder in the same protection group as the parent volume; traditional backup programs may not handle these. With DPM, you can keep your data together while protecting it and recover it together with minimal work, regardless of how you have the file server's storage arranged.

Cluster support: Although clustering technology does not solve all problems, many businesses find that using it gives them significant tangible value in their high-availability strategy. DPM provides native support for clustered file server configurations, allowing your data protection to continue uninterrupted even when a cluster failover event happens.

Before you begin protecting your file servers with DPM, there are several areas you need to consider:

Do your file servers meet the prerequisites for DPM protection?

How do you need to prepare your file server cluster nodes?

Do you need to protect the system state of your file servers?

Let's examine these issues in more detail.

Prerequisites

Before we move on to the details of protecting and restoring your file server data with DPM, you should ensure that your file server meets the prerequisites. These requirements are shown in Table 6.2.
Table 6.2: Protected Server Software Requirements

Operating system:
Windows Server 2003 x86 Standard or Enterprise Edition with at least SP1.
Windows Server 2003 x64 Standard or Enterprise Edition with at least SP1.
Windows Server 2003 R2 x86 Standard or Enterprise Edition.
Windows Server 2003 R2 x64 Standard or Enterprise Edition.
Windows Storage Server 2003 x86 with at least SP1 (contact your storage device manufacturer for the latest service pack).
Windows Storage Server 2003 x64 with at least SP1 (contact your storage device manufacturer for the latest service pack).
Windows Storage Server 2003 R2 x86.
Windows Storage Server 2003 R2 x64.
VSS hotfix 940349 on Windows Server 2003.
At release time, DPM 2007 only supports pre-release versions of Windows Server 2008, Standard or Enterprise Edition, for either x86 or x64 platforms. You can see the latest requirements at: http://technet.microsoft.com/en-us/library/bb808827.aspx

DPM License:
The S-DPML for non-clustered file servers, or for single nodes of a clustered file server.
The E-DPML for a clustered file server configuration, to allow DPM to support automatic protection continuation in the event of cluster failover.

Volumes and partitions:
All protected volumes and partitions must be formatted with NTFS. VSS (and therefore DPM) cannot protect a volume or partition formatted in FAT or FAT32.
All protected volumes and partitions must be at least 1GB in size; this requirement is imposed by VSS.

Network:
A persistent (always-on) network connection to the DPM server providing protection.
Must be in the same Active Directory forest as the DPM server providing protection.
Must be in the same Active Directory domain as the DPM server providing protection if the domain is running Windows 2000 domain controllers.
Make special note of the restrictions on protected volumes and partitions. Both limitations are fundamental to the underlying VSS technology used to make shadow copies of the protected data.

If you have protected volumes where you are using mount points, you must ensure that your volume configuration meets DPM's requirements. Mount points are a special type of NTFS reparse point and are the only type supported by DPM; they permit you to have an NTFS volume mounted as a folder on another NTFS volume instead of as a separate drive letter. When it detects that a protected volume is using mount points, DPM will change its behavior slightly. If a mount point is included in a protection group, DPM will prompt you to specify whether you want to include the reparse target in the protection group. If you say no, then you must include the target volume separately. Note, however, that the reparse point itself is not replicated by DPM regardless of how you answer. If you suffer a complete loss of the protected volume, you must first manually recreate the reparse point and relink it with the target volume before you can recover the data.

Do note, however, that DPM does not support nested mount points. That is, if you are protecting a volume with a mount point, the target volume of that mount point cannot also contain a mount point. If you've got volumes with this kind of design, you have two choices: forgo protection for all target volumes below the second (and subsequent) mount points, or redesign how your volumes are mounted and presented to ensure that they can all be protected. The option you choose will depend both on the value of the data on the affected volumes and on the expense and inconvenience of performing any necessary reconfigurations to your data volumes and folder structures.
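When you do need to recreate a lost mount point before recovering data, the built-in mountvol tool does the relinking. A minimal sketch follows; the folder path is hypothetical and the volume GUID is a placeholder (run mountvol with no arguments to list the actual volume GUID names on your system):

```
rem List current volume GUID names and where they are mounted.
mountvol

rem Recreate the empty folder on the recovered parent volume that will
rem serve as the mount point.
mkdir D:\Data

rem Relink the folder to the target volume; substitute the real GUID
rem from the mountvol listing above.
mountvol D:\Data \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
```

Once the mount point is back in place and pointing at the correct target volume, you can proceed with the DPM recovery of the data itself.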
Why Doesn't DPM Replicate Mount Points?

If DPM is making my life easier, why doesn't it replicate the actual mount points? The answer is somewhat complicated, but we'll try to make it simple: it has to do with the nature of reparse points. Reparse points are an advanced NTFS feature, and they are used by relatively few administrators:

Exchange administrators use mount points when configuring high-end Exchange clustering. In this configuration, NTFS mount points allow you to follow the Exchange performance best practice of keeping the database and transaction log files for each storage group on separate volumes without running out of drive letters.

The Distributed File System (DFS) feature uses junction points (another type of reparse point) to allow multiple volumes and file shares spread over a number of servers to be viewed by clients as a single namespace.

Because reparse points take requests for one volume namespace and seamlessly transform them into another volume namespace, Microsoft believes that administrators should always know where reparse points are in use and which volumes they are targeting. DPM does not replicate reparse point information in order to avoid the situation where the reparse target information has changed while the local relative path of the recovered data has not. This allows you to adapt the configuration of your mount points during recovery operations where you may not always be able to use the original volumes, yet still retrieve your data.

If you want to protect data in DFS namespaces with DPM, you're in luck. Although DPM does not provide protection for file share data through the DFS namespace, it is DFS-aware:

You cannot protect file shares through their DFS namespace paths. You can, however, select them by their server-specific local paths. This is actually a good limitation; it ensures that you always know precisely which copies of data DPM is protecting.

DFS allows you to create namespaces with multiple targets for the same root or link. You can use DPM to protect multiple copies of the same data, but why would you? Microsoft recommends protecting only a single copy, to conserve storage and eliminate the possibility of synchronization issues.

If you use end-user recovery (see Chapter 5, "End-User Recovery"), users will be able to transparently access protected files in DFS hierarchies. If one of the targets of the DFS namespace is protected by DPM, any requests for previous recovery points will automatically and transparently be directed to the protected target.

As always, you should thoroughly read the DPM Planning Guide, as well as the DPM release notes, to identify any further issues or concerns that may affect the protection of your file servers.
Clustered Configurations

In our experience, it's relatively rare for organizations to run clustered file server configurations, even though file servers are some of the simplest workloads to configure for clustering. Cost is the main reason: clustering support requires the more expensive Enterprise Edition of Windows Server, and cluster-certified hardware can be prohibitively expensive. Microsoft also provides alternatives such as DFS replication, which can offer the same type of high availability for file shares that clustering provides. However, file servers can be clustered with great results, and because DPM is cluster-aware, the resulting configuration can be protected with DPM. Before we talk about how to perform protection and recovery operations on file server clusters, though, you need to understand how these clusters work.

When using the Microsoft Cluster Service (MSCS) component, you define one or more cluster resources, which describe resources that are to be shared between nodes in the cluster. These resources include attributes such as network names, IP addresses, disk resources, and application-specific resources such as file shares. When the node that hosts a cluster resource fails, either through some hardware or software fault or through a manual administrative action, the MSCS component determines which other nodes can host the appropriate resources, activates those nodes, and notifies them that they now host the relevant shared resources.

To prevent nodes from disagreeing about which resources are hosted by which node, MSCS clusters use a quorum system to determine which node currently owns any given cluster resource. A special quorum resource allows the nodes in the cluster to communicate and come to consensus about resource ownership. While a two-node cluster uses a shared physical volume as the quorum resource, a cluster with three or more nodes can use a majority node set (MNS), a new type of quorum resource introduced in Windows Server 2003. With an MNS, no shared physical device or resource serves as the quorum resource; instead, all of the nodes in the cluster communicate whether they are online and whether cluster resources need to be failed over to another node.

To protect clustered file servers with DPM, the DPM protection agent must be installed on each cluster node that may possibly own the resources to be protected. Because DPM is fully cluster-aware, this allows DPM to continue protecting a resource even if a failover happens and the resource shifts to another node in the cluster. If the resource fails over to a node that doesn't have the protection agent installed, DPM will not be able to continue protecting the data. In addition, DPM cannot protect the quorum resource.
This is usually not a problem, because if you're following best practices for Microsoft clustering, the quorum resource should be a separate resource from all other resources that contain actual data, used only by the cluster quorum process.
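The majority node set behavior boils down to a one-line rule: the cluster keeps running only while a strict majority of its configured nodes can communicate. A minimal sketch, using our own function name rather than any MSCS API:

```python
# Illustrative sketch (not MSCS or DPM code): with a majority node set
# quorum, the cluster retains quorum only while more than half of the
# configured nodes are online and able to communicate.

def cluster_has_quorum(total_nodes, online_nodes):
    """True when a strict majority of the configured nodes is online."""
    return online_nodes > total_nodes // 2
```

This is why an MNS quorum makes the most sense with three or more nodes: in a two-node MNS cluster, losing a single node (1 of 2 is not a majority) would take the whole cluster down, which is exactly the case the shared quorum volume handles.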
Clustering Types

Just so we're on the same page, let's clarify what we mean when we refer to a clustered configuration. As it turns out, when we talk about clustering in Windows servers, there are three categories of behavior we could mean: failover clusters (also called server clusters), component load balancing clusters, and network load-balancing clusters. Each type of clustering can be implemented using native Windows technologies; all three offer their own benefits and disadvantages and are intended to solve different problems:

Failover clusters are what most people think of when they talk about clustering. These clusters are a collection of two or more servers (usually up to a maximum of eight, depending on the workload) that share a common set of resources, such as storage, in order to ensure data integrity and provide high availability. There are two types of nodes in a failover cluster: active nodes, which provide service to incoming connections, and passive nodes, which stand by to take over providing service when an active node goes offline. Using these node types, you can create either active/passive clusters, in which one or more passive nodes act as spares for some (usually greater) number of active nodes, or active/active clusters, in which active nodes have enough spare capacity to take over operations from other failed nodes. To create a failover cluster, you need cluster service software, such as the Microsoft Cluster Service (MSCS) component (included in the Enterprise Edition of Windows Server) or a third-party clustering service. You also usually need some sort of shared storage, such as a Fibre Channel or iSCSI SAN.

Component load-balancing clusters are groups of servers designed to have key software components work together, using the COM+ Services included in Windows, to provide high availability and scalability for application systems that use transactions. In plain English, these clusters allow you to deploy a farm of servers that handle the middle tier of multi-tier applications. They rely on services provided by the Windows operating system but require additional application-specific software.

Network load-balancing clusters are groups of servers that provide load balancing and some limited failover capability for front-end services. These clusters are ideal for web servers and other relatively stateless services, as the individual server nodes in the cluster don't share state information about active connections the way that failover clusters (and cluster-aware applications such as Exchange and SQL Server) do. In Windows, you use the Network Load Balancing component to configure this functionality, although you can use third-party software or even a hardware appliance to provide high-end load-balancing services.

When we talk about clusters in this book, it's safe to assume that we're referring to failover clusters using MSCS. If we mean something different, we'll call it to your attention. Microsoft offers more information about the different types of clustering at the Overview of Windows Clustering Technologies TechNet website: http://technet2.microsoft.com/windowsserver/en/library/c35dd48b-4fbc-4eee-8e5c2a9a35cf63b21033.mspx?mfr=true.

System State

DPM includes the ability to protect and recover the local system state for any protected server. Table 6.3 includes a listing of the types of data included in the system state for different types of servers that are likely to act as file servers. Note that the roles listed in Table 6.3 are cumulative; for example, a domain controller is also considered a member server, so its system state data includes the data listed for both roles.
Table 6.3: Data Contained in the System State

Server Role                                  System State Data
Member server                                Boot files, the COM+ class registration database, and registry hives.
Domain controller                            Active Directory (NTDS) files, the system volume (SYSVOL), and other applicable components.
Certificate Services Certificate Authority   All Certificate Services data and other applicable components.
Cluster node                                 Cluster Service metadata and other applicable components.

In many organizations, domain controllers pull double or even triple duty: domain controller, infrastructure services such as DNS and DHCP, and file server. You can protect volumes and file shares on domain controllers just as you would on a member server, but infrastructure service data may or may not be directly protected. The general rule is this: if the data to be protected is in Active Directory (such as Active Directory-integrated DNS zone data), system state protection will protect it, but to recover it, you'll have to recover the entire system state. On the other hand, for services that store their data in separate files, you can configure protection for the individual files and even recover them independently, but you have to identify and restore the files on your own, increasing the risk of corrupting something.

So here's the real question: when do you use DPM to protect system state? From our experience, we recommend that you do it all the time. System state is insanely easy to protect with DPM; it takes up comparatively little room on most file servers even before you factor in DPM's space-saving technologies, and you never know when you're going to need it. If you're using the advanced protection and service continuation options available when using DPM in conjunction with Virtual Machine Manager, keeping the system state protected is an essential part of your recovery strategy. While the P2V capabilities of VMM are sufficient to protect the base operating system and program files, you'll need the system state to restore the virtual machine to the last known state. For this reason, you should protect a server's system state in the same protection group in which you protect the rest of its data; this ensures that the entire server can be consistently restored to a known point in time.
Protected Data Sources

You need to decide which file server resources you want to protect with DPM. While most people tend to think of file server data in terms of disk volumes or file shares, DPM allows you to specify the following types of items as separate data sources to be included in protection groups:

Entire disk volumes. DPM doesn't care whether the underlying volume is an entire disk or just a partition, nor does it care whether volumes are mounted as drive letters or as folders using NTFS mount points. When you select a volume, all files and folders on that volume are selected (with a few exceptions discussed later).

Individual folders on a disk volume. As with a disk volume, when you select a folder for protection, all files and folders within the selected folder are also selected for protection.

SMB/CIFS file shares. Instead of defining protection based on volumes or folders, you can protect named shares you have defined on the server.

A data source can be in only a single protection group; once you select a resource in one protection group, it automatically becomes unavailable for selection in any existing or new protection group. Note, however, that you don't have to select all of the data sources on a file server in the same protection group; you can define multiple protection groups to protect different data sources on the same file server or cluster, each with its own protection policies. The caveat is that all of the protection groups must be on the same DPM server; if you have multiple DPM servers in your organization, you can't add data sources from a single file server or cluster into protection groups on multiple DPM servers.

When you select a resource to protect, any child items within it are automatically selected. This makes it easy to protect an entire volume or folder hierarchy; you only have to select the top-level item. If you want to protect only some of the data at or beneath the file hierarchy of selected resources, you can define file exclusions. The way exclusions work is simple: unselect the child items that you don't want DPM to protect. The parent items remain selected and will be protected by the DPM agent, but the items whose checkboxes you've specifically cleared will not be synchronized by the agent. Note, however, that this means you can't restore the data in them from DPM; and because a resource can be a member of only a single protection group, you can't protect it with DPM in any other way.

Certain types of data are automatically excluded from DPM protection:

NTFS hard links are additional directory entries for the same underlying file, so multiple paths within the folder hierarchy on a given volume point to the same physical file. While such links are common on Unix file servers, NTFS provides support for them only to enable POSIX applications that rely on this functionality; very few (if any) native Windows applications require them. If hard links are present in a resource you want to protect, DPM will alert you, and the resource will not be protected.

NTFS supports multiple types of reparse points; DPM supports none of them except mount points, as discussed earlier in this chapter. Reparse points are used to provide a variety of advanced functionality, such as Hierarchical Storage Management (where certain data appears to be on the local file system but is really located on some alternate form of storage such as tape) and DFS junctions. If any reparse points other than mount points are present in a resource you want to protect, DPM will alert you, and the resource will not be protected.

DPM will not protect a Recycle Bin system folder, a System Volume Information system folder, or Windows paging files. If any of these folders or files are present in the selected resource, they will be silently skipped; the rest of the resource will continue to be protected.

Volumes that are not formatted with NTFS will be skipped by DPM; in most cases, they won't even be available for selection. The same is true of any shares that reside on non-NTFS volumes; DPM will not list them as available data sources for the protection group.
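These exclusion rules can be summarized in a small classifier. This is purely our own illustrative model: DPM's real agent operates at the VSS/volume level, not by walking paths, and the folder names below are examples, not an exhaustive list:

```python
import os

# Sketch of the automatic-exclusion rules described above (our model, not
# DPM's implementation). Names DPM silently skips rather than protecting:
SKIPPED_NAMES = {"pagefile.sys", "RECYCLER", "System Volume Information"}

def classify(path):
    """Return 'skip' for silently skipped system files/folders, 'alert'
    for files DPM refuses to protect (hard links), else 'protect'."""
    name = os.path.basename(path)
    if name in SKIPPED_NAMES:
        return "skip"
    st = os.stat(path)
    # st_nlink > 1 on a regular file means more than one directory entry
    # points at the same physical file -- a hard link.
    if not os.path.isdir(path) and st.st_nlink > 1:
        return "alert"
    return "protect"
```

Note the asymmetry the chapter describes: paging files and system folders are skipped quietly while the rest of the resource stays protected, but a hard link causes an alert and blocks protection of the resource.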

Backup Procedures
Now that we've helped you assess your file servers and prepare them for DPM protection, let's move on to actually protecting file server data in DPM. The procedures for backing up stand-alone file servers and clustered file server configurations differ slightly, so we've divided them into separate sections.

Standalone Configurations
Protecting file server data is one of the simplest protection actions in DPM. In the first version of DPM (DPM 2006), file server resources were the only supported protection target, and file server support continues to be a key feature in DPM 2007.

A Note for DPM 2006 Users

If you're already familiar with DPM 2006, then nothing in this section should be much of a surprise, other than DPM's integrated support for long-term protection via tape devices. DPM 2006 lacked built-in tape archiving and required you to either purchase a DPM-aware backup solution or muck about building custom scripts using arcane command-line utilities, including the familiar DPMBackup.exe utility. Even though we like scripting, we didn't find this to be any fun at all, and we're sure you'll agree that DPM 2007 is a huge improvement in this area. However, if you miss your scripting fun with the DPMBackup.exe utility, don't despair; if you're backing up DPM using a third-party backup utility, you still need to use DPMBackup.exe to create dump files of the DPM databases and mountable replicas of the data from the protected servers. See Chapter 12, "Advanced DPM," for more details.

There are two basic steps to protecting standalone file servers with DPM:

1. Install the protection agent on the protected servers.
2. Configure protection by assigning resources to a protection group.

Let's start by reviewing how to install the protection agent on your file servers.

INSTALLING THE PROTECTION AGENT

We already covered the general steps for installing the DPM protection agent in Chapter 2, so if you've already installed the agent on your file servers, you're good to go. If you haven't, here's a recap:

1. Open the DPM Administrator console, navigate to the Management tab, and select the Agents subtab.
2. Click Install in the Actions pane.
3. From the left pane, select the servers you want to protect, as shown in Figure 6.1, and click Add.

Figure 6.1: Choosing servers for agent install

4. When all of the servers you want to protect are in the right pane, click Next.
5. Enter the credentials for a user with administrative rights on the selected servers, as shown in Figure 6.2, and click Next.

Figure 6.2: Enter credentials for agent install

6. Once the agent install has completed, you will not be able to begin protecting your servers until they have been restarted. Choose whether you want the servers to reboot now or later, as shown in Figure 6.3, and click Next.

Figure 6.3: Choose restart method

7. A Summary screen will appear, as shown in Figure 6.4, showing the choices you have made. Click Install to proceed with the agent install, or click Back to change your options.

Figure 6.4: Protection agent install summary

8. The final screen will display the agent install progress. You can click Close; the current status and progress will be displayed in the Agents subtab.

Once a protected file server reboots and DPM verifies the connection with the agent, you will see the list of data sources that DPM can protect. Remember that while you need to install the agent on all nodes in a file server cluster in order to get full protection, as soon as you reboot the first node in the cluster, you will see the resources available on it. You may need to install the agent and reboot the cluster nodes in multiple sessions to prevent disruption of services for your users.

PROTECTING FILE SERVER RESOURCES

You can add file server resources to an existing protection group or create a new protection group. The following process assumes that you're creating a new protection group, but if you

want to add file server resources to an existing protection group, all you need to do is open the protection group and select the resources you want to add. Easy, no?

To create a new protection group for your file server resources:

1. Open the DPM Administrator console, navigate to the Protection tab, and click Create Protection Group in the Actions pane.
2. In the Welcome screen shown in Figure 6.5, click Next.

Figure 6.5: The Create Protection Group Welcome screen

3. In the Select New Group Members screen, expand the file servers you want to protect, and select the data sources on those servers to include in the protection group by checking the boxes next to the data sources, as shown in Figure 6.6.

Figure 6.6: Selecting data sources to protect

4. When you have selected all of the data sources for the protection group, click Next.
5. Choose whether this group will use short-term protection and the associated method, as well as whether to use long-term protection (if you have a tape drive or library attached to your DPM server), as shown in Figure 6.7.

Figure 6.7: Selecting the protection method

6. Once you have chosen the protection methods, click Next.
7. Unless you have chosen not to provide short-term protection for your protection group, the next screen is where you decide how long short-term data is retained in DPM, as well as the synchronization frequency and the recovery point schedule, as shown in Figure 6.8.

Figure 6.8: Short-term recovery goals

8. To change the schedule for either the recovery points or the express full backup, click the Modify button next to either. Here, you can change the frequency by adding times and checking days of the week for the selected operation to occur, as shown in Figure 6.9. When you are finished, click OK.

Figure 6.9: Changing settings for recovery points

9. Back in the Short-Term Goals screen, click Next.
10. In the Review Disk Allocation screen, you'll see that DPM has already recommended a default allocation from the storage pool based on the amount of data being protected as well as the short-term goals you specified.
11. To change the amount of storage pool space allocated for your protection group, click Modify. Here you can change the amount of space allocated for replicas and recovery points (Figure 6.10) or, on the Protected Server tab, the space used on the protected server for the change journal (Figure 6.11).

Figure 6.10: Modifying the allocation for replicas and recovery points

Figure 6.11: Modifying the change journal space on the protected server

12. Back in the Review Disk Allocation screen, click Next.
13. Unless you have chosen not to provide long-term protection for your protection group, the next screen is where you configure DPM's long-term tape retention strategy. To change the default weekly and monthly backup schedules, go to step 14; to change the day the weekly and monthly backups are performed, go to step 15. If you choose to accept the defaults, go to step 16.
14. To change the long-term protection objectives, click Customize. You can establish a multiple-tier strategy in units of days, weeks, months, or years. You can also specify what happens if more than one of the scheduled backups happens at the same time, as shown in Figure 6.12. When you have finished making your selections, click OK.

Figure 6.12: Customizing long-term protection goals

15. To change the days on which long-term backups occur, click Modify. Select the appropriate day and time for each backup, as shown in Figure 6.13. When you have finished making your changes, click OK.

Figure 6.13: Modifying the times for long-term backups

16. Click Next.
17. In the Select Library And Tape Details screen, choose the library to use, the number of drives from the library, integrity checking, and compression and encryption options (see Figure 6.14). When you have chosen the appropriate settings, click Next.

Figure 6.14: The Library And Tape Details screen

18. In the Choose Replica Creation Method screen, select the method by which replicas will be created, as well as when the first one should be created, as shown in Figure 6.15. Click Next.

Figure 6.15: The Replica Creation Method screen

19. In the Summary screen shown in Figure 6.16, you will be presented with a summary of all of the settings you have selected for the protection group. If everything looks good, click Create Group; otherwise, click Back to make any necessary changes.

Figure 6.16: The Create Protection Group Summary screen

That's it! You're now protecting your standalone file servers with DPM. Now, let's see how to do the same thing with a clustered file server.
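One detail from the long-term goals step deserves a worked example: what happens when weekly and monthly tape backups land on the same day. The sketch below assumes the common "longest retention wins" choice you might make in the Customize dialog; it is our own model of that policy, not DPM's hard-coded behavior:

```python
import datetime

# Hedged sketch: when weekly and monthly long-term backups coincide, run
# only the tier with the longest retention. The default weekday and
# day-of-month values here are example assumptions, not DPM defaults.

def backup_tier(day, weekly_weekday=5, monthly_day=1):
    """Return which long-term tier runs on a given date, or None.
    weekly_weekday: 0=Monday .. 6=Sunday; monthly_day: day of the month."""
    monthly = day.day == monthly_day
    weekly = day.weekday() == weekly_weekday
    if monthly:
        return "monthly"  # on a collision, the longer-retention tier wins
    if weekly:
        return "weekly"
    return None
```

For example, with Saturday weeklies and first-of-the-month monthlies, Saturday, March 1, 2008 qualifies for both tiers, so only the monthly backup runs; the following Saturday gets an ordinary weekly.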

Clustered Configurations
As discussed in the beginning of this chapter, DPM is cluster-aware. This built-in clustering support makes it very easy to use DPM to protect your clustered resources regardless of how they're configured or when failover occurs, and it makes it easier to transparently recover protected data to the cluster no matter which node is currently the owner of the cluster resource. In our example configuration, we have two file servers: node1 and node2; these servers are members of a two-node active/passive cluster. On this cluster, we've configured a shared network name resource cluster.contoso.dpm as well as a clustered file share resource.

Remember that the default cluster group in an MSCS cluster contains the quorum resource, which in a two-node cluster is a file share quorum. Quorum resources cannot be protected by DPM. For this reason, we echo Microsoft's recommendation that you keep the quorum resource in its default location and place all other cluster resources you define in a separate cluster group. Once you install the DPM protection agent on cluster nodes, the DPM administration console will expose the fact that the servers are cluster members. By looking in the Management tab, you'll see additional details, such as cluster groups, as shown in Figure 6.17.

Figure 6.17: Cluster nodes shown in the Management tab

INSTALLING THE PROTECTION AGENT

Installing the agent on a cluster isn't that different, but we've repeated the process here to be complete (and to keep you from having to page all over the place):

1. Open the DPM Administrator console, navigate to the Management tab, and select the Agents subtab.
2. Click Install in the Actions pane.
3. From the left pane, select the servers you want to protect, as shown in Figure 6.1, and click Add.
4. When all of the servers you want to protect are in the right pane, click Next.
5. Enter the credentials for a user with administrative rights on the selected servers, as shown in Figure 6.2, and click Next.
6. Once the agent install has completed, you will not be able to begin protecting your servers until they have been restarted. Choose whether you want the servers to reboot now or later, as shown in Figure 6.3, and click Next.
7. A Summary screen will appear, as shown in Figure 6.4, showing the choices you have made. Click Install to proceed with the agent install, or click Back to change your options.
8. The final screen will display the agent install progress. You can click Close, and the current status and progress will be displayed in the Agents subtab.

Once a protected file server reboots and DPM verifies the connection with the agent, you will see the list of data sources that DPM can protect. Remember that while you need to install the agent on all nodes in a file server cluster in order to get full protection, as soon as you reboot the first node in the cluster you will see the resources available on it. You may need to install the agent and reboot the cluster nodes in multiple sessions to prevent disruption of services for your users.

PROTECTING CLUSTERED FILE SERVER RESOURCES

You can add file server resources to an existing protection group or create a new protection group.
The following process assumes that you're creating a new protection group, but again, you can choose to add data sources to an existing protection group.

Use the following steps to create a new protection group for clustered file servers:

1. Open the DPM Administrator console, navigate to the Protection tab, and click Create Protection Group in the Actions pane.
2. In the Welcome screen, click Next.
3. In the Select New Group Members screen, notice that the cluster shows up for protection. Expand the cluster to reveal the available cluster groups, and expand the appropriate group to reveal the data sources that may be protected, as shown in Figure 6.18. Select the data sources to include in the protection group.

Figure 6.18: Selecting clustered data sources to protect

4. When you have selected all of the data sources for the protection group, click Next.
5. Choose whether this group will use short-term protection and the associated method, as well as whether to use long-term protection (if you have a tape drive or library attached to your DPM server), as shown in Figure 6.7.
6. Once you have chosen the protection methods, click Next.
7. Unless you have chosen not to provide short-term protection for your protection group, the next screen is where you decide how long short-term data is retained in DPM, as well as the synchronization frequency and the recovery point schedule, as shown in Figure 6.8.
8. To change the schedule for either the recovery points or the express full backup, click the Modify button next to either. Here, you can change the frequency by adding times and checking days of the week for the selected operation to occur, as shown in Figure 6.9. When you are finished, click OK.
9. Back in the Short-Term Goals screen, click Next.
10. In the Review Disk Allocation screen, you'll see that DPM has already recommended a default allocation from the storage pool based on the amount of data being protected as well as the short-term goals you specified.
11. To change the amount of storage pool space allocated for your protection group, click Modify. Here you can change the amount of space allocated for replicas and recovery points (Figure 6.10) or, on the Protected Server tab, the space used on the protected server for the change journal (Figure 6.11).
12. Back in the Review Disk Allocation screen, click Next.
13. Unless you have chosen not to provide long-term protection for your protection group, the next screen is where you configure DPM's long-term tape retention strategy. To change the default weekly and monthly backup schedules, go to step 14; to change which day the weekly and monthly backups are performed, go to step 15. If you choose to accept the defaults, go to step 16.
14. To change the long-term protection objectives, click Customize. You can establish a multiple-tier strategy in units of days, weeks, months, or years. You can also specify what happens if more than one of the scheduled backups happens at the same time, as shown in Figure 6.12. When you have finished making your selections, click OK.
15. To change the days on which long-term backups occur, click Modify. Select the appropriate day and time for each backup, as shown in Figure 6.13. When you have finished making your changes, click OK.
16. Click Next.
17. In the Select Library And Tape Details screen, choose the library to use, the number of drives from the library, integrity checking, and compression and encryption options (see Figure 6.14). When you have chosen the appropriate settings, click Next.
18. In the Choose Replica Creation Method screen shown in Figure 6.15, select the method by which replicas will be created, as well as when the first one should be created. Click Next.
19. In the Summary screen shown in Figure 6.19, you will be presented with a summary of all of the settings you have selected for the protection group. If everything looks good, click Create Group; otherwise, click Back to make any necessary changes.

Figure 6.19: The Summary screen

As you can see, it's just as easy to protect clustered file servers with DPM as it is to protect standalone file servers. The only real difference is that DPM detects the clustering service when it is installed and automatically extends the selection tree with the cluster configuration accordingly. Now that we've gotten the protection tasks out of the way, let's move on to the fun part: restoring the data.
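The retention range and recovery point schedule chosen in steps 7 and 8 interact: for file data, DPM can keep at most 64 recovery points per protection group member, a ceiling inherited from the VSS shadow copy limit. A quick back-of-the-envelope check, sketched in Python (the function names are ours, not DPM's):

```python
# Sketch (not part of DPM): check a proposed short-term schedule against
# DPM's documented ceiling of 64 recovery points for file data, which
# comes from the underlying VSS shadow copy limit.

MAX_FILE_RECOVERY_POINTS = 64

def recovery_points_needed(retention_days: int, points_per_day: int) -> int:
    """Total recovery points a schedule would accumulate."""
    return retention_days * points_per_day

def schedule_is_valid(retention_days: int, points_per_day: int) -> bool:
    return recovery_points_needed(retention_days, points_per_day) <= MAX_FILE_RECOVERY_POINTS

# A 10-day retention range with 3 recovery points per day fits (30 <= 64);
# 30 days at 3 per day does not (90 > 64).
print(schedule_is_valid(10, 3))  # True
print(schedule_is_valid(30, 3))  # False
```

If a schedule fails this check, either shorten the retention range or reduce the number of recovery points per day until the product fits under the cap.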

Restore Procedures
Protecting resources in DPM is easy and straightforward; other than configuring the details of your protection schedules and recovery points, it's a linear process without a lot of choices or branches. Data recovery is where things get interesting.

When you are recovering file server data from DPM, there are no real differences between standalone and clustered configurations. You do, however, have several options for where you restore the data to:

You can recover the data to the original location. If the original location is a clustered file server, the data will be recovered to the node that currently owns the data source.

You can also recover the data to an alternative location, such as a new volume or folder on the original server, or even another file server. The recovery server must also have the DPM agent installed.

You can choose to write a copy of the data to tape. While this may not initially seem useful, it can be valuable in many electronic discovery or regulatory compliance scenarios.

Let's examine these scenarios in detail.


Recovery to the Original Location

This is a straightforward option: you tell DPM which data source you want to recover, tell it how you want to handle conflicts between the recovered data and any data that may currently be in place, and pull the trigger. DPM takes care of the rest. To restore data to the original location, use the following procedure.

1. Open the DPM Administrator console and navigate to the Recovery tab.
2. In the Protected Data pane, expand the available data sources and select the data source that you want to recover.
3. Select the desired recovery point from the list provided, as shown in Figure 6.20, and click Recover.

Figure 6.20: Selecting a recovery point

4. On the Review Recovery Selection screen, ensure that you have chosen the correct items to recover, as shown in Figure 6.21. When you are satisfied with your selections, click Next.

Figure 6.21: Review recovery selection

5. On the Select Recovery Type screen shown in Figure 6.22, select Recover To The Original Location, Recover To An Alternate Location, or Copy To Tape.

Figure 6.22: Select the recovery type

6. On the Specify Recovery Options screen shown in Figure 6.23, choose your desired recovery options:
   o Existing Version Recovery Behavior: Select Create Copy to make a copy of existing data when the recovered data conflicts with existing data, select Skip to not restore data when it conflicts with existing data, or select Overwrite to replace the existing data with the recovered data.
   o Restore Security: You can specify whether to keep the security settings as they currently exist at the destination, or apply the settings from the recovery point (if they differ).
   o You can enable email notifications and specify one or more recipients.

Figure 6.23: The Specify Recovery Options screen

7. To adjust the network bandwidth used by the restore process, click Modify. In the new window, specify a maximum usable amount of bandwidth for work hours and nonwork hours, as shown in Figure 6.24, and then click OK.

Figure 6.24: Modifying network bandwidth throttling 8. Back on the Specify Recovery Options screen, click Next. 9. On the Summary screen shown in Figure 6.25, review your choices; if you need to make any corrections, click Back to move back through the configuration options. When you are satisfied, click Recover.

Figure 6.25: The Summary screen

10. DPM will display a status window for the recovery operation as shown in Figure 6.26. You may close the status window and track the progress of the operation in the DPM Administrator console.

Figure 6.26: Recovery progress in the Recovery Status window

When the recovery operation completes, the version of the data captured in the recovery point will be restored to its original location on the protected file server. Depending on the version recovery behavior you selected, you may have a mixture of older data and current data.
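The three Existing Version Recovery Behavior choices boil down to simple filesystem logic. The sketch below is our own illustration, not DPM code; in particular, the "_copy" naming convention for the preserved file is invented for the example:

```python
import shutil
from pathlib import Path
from typing import Optional

# Illustration only: model DPM's Create Copy / Skip / Overwrite conflict
# handling for a single restored file. Per the book's description, Create
# Copy preserves the existing data before the restore writes over it.

def restore_file(source: Path, destination: Path, behavior: str) -> Optional[Path]:
    """Return the path the restored data ended up at, or None if skipped."""
    if not destination.exists():
        shutil.copy2(source, destination)   # no conflict: plain restore
        return destination
    if behavior == "skip":
        return None                         # leave the existing data alone
    if behavior == "overwrite":
        shutil.copy2(source, destination)   # replace the existing data
        return destination
    if behavior == "create_copy":
        # Preserve the existing version under a new (invented) name,
        # then restore the recovered data to the original path.
        backup = destination.with_name(destination.stem + "_copy" + destination.suffix)
        shutil.copy2(destination, backup)
        shutil.copy2(source, destination)
        return destination
    raise ValueError(f"unknown behavior: {behavior}")
```

The point of the model is the decision table, not the mechanics: Skip favors the live data, Overwrite favors the recovery point, and Create Copy keeps both so you can reconcile them later.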
Recovery to an Alternate Location

This option allows you to recover an older version of your data, or even create a second copy of the current recovery point, without overwriting or otherwise modifying the data on the protected server. To restore data to an alternative location, use the following procedure.

1. Open the DPM Administrator console and navigate to the Recovery tab.
2. In the Protected Data pane, expand the available data sources and select the data source that you want to recover.
3. Select the desired recovery point from the list provided, as shown in Figure 6.20, and click Recover.
4. On the Review Recovery Selection screen, ensure that you have chosen the correct items to recover, as shown in Figure 6.21. When you are satisfied with your selections, click Next.
5. On the Select Recovery Type screen shown in Figure 6.22, select the Recover To An Alternate Location option and click Browse.
6. Expand the server list as shown in Figure 6.27, select the location for your recovery, and click OK.

Figure 6.27: Selecting an alternative recovery location

7. On the Select Recovery Type screen, the recovery path you have chosen will appear. Click Next.
8. On the Specify Recovery Options screen shown in Figure 6.23, choose your desired recovery options:
   o Existing Version Recovery Behavior: Select Create Copy to make a copy of existing data when the recovered data conflicts with existing data, select Skip to not restore data when it conflicts with existing data, or select Overwrite to replace the existing data with the recovered data.
   o Restore Security: You can specify whether to keep the security settings as they currently exist at the destination, or apply the settings from the recovery point (if they differ).
   o You can enable email notifications and specify one or more recipients.
9. To adjust the network bandwidth used by the restore process, click Modify. In the new window, specify a maximum usable amount of bandwidth for work hours and nonwork hours as shown in Figure 6.24, then click OK.
10. On the Summary screen shown in Figure 6.28, review your choices; if you need to make any corrections, click Back to move back through the configuration options. When you are satisfied, click Recover.

Figure 6.28: Summary screen for recovering to an alternative location

11. DPM will display a status window for the recovery operation as shown in Figure 6.26. You may close the status window and track the progress of the operation in the DPM Administrator console.

When the recovery operation completes, the version of the data captured in the recovery point will be restored to the alternative location you selected.
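The work-hours/nonwork-hours throttling shown in Figure 6.24 is simply a time-windowed cap on transfer rate. A sketch of the selection logic follows; the 9:00-18:00 window and the rates are example values of our own, not DPM defaults:

```python
from datetime import time

# Illustration only: pick the applicable bandwidth cap (in Mbps) based on
# whether the current time falls inside the configured work-hours window.

WORK_START = time(9, 0)
WORK_END = time(18, 0)
WORK_HOURS_MBPS = 10       # tight cap while users are active
NONWORK_HOURS_MBPS = 100   # open it up overnight

def current_throttle(now: time) -> int:
    """Return the bandwidth cap that applies at the given time of day."""
    if WORK_START <= now < WORK_END:
        return WORK_HOURS_MBPS
    return NONWORK_HOURS_MBPS

print(current_throttle(time(14, 30)))  # 10  (work hours)
print(current_throttle(time(2, 0)))    # 100 (overnight)
```

In practice you would size the nonwork-hours cap so that a full-volume restore can complete before users return in the morning.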
Copy to Tape

With this option, you can create an on-tape copy of your data source from any selected recovery point. Seems somewhat pointless, right? After all, your data is already backed up on disk (or on tape, if it's been long enough); why would you need another copy on tape? Even if you can't think of a reason now, don't discount the option. There are many administrators who need the ability to create tape copies of their data, usually to comply with electronic discovery queries or satisfy audit requests in regulatory compliance scenarios. Note that you don't have the ability to filter the data according to arbitrary criteria; you just get a straight dump of your selected data source from the selected recovery point. To restore data to tape, use the following procedure.

1. Open the DPM Administrator console and navigate to the Recovery tab.

2. In the Protected Data pane, expand the available data sources and select the data source you want to recover.
3. Select the desired recovery point from the list provided, as shown in Figure 6.20, and click Recover.
4. On the Review Recovery Selection screen, ensure that you have chosen the correct items to recover, as shown in Figure 6.21. When you are satisfied with your selections, click Next.
5. On the Select Recovery Type screen, select the Copy To Tape option and click Next.
6. On the Specify Library screen shown in Figure 6.29, select the appropriate primary and copy tape libraries. Provide a label for your tape and choose any desired compression and encryption options. Click Next.

Figure 6.29: The Specify Library screen

7. On the Specify Recovery Options screen shown in Figure 6.30, you can enable email notifications and specify one or more recipients. Click Next.

Figure 6.30: The Specify Recovery Options screen

8. On the Summary screen shown in Figure 6.31, review your choices; if you need to make any corrections, click Back to move back through the configuration options. When you are satisfied, click Recover.

Figure 6.31: Copy to tape summary

9. DPM will display a status window for the recovery operation as shown in Figure 6.26. You may close the status window and track the progress of the operation in the DPM Administrator console.

When the recovery operation completes, you'll have a second copy of the data on tape. Go wild!

The Bottom Line


Determine the prerequisites for installing the DPM protection agent on file servers. You need to ensure that your protected file servers are running the necessary versions of the Windows operating system and service packs and are configured according to DPM's requirements.

Master It
1. Perform a survey of your file servers to ensure that they are compatible with the DPM protection agent:
   o What version of Windows Server and service pack are you running on the file servers you want to protect?
   o Do your volume, partition, and share configurations meet the DPM requirements?
2. If your file servers are part of a Distributed File System (DFS) namespace, how should you effectively protect the file server data with DPM?
3. Given a file server that has no other roles, what data will DPM capture as part of the system state? How does this differ from a cluster node system state?

Configure DPM protection for standalone and clustered file servers. Once the DPM agent is deployed, you must configure protection groups and select data sources to protect.

Master It
1. What file server data sources can DPM protect?
2. Can DPM handle NTFS reparse points, and if so, is any special handling required? How does DPM handle nested mount points?
3. What DPM licenses do you need to protect standalone servers?
4. What DPM licenses do you need to protect clustered servers?

Recover protected file server data. Protecting the data is only half of the job; you also need to be able to recover it.

Master It
1. To where can you recover file server data?
2. How do you handle conflicts between current versions of data and earlier versions you are restoring?
3. What are the differences between recovering data to a standalone server and a cluster?

Chapter 7: Protecting Exchange Servers


Overview
Now that Exchange has deleted item recovery and deleted mailbox recovery, there's no need for brick backups, in my opinion. Not that there ever was such a need, but now it's even more of a waste of time and tape.
Ed Crowley, Exchange MVP

If anyone has ever written a messaging system that has provoked as much passion, commentary, loathing, and advocacy as Microsoft Exchange Server in all of its versions, we've yet to see it. Every messaging system has its fans and detractors, of course; pick any messaging system you can think of and you'll find people who can spend hours telling you why it's the best system in the world, right along with the people who will spend the same amount of time explaining in loving detail exactly why this system is evil and yea verily doth stink.

Many people don't understand how much time and energy goes into making sure that their messaging data can be properly backed up and restored. These people usually aren't messaging administrators; they're end users who have never really stopped to think about how ubiquitous email has become in the average modern business day, or for that matter, in our personal lives. Email has long since passed the phase of being merely a convenience and has become a necessity in most shops. It's now considered a utility service, right up there with electricity. We've heard many stories from administrators about network outages where people were perfectly content with losing the ability to browse the Internet on their web browsers as long as they still had access to their mailboxes. Nothing changes a mail administrator's priorities like an executive who can't send or receive email.

Whether you love it, hate it, or merely tolerate it, there's no denying that Exchange is here to stay. If you're reading this chapter, you've probably got an Exchange organization; we'll even go out on a limb and guess that you're trying to figure out the best way to protect your Exchange mailboxes.
We've spent enough time with Exchange to feel pretty confident about saying that DPM is one of the best (if not the best) ways to get the job done; this is in part because it makes restoring data so much easier. Backing up Exchange actually isn't that hard, especially if your traditional backup solution is Exchange-aware. Restoring Exchange data can be a completely different story. It's not so much that the actual act of recovery is difficult; it's that making sure your Exchange mailbox databases and transaction logs are in a consistent state can be an involved and complicated task. Starting with Exchange Server 2003, Microsoft created the Recovery Storage Group (RSG) feature to help make recovery simpler. Over the years, Exchange administrators and third-party vendors developed a variety of backup and restore strategies to gain a handle on protecting Exchange mailbox data. Some of the more notable include:

Streaming or online backups. The capacity to perform online backups is provided by the Exchange APIs. By using this common interface, backup vendors can connect to an Exchange storage group or mailbox and copy the data without having to take it offline first. Your users stay connected, and you still get protection. If you don't want to use a third-party backup solution, then the built-in Windows Backup (also known as NTBackup) application included with Windows Server will do the job for you. With NTBackup, you can copy mailbox data to tape or to files on disk; you can even schedule backup jobs using the Windows Scheduler.

Offline backups. When Exchange brings a storage group online and mounts the mailbox databases within it, it locks all of the corresponding database files and transaction log files. If you don't have an Exchange-aware backup solution, or don't want to go to the trouble of setting up NTBackup to perform an online backup to disk (which your enterprise backup solution can then pick up), offline backups are the common alternative. The problem with them, though, is that they require scripting to unmount the databases and bring the storage groups offline, not to mention the fact that your users can't access their mailboxes while you're backing up the system. We consider offline backups something to be avoided.

VSS-aware online backups. With Windows Server 2003 and Exchange Server 2003, Microsoft added support for VSS. From the administrator and user point of view, this option just looks like a typical streaming backup; the databases are online during the backup operation. Under the hood, though, it's very different, and it took vendors a while to support this option. Going forward, it's likely that Microsoft will be pushing the VSS backup strategy more heavily, over streaming backups. The DPM protection agent utilizes the VSS interfaces.

Brick-level backups. This type of backup involves attaching to a mailbox using MAPI and backing up the data, usually to some sort of PST. This type of backup is usually independently rediscovered the first time a fledgling Exchange administrator runs into difficulties when attempting to restore mailbox data for a critical person in their organization and decides that having mailbox-level backups will keep them from getting into trouble again. Most experienced Exchange administrators, however, loathe brick-level backups because they are extremely slow and bulky when compared to other Exchange data-protection methods. Using DPM means never having to think about (or use) brick-level backups again.

Clustering. We don't know why, but clustering has developed an unearned reputation for protecting data. This is a false notion; typical failover clustering that uses a shared copy of data does not give you any extra protection from data loss or corruption. If you put in a mailbox cluster because someone told you it would remove the need for backing up and restoring Exchange data, you have a big problem just waiting to happen. What traditional shared storage clustering does do is protect you against hardware failure and keep your healthy mailbox databases online during events that would otherwise produce an outage. We're not saying clusters are bad; we're just saying that you need to have a clear idea of what they actually do for you.

Database replication. Like clustering, replication allows you to run mailbox databases on multiple Exchange servers. Unlike clustering, you don't have to have high-end hardware to do it; the actual data is replicated to a second Exchange server, usually through a means called log shipping. This process usually helps provide some degree of protection from data corruption, because each server in the replication configuration reads the operations from the transaction logs and applies them separately to their own copy of the mailbox database. This turns out to be useful enough that Microsoft included native replication technologies in Exchange 2007; we'll talk about how these options affect your DPM deployment.
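Log shipping is conceptually simple: the source server appends every change to a transaction log, the log is copied to the replica, and the replica replays it against its own copy of the database. A toy model of the idea (our own illustration; ESE's real log records are binary and far more granular):

```python
# Toy model of log shipping: the active copy records operations in a log,
# and the passive copy replays that log to converge on the same state.

def apply(database: dict, record: tuple) -> None:
    """Replay one log record ('set' or 'delete') against a database copy."""
    op, key, value = record
    if op == "set":
        database[key] = value
    elif op == "delete":
        database.pop(key, None)

active: dict = {}
log: list = []

def write(key, value):
    record = ("set", key, value)
    apply(active, record)
    log.append(record)    # every change is captured in the log

write("mailbox1", "message A")
write("mailbox2", "message B")

# Ship the log and replay it on the passive copy.
passive: dict = {}
for record in log:
    apply(passive, record)

print(passive == active)  # True: the replica converged by replaying the log
```

Because the replica rebuilds its copy from the log rather than copying raw database pages, a page-level corruption on the active copy is not blindly propagated, which is the protection benefit the text describes.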

In this chapter, you will learn to:


o Determine the prerequisites for installing the DPM protection agent on Exchange servers
o Configure DPM protection for standalone and clustered Exchange servers
o Recover protected Exchange server mailbox databases

Considerations
As we previously discussed, protecting Exchange server data is critical enough that a lot of smart people have put a lot of time and energy into doing it properly. To understand why, you first need to understand how Exchange server stores your users' mailbox data. Various messaging systems use widely different strategies for storing users' messaging data. Three common strategies include one file per mailbox, one file per message, and a shared database. Examples of the former two respectively include the Unix mbox and Maildir standards. Exchange takes the latter approach and stores all messaging data in a special database format known as the Extensible Storage Engine (ESE), aka Jet Blue. Various versions of ESE have been used in other Microsoft applications and technologies, including the DHCP and WINS services, as well as the directory information tree (DIT) database that is found on every Active Directory domain controller. Although similar in name to Jet Red (the database implementation underlying Microsoft Access), ESE/Jet Blue contains little if any shared code and is designed to be used as an embedded database engine rather than as a general-purpose application database. It contains features specifically designed for use in a high-demand server application. While it's a good fit for Exchange, it takes specialized handling in order to be properly backed up. DPM is specifically designed to protect and restore ESE databases without requiring you to take exotic steps. Exchange data stores have four major conceptual component objects:

Mailboxes are tied to an individual User or inetOrgPerson account object in Active Directory. They provide a per-user collection of folders, each of which may contain one or more message objects (which can themselves be of various types). Mailboxes have no corresponding direct representation on the Exchange server filesystem.

Mailbox databases are collections of one or more mailboxes. Each Exchange mailbox database is kept in its own ESE instance; multiple mailbox databases are typically spread across physical volumes or SAN LUNs in order to optimize read and write performance. Each mailbox database has its own set of associated data files on the NTFS filesystem.

Public folder databases are essentially a specialized type of mailbox database. Instead of storing mailboxes, however, they store replicas of one or more public folders. Like mailbox databases, you can spread multiple public folder databases across volumes, and each public folder database also has its own set of associated data files on the NTFS filesystem.

Storage groups are collections of one or more mailbox or public folder databases. A storage group has a corresponding directory on an NTFS filesystem; this directory stores a unified set of transaction files for all of the mailbox and public folder databases associated with that storage group. These files can be on a separate NTFS volume to gain further performance benefits.

The relationships between these components are fairly clear and are demonstrated in Table 7.1.

Table 7.1: Exchange Components

Component                Maximum                                   Contains
Storage group            Exchange 2003 SE: 1 per server            Mailbox or public folder databases
                         Exchange 2003 EE: 5 per server
                         Exchange 2007 SE: 5 per server
                         Exchange 2007 EE: 50 per server
Mailbox database         Exchange 2003 SE: 1 per server            Mailboxes
                         Exchange 2003 EE: 4 per storage group
                         Exchange 2007 SE: 5 per server
                         Exchange 2007 EE: 50 per server
Public folder database   Exchange 2003 SE: 1 per server            Public folders
                         Exchange 2003 EE: 1 per server
                         Exchange 2007 SE: 1 per storage group
                         Exchange 2007 EE: 1 per storage group
Mailbox                  All versions: Unlimited[1]                Mail data

[1] The number of mailboxes isn't actually unlimited; it is limited in practice by your available storage, number of mailbox licenses, and your backup and restore SLAs.

Under Exchange 2003, you can have up to four mailbox databases per storage group. Microsoft recommends that you allocate one database per storage group until you have deployed all five storage groups, and then start adding a second database to each storage group, and so on until you have reached your maximum of four databases per storage group. With Exchange 2007, the rules and best practices for allocating storage groups and mailbox databases have changed: you now get a maximum of either five or fifty databases (depending on your edition), and Microsoft encourages you to place each one in its own separate storage group to ensure a consistent 1:1 relationship between databases and transaction logs. When you're planning on protecting with DPM, we recommend that you follow these rules, although if you choose to put multiple databases in one storage group, DPM will help you through the complexities that can result during backup and restore operations.
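The per-edition limits above are easy to encode as data, which is handy when scripting capacity checks. A small sketch (the helper and its names are ours, not Microsoft's; the numbers come straight from Table 7.1):

```python
# Our own sketch: derive the maximum mailbox databases per server from the
# limits in Table 7.1.

STORAGE_GROUPS_PER_SERVER = {
    ("2003", "SE"): 1, ("2003", "EE"): 5,
    ("2007", "SE"): 5, ("2007", "EE"): 50,
}

def max_mailbox_databases(version: str, edition: str) -> int:
    if version == "2003":
        if edition == "SE":
            return 1   # Standard Edition: one mailbox database per server
        # Enterprise Edition: up to 4 databases in each storage group
        return STORAGE_GROUPS_PER_SERVER[("2003", "EE")] * 4
    # Exchange 2007: the table caps databases per server directly, and the
    # cap matches the storage group count (the recommended 1:1 layout)
    return STORAGE_GROUPS_PER_SERVER[(version, edition)]

print(max_mailbox_databases("2003", "EE"))  # 20
print(max_mailbox_databases("2007", "SE"))  # 5
```

This kind of lookup is useful when validating a proposed storage design before you start carving up volumes for databases and transaction logs.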

Note that when we talk about DPM protection for Exchange, we're really only talking about protecting mailbox servers. If you have Exchange 2003 front-end servers (which cannot contain mailboxes), dedicated Exchange 2007 bridgehead servers, or Exchange 2007 Hub Transport, Edge Transport, Client Access, or Unified Messaging servers, you do not need to install the DPM agent on these machines; there is nothing there for DPM to protect (unless you want to capture their system states). The caveat is with any Exchange 2007 servers that have multiple roles; you need to protect any of your servers that contain the Mailbox role, even if other roles are present on the hardware. We earlier mentioned several of the technologies that have been used to back up Exchange servers. The Exchange team has been heavily pushing VSS backups ever since they were first introduced in Exchange 2003; while VSS is more complicated to deal with from the application developer's point of view, it enables much cleaner and more consistent images to be taken of the data. DPM takes advantage of this preferred protection architecture, making the whole process transparent from your point of view. Exchange 2007 introduces one other consideration for protecting mailbox servers: the high availability and replication configurations. There are four types:

Single Copy Clusters (SCC). These are the traditional failover clusters with which Exchange 2003 administrators are familiar. They have two or more nodes, each running Microsoft Cluster Service (MSCS), and some sort of shared storage solution such as a Fibre Channel or iSCSI SAN.

Cluster Continuous Replication (CCR). This new type of lightweight cluster removes the expensive and complicated shared storage component. These clusters are two-node active/passive clusters; each node runs MSCS for failover services but stores the replicated mailbox database data on local storage volumes.

Local Continuous Replication (LCR). With this new option, Exchange 2007 replicates an additional copy of a mailbox database to another volume on the same server. The use of this option does not affect DPM protection in any way, but merely represents another local copy of the data you can quickly (but manually) switch over to in the event of local drive failure before beginning rebuild and restore operations.

Standby Continuous Replication (SCR). This option will be introduced in Exchange 2007 SP1 and allows mailbox databases on one or more servers to be replicated to a standby machine. If the primary copies go offline, the standby copies can be brought up for service continuation. Like LCR, this option does not affect DPM protection.

Before you begin protecting your Exchange data with DPM, there are several areas you need to consider:

o Do your Exchange servers meet the prerequisites for DPM protection?
o How do you need to prepare your Exchange cluster nodes?
o Do you need to protect the system state of your Exchange servers?

Let's examine these issues in more detail.

Prerequisites

Before we move on into the details of protecting and restoring Exchange servers with DPM, you should ensure that your Exchange server meets the prerequisites. These requirements are shown in Table 7.2.
Table 7.2: Protected Exchange Server Software Requirements

Software component    Description
Application version   Exchange Server 2003 Standard Edition with at least SP2.
                      Exchange Server 2003 Enterprise Edition with at least SP2.
                      Exchange Server 2007 Standard Edition x64.
                      Exchange Server 2007 Enterprise Edition x64.
                      Exchange Server 2007 Standard Edition x86 (for testing only).
                      Exchange Server 2007 Enterprise Edition x86 (for testing only).
                      VSS hotfix 940349 on Windows Server 2003.
DPM License           The E-DPML for each protected standalone Exchange mailbox
                      server or clustered Exchange node using Microsoft Cluster
                      Services. Exchange 2003 failover clusters (active/active and
                      active/passive). Exchange 2007 SCC and CCR clusters.

Note that we list both the 32-bit and 64-bit versions of Exchange 2007; the DPM protection agent will protect both versions. However, because the licensing for Exchange 2007 prohibits the 32-bit version for use in production environments, you can use only the 32-bit version (and therefore DPM protection) for testing purposes in your lab. As we look toward future releases of Windows Server and Exchange, we should remind you that the release version of Exchange 2007 is not supported on Windows Server 2008 machines. The Exchange 2007 SP1 release, however, includes support for installing Exchange on Windows 2008. If you want to use DPM to protect these Exchange servers, you must be running Exchange 2007 SP1 at a minimum. The DPM protection agent uses the VSS capabilities of Windows Server to take a complete snapshot of each protected storage group; this is particularly critical with Exchange data, as it ensures that there is always a consistent view of the various data structures and relationships within the database and transaction log files. By using VSS, DPM prevents disjointed views of the logical mailbox database structure and eliminates the possibility of data corruption during protection activities.
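VSS shadow copies are commonly implemented with copy-on-write: once a snapshot exists, the original contents of a block are saved aside the first time that block is modified, so the snapshot always reads as a frozen, consistent image even while the live volume keeps changing. A minimal model of the idea (our own illustration, not the actual VSS mechanism, which operates on raw volume blocks and coordinates with application writers such as Exchange):

```python
# Minimal copy-on-write snapshot model (illustration only).

class Volume:
    def __init__(self):
        self.blocks = {}             # the live volume contents
        self.snapshot_saved = None   # originals preserved since the snapshot

    def snapshot(self):
        self.snapshot_saved = {}

    def write(self, key, value):
        if self.snapshot_saved is not None and key not in self.snapshot_saved:
            # First write since the snapshot: preserve the original content.
            self.snapshot_saved[key] = self.blocks.get(key)
        self.blocks[key] = value

    def read_snapshot(self, key):
        """Read a block as it was at snapshot time."""
        if self.snapshot_saved is not None and key in self.snapshot_saved:
            return self.snapshot_saved[key]
        return self.blocks.get(key)

vol = Volume()
vol.write("db_page_1", "v1")
vol.snapshot()
vol.write("db_page_1", "v2")           # the live volume moves on...
print(vol.blocks["db_page_1"])         # v2
print(vol.read_snapshot("db_page_1"))  # v1: the snapshot stays consistent
```

The key property for Exchange protection is that the backup reads from the frozen view, so the database and log files it captures all reflect a single point in time.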

As always, you should thoroughly read the DPM Planning Guide, as well as the DPM release notes, to identify any further issues or concerns that may affect the protection of your Exchange servers.
Clustered Configurations

Many environments with high uptime requirements take advantage of MSCS to provide high availability and protection from hardware failures on Exchange servers. The main reason that more people don't use a clustered Exchange configuration is cost. It's hard to justify the money for cluster-certified hardware and the Enterprise Edition Windows Server licenses; it's even more difficult now with Exchange 2007's expanded range of high availability and replication options. At the same time, however, the CCR feature is one of the most compelling replication options in Exchange 2007, and it requires administrators to deploy MSCS and use some of the same basic technologies as traditional failover clusters (albeit without the expense of the traditional shared storage solution). If you've gone to the trouble of creating an Exchange cluster, protecting it with DPM is easy. DPM requires that all nodes that can possibly be owners of protected Exchange resources have the DPM protection agent installed. Before we talk about how to perform protection and recovery operations on Exchange clusters, though, you need to understand how these clusters work. In MSCS, you define one or more cluster resources, which describe resources that are to be shared between nodes in the cluster. These resources include attributes such as network names, IP addresses, disk resources, and application-specific resources such as databases. When the node that hosts a cluster resource fails, either through some hardware or software fault or through a manual administrative action, the MSCS component determines which other nodes can host the appropriate resources, activates those nodes, and notifies those nodes that they now host the relevant shared resources. In order to prevent nodes from disagreeing about which resources are hosted by which node, MSCS clusters use a quorum system to determine which node currently owns any given cluster resource. 
A special quorum resource allows the nodes in the cluster to communicate and come to consensus about resource ownership. Exchange 2003 clusters and the Exchange 2007 SCC configuration use this quorum resource. While a two-node cluster uses a shared physical volume as the quorum resource, a cluster with three or more nodes can use a majority node set (MNS), which is a new type of quorum resource introduced in Windows Server 2003. With the MNS, there is no shared physical device or resource that serves as the quorum resource; instead, all of the nodes in the cluster communicate whether or not they are online and whether or not cluster resources need to be failed over to another node. The Exchange 2007 CCR option uses this configuration; the third node (which only needs to provide a file share) can be an Exchange 2007 server running some other role, or even a separate server in your domain. When you use DPM to restore databases to cluster nodes of either type (2003 failover/SCC and CCR), there may be additional processes that you must run on the Exchange server to ensure that any replication or failover operations do not interfere with the restore. Specifically, you must ensure that replication has been suspended.

In order to protect clustered Exchange nodes with DPM, the DPM protection agent must be installed on each cluster node that may possibly own the Exchange resources you are protecting. As DPM is fully cluster-aware, this allows DPM to continue protecting your Exchange data even if an unplanned failover happens and the resources are shifted to another node in the cluster. DPM can then alert you that an unplanned failover has taken place and request a consistency check of the affected storage groups. If the resource fails over to a node that doesn't have the protection agent installed, DPM will not be able to continue protecting the data. In addition, DPM cannot protect the quorum resource. This is usually not a problem, because if you're following best practices for Microsoft clustering, the quorum resource should be a separate resource from all other resources that contain actual data, used only by the cluster quorum process.
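A majority node set boils down to a simple vote: a partition of the cluster keeps running only while it can see a strict majority of the configured voters. A sketch of that rule (our own helper, not MSCS code):

```python
# Our own sketch of the majority node set rule: a partition of the cluster
# retains quorum only if it holds a strict majority of the configured votes.

def has_quorum(nodes_online: int, total_nodes: int) -> bool:
    return nodes_online > total_nodes // 2

# A two-node CCR pair plus the file share witness gives three votes, so the
# cluster survives the loss of any single vote:
print(has_quorum(2, 3))  # True: two of three votes is a majority
print(has_quorum(1, 3))  # False: a lone node cannot claim quorum
```

Note that an even number of voters buys nothing extra: two of four is not a majority, which is exactly why the CCR design adds the file share witness as a third vote.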
Clustering Types

Just so that we're on the same page, let's clarify what we mean when we refer to a clustered configuration. As it turns out, when we talk about clustering in Windows servers, there are three categories of behavior we could mean: failover clusters (also called server clusters), component load-balancing clusters, and network load-balancing clusters. Each type of clustering can be implemented using native Windows technologies; all three offer their own benefits and disadvantages and are intended to solve different problems:

Failover clusters are what most people think of when they talk about clustering. These clusters are a collection of two or more servers (usually up to a maximum of eight, depending on the workload) that share a common set of resources, such as storage, in order to ensure data integrity and provide high availability. There are two types of nodes in a failover cluster: active nodes, which provide service to incoming connections, and passive nodes, which stand by to take over when an active node goes offline. Using these node types, you can create either active/passive clusters, in which one or more passive nodes act as spares for some (usually greater) number of active nodes, or active/active clusters, in which active nodes have enough spare capacity to take over operations from other failed nodes. To create a failover cluster, you need cluster service software, such as the Microsoft Cluster Service (MSCS) component (included in the Enterprise Edition of Windows Server) or a third-party clustering service. You also usually need some sort of shared storage, such as a SAN (Fibre Channel or iSCSI).

Component load-balancing clusters are groups of servers designed to have key software components work together, using the COM+ Services included in Windows, to provide high availability and scalability for application systems that use transactions. In plain English, these clusters allow you to deploy a farm of servers that handle the middle tier of multi-tier applications. These clusters rely on services provided by the Windows operating system but require additional application-specific software.

Network load-balancing clusters are groups of servers that provide load balancing and some limited failover capability for front-end services. These types of clusters are ideal for web servers and other relatively stateless protocols, as the individual server nodes in the cluster don't share state information on active connections the way that failover clusters (and cluster-aware applications such as Exchange and SQL Server) do. In Windows, you use the Network Load Balancing component to configure this functionality, although you can use third-party software or even a hardware appliance to provide high-end load-balancing services.

When we talk about clusters in this book, it's safe to assume that we're referring to failover clusters using MSCS. If we mean something different, we'll call it to your attention. Microsoft offers more information about the different types of clustering at the Overview of Windows Clustering Technologies TechNet website: http://technet2.microsoft.com/windowsserver/en/library/c35dd48b-4fbc-4eee-8e5c2a9a35cf63b21033.mspx?mfr=true.
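As a rough model of what the cluster service does during a failover (greatly simplified; real MSCS tracks resource groups, dependencies, and preferred owners, and the names below are hypothetical):

```python
def fail_over(resource_owners, failed_node, surviving_nodes):
    """Reassign every resource owned by the failed node to the first
    surviving node; a toy version of what MSCS does when an active
    node goes offline."""
    target = surviving_nodes[0]
    return {resource: (target if owner == failed_node else owner)
            for resource, owner in resource_owners.items()}

owners = {"First Storage Group": "NODE-A", "Second Storage Group": "NODE-B"}
print(fail_over(owners, "NODE-A", ["NODE-B"]))
# → {'First Storage Group': 'NODE-B', 'Second Storage Group': 'NODE-B'}
```

The DPM implication, as noted earlier, is that whichever node a resource lands on must already have the protection agent installed.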

System State

DPM includes the ability to protect and recover the local system state for any protected server. System state backups of Exchange servers do not directly affect your ability to protect and restore databases; you can usually recover your mailbox data to a different server whether the system state data is available or not. Table 7.3 includes a listing of the types of data included in the system state for different types of servers that are likely to act as Exchange servers.
Table 7.3: Data Contained in the System State

Server Role                                  System State Data
Member server                                Boot files; the COM+ class registration database; registry hives
Domain controller                            Active Directory (NTDS) files; the system volume (SYSVOL); other applicable components
Certificate Services Certificate Authority   All Certificate Services data; other applicable components
Cluster node                                 Cluster Service metadata; other applicable components

In many organizations, domain controllers often pull double- or even triple-duty as domain controllers, infrastructure servers providing services such as DNS and DHCP, and file servers. You can protect

volumes and file shares on domain controllers just as you would on a member server, but infrastructure service data may or may not be directly protected. The general rule is this: if the data to be protected is in Active Directory (such as Active Directory-integrated DNS zone data), system state protection will protect it, but to recover it, you'll have to recover the entire system state. On the other hand, for services that store their data in separate files, you can configure protection for individual files and even recover them independently, but you have to identify and restore the files on your own, increasing the risk of corrupting something.

We hope that you're not making your Exchange servers do double-duty as domain controllers as well. Although this configuration is supported by Microsoft (mainly to allow support for the Small Business Server SKU), doing so has important implications for memory usage on your server. Both Active Directory and Exchange are memory hogs; by default, each will use as much physical memory as the system has in order to allow better caching and performance. When you put both services on the same physical machine, they're both going to be memory-starved and unhappy. You're also complicating your disaster recovery scenario with each additional service you place on the same machine.

So here's the real question: when do you use DPM to protect system state? From our experience, we recommend that you do it all the time. System state is insanely easy to protect with DPM; it takes up comparatively little room on most Exchange servers even before you factor in DPM's space-saving technologies. And you never know when you're going to need it.
If you're ignoring the recommendations and putting Exchange on a domain controller, then you really do need to capture system state; you'll need a functional domain controller in order to rebuild, and if it's the same machine as your Exchange server, you've got a chicken-and-egg problem without the system state. Better yet, don't combine the domain controller and Exchange roles; protect the system state of both machines with DPM. If you're using some of the advanced protection and service-continuation options that DPM offers in conjunction with Virtual Machine Manager (VMM), keeping the system state protected is an essential part of your recovery strategy. Although the P2V capabilities of VMM are sufficient to protect the base operating system and program files, you'll need the system state to restore the virtual machine to the last known state. For this reason, you should protect the server's system state in the same protection group as the rest of its data; this ensures that the entire server can be consistently restored to a known point in time.
Protected Data Sources

As with other workloads, DPM doesn't give you a lot of options about what kind of Exchange data you're going to protect; DPM exposes Exchange storage groups to you only when you're adding Exchange resources to protection groups. As we discussed previously, mailbox databases don't contain the transaction log files; storage groups do. When a storage group contains multiple databases, the storage group's transaction log files contain the transactions for all of those databases. By protecting at the storage group level, DPM ensures that it has all of the transaction logs available to it so that it can pull some amazing stunts during recovery operations, such as allowing you to restore individual mailboxes. If per-item and per-database retention was enough to make brick-level backups obsolete, then DPM plus Exchange puts the nail in the coffin. And good riddance!
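To make the point concrete, here's a toy model (hypothetical database names and operations) of a storage group whose single log stream interleaves transactions for every database it contains; a restore can isolate one database's transactions only if the whole stream was protected:

```python
# Toy model: the storage group owns one shared log stream whose entries
# belong to different databases within the group.
storage_group = {
    "databases": ["Mailbox DB 1", "Mailbox DB 2", "Public Folders"],
    "log_stream": [
        ("Mailbox DB 1", "deliver message 0x1A2B"),
        ("Public Folders", "post item 0x1A2C"),
        ("Mailbox DB 2", "move item 0x1A2D"),
    ],
}

def logs_for(db):
    """Filter one database's transactions out of the shared log stream,
    something a restore can do only if the entire stream was captured."""
    return [op for target, op in storage_group["log_stream"] if target == db]

print(logs_for("Mailbox DB 2"))
# → ['move item 0x1A2D']
```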

When selecting your protection schedule, try to avoid creating recovery points (with their associated express full backup) during periods of peak activity; if possible, schedule recovery points immediately after high-use periods. While the synchronization process will probably have little impact on your Exchange server's performance, creating a recovery point during these times works against you: a recovery point created halfway through a busy period captures only some of the data generated. In the event of data loss, your recovery could miss large amounts of data.
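One way to think about this: given your known busy windows, keep only the recovery point slots that fall outside them, ideally the slot right after a peak ends. A small hour-based sketch (the busy windows and candidate times are hypothetical):

```python
def recovery_point_times(busy_windows, candidates):
    """Keep only candidate hours that do not fall inside a busy window,
    so an express full never lands mid-peak. Hours are 0-23; each
    window is a half-open [start, end) range."""
    def is_busy(hour):
        return any(start <= hour < end for start, end in busy_windows)
    return [h for h in candidates if not is_busy(h)]

# Busy 8:00-12:00 and 13:00-17:00; candidate slots every four hours.
print(recovery_point_times([(8, 12), (13, 17)], [0, 4, 8, 12, 16, 20]))
# → [0, 4, 12, 20]
```

Note that 12:00 survives the filter: it falls right after the morning peak, which is exactly the kind of slot you want.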

Backup Procedures
Protecting Exchange data with DPM is as close to easy as anything involving Exchange that we've seen. Granted, the limited selection of data sources helps make this simplicity possible. Whether you're running Exchange 2003 or Exchange 2007, DPM makes it easy for you; you can't even tell which version of Exchange a protected server is running. You just see the available storage groups and server system state options, pick the ones you want to protect, and go! Exchange support was not present in DPM 2006 (unless you first used NTBackup to produce database and storage group dump files on the file system); it is a key feature in DPM 2007. There are two basic steps to protecting Exchange servers (whether standalone or clustered) with DPM:

Install the protection agent on the protected Exchange servers. Configure protection by assigning storage groups to a protection group.

Let's start by reviewing how to install the protection agent on your Exchange servers.
Installing the Protection Agent

We already covered the general steps for installing the DPM protection agent in Chapter 2, "Installing DPM," so if you've already installed the agent on your Exchange servers, you're good to go. If you haven't, here's a recap:

1. Open the DPM Administrator console, navigate to the Management tab, and select the Agents subtab.
2. Click Install in the Actions pane.
3. From the left pane, select the servers you want to protect, as shown in Figure 7.1, and click Add.

Figure 7.1: Choosing servers for agent install

4. When all of the servers you want to protect are in the right pane, click Next.
5. Enter the credentials for a user with administrative rights, as shown in Figure 7.2, and click Next.

Figure 7.2: Enter the credentials for agent install

6. Once the agent install has been completed, you will not be able to protect your servers until they have been restarted. Choose whether you want the servers to reboot now or later, as shown in Figure 7.3, and click Next.

Figure 7.3: Choose the restart method

7. A Summary screen will appear, as shown in Figure 7.4, showing the choices you have made. Click Install to proceed with the agent install, or click Back to change your options.

Figure 7.4: The Protection Agent Installation summary

8. The final screen will display the agent install progress. You can click Close; the current status and progress will be displayed in the Agents subtab.

Once the protected Exchange server reboots and DPM verifies the connection with the agent, you will see the list of data sources that DPM can protect. Remember that while you need to install the agent on all nodes in a server cluster in order to get full protection, as soon as you reboot the first node in the cluster you will see the resources available on it. You may need to install the agent and reboot the cluster nodes in multiple sessions to prevent disruption of services for your users.

The steps used to configure protection for Exchange data in DPM are almost the same regardless of whether you're protecting a standalone mailbox server or a cluster configuration; the only difference is one extra drill-down for storage groups on clusters. To protect a storage group, use the following procedure:

1. Open the DPM Administrator console, click the Protection tab, and click Create Protection Group in the Actions pane.
2. The Welcome To The New Protection Group Wizard screen will appear, as shown in Figure 7.5. Click Next.

Figure 7.5: The Welcome screen

3. In the Select Group Members screen, drill down to the appropriate storage group or groups on the applicable servers. Figure 7.6 shows the tree for a standalone Exchange server; Figure 7.7 demonstrates the equivalent section on an Exchange cluster. When you have selected the storage group or groups to protect, click Next.

Figure 7.6: Selecting a storage group in a standalone configuration

Figure 7.7: Selecting a storage group in a clustered configuration

4. In the Select Data Protection Method screen, choose whether you want to use short-term protection, long-term protection, or both, as shown in Figure 7.8. Choose your short-term protection method (if applicable), give your protection group a name, and click Next.

Figure 7.8: Selecting a data-protection method

5. In the Specify Exchange Protection Options screen shown in Figure 7.9, choose whether you want DPM to run the ESEUTIL utility to verify the data integrity of the selected resources. Before you enable this option, you must manually copy the eseutil.exe and ese.dll files from the Exchange installation on your Exchange server to the bin directory of your DPM installation folder (typically C:\Program Files\Microsoft Data Protection Manager\DPM\bin). If you are using the x86 version of DPM with Exchange Server 2007, you will need to copy the 32-bit versions of these files from the Exchange 2007 installation media. When you are done, click Next.
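The file copy in step 5 can be scripted rather than done by hand. The sketch below wraps it in a small Python helper; the example paths are assumptions you would adjust to your own Exchange and DPM install locations:

```python
import shutil
from pathlib import Path

def stage_eseutil(exchange_bin: Path, dpm_bin: Path) -> list:
    """Copy the two files DPM's ESEUTIL integrity check needs
    (eseutil.exe and ese.dll) into the DPM bin directory.
    Returns the destination paths that were written."""
    copied = []
    for name in ("eseutil.exe", "ese.dll"):
        copied.append(shutil.copy2(exchange_bin / name, dpm_bin / name))
    return copied

# Example usage (hypothetical paths; run on the DPM server after the
# Exchange binaries have been copied over from the Exchange server or
# installation media):
# stage_eseutil(Path(r"D:\ExchangeFiles\Bin"),
#               Path(r"C:\Program Files\Microsoft Data Protection Manager\DPM\bin"))
```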

Figure 7.9: The Specify Exchange Protection Options screen

6. In the Specify Short-Term Goals screen shown in Figure 7.10, choose a retention range and specify a synchronization frequency. If you want to modify your express full-backup schedule, click the Modify button and go to step 7. Otherwise, go to step 8.

Figure 7.10: The Specify Short-Term Goals screen

7. In the Optimize Performance screen shown in Figure 7.11, you can adjust the schedule for your express full backups to meet your needs. Make the necessary changes and click OK.

Figure 7.11: Scheduling express full backups

8. Click Next.
9. In the Review Disk Allocation screen shown in Figure 7.12, check that the default allocation for your data source is correct. If you need to make any changes to the disk allocation, click the Modify button and go to step 10. Otherwise, go to step 11.

Figure 7.12: The Review Disk Allocation screen

10. In the Modify Disk Allocation screen shown in Figure 7.13, change the disk allocation to meet your needs. When you're finished, click OK.

Figure 7.13: The Modify Disk Allocation screen

11. Click Next.
12. In the Specify Long-Term Protection Goals screen, you can specify your retention range and frequency of backups, and customize recovery point creation schedules. You can also customize the backup schedule, as shown in Figure 7.14. To customize your protection objectives, click the Customize button and go to step 13. To modify the backup schedule, click the Backup button and go to step 14. Otherwise, go to step 15.

Figure 7.14: The Specify Long-Term Protection Goals screen

13. In the Customize Protection Objective screen shown in Figure 7.15, you can set a tape rotation scheme that is up to three levels deep in increments of days, weeks, months, and years. You can also configure behavior for overlapping jobs. When you've made your changes, click OK. Then click the Modify button and go to step 14 if you'd like to configure the backup schedule, or go to step 15 to accept the default.

Figure 7.15: The Customize Protection Objective screen

14. In the Modify Long-Term Backup Schedule screen shown in Figure 7.16, set the schedule for each level of your tape rotation. Note that the options will differ depending upon the tape rotation scheme you have chosen. When you have made your changes, click OK.

Figure 7.16: The Modify Long-Term Backup Schedule screen

15. Click Next.
16. In the Select Library And Tape Details screen shown in Figure 7.17, you can choose your target library, number of drives, and a copy library. You can also elect to check for data integrity and select compression and encryption options. When you have made your choices, click Next.

Figure 7.17: The Select Library And Tape Details screen

17. In the Choose Replica Creation Method screen shown in Figure 7.18, you can choose to have the initial replica created immediately, at a scheduled time, or manually via removable media. When you have made your choice, click Next.

Figure 7.18: Choose a replica creation method

18. The Summary screen shown in Figure 7.19 will display a summary of the settings you have chosen for your protection group. If everything looks correct, click Create Group. If you need to make any changes, use the Back button to go back and correct the errors.

Figure 7.19: The Summary screen

Congratulations! You've successfully configured DPM to protect your Exchange storage groups. Why don't we move on to showing you how to recover all of this lovely Exchange data?

Restore Procedures
DPM does its level best to mask the underlying differences between standalone and clustered Exchange configurations so that you don't have to worry about them when restoring Exchange data. However, when you're recovering data to a passive CCR node, you need to ensure that replication has been stopped. DPM should take care of this for you, but you should see the DPM documentation for more details. You do have several options for restoring Exchange data:

- You can recover a storage group to the original server, recover it to an alternative location, or make a copy of the storage group data to tape. Be aware, however, that if you're recovering the most recent recovery point, you only have the option of recovering to the original server.
- You can recover a mailbox or public folder database to the original server, an alternative server, a recovery storage group, or an alternative location, or make a copy of the database to tape. Again, if you're recovering the most recent recovery point, you only have the option of recovering to the original server.
- You can recover an individual mailbox to an Exchange database, recover it to an alternative location, or make a copy of the mailbox data to tape. If you choose to recover the mailbox to an Exchange server, you must have a recovery storage group and a dismounted database configured on the target Exchange server.

To recover DPM-protected Exchange data, open the DPM Administrator console, and click the Recovery tab. Proceed to the section that is appropriate for the type of recovery you want.
Restoring a Storage Group

To restore an Exchange storage group, begin with these steps:

1. In the left pane, expand the protected server that houses the desired information store, and click All Protected Exchange Data, as shown in Figure 7.20.

Figure 7.20: Selecting a data source for recovery

2. In the center pane, right-click on the appropriate storage group and click Recover, as shown in Figure 7.20.
3. In the Review Recovery Selection screen shown in Figure 7.21, ensure that you have selected the correct storage group to recover, and click Next.

Figure 7.21: The Review Recovery Selection screen

4. In the Select Recovery Type screen shown in Figure 7.22, select whether you want to recover to the original location, a network folder, or tape, and click Next.

Figure 7.22: Select the recovery type

Now that you've picked your recovery target, go to the appropriate section to finish your recovery operation.
RESTORING TO THE ORIGINAL LOCATION

Follow these steps to finish recovering an Exchange storage group to the original Exchange server:

1. In the Specify Recovery Options screen shown in Figure 7.23, indicate whether or not you want the databases to be mounted when the job completes, and click Next.

Figure 7.23: Select the recovery options

2. The Summary screen will display the options you have selected, as shown in Figure 7.24. If everything looks correct, click Recover; otherwise, use the Back button to make any necessary changes.

Figure 7.24: The Summary screen

Your restore operation will proceed.
RESTORING TO AN ALTERNATIVE LOCATION

Follow these steps to finish recovering an Exchange storage group to an alternative location:

1. In the Specify Destination screen, click the Browse button. In the window that appears, select the recovery location from the list (see Figure 7.25). When you have made your selection, click OK.

Figure 7.25: Specify an alternative recovery destination

2. Click Next.
3. In the Specify Recovery Options screen, choose whether to use the security settings of the destination server or those of the recovery point (see Figure 7.26). If you want to use bandwidth throttling, click the Modify link and go to step 4; otherwise, go to step 5.

Figure 7.26: Specify the recovery options

4. In the Throttle window, you can specify bandwidth limitations as well as a schedule for those limitations (see Figure 7.27). When you have made your changes, click OK.

Figure 7.27: Throttling bandwidth

5. Click Next.
6. The Summary screen will display the options you have selected, as shown in Figure 7.24. If everything looks correct, click Recover; otherwise, use the Back button to make any necessary changes.

Your restore operation will proceed.
RESTORING TO TAPE

Follow these steps to finish recovering an Exchange storage group to a tape copy:

1. In the Specify Library screen shown in Figure 7.28, specify the library to use and whether to use a copy library. You can also enter a custom tape label, and specify encryption and compression options. When you have made your selection, click Next.

Figure 7.28: Specify a library

2. In the Specify Recovery Options screen shown in Figure 7.29, choose whether to send a job notification, as well as any recipients, and click Next.

Figure 7.29: Specify the recovery options

3. The Summary screen will display the options you have selected, as shown in Figure 7.24. If everything looks correct, click Recover; otherwise, use the Back button to make any necessary changes.

Your restore operation will proceed.

Restoring a Mailbox or Public Folder Database


To restore an Exchange mailbox or public folder database, begin with these steps:

1. In the DPM Administrator console, on the Recovery tab, click the information store in the left pane that contains the database you want to restore. In the center pane, right-click on the database and click Recover, as shown in Figure 7.30.

Figure 7.30: Selecting a database to recover

2. In the Review Recovery Selection screen shown in Figure 7.31, ensure that you have selected the appropriate database, and click Next.

Figure 7.31: Review recovery selection

3. In the Select Recovery Type screen shown in Figure 7.32, select the recovery type that you want, and click Next.

Figure 7.32: Select the recovery type

Now that you've picked your recovery target, go to the appropriate section to finish your recovery operation.
Restoring to the Original Location

Follow these steps to finish recovering an Exchange mailbox or public folder database to the original Exchange server:

1. In the Specify Recovery Options screen shown in Figure 7.33, choose whether you'd like to mount the database when the job completes, and click Next.

Figure 7.33: The Specify Recovery Options screen

2. The Summary screen will display the options you have selected, as shown in Figure 7.34. If everything looks correct, click Recover; otherwise, use the Back button to make any necessary changes.

Figure 7.34: The Summary screen

Your restore operation will proceed.
Restoring to Another Database on an Exchange Server

Follow these steps to finish recovering an Exchange mailbox or public folder database to a database on another Exchange server:

1. In the Specify Destination screen shown in Figure 7.35, enter the values for the destination server, storage group, and database, and click Next.

Figure 7.35: Specify the destination

2. In the Specify Recovery Options screen shown in Figure 7.36, choose whether to send a notification email and whether you want the database to be mounted when the job completes. Click Next.

Figure 7.36: Specify the recovery options

3. The Summary screen will display the options you have selected, as shown in Figure 7.37. If everything looks correct, click Recover; otherwise, use the Back button to make any necessary changes.

Figure 7.37: The Summary screen

Your restore operation will proceed.
Database Recovery Limitations

If you select the latest recovery point for an Exchange 2007 or Exchange 2003 database, you cannot recover the database to an alternative location. If you select the latest recovery point for an Exchange Server 2003 database, you can recover the database to the original location; however, DPM does not use the latest log files from the protected server, which means the recovery is only to the last saved state. To perform a database recovery without losing data, use one of the following methods:

- If no databases are mounted under the storage group, recover the storage group to the latest point in time.
- If any database is mounted under the storage group, create a recovery point for the storage group, and then recover the database to the latest point in time.

If you select the latest recovery point for an Exchange 2007 database, DPM applies the log files from the protected server and will perform a lossless recovery with no additional action necessary.
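The decision rules above can be summarized in a short helper. This is a simplification of the documented behavior, with hypothetical function and parameter names:

```python
def latest_point_recovery_plan(exchange_version: int,
                               any_database_mounted: bool) -> str:
    """Return the recommended action for recovering a database to the
    latest point in time without losing data, per the rules above."""
    if exchange_version >= 2007:
        # Exchange 2007: DPM replays the server's current logs itself.
        return "recover the database; DPM replays the current logs (lossless)"
    if any_database_mounted:
        return ("create a recovery point for the storage group, then "
                "recover the database to the latest point in time")
    return "recover the storage group to the latest point in time"

print(latest_point_recovery_plan(2003, False))
# → recover the storage group to the latest point in time
```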

Restoring to Recovery Storage Group

Follow these steps to finish recovering an Exchange mailbox or public folder database to an Exchange recovery storage group:

1. In the Specify Destination screen shown in Figure 7.38, enter information about the destination server, recovery storage group, and database name. When you have entered the information, click Next.

Figure 7.38: Specifying a recovery storage group

2. In the Specify Recovery Options screen shown in Figure 7.39, specify whether to send a job notification, to whom it should be sent, and whether you want the database to be mounted when the job completes. When you have made your choices, click Next.

Figure 7.39: The Specify Recovery Options screen

3. The Summary screen will display the options you have selected, as shown in Figure 7.37. If everything looks correct, click Recover; otherwise, use the Back button to make any necessary changes.

Your restore operation will proceed.
Copy to a Network Folder

Follow these steps to finish copying an Exchange mailbox or public folder database to a network folder:

1. In the Specify Destination screen shown in Figure 7.40, click the Browse button and choose a destination path. When you have chosen your destination, click OK.

Figure 7.40: Specifying a network location

2. Click Next.
3. In the Specify Recovery Options screen shown in Figure 7.41, choose whether to send a notification when the job completes. You can also choose whether you would like to bring the database to a clean shutdown state. If you want to use bandwidth throttling, click the Modify link and go to step 4; otherwise, go to step 5.

Figure 7.41: The Specify Recovery Options screen

4. In the Throttle window shown in Figure 7.27, you can specify bandwidth limitations as well as a schedule for those limitations. When you have made your changes, click OK.
5. Click Next.
6. The Summary screen will display the options you have selected, as shown in Figure 7.24. If everything looks correct, click Recover; otherwise, use the Back button to make any necessary changes.

Your restore operation will proceed.
Restoring to Tape

Follow these steps to finish recovering an Exchange mailbox or public folder database to a tape copy:

1. In the Specify Library screen shown in Figure 7.28, specify the library to use and whether to use a copy library. You can also enter a custom tape label and specify encryption and compression options. When you have made your selection, click Next.
2. In the Specify Recovery Options screen shown in Figure 7.29, choose whether to send a job notification, as well as any recipients, and click Next.
3. The Summary screen will display the options you have selected, as shown in Figure 7.24. If everything looks correct, click Recover; otherwise, use the Back button to make any necessary changes.

Your restore operation will proceed.

Restoring an Individual Mailbox


To restore an individual Exchange mailbox, begin with these steps:

1. Open the DPM Administrator console and go to the Recovery tab. In the left pane, expand the server containing the mailbox and drill down to the storage group. In the center pane, double-click on the database containing the mailbox, right-click on the appropriate mailbox, and click Recover, as shown in Figure 7.42.

Figure 7.42: Selecting a mailbox for recovery

2. In the Review Recovery Selection screen shown in Figure 7.43, ensure that you have selected the correct mailbox, and click Next.

Figure 7.43: The Review Recovery Selection screen

3. In the Select Recovery Type screen shown in Figure 7.44, choose whether you would like to recover to an Exchange database, a network folder, or tape. When you have made your selection, click Next.

Figure 7.44: The Select Recovery Type screen

Now that you've picked your recovery target, go to the appropriate section to finish your recovery operation.
Recovering to an Exchange Database

In order to recover a mailbox to an Exchange server, a recovery storage group must be configured on the target Exchange server. You must also have a dismounted database within the recovery storage group. Follow these steps to finish recovering an Exchange mailbox to an Exchange server:

1. In the Specify Destination screen shown in Figure 7.45, enter information for the destination server, recovery storage group, and database, and click Next.

Figure 7.45: Specify the destination

2. In the Specify Recovery Options screen shown in Figure 7.46, choose whether to send a notification when the job completes. If you want to use bandwidth throttling, click the Modify link and go to step 3; otherwise, go to step 4.

Figure 7.46: Specify the recovery options

3. In the Throttle window shown in Figure 7.27, you can specify bandwidth limitations as well as a schedule for those limitations. When you have made your changes, click OK.
4. Click Next.
5. The Summary screen will display the options you have selected, as shown in Figure 7.24. If everything looks correct, click Recover; otherwise, use the Back button to make any necessary changes.

Your restore operation will proceed.
Recovering to a Network Folder

Follow these steps to finish recovering an Exchange mailbox to an alternative location:

1. In the Specify Destination screen shown in Figure 7.40, click the Browse button and choose a destination path. When you have chosen your destination, click OK.
2. Click Next.
3. In the Specify Recovery Options screen shown in Figure 7.41, choose whether to send a notification when the job completes. You can also choose whether you would like to bring the database to a clean shutdown state. If you want to use bandwidth throttling, click the Modify link and go to step 4; otherwise, go to step 5.
4. In the Throttle window shown in Figure 7.27, you can specify bandwidth limitations as well as a schedule for those limitations. When you have made your changes, click OK.
5. Click Next.
6. The Summary screen will display the options you have selected, as shown in Figure 7.24. If everything looks correct, click Recover; otherwise, use the Back button to make any necessary changes.

Your restore operation will proceed.
Recovering to Tape

Follow these steps to finish recovering an Exchange mailbox to a tape copy:

1. In the Specify Library screen shown in Figure 7.28, specify the library to use and whether to use a copy library. You can also enter a custom tape label and specify encryption and compression options. When you have made your selection, click Next.
2. In the Specify Recovery Options screen shown in Figure 7.29, choose whether to send a job notification, as well as any recipients, and click Next.
3. The Summary screen will display the options you have selected, as shown in Figure 7.24. If everything looks correct, click Recover; otherwise, use the Back button to make any necessary changes.

Your restore operation will proceed.

The Bottom Line


Determine the prerequisites for installing the DPM protection agent on Exchange servers. You need to ensure that your protected Exchange servers are running the necessary versions of the Windows operating system, service packs, and Exchange Server software.

Master It

1. Perform a survey of your Exchange servers to ensure that they are compatible with the DPM protection agent:
o What version of Windows Server, Windows service pack, and Exchange Server are you running on the Exchange servers you want to protect?
o What storage groups, mailbox databases, and public folder databases are configured on your Exchange servers? Which ones need to be protected?
2. Given an Exchange server that has no other roles, what data will DPM capture as part of the system state? How does this differ from a cluster node system state?

Configure DPM protection for standalone and clustered Exchange servers. Once the DPM agent is deployed, you must configure protection groups and select data sources to protect.

Master It

1. What highly available Exchange Server configurations can DPM protect?
2. What is the difference between synchronization and express full backups of Exchange storage groups in DPM?
3. What DPM licenses do you need to protect standalone servers? Clustered servers?

Recover protected Exchange resources. Protecting your Exchange data is only half of the job; you also need to be able to recover the data at the appropriate level of granularity.

Master It

1. At what level can you restore Exchange data?
2. To what locations can you recover Exchange data?
3. What are the differences between recovering data to a standalone server and a cluster?

Chapter 8: Protecting SQL Servers


Overview
The system is down!
Strongbad

In most environments, database servers tend to host the data that is considered to be the most critical for a business: financial data, customer information, inventory tracking. Just about every modern application (especially web applications) relies on databases to store and manage data relationships and queries. Due to the complexity of many of these databases, there are professionals whose entire careers are dedicated to all facets of database work, from development to deployment to administration to application programming. Database administrators and systems administrators often work closely together to ensure that the database services are kept up and running and that the data within them is adequately protected.

Not all SQL database information is meant for user-facing applications. Especially in a Microsoft shop, you probably have SQL Server machines that host data for infrastructure applications:

Windows Server Update Services (WSUS) and its predecessor, Software Update Services (SUS), both use a SQL Server instance to keep track of which software updates you've downloaded and approved, the computer groups you've defined, and the status of which machines have applied which updates.

Almost all of the System Center applications rely on SQL Server databases for critical data storage:

o System Center Configuration Manager (SCCM) and its predecessor, Systems Management Server (SMS), use SQL to track the configuration, hardware inventory, and software inventory of the servers and workstations in your network.
o System Center Operations Manager (SCOM) and its predecessor, Microsoft Operations Manager (MOM), use SQL Server to track the management, performance, and event log information these applications gather from your servers and network equipment.
o The new System Center Essentials provides a combination of features found in WSUS, SCCM, and SCOM. It of course utilizes SQL Server to store and query the update, configuration, and monitoring information it collects.
o In case you've already forgotten Chapter 2, "Installing DPM," DPM uses SQL Server databases to track information on replicas, synchronizations, available data sources, protection groups, and protection targets.

Windows SharePoint Services (SPS) 2.0 uses SQL data to store the content of your SharePoint sites; SPS 3.0 expands this to store much of the metadata formerly stored on the filesystem. Both Microsoft Office SharePoint Server (MOSS) and SharePoint Portal Server (SPS) use this architecture and can share a single SQL Server among many SharePoint servers. However, if you're looking to protect SharePoint databases with DPM, stop now and immediately turn to Chapter 9, "Protecting SharePoint."

We've even seen (and used) third-party add-ins for Exchange that utilize SQL databases to share information between multiple Exchange bridgehead servers.

We could provide even more examples of database-enabled applications, but we think the point is clear: these newfangled relational SQL databases are here to stay. In this chapter, you will learn to:

Determine the prerequisites for installing the DPM protection agent on SQL Server machines

Configure DPM protection for standalone and clustered SQL Server machines

Recover protected SQL Server databases

Considerations
Protecting SQL databases has historically been a challenge. If you haven't had much exposure to database administration and the strange people (who include the authors) who do it for a living, you are probably used to thinking of SQL data at the server level, or maybe the database level. Many people view SQL servers as magic black boxes because:

Data goes into the magic black box, where it has strange adventures and many things (good and bad) happen to it. Eventually, the data comes back to us. It may or may not look anything like it did before.

If you aren't really aware of how SQL databases work, you may find it hard to understand why protecting them is an issue. We'll give a more complete answer later in the "Protected Data Sources" section of this chapter, but for now, the simple answer is that SQL data is usually made up of a large number of objects. These objects have intricate relationships with each other, dependent upon the specifics of the application that uses those objects. If you're backing up one of these objects, you have to make sure you're backing up all the related objects, or else your backup will have an inconsistent view of your data.

To make it even more of a challenge, although each SQL database (and therefore each database protection strategy) is application-specific, there's nothing that stops you from storing the data for multiple applications in one database. The converse is also true: the data for one application can be spread across multiple databases, SQL Server instances, or even servers. You have to know how the application whose data you're trying to protect distributes its data, so you know which objects must be protected at the same time.

There are several traditional ways of backing up SQL databases. The first is to take the databases offline so no one is making any changes to the data while it's being backed up. The second is to use some sort of third-party add-on that can take a snapshot of the data; these applications or agents are often part of a backup application. The third (and most popular these days) is to use SQL Server's built-in capabilities (which rely on VSS and the SQL Server VSS writer) to produce a dump of the databases while they are still online. All of these methods will work; however, none of them provides all of the benefits that DPM does.

Before you begin protecting your SQL Serverdata with DPM, there are several areas you need to consider:

Do your SQL Server machines meet the prerequisites for DPM protection?

How do you need to prepare your SQL Server cluster nodes?

Do you need to protect the system state of your SQL Server machines?

Let's examine these issues in more detail.


Prerequisites

Before we move into the details of protecting and restoring SQL databases with DPM, you should ensure that your SQL Server machine meets the prerequisites. These requirements are shown in Table 8.1.
Table 8.1: Protected Server Software Requirements

Application version:
  SQL Server 2000 Standard Edition with at least SP4.
  SQL Server 2000 Enterprise Edition with at least SP4.
  Microsoft SQL Server 2000 Desktop Engine (MSDE) with at least SP4.
  SQL Server 2005 Standard Edition with at least SP1.
  SQL Server 2005 Enterprise Edition with at least SP1.
  SQL Server 2005 Workgroup Edition with at least SP1.
  SQL Server 2005 Express Edition with at least SP1.
  VSS hotfix 940349 on Windows Server 2003.
  No IA64 (Itanium) version of SQL Server is supported by DPM.

DPM License:
  The E-DPML for each standalone SQL Server to be protected, or for each node in a clustered SQL Server configuration, to allow DPM to support automatic protection continuation in the event of cluster failover.
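The service pack floor in Table 8.1 lends itself to a quick audit before you deploy the agent. The sketch below is a hypothetical helper, not anything DPM ships; the inventory format and server names are invented for illustration.

```python
# Hypothetical audit helper based on Table 8.1: SQL Server 2000 needs at
# least SP4 and SQL Server 2005 needs at least SP1 before DPM can protect
# it, and no IA64 build of SQL Server is supported at all.

MIN_SP = {2000: 4, 2005: 1}  # version -> minimum service pack from Table 8.1

def meets_dpm_prereqs(version, service_pack, arch="x86"):
    """Return True if this SQL Server install can host the DPM agent."""
    if arch == "IA64":          # Itanium builds are never supported
        return False
    if version not in MIN_SP:   # unknown release; check the Planning Guide
        return False
    return service_pack >= MIN_SP[version]

# Survey a (fictional) inventory and list the servers that need patching.
inventory = [
    ("SQLA", 2000, 3, "x86"),   # SQL 2000 SP3 -- below the SP4 floor
    ("SQLB", 2005, 1, "x64"),   # SQL 2005 SP1 -- OK
    ("SQLC", 2005, 2, "IA64"),  # Itanium -- never supported
]
needs_work = [name for name, v, sp, arch in inventory
              if not meets_dpm_prereqs(v, sp, arch)]
print(needs_work)  # ['SQLA', 'SQLC']
```

Feed it whatever inventory data you already collect; the point is simply that the version check from the table is mechanical and worth automating across a large server estate.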

In order for the DPM protection agent to protect SQL data, your SQL Server installation must support the SQL Server VSS writer. This means that your databases to be protected must be running on at least SQL Server 2000 SP4 or SQL Server 2005 SP1; later versions of the SQL Server service packs are also supported. In SQL Server 2005 SP1, the VSS Writer is disabled by default. In order to enable it, follow this procedure:

1. Open the Services control panel (Start > Administrative Tools > Services).
2. Find the SQL Server VSS Writer service, right-click it, and select Properties.
3. Ensure the Startup Type is Automatic.
4. If the service is not running, click Start.
5. When the service is configured as shown in Figure 8.1, click OK to close the service property sheet.
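If you'd rather script this change than click through the Services console on every server, the same configuration can be pushed with `sc.exe`. The sketch below only builds the command strings; `SQLWriter` is the usual service name for the SQL Server VSS Writer, but confirm it with `sc query` on your own server before running anything.

```python
# Sketch: build the sc.exe commands that mirror the Services-console steps
# above (set the SQL Server VSS Writer to Automatic, then start it).
# The "SQLWriter" service name is an assumption to verify on your server.

def vss_writer_commands(service="SQLWriter"):
    return [
        f"sc config {service} start= auto",  # Startup Type: Automatic
        f"sc start {service}",               # start it if it isn't running
    ]

for cmd in vss_writer_commands():
    print(cmd)
# sc config SQLWriter start= auto
# sc start SQLWriter
```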

Figure 8.1: The SQL Server VSS Writer

While it's not included in the table above, you should also remember that VSS support is not present in Windows 2000; it was introduced in Windows Server 2003. You must, therefore, be running SQL Server on some supported version of Windows Server 2003 with at least SP1. Both Windows Server 2003 and Windows Server 2003 R2 are supported; if you need to know which specific editions of Windows Server 2003 you need, you should see the documentation for your SQL Server version to see what its requirements are.

The DPM protection agent uses the VSS capabilities of Windows Server 2003 to take a complete snapshot of each protected database; this is particularly critical with SQL data, as it ensures that there is always a consistent view of the various data structures and relationships within the database. By using VSS, DPM prevents disjointed views of the database and eliminates the possibility of data corruption that can be caused by attempting to enumerate and protect tables (or even specific rows within a table) independently of any related data that may be needed to properly reconstruct the database.
What About the Itanium?

Although the AMD Opteron/Intel EM64T architecture (known as x64) is by far the dominant 64-bit architecture in the Windows world, it isn't the only one. Intel's Itanium processors (IA64) are also fully 64-bit; because they don't have to worry about providing any level of compatibility with the legacy 32-bit x86 architecture, the theory is that they can provide better performance than other alternatives. We won't get in the middle of the argument over whether the Itanium designs meet that goal (we know plenty of smart, respectable people on both sides of the fence), but we will point out that, for whatever reason, the market adoption of Itanium systems was never large enough to make it a consideration for most people. Although there is an IA64 version of Windows Server 2003, most Microsoft applications were never ported to the platform.

However (and there's usually a "however"), it turns out that the particular properties of the Itanium processors matched very well with the typical CPU profiles demanded by SQL Server. As a result, Microsoft provided a release of SQL Server 2005 for IA64 servers. If you think about how large datacenters use SQL Servers, this makes complete sense; the database server is a separate tier in the typical three-tier application deployment model, and all communications between the database tier and the mid-tier happen over network connections. There's no reason why all of the tiers have to be on the same platform. Providing support for 64-bit processors and removing the 4GB memory limitations would be most useful on the database tier, where companies could take immediate advantage of consolidation without having to rewrite or recompile applications.

Unfortunately, the DPM protection agent is only available in the x86 (32-bit) and x64 (AMD Opteron/Intel EM64T) architectures; it does not support (and will not run on) IA64 systems. If you're using Itanium servers for your SQL databases, you'll need to move them to another machine before protecting them with DPM.

As always, you should thoroughly read the DPM Planning Guide, as well as the DPM release notes, to identify any further issues or concerns that may affect the protection of your SQL Server machines.
Clustered Configurations

Many environments with high uptime requirements take advantage of MSCS to provide high availability and protection from hardware failures on critical SQL databases. The main reason that more people don't use a clustered SQL Server configuration is cost. It's hard to justify the money for cluster-certified hardware and the Enterprise Edition Windows Server licenses when Microsoft provides alternatives such as database replication, which can provide an alternative type of high availability. If you've gone to the trouble of creating a SQL Server cluster, protecting it with DPM is easy. DPM requires that all nodes that can possibly be owners of protected SQL database resources have the DPM protection agent installed. Before we talk about how to perform protection and recovery operations on SQL Server clusters, though, you need to understand how these clusters work.

In MSCS, you define one or more cluster resources, which describe resources that are to be shared between nodes in the cluster. These resources include attributes such as network names, IP addresses, disk resources, and application-specific resources such as databases. When the node that hosts a cluster resource fails, either through some hardware or software fault or through a manual administrative action, the MSCS component determines which other nodes can host the appropriate resources, activates those nodes, and notifies those nodes that they now host the relevant shared resources.

In order to prevent nodes from disagreeing about which resources are hosted by which node, MSCS clusters use a quorum system to determine which node currently owns any given cluster resource. A special quorum resource allows the nodes in the cluster to communicate and come to consensus about resource ownership. While a two-node cluster uses a shared physical volume as the quorum resource, a cluster with three or more nodes can use a majority node set (MNS), which is a new type of quorum resource introduced in Windows Server 2003. With MNS, there is no shared physical device or resource that serves as the quorum resource; instead, all of the nodes in the cluster communicate whether they are online or not and whether cluster resources need to be failed over to another node.

In order to protect clustered SQL Server nodes with DPM, the DPM protection agent must be installed on each cluster node that may possibly own the database resources you are protecting. As DPM is fully cluster-aware, this allows DPM to continue protecting your SQL data even if an unplanned failover happens and the database is shifted to another node in the cluster. DPM can then alert you that an unplanned failover has taken place and request a consistency check of the affected databases.

If the database fails over to a node that doesn't have the protection agent installed, DPM will not be able to continue protecting the data. In addition, DPM cannot protect the quorum resource. This is usually not a problem, because if you're following best practices for Microsoft clustering, the quorum resource should be a separate resource from all other resources that contain actual data, used only by the cluster quorum process.
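The two rules just described (an MNS cluster keeps quorum only while a majority of its nodes can communicate, and DPM keeps protecting a failed-over database only if the new owner node has the agent installed) can be sketched as a toy model. The node names and data structures here are invented for illustration; real MSCS behavior is considerably more involved.

```python
# Toy model of the two cluster rules discussed above.

def has_mns_quorum(nodes_online, total_nodes):
    """Majority node set: the cluster keeps quorum only while more than
    half of the configured nodes are still communicating."""
    return nodes_online > total_nodes // 2

def protection_continues(new_owner, agent_installed):
    """DPM keeps protecting a database after failover only if the node
    that now owns it has the protection agent installed."""
    return new_owner in agent_installed

# Three-node MNS cluster: losing one node keeps quorum, losing two does not.
assert has_mns_quorum(2, 3)
assert not has_mns_quorum(1, 3)

# Agent on NODE1 and NODE2 only; a failover to NODE3 breaks protection.
agents = {"NODE1", "NODE2"}
assert protection_continues("NODE2", agents)
assert not protection_continues("NODE3", agents)
```

The last two assertions are exactly why DPM requires the agent on every node that could possibly own a protected database resource.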
Clustering Types

Just so that we're on the same page, let's clarify what we mean when we refer to a clustered configuration. As it turns out, when we talk about clustering in Windows servers, there are three categories of behavior we could mean: failover clusters (also called server clusters), component load-balancing clusters, and network load-balancing clusters. Each type of clustering can be implemented using native Windows technologies; all three offer their own benefits and disadvantages and are intended to solve different problems:

Failover clusters are what most people think of when they talk about clustering. These clusters are a collection of two or more servers (usually up to a maximum of eight, depending on the workload) that share a common set of resources such as storage in order to ensure data integrity and provide high availability. There are two types of nodes in a failover cluster: the active node, which provides service to incoming connections, and the passive node, which is on standby to take over providing service when an active node goes offline. Using these types of nodes, you can create either active/passive clusters, in which one or more passive nodes act as spares for some (usually greater) number of active nodes, or active/active clusters, in which active nodes have enough spare capacity to take over operations from other failed nodes. To create a failover cluster, you need some sort of cluster service software, such as the Microsoft Cluster Service (MSCS) component (included in the Enterprise Edition of Windows Server) or some third-party clustering service. You also usually need some sort of shared storage, such as a SAN or iSCSI SAN.

Component load-balancing clusters are groups of servers that are designed to have key software components work together, using the COM+ Services included in Windows, to provide high availability and scalability for application systems that use transactions. In plain English, these clusters allow you to deploy a farm of servers that handle the middle tier of multi-tier applications. These clusters rely on services provided by the Windows operating system, but require additional specific application software.

Network load-balancing clusters are groups of servers that provide load balancing and some limited failover capability for front-end services. These types of clusters are ideal for web servers and other relatively stateless protocols, as the individual server nodes in the cluster don't share state information on active connections the same way that failover clusters (and cluster-aware applications such as Exchange and SQL Server) do. In Windows, you use the Network Load Balancing component to configure this functionality, although you can use third-party software or even a hardware appliance to provide high-end load-balancing services.

When we talk about clusters in this book, it's safe to assume that we're referring to failover clusters using MSCS. If we mean something different, we'll call it to your attention. Microsoft offers more information about the different types of clustering on the Overview of Windows Clustering Technologies TechNet website: http://technet2.microsoft.com/windowsserver/en/library/c35dd48b-4fbc-4eee-8e5c2a9a35cf63b21033.mspx?mfr=true.

System State

DPM includes the ability to protect and recover the local system state for any protected server. System state backups of SQL Server machines do not directly affect your ability to protect and restore databases; you can always recover a database to a different server whether the system state data is available or not. Table 8.2 includes a listing of the types of data included in the system state for different types of servers that are likely to act as SQL Server machines. Note that the roles listed in Table 8.2 are cumulative; for example, a domain controller is also considered a member server, so its system state data includes the data listed for both roles.
Table 8.2: Data Contained in the System State

Server Role: Member server
System State Data: Boot files. The COM+ class registration database. Registry hives.

Server Role: Domain controller
System State Data: Active Directory (NTDS) files. The system volume (SYSVOL). Other applicable components.

Server Role: Certificate Services Certificate Authority
System State Data: All Certificate Services data. Other applicable components.

Server Role: Cluster node
System State Data: Cluster Service metadata. Other applicable components.

In many organizations, domain controllers often pull double- or even triple-duty: domain controllers, infrastructure services such as DNS and DHCP, and file servers. You can protect volumes and file shares on domain controllers just as you would on a member server, but infrastructure service data may or may not be directly protected. The general rule is this: if the data to be protected is in Active Directory (such as Active Directory-integrated DNS zone data), system state protection will protect it, but to recover it, you'll have to recover the entire system state. On the other hand, you can configure protection for individual files for those services that store their data in separate files and even recover them independently, but you have to identify and restore the files on your own, increasing the risk of corrupting something.

We hope that you're not making your SQL Server machines do double-duty as domain controllers as well. Although this configuration is supported by Microsoft (mainly to allow support for the Small Business Server SKU), doing so has important implications for memory usage on your server. Both Active Directory and SQL Server are memory hogs; by default, they will both use as much physical memory as the system has in order to allow better caching and performance. When you put both services on the same physical machine, they're both going to be memory-starved and unhappy. You're also complicating your disaster recovery scenario with each additional service you place on the same machine.

So here's the real question: when do you use DPM to protect system state? From our experience, we recommend that you do it all the time.
System state is insanely easy to protect with DPM; it takes up comparatively little room on most SQL Server machines, even before you factor in DPM's space-saving technologies. You never know when you're going to need it.

If you're ignoring the recommendations and putting SQL Server on a domain controller, then you really do need to capture system state; you'll need a functional domain controller in order to rebuild, and if it's the same machine as your SQL Server, you've got a chicken-and-egg problem without the system state. Better yet, don't combine the domain controller and SQL Server roles, and protect the system state of both machines with DPM.

If you're using some of the advanced protection and service continuation options you have when using DPM in conjunction with Virtual Machine Manager, keeping the system state protected is an essential part of your recovery strategy. While the P2V capabilities of VMM are sufficient to protect the base operating system and program files, you'll need the system state to restore the virtual machine to the last known state. For this reason, you should protect the server's system state in the same protection group that you protect the rest of its data; this ensures that the entire server can be consistently restored to a known point in time.
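Because the roles in Table 8.2 are cumulative, what a system state backup captures can be thought of as a simple union over the roles a server holds. The sketch below is just a mnemonic for the table, not how DPM itself enumerates components, and the component names are abbreviated from the table.

```python
# Mnemonic sketch of Table 8.2: system state contents are cumulative by
# role, so a server's backup is the union over every role it holds.

ROLE_DATA = {
    "member server": {"boot files", "COM+ class registration database",
                      "registry hives"},
    "domain controller": {"Active Directory (NTDS) files", "SYSVOL"},
    "cluster node": {"Cluster Service metadata"},
}

def system_state(roles):
    """Return the combined system state data for a server's roles."""
    data = set()
    for role in roles:
        data |= ROLE_DATA[role]
    return data

# A domain controller is also a member server, so it gets both sets.
dc = system_state(["member server", "domain controller"])
assert "SYSVOL" in dc and "registry hives" in dc

# A plain member server does not capture Active Directory data.
assert "SYSVOL" not in system_state(["member server"])
```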
Protected Data Sources

When you're protecting SQL data with DPM, there aren't a lot of choices about which resources you want to protect. Unlike other applications, DPM only allows you to specify protection at the database level. While this may seem like a limitation, if you think about the nature of SQL data, it actually makes sense. If you already know SQL Server well, feel free to skip the rest of this section. As we've mentioned briefly before, SQL Server contains data in a variety of different objects. The main objects we're concerned about are the following:

Tables are a collection of data entries with similar characteristics. A table is composed of at least one row of data (usually more, up to hundreds of thousands or even millions in larger databases), each representing a specific entry. Each row consists of one or more columns that represent specific fields of data used by the entry. As an example, a blog application may use one table to hold all posts and comments; each row would be one entry, with fields for the posting time, whether the entry is a comment or post, the specific blog or post the entry is attached to, and the actual content of the entry.

Views look a lot like tables; however, they represent specific prepackaged queries, often complicated joins that involve associated columns from multiple tables, combined with specific filters. Views provide a shortcut and convenience for application developers; they can simply query the view instead of having to know the precise structure of the underlying tables. If the tables are changed, the views can also be re-created without requiring the application to be recompiled.

Stored procedures are executable SQL scripts that are associated with a particular database. They allow SQL developers to provide a level of data access abstraction in the database by providing a well-defined interface to perform complicated data retrievals and updates that affect multiple tables. By embedding any necessary business logic into these scripts, stored procedures allow application developers to make a simple call into the database (much like a function call) without requiring detailed knowledge of the underlying database structures. Like views, stored procedures can be changed as the underlying tables are modified, preventing or reducing the need to recompile application code.

An index is a data structure that allows queries to be performed more quickly. At a basic level, indices are sorted collections of one or more columns in a table. SQL Server creates some indices by default as new tables are created, but database administrators often create additional indices to increase the performance of common queries. If a table is restored without all of the corresponding indices, the resulting performance could be crippled.

Databases are collections of tables, views, stored procedures, indices, and other information. While databases are traditionally used one per application, there's no mechanism within SQL Server to enforce this; sometimes it makes sense for multiple applications to share the same database, or even tables within the database. Databases are the only data structures to have a physical representation on the SQL Server machine; each database is represented on disk by at least two files: the main database file and the associated transaction log file. As the database grows, SQL Server may either extend the size of the existing files or create additional ones to store additional data.

Instances are effectively multiple copies or installations of SQL Server running on the same computer. There are two types of instances: default instances and named instances; if you don't specify a name for the instance when you install it, SQL Server will make it the default instance. In DPM, instances appear as subcontainers between the physical machine (or cluster node) and the databases you can protect. The only consideration for an instance is ensuring that it is patched to the appropriate level.

When SQL Server is installed, it creates several system databases. These databases hold configuration information, such as which other databases are configured and mounted on the server, or even example data. These databases probably don't need to be protected, although your database administrator should be able to tell you if there's any critical information there. Remember that DPM understands SQL Server's system data structures; when you recover databases to a server, it will create the appropriate system database entries, so you don't need to protect them just to ensure that your database integrity is safe.

When selecting your protection schedule, try to avoid creating recovery points (with their associated express full backup) during periods of peak activity; if possible, schedule recovery points immediately after high-use periods. While the synchronization process will probably have little impact on your SQL Server performance, creating a recovery point during these times is not to your greatest advantage. When you create a recovery point halfway through a busy period, you are only capturing some of the data generated. In the event of a data loss event, your recovery could potentially miss large amounts of data.

When you consider application databases that are not critical to your immediate recovery needs, you may be able to get by with a less frequent protection schedule. As an example, an SCCM or SMS database contains data about the client systems on your network. While recovering this database may be important in the event of data loss, it does not need to be up-to-the-minute to be effective. Both SCCM and SMS update their databases with periodic polls of the client computers; even if the data is not entirely up to date after the restore, the applications will interrogate the client machines and get the latest data. Know the behavior of your applications; if you don't know, ask the people who do. The answers will help you determine your protection needs.
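The scheduling advice above (land recovery points just after a busy period ends, never in the middle of one) is easy to express as a rule of thumb. The helper below is hypothetical, and the busy windows are made up; substitute whatever peak periods your own monitoring shows.

```python
# Sketch of the scheduling rule above: given known busy windows (hours on
# a 24-hour clock), prefer the hour immediately after each window ends
# and reject any hour that falls inside one. Windows here are invented.

def in_busy_window(hour, windows):
    """True if a recovery point at this hour would land mid-peak."""
    return any(start <= hour < end for start, end in windows)

def suggested_recovery_points(windows):
    """One recovery point right after each high-use period ends."""
    return sorted(end % 24 for _, end in windows)

# Busy 9:00-12:00 and 13:00-17:00 -> schedule points at 12:00 and 17:00.
busy = [(9, 12), (13, 17)]
assert suggested_recovery_points(busy) == [12, 17]
assert in_busy_window(10, busy)        # mid-morning: avoid
assert not in_busy_window(12, busy)    # right after the morning peak: fine
```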

Backup Procedures
Although we've stated previously that protection of SQL Server databases is relatively simple with DPM, you should still keep your application and data recovery needs in mind. Let's take a look at the actual steps involved. The procedures for backing up standalone SQL Server machines and clustered configurations differ slightly, so we've divided them into separate sections.
Standalone Configurations

SQL Server data is even simpler to protect in DPM than other types of data, if only because you don't have nearly as many choices of data sources to protect. SQL Server support was not present in DPM 2006 (unless you first used the SQL Agent to produce database dump files to the file system) and is a key feature in DPM 2007. There are two basic steps to protecting standalone SQL Servers with DPM:

1. Install the protection agent on the protected SQL Server machines.
2. Configure protection by assigning database resources to a protection group.

Let's start by reviewing how to install the protection agent on your SQL servers.
INSTALLING THE PROTECTION AGENT

We already covered the general steps for installing the DPM protection agent in Chapter 2, so if you've already installed the agent on your SQL Server machines, you're good to go. If you haven't, here's a recap:

1. Open the DPM Administrator console, navigate to the Management tab, and select the Agents subtab.
2. Click Install in the Actions pane.
3. From the left pane, select the servers you want to protect, as shown in Figure 8.2, and click Add.

Figure 8.2: Choosing servers for agent install

4. When all of the servers you want to protect are in the right pane, click Next.
5. Enter the credentials for a user with administrative rights on the selected servers, as shown in Figure 8.3, and click Next.

Figure 8.3: Enter credentials for agent install

6. Once the agent install has completed, you will not be able to begin protecting your servers until they have been restarted. Choose whether you want the servers to reboot now or later, as shown in Figure 8.4, and click Next.

Figure 8.4: Choose restart method

7. A Summary screen will appear as shown in Figure 8.5, showing the choices you have made. Click Install to proceed with the agent install, or click Back to change your options.

Figure 8.5: Protection agent install summary

8. The final screen will display the agent install progress. You can click Close, and the current status and progress will be displayed in the Agents subtab.

Once the protected SQL Server machine reboots and DPM verifies the connection with the agent, you will see the list of data sources that DPM can protect. Remember that while you need to install the agent on all nodes in a SQL Server cluster in order to get full protection, as soon as you reboot the first node in the cluster you will see the resources available on it. You may need to install the agent and reboot the cluster nodes in multiple sessions to prevent disruption of services for your users.
PROTECTING SQL SERVER DATABASES

You can add SQL Server databases to an existing protection group or create a new protection group. The following process assumes that you're creating a new protection group; if you want to add SQL databases to an existing protection group, simply open the protection group and select the databases you want to add. To create a new protection group for your SQL Server databases:

1. Open the DPM Administrator console, navigate to the Protection tab, and click Create Protection Group in the Actions pane.
2. In the Welcome screen shown in Figure 8.6, click Next.

Figure 8.6: The Create New Protection Group Welcome screen

3. In the Select New Group Members screen, expand the SQL Server machine you want to protect, and select the databases to include in the protection group by checking the boxes next to the databases, as shown in Figure 8.7.

Figure 8.7: Selecting databases to protect

4. When you have selected the databases you want to protect, click Next.
5. Choose whether this group will use short-term protection and the associated method, as well as whether to use long-term protection (if you have a tape drive or library attached to your DPM server), as shown in Figure 8.8.

Figure 8.8: Selecting a data protection method

6. Once you have chosen the protection methods, click Next.
7. Unless you have chosen not to provide short-term protection for your protection group, the next screen is where you decide how long short-term data is retained in DPM, as well as the synchronization frequency and the recovery point schedule, as shown in Figure 8.9.

Figure 8.9: The Specify Short-Term Goals screen

8. To change the schedule for the express full backups, click the Modify button. Here, you can change the frequency by adding times and checking days of the week for the selected operation to occur, as shown in Figure 8.10. When you are finished, click OK.

Figure 8.10: Changing the express full backup schedule

9. Back in the Short-Term Goals screen, click Next.
10. In the Review Disk Allocation screen, you'll see that DPM has already recommended a default allocation from the storage pool based on the amount of data being protected as well as the short-term goals you specified.
11. To change the amount of storage pool space allocated for your protection group, click Modify. Here you can change the amount of space allocated for replicas and recovery points, as shown in Figure 8.11.

Figure 8.11: Modifying disk allocation

12. Back in the Review Disk Allocation screen, click Next.
13. Unless you have chosen not to provide long-term protection for your protection group, the next screen is where you configure DPM's long-term tape retention strategy, as shown in Figure 8.12.

Figure 8.12: Specifying long-term goals

14. To change the long-term protection objectives, click Customize. You can establish a multiple-tier strategy in units of days, weeks, months, or years. You can also specify what happens if more than one of the scheduled backups happens at the same time, as shown in Figure 8.13. When you have finished making your selections, click OK.

Figure 8.13: Specifying long-term goals

15. To change the days on which long-term backups occur, click Modify. Select the appropriate day and time for each backup, as shown in Figure 8.14. When you have finished making your changes, click OK.

Figure 8.14: Modifying the backup schedule for your objectives

16. Click Next.
17. In the Select Library And Tape Details screen, choose the library to use, the number of drives from the library, integrity checking, and compression and encryption options, as shown in Figure 8.15. When you have chosen the appropriate settings, click Next.

Figure 8.15: Selecting library and tape details

18. In the Choose Replica Creation Method screen, select the method by which replicas will be created, as well as when the first one should be created, as shown in Figure 8.16. Click Next.

Figure 8.16: The Choose Replica Creation Method screen

19. The Summary screen shown in Figure 8.17 will present a summary of all of the settings you have selected for the protection group. If everything looks good, click Create Group; otherwise, click Back to make any necessary changes.

Figure 8.17: The Summary screen

That's it! You're now protecting your standalone SQL Server machines with DPM. In the next section, we'll show you how to protect clustered configurations.
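The same wizard flow can also be scripted through the DPM Management Shell, which is useful when you have many groups to create. The sketch below is illustrative rather than authoritative: the server and group names are hypothetical placeholders, and the exact parameter sets for these cmdlets (particularly Set-PolicyObjective) vary by build, so confirm them with Get-Help before relying on any of this.

```powershell
# Hedged sketch: create a disk-based protection group from the shell.
# Names ("DPM01", "SQL01", "SQL Databases") are hypothetical placeholders.
$dpmServer = "DPM01"
$pg = New-ProtectionGroup -DPMServerName $dpmServer -Name "SQL Databases"

# Find the protected SQL Server and enumerate its protectable data sources
$ps = Get-ProductionServer -DPMServerName $dpmServer |
      Where-Object { $_.ServerName -eq "SQL01" }
$ds = Get-Datasource -ProductionServer $ps -Inquire

# Add the first database, set short-term disk protection and goals,
# schedule the initial replica, and commit the group
Add-ChildDatasource -ProtectionGroup $pg -ChildDatasource $ds[0]
Set-ProtectionType -ProtectionGroup $pg -ShortTerm disk
Set-PolicyObjective -ProtectionGroup $pg -RetentionRangeInDays 10 `
    -SynchronizationFrequency 15
Set-ReplicaCreationMethod -ProtectionGroup $pg -Now
Set-ProtectionGroup $pg   # nothing takes effect until this commit
```

Note the final Set-ProtectionGroup call: as in the wizard, none of the choices are applied until the group is committed.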
Clustered Configurations

When you deploy the DPM protection agent with the E-DPML, DPM becomes cluster-aware. DPM is already extremely intelligent when protecting standalone machines, but its built-in clustering support makes it even easier to use DPM to protect your clustered resources. You don't really need to think about the details of the cluster configuration; you simply configure protection and know that DPM will alert you when an unplanned failover requires a consistency check. If you need to recover a database, you can do so transparently no matter which node currently owns the database.

Remember that the default cluster group in an MSCS cluster contains the quorum resource, which in a two-node cluster is a file share quorum. Quorum resources cannot be protected by DPM. For this reason, we echo Microsoft's recommendation that you keep the quorum resource in its default location and place all other cluster resources you define in a separate cluster group. Once you install the DPM protection agent on cluster nodes, the DPM Administrator console will expose the fact that the servers are cluster members. By looking in the Management tab, you'll see additional details, such as cluster groups.
INSTALLING THE PROTECTION AGENT

Installing the agent on a cluster isn't that different, but we've repeated the process below just to be complete (and to keep you from having to page all over the place):

1. Open the DPM Administrator console, navigate to the Management tab, and select the Agents subtab.
2. Click Install in the Actions pane.
3. From the left pane, select the servers you want to protect, as shown in Figure 8.2, and click Add.
4. When all of the servers you want to protect are in the right pane, click Next.
5. Enter the credentials for a user with administrative rights on the selected servers, as shown in Figure 8.3, and click Next.
6. Once the agent install has been completed, you will not be able to begin protecting your servers until they have been restarted. Choose whether you want the servers to reboot now or later, as shown in Figure 8.4, and click Next.
7. A Summary screen will appear as shown in Figure 8.5, showing the choices you have made. Click Install to proceed with the agent install, or click Back to change your options.
8. The final screen will display the agent install progress. You can click Close and the current status and progress will be displayed in the Agents subtab.

Once the protected SQL Server machine reboots and DPM verifies the connection with the agent, you will see the list of data sources that DPM can protect. Remember that while you need to install the agent on all nodes in a SQL Server cluster in order to get full protection, as soon as you reboot the first node in the cluster, you will see the resources available on it. You may need to install the agent and reboot the cluster nodes in multiple sessions to prevent disruption of services for your users.
PROTECTING DATABASES ON SQL SERVER CLUSTER NODES

Just as with a standalone server, you can add SQL Server databases on clustered configurations to an existing protection group or create a new protection group. When DPM detects that the SQL Server is a cluster node, it will automatically represent the data as part of a cluster. The following process assumes that you're creating a new protection group; just as is the case when you're protecting a standalone server, you can choose to add data sources to an existing protection group. You can even mix and match data sources from standalone servers and clusters in the same group, if you need to. Use the following steps to create a new protection group for clustered SQL Server databases:

1. Open the DPM Administrator console, navigate to the Protection tab, and click Create Protection Group in the Actions pane.
2. In the Welcome screen shown in Figure 8.6, click Next.
3. In the Select New Group Members screen, notice that the cluster shows up for protection. Expand the cluster to reveal the available cluster groups, and expand the appropriate group to reveal the databases that may be protected, as shown in Figure 8.18. Select the databases to include in the protection group.

Figure 8.18: Selecting clustered databases to protect

4. When you have selected the databases you want to protect, click Next.
5. Choose whether this group will use short-term protection and the associated method, as well as whether to use long-term protection (if you have a tape drive or library attached to your DPM server), as shown in Figure 8.8.
6. Once you have chosen the protection methods, click Next.
7. Unless you have chosen not to provide short-term protection for your protection group, the next screen is where you decide how long short-term data is retained in DPM, as well as the synchronization frequency and the recovery point schedule, as shown in Figure 8.9.
8. To change the schedule for the express full backups, click the Modify button. Here, you can change the frequency by adding times and checking days of the week for the selected operation to occur, as shown in Figure 8.10. When you are finished, click OK.
9. Back in the Short-Term Goals screen, click Next.
10. In the Review Disk Allocation screen, you'll see that DPM has already recommended a default allocation from the storage pool based on the amount of data being protected as well as the short-term goals you specified.
11. To change the amount of storage pool space allocated for your protection group, click Modify. Here you can change the amount of space allocated for replicas and recovery points, as shown in Figure 8.11.
12. Back in the Review Disk Allocation screen, click Next.
13. Unless you have chosen not to provide long-term protection for your protection group, the next screen is where you configure DPM's long-term tape retention strategy, as shown in Figure 8.12.
14. To change the long-term protection objectives, click Customize. You can establish a multiple-tier strategy in units of days, weeks, months, or years. You can also specify what happens if more than one of the scheduled backups happens at the same time, as shown in Figure 8.13. When you have finished making your selections, click OK.
15. To change the days on which long-term backups occur, click Modify. Select the appropriate day and time for each backup, as shown in Figure 8.14. When you have finished making your changes, click OK.
16. Click Next.
17. In the Select Library And Tape Details screen, choose the library to use, the number of drives from the library, integrity checking, and compression and encryption options, as shown in Figure 8.15. When you have chosen the appropriate settings, click Next.
18. In the Choose Replica Creation Method screen, select the method by which replicas will be created, as well as when the first one should be created, as shown in Figure 8.16. Click Next.
19. In the Summary screen shown in Figure 8.17, you will be presented with a summary of all of the settings you have selected for the protection group. If everything looks good, click Create Group; otherwise, click Back to make any necessary changes.

That's it! You're now protecting your clustered SQL Server machines with DPM. As we promised, it's just as easy to protect SQL clusters with DPM as it is to protect standalone SQL Server machines. DPM detects the clustering service when it is installed (and when you use the appropriate license) and automatically extends the selection tree with the cluster configuration accordingly. Next up: restoring your databases. This has traditionally been a pain point for many administrators; let's see how DPM makes it easy and almost, well, fun. At least it's more fun than doing it with traditional backup software.

Restore Procedures
We've said it before and will say it again: DPM makes data protection a straightforward process. The real magic happens when you need to recover your data. DPM makes the recovery process simple by using well-designed wizards to help you get the data you want to the destination you need. You'll see pretty much the same interface regardless of where you're recovering your database; there is little difference in recovering a default instance, named instance, or clustered instance of SQL Server. When you recover SQL databases with DPM, there are no real differences between standalone and clustered configurations. You do, however, have several options for where you restore the data to:

You can recover the data to the original SQL Server instance. If the original instance is a clustered node, the data will be recovered to the node that currently owns the database.

You can recover the data to an alternative SQL Server instance, whether a standalone or clustered configuration. The destination server holding the SQL Server instance must also have the DPM agent installed.

You can write the database files to a folder on a file server. When you choose this option, DPM will write the database and transaction log files to the selected location, just as would be produced if you did a VSS-aware database dump. The destination server must have the DPM agent installed.

You can choose to write a copy of the data to tape. While this may not initially seem useful, it can be valuable in many electronic discovery or regulatory compliance scenarios.

Enough chit-chat; let's get some recovery going!


Restoring to the Original Instance

Recovering your database to the original instance is one of the most likely types of recovery you'll perform in a production environment. If your SQL Server moves on to the big datacenter in the sky due to some catastrophic failure, your quickest route back to service is likely to be to rebuild the server (and perhaps perform a system state restore), then recover the databases to the original instance. To recover databases to their original instance, use the following steps:

1. Open the DPM Administrator console and navigate to the Recovery tab.
2. In the Protected Data pane, expand the available data sources and select the data source that you want to recover, as shown in Figure 8.19.

Figure 8.19: Selecting a database to recover

3. Select the desired recovery point from the list provided and click Recover.
4. On the Review Recovery Selection screen, ensure that you have chosen the correct items to recover, as shown in Figure 8.20. When you are satisfied with your selections, click Next.

Figure 8.20: The Review Recovery Selection screen

5. On the Select Recovery Type screen shown in Figure 8.21, select the Recover To The Original Instance option and click Next.

Figure 8.21: Choosing a recovery type

6. In the Specify Database Recovery Completion State screen, select whether you want to bring the database back online or to keep it offline when the recovery operation is complete, as shown in Figure 8.22.
   o Leave Database Operational: With this option, DPM recovers the database to the state it was in when the VSS snapshot was taken at the specified recovery point, including replaying any necessary transaction logs, and then remounts the database.
   o Leave Database Operational But Able To Restore Additional Transaction Logs: DPM recovers the database to the state it was in when the VSS snapshot was taken at the specified recovery point, including replaying any necessary transaction logs. With this option, the database is kept offline, giving you the option to later replay additional transaction logs and bring the database forward to a later point in time. This is useful under specific circumstances, such as needing to recover data that was overwritten between recovery points. If there are additional transaction logs available, DPM will give you the option to copy them.

Figure 8.22: Selecting the recovery state of the database

7. Click Next.
8. On the Specify Recovery Options screen shown in Figure 8.23, choose your desired recovery options:
   o Network Bandwidth: To adjust the network bandwidth used by the restore process, click Modify. In the new window, specify a maximum usable amount of bandwidth for work hours and non-work hours, then click OK.
   o Email Notifications: You can enable email notifications and specify one or more recipients.

Figure 8.23: Selecting recipients for job notifications

9. On the Summary screen shown in Figure 8.24, review your choices; if you need to make any corrections, click Back to move back through the configuration options. When you are satisfied, click Recover.

Figure 8.24: The Summary screen

10. DPM displays a Status window for the recovery operation. Instead of keeping this window open, you may close it and track the recovery progress in the DPM Administrator console.

When the recovery operation completes, the database as captured in the selected recovery point will be restored to its original location on the protected SQL Server machine. If the database exists on the machine before you begin recovery, it will be replaced with the recovered version. Be careful when using this option!
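Recoveries can also be driven from the DPM Management Shell, which is handy when you have many databases to restore. Treat the following as a sketch: every name is a hypothetical placeholder, and New-RecoveryOption in particular has an intricate, version-specific parameter set, so verify the exact syntax with Get-Help New-RecoveryOption -Full before relying on it.

```powershell
# Hedged sketch: recover the latest recovery point of one database
# back to its original instance. All names are hypothetical.
$dpmServer = "DPM01"
$pg = Get-ProtectionGroup -DPMServerName $dpmServer |
      Where-Object { $_.FriendlyName -eq "SQL Databases" }
$ds = Get-Datasource -ProtectionGroup $pg |
      Where-Object { $_.Name -like "*AdventureWorks*" }

# Pick the most recent recovery point for the database
$rp = Get-RecoveryPoint -Datasource $ds |
      Sort-Object RepresentedPointInTime |
      Select-Object -Last 1

# Build a "recover to original instance" option and start the job;
# confirm the parameter set your build expects before running this
$opt = New-RecoveryOption -SQL -TargetServer $ds.ProductionServerName `
       -RecoveryLocation OriginalServer -RecoveryType Recover
Recover-RecoverableItem -RecoverableItem $rp -RecoveryOption $opt
```

As with the wizard, this replaces any existing copy of the database on the original instance, so the same caution applies.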
Restoring to an Alternative Instance

As mentioned, recovering a database to its original instance is useful when you're rebuilding a production server that has suffered some form of outage. If you do it to a live server, however, you're almost certain to overwrite your database with an older version. The ability to recover databases to an alternative instance can be extremely useful in a variety of situations:

- Testing recovery procedures
- Verifying data integrity
- Performing a migration to a new instance
- Recovering to an alternative instance to get a critical database online as quickly as possible, when rebuilding the original server will take too long

The biggest caveat for any of these scenarios (as with any recovery) is that you lose any data that isn't recorded in your selected recovery point. If you plan to use DPM to help you move a database from one instance to another, ensure that the clients and applications that use the database cannot make any writes to the database for a period of time before you perform the migration. When you're ready to migrate, manually create a recovery point, then perform the recovery to the new instance using this new recovery point. This way, when you bring the database up in the new location, no transactions will be lost. To recover databases to an alternative instance, use the following steps:
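If you're scripting such a migration, the manual recovery point itself can be created from the DPM Management Shell. This is a sketch under the assumption that your build exposes a New-RecoveryPoint cmdlet with the parameters shown; the names are placeholders, so confirm the syntax with Get-Help in your environment before use.

```powershell
# Hedged sketch: force an express full backup (a fresh recovery point)
# for one database just before a migration. Names are hypothetical.
$dpmServer = "DPM01"
$pg = Get-ProtectionGroup -DPMServerName $dpmServer |
      Where-Object { $_.FriendlyName -eq "SQL Databases" }
$ds = Get-Datasource -ProtectionGroup $pg |
      Where-Object { $_.Name -like "*AdventureWorks*" }

# Create a disk-based recovery point via an express full backup
New-RecoveryPoint -Datasource $ds -Disk -BackupType ExpressFull
```

Once the job completes, the newest recovery point contains every committed transaction, so recovering it to the new instance loses nothing.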

1. Open the DPM Administrator console and navigate to the Recovery tab.
2. In the Protected Data pane, expand the available data sources and select the data source that you want to recover, as shown in Figure 8.19.
3. Select the desired recovery point from the list provided and click Recover.
4. On the Review Recovery Selection screen, ensure that you have chosen the correct items to recover, as shown in Figure 8.20. When you are satisfied with your selections, click Next.
5. On the Select Recovery Type screen shown in Figure 8.21, select the Recover To An Alternate Instance option and click Next.
6. In the Specify Alternate Database And Instance For Recovery screen shown in Figure 8.25, you can either type in the name of the desired SQL Server machine and instance, or click Browse to see a list of available instances, as shown in Figure 8.26.

Figure 8.25: The Specify Alternate Database And Instance For Recovery screen

Figure 8.26: Browsing for an alternative instance and database

7. In the Specify Database Recovery Completion State screen, select whether you want to bring the database back online or to keep it offline when the recovery operation is complete, as shown in Figure 8.22.
   o Leave Database Operational: With this option, DPM recovers the database to the state it was in when the VSS snapshot was taken at the specified recovery point, including replaying any necessary transaction logs, and then remounts the database.
   o Leave Database Operational But Able To Restore Additional Transaction Logs: DPM recovers the database to the state it was in when the VSS snapshot was taken at the specified recovery point, including replaying any necessary transaction logs. The database is kept offline, however, so you can replay additional transaction logs to bring the database to a later point in time. If additional transaction logs are available, DPM will give you the option to copy them.
8. Click Next.
9. On the Specify Recovery Options screen shown in Figure 8.23, choose your desired recovery options:
   o Network Bandwidth: To adjust the network bandwidth used by the restore process, click Modify. In the new window, specify a maximum usable amount of bandwidth for work hours and non-work hours, and then click OK.
   o Email Notifications: You can enable email notifications and specify one or more recipients.
10. On the Summary screen shown in Figure 8.27, review your choices; if you need to make any corrections, click Back to move back through the configuration options. When you are satisfied, click Recover.

Figure 8.27: Recovering to an Alternate Instance Summary screen

11. DPM displays a Status window for the recovery operation. Instead of keeping this window open, you may close it and track the recovery progress in the DPM Administrator console.

When the recovery operation completes, the version of the database captured in the recovery point will be restored to the alternative instance you selected. Remember that this instance can be on any protected SQL Server machine, regardless of whether it is standalone or clustered.
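If you chose the option that leaves the database able to restore additional transaction logs, the roll-forward itself is done with standard T-SQL RESTORE statements rather than through the DPM console. The commands below use the standard SQL Server syntax, but the server, database, and file names are hypothetical placeholders for your own environment.

```powershell
# Hedged sketch: replay an additional transaction log backup against a
# database that DPM left in a restoring state, then bring it online.
# Server, database, and file names are hypothetical.
sqlcmd -S SQL01 -E -Q "RESTORE LOG AdventureWorks FROM DISK = 'D:\Restores\AW_tail.trn' WITH NORECOVERY"

# When all logs have been applied, complete recovery and mount the database
sqlcmd -S SQL01 -E -Q "RESTORE DATABASE AdventureWorks WITH RECOVERY"
```

The WITH NORECOVERY clause keeps the database in its restoring state so further logs can be applied; the final RESTORE ... WITH RECOVERY is what brings it back online.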
Restoring to a Network Folder

Restoring a database to a network folder can be useful in situations where you need access to the physical database files, such as copying a production database to import into another system (such as a development server or testing virtual machine) that isn't protected by DPM. Use the following steps to restore to a network folder:

1. Open the DPM Administrator console and navigate to the Recovery tab.
2. In the Protected Data pane, expand the available data sources and select the data source that you want to recover, as shown in Figure 8.19.
3. Select the desired recovery point from the list provided and click Recover.
4. On the Review Recovery Selection screen, ensure that you have chosen the correct items to recover, as shown in Figure 8.20. When you are satisfied with your selections, click Next.
5. On the Select Recovery Type screen shown in Figure 8.21, select the Copy To A Network Folder option and click Next.
6. In the Specify Destination screen shown in Figure 8.28, click Browse to choose a location for the restore, as shown in Figure 8.29.

Figure 8.28: The Specify Destination screen

Figure 8.29: Browsing for a network location

7. On the Specify Recovery Options screen shown in Figure 8.30, choose your desired recovery options:
   o Network Bandwidth: To adjust the network bandwidth used by the restore process, click Modify. In the new window, specify a maximum usable amount of bandwidth for work hours and non-work hours, as shown in Figure 8.31, and then click OK.
   o Restore Security: You can specify whether to use the security settings as they currently exist on the destination, or apply the settings from the recovery point (if they differ).
   o Email Notifications: You can enable email notifications and specify one or more recipients.

Figure 8.30: The Specify Recovery Options screen

Figure 8.31: Modifying network bandwidth throttling

8. On the Summary screen shown in Figure 8.32, review your choices; if you need to make any corrections, click Back to move back through the configuration options. When you are satisfied, click Recover.

Figure 8.32: The Recover To Network Location Summary screen

9. DPM displays a Status window for the recovery operation. Instead of keeping this window open, you may close it and track the recovery progress in the DPM Administrator console.

When the recovery operation completes, the database and transaction log files corresponding to the version of the database captured in the recovery point will be restored to the network folder location you selected. You can then use these database and transaction log files as you need. The server you recover these files to does not need to be a SQL Server machine, but it does need to have the DPM protection agent installed.
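Once the files are on disk, a common next step is attaching them to another SQL Server instance. The command below uses the standard CREATE DATABASE ... FOR ATTACH T-SQL syntax; the server name, database name, and file paths are hypothetical examples, so substitute your own.

```powershell
# Hedged sketch: attach the recovered .mdf/.ldf files to a development
# instance. "DEVSQL01" and the paths are hypothetical placeholders.
sqlcmd -S DEVSQL01 -E -Q "CREATE DATABASE AdventureWorks ON (FILENAME = 'D:\Restores\AdventureWorks.mdf'), (FILENAME = 'D:\Restores\AdventureWorks_log.ldf') FOR ATTACH"
```

This is exactly the scenario described above: the development server never has to be protected by DPM or touched by the recovery wizard beyond receiving the files.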
Copy to Tape

With this option, you can create an on-tape copy of your database from any selected recovery point. This option may sound somewhat crazy at first; your data is already backed up on disk (or on tape, if it's been long enough), so why would you need another copy on tape? If you can't think of a reason now, don't discount the option. Many administrators need the ability to create tape copies of their data; how often do you need to send data to people with whom you don't have direct network connections? It is becoming increasingly common to need a tape copy to comply with electronic discovery queries or satisfy audit requests in regulatory compliance scenarios. As with recovering other types of data with this option, you don't have the ability to filter the data according to arbitrary criteria; you get the whole database. We suspect it will be a long time (if ever) before any future version of DPM offers this functionality for SQL databases, for many of the same reasons why you can't currently recover SQL data at any level more granular than the whole database. There's too much potential dependency locked into a database. Use the following steps to copy a recovered database to tape:

1. Open the DPM Administrator console and navigate to the Recovery tab.
2. In the Protected Data pane, expand the available data sources and select the data source that you want to recover, as shown in Figure 8.19.
3. Select the desired recovery point from the list provided and click Recover.

4. On the Review Recovery Selection screen, ensure that you have chosen the correct items to recover, as shown in Figure 8.20. When you are satisfied with your selections, click Next.
5. On the Select Recovery Type screen shown in Figure 8.21, select the Copy To Tape option and click Next.
6. In the Specify Library screen, as shown in Figure 8.33, select the tape device to use (if you have more than one), customize your tape label, specify your desired compression and encryption options, and click Next.

Figure 8.33: The Specify Library screen

7. On the Specify Recovery Options screen shown in Figure 8.23, choose your desired recovery options:
   o Network Bandwidth: To adjust the network bandwidth used by the restore process, click Modify. In the new window, specify a maximum usable amount of bandwidth for work hours and non-work hours, then click OK.
   o Email Notifications: You can enable email notifications and specify one or more recipients.
8. On the Summary screen shown in Figure 8.34, review your choices; if you need to make any corrections, click Back to move back through the configuration options. When you are satisfied, click Recover.

Figure 8.34: The Copy To Tape Summary screen

9. DPM displays a Status window for the recovery operation. Instead of keeping this window open, you may close it and track the recovery progress in the DPM Administrator console.

When the recovery operation completes, you'll have a copy of the database on tape. While it may be easier to copy the database to some other media, such as an external hard drive, by using the option to recover to a network folder, remember that the DPM protection agent must interact with NTFS-formatted volumes. Because some external devices don't support NTFS, you may find it easier to use tapes to avoid these types of drive formatting issues.

The Bottom Line


Determine the prerequisites for installing the DPM protection agent on SQL Server machines. You need to ensure that your protected SQL Server machines are running the necessary versions of the Windows operating system, service packs, and SQL Server software.

Master It

1. Perform a survey of your SQL Server machines to ensure that they are compatible with the DPM protection agent:
   o What version of Windows Server, Windows service pack, and SQL Server are you running on the SQL Server machines you want to protect?
   o What instances are installed on your SQL Server machines? Which ones need to be protected?
2. Given a file server that has no other roles, what data will DPM capture as part of the system state? How does this differ from a cluster node system state?

Configure DPM protection for standalone and clustered SQL Server machines. Once the DPM agent is deployed, you must configure protection groups and select data sources to protect.

Master It

1. What types of SQL Server instances can DPM protect?
2. What is the difference between synchronization and express full backups of SQL Server databases in DPM?
3. What DPM licenses do you need to protect standalone servers? What DPM licenses do you need to protect clustered servers?

Recover protected SQL Server databases. Protecting your databases is only half of the job; you also need to be able to recover them.

Master It

1. To where can you recover SQL databases?
2. At what level can you restore SQL data?
3. What are the differences between recovering data to a standalone server and a cluster?

Chapter 9: Protecting SharePoint Servers


Overview
"How does it work?"

"Well, you have front-end servers in a farm hosting the websites, an application layer that handles advanced services like Excel, search, and InfoPath forms, and a database that holds the information."

"OK, how does it work?"

Ryan, trying to explain SharePoint to someone who has never used it

Ryan's not the only person in the world who has had a tough time explaining the concepts behind SharePoint. For many years, the trick was finding people who'd even heard of it. Although SharePoint has been around as an actual product since 2001, it wasn't until Microsoft released the Office System 2003 wave of products that people really began to hear about it. Even if you could corner someone who knew about SharePoint, getting a coherent description of what it was and what you could do with it was as entertaining as Ryan's quote. Before we can really start talking about how to protect SharePoint data with DPM, let's first explore SharePoint's history, see how it has developed into the product it is today, and get an understanding of where that data is stored. The modern versions of SharePoint are vastly different from their origins, and the architectural changes between even the last two versions are great enough to produce a significant effect on your ability to protect the data within them. SharePoint's evolution as a product starts back in the late 1990s, before the introduction of Windows 2000, Active Directory, and the .NET framework. The Internet and World Wide Web were clearly important to the continued future growth of the software industry as a whole; as a result, software designers were taking key Internet technologies such as HTTP and figuring out how to use them in future releases.
As Microsoft began working on the 2000 wave of products, they also began working on a variety of offerings that would allow their customers to start using new web technologies built on top of the then-current platform of Windows NT 4.0, Internet Information Server (IIS) 4.0, and Active Server Pages (ASP). To encourage developers to understand and use ASP, Microsoft released a number of starter kits, pre-written applications complete with source code to provide a base implementation of new technologies. In 1999, Microsoft released the Digital Dashboard Starter Kit; this kit provided a simple web portal framework that developers could use to get a head start on creating their own ASP-driven portal websites. The Digital Dashboard included the concept of nuggets, small code components that helped gather and display information from a variety of sources. Developers didn't have to worry about all of the various tasks involved in creating web-based applications; by using the Digital Dashboard and ASP in combination with the various Windows COM interfaces, they could instead create small nuggets to interface with applications and programs on their network. By 2000, Microsoft had released a third version of the Digital Dashboard and renamed it the Digital Dashboard Resource Kit.

At the same time, Microsoft was working on the 2000 wave of product releases, including Windows 2000 Server (which contained Active Directory and IIS 5.0), SQL Server 2000, and Exchange 2000. This development cycle produced two key projects. The first was the Tahoe project, an add-on to Exchange 2000 to provide support for the Web Distributed Authoring and Versioning (WebDAV) protocol as well as document indexing and search capabilities. The second was the Office Web Server, a web-based collaboration engine for Office documents, created by the Office team to allow multiple users to work on the same document. In 2001, Microsoft continued these trends by releasing three products:

- The Office Web Server was expanded to a free Office add-on called SharePoint Team Services (STS), aimed at providing document management. STS was what would later be called Windows SharePoint Services (WSS) 1.0.
- The Tahoe project produced SharePoint Portal Server (SPS) 2001, a departmental portal system with some document management capabilities. This was based on many features of both the Digital Dashboard and Tahoe.
- Microsoft purchased NCompass Labs and produced a new version of their web content management product. This new version was rebranded as Content Management Server 2001 and was intended to work with Commerce Server.

Having multiple products with largely overlapping feature sets didn't make anyone happy: not the end-users, the developers, or even the folks at Microsoft. Losing sales to the competition never makes anyone happy; when that competition is a product made by another team in your company, it makes you even less happy. The key focus was on the portal functionality; although document management was nice to have, it wasn't a major driver for most purchases. At the same time, the .NET framework and ASP.NET had been introduced and needed to be integrated into the next generation of products to replace the ASP-driven Digital Dashboard. Finally, SPS used the local Web Store, a discontinued storage technology, and needed to be rewritten to use SQL Server as its storage system. To meet these goals, Microsoft combined the STS (by now renamed WSS) and SPS teams into a single product team. The relationship between these technologies was clarified: WSS would be a free add-on to Windows Server and provide the basic SharePoint capabilities, such as a native ASP.NET framework, modular web parts (the former nugget system), SQL-based content storage, and basic document management capabilities. SPS would use WSS as a key component and expand on it to provide integrated portal management, administration for multiple WSS sites, and rudimentary indexing and search capabilities. Both WSS and SPS were now compatible with Visual Studio and .NET, allowing developers to extend them. In 2002, Microsoft shipped the CMS 2002 upgrade, which had been reworked to use ASP.NET. However, it was still a separate product from the SharePoint offerings. In 2003, Microsoft released the next version of the SharePoint components, WSS 2.0 and SPS 2003, as part of the Office System 2003 application wave. Now we can move forward to the 2007 generation of Office products. WSS 3.0 and Microsoft Office SharePoint Server (MOSS) 2007 are now key parts of the Office 2007 release and offer even more benefits than before:

- They've upgraded to the .NET framework version 2.0. ASP.NET 2.0 natively includes web parts, allowing WSS to use a wider selection of parts and objects.
- They have been combined with CMS so that SharePoint offers a single portal, document, and content management solution.
- They integrate more tightly with the full line of Office applications.
- They offer improved search and indexing capabilities.

And that's the quick version of what SharePoint is! With all of this great functionality (and accompanying complexity), those of us in the systems administration world are left with a single, yet significant question: "How do I protect this monster?" For that, you'll need to read on. In this chapter, you will learn to:

- Determine the prerequisites for installing the DPM protection agent on SharePoint servers
- Configure DPM protection for SharePoint farms
- Recover protected SharePoint farms

Considerations
DPM 2007 is capable of protecting SharePoint data just as easily as it does the other types of data we've discussed. When you want to protect SharePoint data with DPM, you simply select the appropriate SharePoint farm to add to the protection group; DPM hides the details of which web servers and SQL Server databases are part of the SharePoint farm. This means that whenever you add or remove servers from a SharePoint farm, DPM automatically reads the SharePoint configuration so it can update itself and transparently take any changes into account. However, there's a caveat to this: this nice, easy, transparent farm-level data source access is designed to protect only WSS 3.0 and MOSS 2007, not earlier versions of SharePoint. The blame isn't with DPM, though; there's a very good reason involving the architectural improvements the SharePoint team made between versions of SharePoint. One of the biggest changes between WSS 2.0 and WSS 3.0 (and by extension, SPS 2003 and MOSS 2007) is where the data is stored. In WSS 2.0, your SharePoint data is split between the SQL Server database and the filesystems of the IIS web servers that host the SharePoint installations. By contrast, in WSS 3.0 all data, from configuration information to files, is stored solely in the SQL Server database of the farm. This change in architecture enables DPM to address a farm as a data source; the DPM protection agent doesn't really care which web servers are part of the farm or how they're configured, as long as it finds the SharePoint services installed and can query them for the appropriate SQL Server database. As a consequence, when you are protecting SharePoint data, you must first choose the farm you want to protect and then select the applicable content databases within that farm. This process is identical for both WSS 3.0 and MOSS 2007; it's going to be a bit different if you need to protect WSS 2.0 or SPS 2003 sites.
Before you begin protecting your SharePoint data with DPM, there are several areas you need to consider:

- Do your SharePoint servers meet the prerequisites for DPM protection?
- How do you need to prepare your SharePoint servers?
- Do you need to protect the system state of your SharePoint servers?

Let's examine these issues in more detail.


Prerequisites

Before we move into the details of protecting and restoring SharePoint data with DPM, you should ensure that your SharePoint servers meet the prerequisites. These requirements are shown in Table 9.1.
Table 9.1: Protected Server Software Requirements

Software Component: Application version
Description:
- Windows SharePoint Services 3.0 with the KB 941422 hotfix applied (http://go.microsoft.com/fwlink/?LinkId=100392)
- Microsoft Office SharePoint Server 2007
- Windows SharePoint Services 2.0 (using the steps in KB 915181)
- Microsoft SharePoint Portal Server 2003 (using the steps in KB 915181)
- VSS hotfix 940349 on Windows Server 2003

Software Component: DPM License
Description: The E-DPML for each SharePoint server on which you install the protection agent. You only need to install the agent on one front-end web server in the farm, however.

As always, you should thoroughly read the DPM Planning Guide, as well as the DPM release notes, to identify any further issues or concerns that may affect the protection of your SharePoint servers.
Manually Protecting WSS 2.0 and SPS 2003

We've got some good news and some bad news. First, the good news: DPM 2007 natively supports protecting Windows SharePoint Services (WSS) 3.0 and Microsoft Office SharePoint Server (MOSS) 2007. Now, the bad news: earlier versions of SharePoint are not natively supported. Don't panic, though; even without full native support, you can still protect WSS 2.0 and SPS 2003-based sites. It is, however, going to take a little more work on your part.

As you may recall, DPM 2006 was only capable of backing up filesystem-based data. Microsoft knew that this wasn't going to be very palatable to customers who had Exchange, SQL Server, and SharePoint servers they also wanted to protect, so they wrote a series of Knowledge Base articles that describe the procedures for using DPM's file-based protection to protect other workloads. One of those articles, KB 915181, shows how you can protect WSS 2.0 and SPS 2003 with DPM 2006. The basic steps are listed here:
1. Use the appropriate tools to back up your SharePoint server data. These tools vary depending on which product you're protecting. If you're protecting SPS 2003, use the SPS Data Backup and Restore tool (Spsbackup.exe); if you're protecting WSS 2.0, you can choose from native SQL Server database dumps, the Stsadm.exe tool to dump site collections, or the Microsoft SharePoint Migration tool (Smigrate.exe) to dump sites and subsites. Each option has its pros and cons, which are nicely laid out by the KB article.
2. Use the Windows Scheduled Tasks tool to create a scheduled backup task using your chosen tool. This will create a regularly updated set of backup files on your filesystem that the DPM 2006 protection agent can access. To keep from bogging down SQL Server with backup traffic, these files can be on a local volume or on a network file share, but they shouldn't be on a volume that's being used for SQL Server databases or transaction log files.
3. Install the DPM protection agent on the machine and then add the backup files you created with the scheduled job in the previous step to a protection group. You can create a new group or add the files to an existing one.
These steps are pretty simple and should for the most part be just as applicable to protecting these SharePoint installations with DPM 2007 as they are to DPM 2006. There are, however, a couple of things to keep in mind:

- When you're protecting a WSS 2.0 installation, the choice of tool you use is important. For example, using Smigrate.exe doesn't produce a full-fidelity copy of your site; you lose important data. If you need a full-fidelity copy, use the Stsadm.exe tool.
- If you use the native SQL Server tools to dump your WSS 2.0 database, DPM 2007 puts you ahead of the game: you can protect that database directly using the DPM protection agent. However, this does mean putting an E-DPML on the machine (because you're protecting SQL data) instead of the S-DPML that is required to protect file data.
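As a sketch of the scheduled-dump approach from KB 915181, the following commands back up a WSS 2.0 site collection with Stsadm.exe and schedule the dump to run nightly. The site URL, backup path, task name, and schedule are placeholders of our own, not values from the KB article; adjust them for your environment (note that the WSS 2.0 version of Stsadm.exe lives under the 60\BIN directory, not 12\BIN):

```shell
:: Hypothetical example: dump a WSS 2.0 site collection to a file that the
:: DPM protection agent can then protect. URL, paths, and time are placeholders.
cd /d "%ProgramFiles%\Common Files\Microsoft Shared\web server extensions\60\BIN"

:: One-time manual test of the site collection dump:
stsadm.exe -o backup -url http://wssserver/sites/team -filename D:\WSSBackups\team.dat -overwrite

:: Put the same command in a small wrapper batch file, then schedule that
:: wrapper to run nightly at 23:00 so the dump file stays current:
schtasks /create /tn "WSS Site Backup" /sc daily /st 23:00 /tr "D:\WSSBackups\backup-team.cmd"
```

You would then add D:\WSSBackups (rather than the live database volumes) to a DPM protection group, per step 3 above.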

When protecting a SharePoint farm (regardless of version), you first must ensure that you are not protecting the content database on the SQL Server through some other protection group. The SharePoint SQL data will be protected automatically when you select the SharePoint web farm. If you are hosting other non-SharePoint databases on the same SQL Server instance, however, you will still have to protect them separately.
Clustered Configurations

The back-end SQL Server instance that hosts the SharePoint data can be in a clustered configuration; however, the SharePoint web farm itself uses a number of front-end web servers to provide load-balancing, scalability, and service resiliency. Unless you're protecting a WSS 2.0 or SPS 2003 database directly, you don't need to worry about the SQL Server configuration. When DPM protects a SharePoint farm, it manages all database access from the web front-end server that hosts the DPM protection agent. As a result, you don't need to worry about clustering issues.
System State

DPM includes the ability to protect and recover the local system state for any protected server. System state backups of SharePoint machines do not directly affect your ability to protect and restore SharePoint data; as long as the SQL Server instance and a web front-end server with the DPM protection agent are available, you can always recover the farm data whether the system state data is available or not. Table 9.2 includes a listing of the types of data included in the system state for different types of servers that are likely to act as SharePoint servers.
Table 9.2: Data Contained in the System State

Server Role: Member server
System State Data: Boot files. The COM+ class registration database. Registry hives.

Server Role: Domain controller
System State Data: Active Directory (NTDS) files. The system volume (SYSVOL). Other applicable components.

Server Role: Certificate Authority (Certificate Services)
System State Data: All Certificate Services data. Other applicable components.

Server Role: Cluster node
System State Data: Cluster Service metadata. Other applicable components.

In many organizations, domain controllers often pull double- or even triple-duty: they serve as domain controllers, perform infrastructure services such as DNS and DHCP, and serve as file servers. You can protect volumes and file shares on domain controllers just as you would on a member server, but infrastructure service data may or may not be directly protected. The general rule is this: if the data to be protected is in Active Directory (such as Active Directory-integrated DNS zone data), system state protection will protect it, but to recover it, you'll have to recover the entire system state. On the other hand, you can configure protection for individual files for those services that store their data in separate files and even recover them independently, but

you have to identify and restore the files on your own, increasing the risk of corrupting something. We devoutly hope that you're not running SharePoint on a domain controller. Although this configuration is supported by Microsoft (mainly to allow support for the Small Business Server SKU), it's a bad idea in just about any other configuration. You're complicating your disaster recovery scenario with each additional service you place on the same machine. So here's the real question: when do you use DPM to protect system state? From our experience, we recommend that you do it all the time. System state is insanely easy to protect with DPM; it takes up comparatively little room on most SharePoint servers even before you factor in DPM's space-saving technologies. You never know when you're going to need it. If you're ignoring the recommendations and putting SharePoint on a domain controller, then you really do need to capture system state. You'll need a functional domain controller in order to rebuild, and if it's the same machine as your SharePoint server, you've got a chicken-and-egg problem without the system state. Better yet, don't combine the domain controller and SharePoint roles and protect the system state of both machines with DPM. If you're utilizing some of the advanced protection and service continuation options you have when using DPM in conjunction with Virtual Machine Manager, keeping the system state protected is an essential part of your recovery strategy. While the P2V capabilities of VMM are sufficient to protect the base operating system and program files, you'll need the system state to restore the virtual machine to the last known state. For this reason, you should protect the server's system state in the same protection group that you protect the rest of its data; this ensures that the entire server can be consistently restored to a known point in time.
Protected Data Sources

When you're protecting SharePoint data natively with DPM, you get no say in the decision about which resources DPM will protect; you can have any level of protection you like, as long as it's farm-level protection. Although SharePoint contains data and content in a variety of different objects, the main objects that we may be concerned about are the following:

- A farm is a collection of one or more content databases and configuration data. Each SharePoint server that is a member of a farm contributes some role toward the maintenance and publishing of the content data within the farm. In WSS 3.0 and MOSS 2007, the farm contains all of the configuration information needed by the various server roles.
- Databases are individual SQL Server databases, each focused on one set of content data. Sometimes you're just worried about a specific group of content data objects held within the SharePoint farm; chances are, these objects are all held within the same database.
- Lists are essentially tables of content within a SharePoint site; they can contain one or more types of data objects, such as documents, calendar events, contacts, and more. By capturing a list, you are in essence capturing a collection of related objects.
- Documents are the most granular SharePoint objects you can get: a single Word document that's been uploaded to a document library, a single announcement, and so on. Due to the database-driven design of SharePoint, it's extremely difficult to deal with documents without going through the SharePoint API.

With DPM 2007 protecting WSS 3.0 or MOSS 2007, you get an extra bonus capability that is perhaps the coolest part about protecting SharePoint data: you can restore individual sites, lists, or even documents. Yes, that's right; by using DPM you get automated protection and item-level recovery. How cool is that?! In order for DPM to give you the magic of item-level recovery, though, you need to give DPM a little bit of help through some sleight of hand known as a recovery farm. The recovery farm is an intermediate restore location used by DPM to stage the data that it restores from the selected recovery point. Once the appropriate database has been placed into the recovery farm, the DPM protection agent can browse through the data structures, choose the selected data objects, and copy them back to the target SharePoint farm. While Microsoft has made no official recommendations regarding how to deploy the recovery farm or what kind of hardware it might take, we do want to suggest that you take advantage of Microsoft Virtual Server to create a recovery farm virtual machine instead of tying up dedicated hardware for what will be (we hope) a rarely used process. By using a virtual machine, you not only free up hardware but also have the advantage of Undo disks and can count on being able to quickly reset and enable your recovery farm when it is needed. To use a recovery farm, you must create a new web application on a separate SharePoint server. This web application must be called DPMRecoveryWebApplication so that DPM can automatically find it during restore operations. However, don't think that you need yet another MOSS installation; the big plus side of the DPM recovery farm design, and the relationship between WSS 3.0 and MOSS 2007, is that you don't need to have a separate MOSS install to restore individual items to a MOSS farm. Instead, you can get by with a simple WSS 3.0 installation.
All DPM needs is a site to temporarily dump your data long enough that it can use the SharePoint API to find the items you're looking to recover and pull them out. Aside from you needing to create the recovery farm, you don't have to do anything manually; DPM takes care of it all transparently for you and makes the recovery happen seamlessly.
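Once you've created the DPMRecoveryWebApplication web application on the recovery server (Central Administration is the usual route), a quick command-line sanity check from our own toolbox is to enumerate its site collections with Stsadm.exe. The server name below is a placeholder:

```shell
:: Placeholder sketch: confirm the recovery web application answers and list
:: the site collections it currently hosts (it should be essentially empty
:: before a restore). Replace recoveryserver with your recovery farm's
:: web front-end server name.
cd /d "%ProgramFiles%\Common Files\Microsoft Shared\web server extensions\12\BIN"
stsadm.exe -o enumsites -url http://recoveryserver
```

If the command returns an error rather than a (possibly empty) site list, fix the recovery web application before you need it for a real restore.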

Backup Procedures
Although it's easy to protect SharePoint farms with DPM, you should still keep your application and data recovery needs in mind, especially once you get involved in item-level recovery. Let's take a look at the actual steps involved.
Installing the Protection Agent

We already covered the general steps for installing the DPM protection agent in Chapter 2, "Installing DPM," so you should be familiar with them. Unlike other workloads, you don't need to install the protection agent on all of the servers in the SharePoint farm; you only need to install it on a single web front-end server. If you've already installed the agent on one of the SharePoint web front-end servers in your farm, you're good to go. If you haven't, here's a recap: 1. Open the DPM Administrator console, navigate to the Management tab, and select the Agents subtab. 2. Click Install in the Actions pane.

3. From the left pane, select the servers you want to protect, as shown in Figure 9.1. Click Add.

Figure 9.1: Choosing servers for agent install 4. When all of the servers you want to protect are in the right pane, click Next. 5. Enter the credentials for a user with administrative rights on the selected servers, as shown in Figure 9.2. Click Next.

Figure 9.2: Enter the credentials for the agent install 6. Although the agent software will be installed, you will not be able to begin protecting your servers until they have been restarted. Choose whether you want the servers to reboot now or later, as shown in Figure 9.3. Click Next.

Figure 9.3: Choose the restart method 7. A Summary screen will appear, as shown in Figure 9.4, indicating the choices you made. Click Install to proceed with the agent install, or click Back to change your options.

Figure 9.4: The Protection Agent Install summary 8. The final screen will display the progress of the agent installation. You can click Close and the current status and progress will be displayed in the Agents subtab. Once the protected SharePoint web front-end server reboots and DPM verifies the connection with the agent, you will see the data sources that DPM can protect; in this case, the SharePoint farm to which this server belongs. In order for the DPM protection agent to protect SharePoint data on WSS 3.0 and MOSS 2007, there's one other housekeeping task you must do. You need to register and start the Windows SharePoint Services VSS Writer service (also known as the WSS Writer). To do this, follow this process:

1. Open a command prompt on the SharePoint web front-end server on which you installed the DPM agent.
2. Enter the following command:

stsadm.exe -o registerwsswriter

You may have trouble finding the Stsadm.exe tool if its directory isn't in your environment path variable. You can find it in:

drive:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN

3. Open the Component Services MMC snap-in console (Start > Administrative Tools > Component Services).
4. Expand the Component Services, Computers, My Computer, and DCOM Config nodes.
5. Right-click the WSSCmdletsWrapper component and click Properties.
6. On the WSSCmdletsWrapper Properties screen, click the Identity tab and select This User. Enter the SharePoint Services farm administrator credentials and click OK.
7. Close the MMC console.
8. You may need to restart the SharePoint server to make this change active.

That's enough of the prerequisites; let's get on to protecting SharePoint servers!
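Put together, the command-line portion of this housekeeping looks something like the following; the vssadmin check at the end is an optional extra of our own (not part of the official steps) to confirm that VSS now sees the SharePoint writer:

```shell
:: Register the Windows SharePoint Services VSS Writer from the default
:: WSS 3.0 binaries directory (adjust the drive or path if you installed
:: SharePoint elsewhere).
cd /d "%ProgramFiles%\Common Files\Microsoft Shared\web server extensions\12\BIN"
stsadm.exe -o registerwsswriter

:: Optional check: the SharePoint writer should now appear in the list of
:: registered VSS writers on this server.
vssadmin list writers
```

If the writer doesn't appear even after a reboot, revisit the WSSCmdletsWrapper identity settings described above.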
Protecting SharePoint Farms

You can add SharePoint farms to an existing protection group or create a new protection group. The following process assumes that you're creating a new protection group; if you want to add SharePoint farms to an existing protection group, merely open the protection group and select the databases you want to add. To create a new protection group for your SharePoint farms: 1. Open the DPM Administrator console, navigate to the Protection tab, and click Create Protection Group in the Actions pane. 2. In the Welcome screen, click Next. 3. In the Select Group Members screen, expand the farm (or farms) and content databases you want to protect, as shown in Figure 9.5.

Figure 9.5: Selecting content databases 4. When you have selected the farms you want to protect, click Next. 5. Choose whether or not this group will use short-term protection and the associated method, as well as whether to use long-term protection (if you have a tape drive or library attached to your DPM server), as shown in Figure 9.6.

Figure 9.6: Selecting a protection method 6. Once you have chosen the protection methods, click Next. 7. Unless you have chosen not to provide short-term protection for your protection group, the next screen is where you decide how long short-term data is retained in DPM, as well as the synchronization frequency and the recovery point schedule, as shown in Figure 9.7.

Figure 9.7: Select a retention range 8. To change the schedule for the recovery point creation, click the Modify button. Here, you can change the frequency by adding times and checking days of the week for the selected operation to occur, as shown in Figure 9.8. When you are finished, click OK.

Figure 9.8: Modifying recovery point frequency 9. Back in the Short-Term Goals screen, click Next. 10. In the Review Disk Allocation screen shown in Figure 9.9, you'll see that DPM will have already recommended a default allocation from the storage pool based on the amount of data being protected as well as the short-term goals you specified.

Figure 9.9: Review disk allocation 11. To change the amount of storage pool space allocated for your protection group, click Modify. You can change the amount of space allocated for replicas and recovery points, as shown in Figure 9.10.

Figure 9.10: Modifying disk allocation 12. Back in the Review Disk Allocation screen, click Next. 13. Unless you have chosen not to provide long-term protection for your protection group, the next screen is where you configure DPM's long-term tape retention strategy, as shown in Figure 9.11.

Figure 9.11: Specify long-term goals 14. To change the long-term protection objectives, click Customize. You can establish a multiple-tier strategy in units of days, weeks, months, or years. You can also specify what happens if more than one of the scheduled backups happens at the same time, as shown in Figure 9.12. When you have finished making your selections, click OK.

Figure 9.12: Customize protection objectives 15. To change the days on which long-term backups occur, click Modify. Select the appropriate day and time for each backup as shown in Figure 9.13. When you have finished making your changes, click OK.

Figure 9.13: Setting the schedule for tape backups 16. Click Next. 17. In the Select Library And Tape Details screen, choose the library to use, the number of drives from the library, integrity checking, and compression and encryption options, as shown in Figure 9.14. When you have chosen the appropriate settings, click Next.

Figure 9.14: Select the library and tape options 18. In the Choose Replica Creation Method screen, select the method by which replicas will be created, as well as when the first one should be created, as shown in Figure 9.15. Click Next.

Figure 9.15: Choose the replica creation method 19. In the Summary screen shown in Figure 9.16, you will be presented with a summary of all of the settings you have selected for the protection group. If everything looks good, click Create Group; otherwise, click Back to make any necessary changes.

Figure 9.16: The Summary screen That's it! You're protecting your SharePoint farm with DPM. In the next section, we'll move on to recovery options and procedures.

Recovery Procedures
More recovery options are available with MOSS and WSS than with any other data sources we've discussed so far. The range of recovery options covers everything from farm-level all the way down to individual items in document libraries. To recover protected SharePoint data, use the following steps:
1. Open the DPM Administrator console and click on the Recovery tab.
2. Expand the web front-end server (or, in a single server farm, expand the server) on which you installed the protection agent. Click All Protected SharePoint Data.
3. Go to the section that is appropriate for the level of recovery you want to accomplish.
Recovering a Farm

Recovering an entire SharePoint farm is probably not going to happen to you too often, but when you need to do it, you'll be glad it's this easy. To recover your SharePoint farm: 1. In the center pane, as shown in Figure 9.17, right-click the SharePoint item and click Restore.

Figure 9.17: Starting a farm recovery 2. In the Review Recovery Selection screen, as shown in Figure 9.18, ensure that you have selected the correct item to restore and click Next.

Figure 9.18: The Review Recovery Selection screen 3. In the Select Recovery Type screen, as shown in Figure 9.19, select whether you want to recover the farm to the original location, a network folder, or to tape. If you want to recover to the original location, select the appropriate option and go to step 6. If you want to recover to a network folder, go to step 4. If you want to recover to tape, select the appropriate option and go to step 5.

Figure 9.19: Select the recovery type 4. To recover to a network folder, click the Browse button and select a server and path for recovery, as shown in Figure 9.20. When you have selected the location, click OK and click Next. Go to step 6.

Figure 9.20: The Specify Library screen 5. In the Specify Library screen, as shown in Figure 9.20, choose a library and copy the library (if applicable). In the Tape Options section, enter a tape label and specify the encryption and compression options. Click Next. 6. In the Specify Recovery Options screen, select whether to send any notifications about the job and specify the recipients as shown in Figure 9.21.

Figure 9.21: Specify the recovery options 7. To apply bandwidth throttling, click Modify. In the Throttle window shown in Figure 9.22, click the Throttle checkbox and specify the available bandwidth for work and non-work hours. You can also define which hours are work hours. When you have made your selections, click OK.

Figure 9.22: Throttling network bandwidth 8. Click Next. 9. In the Summary screen, as shown in Figure 9.23, check to make sure that your selections are correct. If everything looks fine, click Recover; otherwise, use the Back button and make any necessary changes.

Figure 9.23: The Summary screen


Recovering a Site

Recovering a single SharePoint site is just as easy to do as recovering the entire farm, but it's going to take less time because there's less data to move around. To recover your SharePoint site: 1. In the center pane, double-click the SharePoint item to expand it, and then doubleclick the appropriate content database. Right-click on the site, and click Restore (see Figure 9.24).

Figure 9.24: Select a site to recover 2. In the Review Recovery Selection screen, ensure that you have selected the correct item to restore and click Next (see Figure 9.25).

Figure 9.25: The Review Recovery Selection screen 3. In the Select Recovery Type screen, select the recovery type that's appropriate for your needs, click Next, and proceed to the appropriate section (see Figure 9.26).

Figure 9.26: Selecting the site recovery type


RECOVERING TO THE ORIGINAL SITE

Recovering an entire SharePoint site to its original location is probably not going to be a common task for you, but it's nice to have the capability. To recover sites to their original location, use the following steps: 1. In the appropriate boxes, enter the name of the web front-end server and SQL instance of your recovery farm. Also, enter a temporary location on the recovery farm for the database files (see Figure 9.27). Click Next.

Figure 9.27: Specify the recovery farm details 2. In the Specify Recovery Farm screen, select a path on a recovery farm server to use as a staging ground for files before they are transferred to the production server (see Figure 9.28). Click Next.

Figure 9.28: Specify the recovery farm 3. In the Specify Recovery Options screen, select whether or not to restore security settings and indicate the recipients for a job notification (see Figure 9.29). If you want to modify bandwidth throttling, click the Modify link and go to step 4. Otherwise go to step 5.

Figure 9.29: Specify the recovery options 4. In the Throttle window, click the Throttle checkbox and specify the available bandwidth for work and nonwork hours. You can also define which hours are work hours (see Figure 9.30). When you have made your selections, click OK.

Figure 9.30: Throttling network bandwidth 5. Click Next. 6. In the Summary screen, check to make sure that the settings you've chosen are correct. Notice in Figure 9.31 that the portion for database files is empty. This is because you are recovering specific items from the database. If you need to change your selection, click Back. If everything is correct, click Recover.

Figure 9.31: The Summary screen

When the recovery operation completes, the site as captured in the selected recovery point will be restored to its original location on the protected SharePoint server. If a version of this site exists on the server before you begin recovery, it will be replaced with the recovered version. Be careful when using this option!
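The work-hours/nonwork-hours throttle logic from step 4 can be sketched as a small function. This is purely a conceptual model, not DPM code; the hours and rates shown are hypothetical examples of what you might configure in the Throttle window.

```python
from datetime import datetime

def throttle_rate_mbps(now, work_start=8, work_end=18,
                       work_rate=8.0, nonwork_rate=64.0):
    """Return the bandwidth cap that applies at `now`.

    Weekday hours in [work_start, work_end) count as work hours;
    everything else (including weekends) uses the nonwork rate.
    All four parameters are illustrative defaults, not DPM's.
    """
    is_weekday = now.weekday() < 5  # Monday=0 .. Friday=4
    in_window = work_start <= now.hour < work_end
    return work_rate if (is_weekday and in_window) else nonwork_rate

# A Tuesday at 10:00 falls in the work window, so the lower cap applies.
print(throttle_rate_mbps(datetime(2008, 4, 15, 10, 0)))   # 8.0
# A Saturday gets the full nonwork rate.
print(throttle_rate_mbps(datetime(2008, 4, 19, 10, 0)))   # 64.0
```

The point of the split schedule is exactly what this models: recovery traffic competes with users during business hours, so the cap during the work window is deliberately much lower.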
RECOVERING TO AN ALTERNATIVE SITE

Recovering a site to the original site is useful under certain conditions such as disaster recovery; however, it's not as useful when you don't want to overwrite the site data that's there. Instead, you can recover the site to an alternative SharePoint site. To recover SharePoint sites to alternative sites, use the following steps:

1. In the appropriate boxes, enter the name of the web front-end server and SQL instance of your recovery farm. Also, enter a temporary location on the recovery farm for the database files (see Figure 9.32). Because you are recovering to an alternative site location, you'll need to give a URL for the new location as well. Click Next.

Figure 9.32: Specify a recovery farm and target site

2. In the Specify Recovery Farm screen, select a path on a recovery farm server to use as a staging ground for files before they are transferred to the production server (see Figure 9.28). Click Next.

3. In the Specify Recovery Options screen, select whether or not to restore the security settings and indicate the recipients for a job notification (see Figure 9.29). If you want to modify bandwidth throttling, click the Modify link and go to step 4. Otherwise, go to step 5.

4. In the Throttle window, click the Throttle checkbox and specify the available bandwidth for work and nonwork hours. You can also define which hours are work hours (see Figure 9.30). When you have made your selections, click OK.

5. Click Next.

6. In the Summary screen, check to make sure that the settings you've chosen are correct. Notice in Figure 9.31 that the portion for database files is empty. This is because you are recovering specific items from the database. If you need to make changes to your selection, click Back. If everything is correct, click Recover.

When the recovery operation completes, the selected SharePoint site as it exists in the recovery point will be restored to the alternative instance you selected.
Why Can't I Recover a Site to a Network Folder or Tape Copy?

You may have noticed that there are a number of combinations of recovery items and locations that can't be selected together, such as recovering a SharePoint site to a network folder. The reason DPM prevents you from selecting these options together goes back to how SharePoint stores data. Remember that in WSS 3.0 and MOSS, all of your SharePoint data is stored within the corresponding SQL Server databases. As a result, once you move to restore targets that are more granular than a database, you run into some of the types of complications that you'd get into trying to recover specific items from a normal SQL Server database (see Chapter 8, "Protecting SQL Servers," for a discussion of this topic).

The short version is that it's really hard to pull out just the data that a particular SharePoint site uses. Multiple sites co-exist within a single database; some of the tables in that database store metadata that refers to multiple sites. As a result, there's no single table or combination of tables that DPM can look at to capture the site data; it would need to understand the database formats, as well as the SharePoint-specific schema and data object relationships, and be able to run some sophisticated business logic to do it. That would require a lot of coding for fairly limited benefit; we're hard-pressed to think of any reason why you'd want to recover a single site to a network folder (or tape) when you can just as easily recover the database that contains the site. The same logic applies to recovering lists and individual items to a network folder or to tape; again, this data is stored too granularly for DPM to capture. This kind of functionality is exactly why site, list, and item-level recovery to a SharePoint site requires a recovery farm; the recovery farm SharePoint instance already contains all the necessary business logic. It doesn't know, however, how to direct the recovered items back into DPM.

Recovering Lists and Individual Items

The procedures for recovering lists and individual items are the same, so we'll roll them all up into the following sections.

1. Starting from the Recovery screen, double-click a content database, double-click on a site, and continue to drill down in this manner until you have the item you want to recover (see Figure 9.33). Right-click it and click Recover.

Figure 9.33: Selecting an item to recover

2. In the Review Recovery Selection screen, ensure that you have selected the correct item to restore and click Next (see Figure 9.25).

3. In the Select Recovery Type screen, select the recovery type that's appropriate for your needs, click Next, and proceed to the appropriate section (see Figure 9.34).

Figure 9.34: Select the recovery type


RECOVERY TO THE ORIGINAL SITE

Recovering SharePoint data objects to the original site is, frankly, the most common type of recovery operation you'll perform in your production environment. It's a great protection against butterfingers moments. To recover data objects to their original site, use the following steps:

1. In the appropriate boxes, enter the name of the web front-end server and SQL instance of your recovery farm. Also, enter a temporary location on the recovery farm for the database files (see Figure 9.35). Click Next.

Figure 9.35: Specify the recovery farm details

2. In the Specify Recovery Farm screen, select a path on a recovery farm server to use as a staging ground for files before they are transferred to the production server (see Figure 9.36). Click Next.

Figure 9.36: Specify a recovery farm

3. In the Specify Recovery Options screen, select whether or not to restore the security settings and indicate the recipients for a job notification (see Figure 9.37). If you want to modify bandwidth throttling, click the Modify link and go to step 4. Otherwise, go to step 5.

Figure 9.37: Specify the recovery options

4. In the Throttle window, click the Throttle checkbox and specify the available bandwidth for work and nonwork hours. You can also define which hours are work hours (see Figure 9.38). When you have made your selections, click OK.

Figure 9.38: Throttling network bandwidth

5. Click Next.

6. In the Summary screen, check to make sure that the settings you've chosen are correct. Notice in Figure 9.39 that the portion for database files is empty. This is because you are recovering specific items from the database. If you need to make changes to your selection, click Back. If everything is correct, click Recover.

Figure 9.39: The Summary screen

When the recovery operation completes, the selected data objects as captured in the selected recovery point will be restored to their original location on the protected SharePoint server. If a version of this data exists on the server before you begin recovery, it will be replaced with the recovered version. Be careful when using this option!
RECOVERY TO AN ALTERNATIVE SITE

Again, recovering SharePoint data to the original site it came from can be useful; however, you may not need to do it all the time. Instead, you can recover the selected data to an alternative SharePoint site. To recover selected data to an alternative site, use the following steps:

1. In the appropriate boxes, enter the name of the web front-end server and SQL instance of your recovery farm. Also, enter a temporary location on the recovery farm for the database files (see Figure 9.40). Because you are recovering to an alternative site location, you'll need to give a URL for the new location as well. Click Next.

Figure 9.40: Specify the recovery farm and target site

2. In the Specify Recovery Farm screen, select a path on a recovery farm server to use as a staging ground for files before they are transferred to the production server (see Figure 9.41). Click Next.

Figure 9.41: Specify the recovery farm

3. In the Specify Recovery Options screen, select whether or not to restore the security settings and indicate the recipients for a job notification (see Figure 9.42). If you want to modify bandwidth throttling, click the Modify link and go to step 4. Otherwise, go to step 5.

Figure 9.42: Specify the recovery options

4. In the Throttle window, click the Throttle checkbox and specify the available bandwidth for work and nonwork hours. You can also define which hours are work hours (see Figure 9.43). When you have made your selections, click OK.

Figure 9.43: Throttling network bandwidth

5. Click Next.

6. In the Summary screen, check to make sure that the settings you've chosen are correct. Notice in Figure 9.44 that the portion for database files is empty. This is because you are recovering specific items from the database. If you need to make changes to your selection, click Back. If everything is correct, click Recover.

Figure 9.44: The Summary screen

When the recovery operation completes, the selected SharePoint objects as they exist in the recovery point will be restored to the alternative instance you selected.

The Bottom Line


Determine the prerequisites for installing the DPM protection agent on SharePoint servers. You need to ensure that your protected SharePoint servers are running the necessary versions of the Windows operating system and service packs and are configured according to DPM's requirements.

Master It

1. Perform a survey of your SharePoint servers to ensure that they are compatible with the DPM protection agent:
   o What version of Windows Server and service pack are you running on the SharePoint servers you want to protect?
   o Does your SharePoint version and configuration meet the DPM requirements?
2. What additional process do you need to perform on a SharePoint server after installing the DPM protection agent?
3. How does protecting your WSS 2.0 or SPS 2003 deployments differ from protecting WSS 3.0 or MOSS 2007 deployments?

Configure DPM protection for SharePoint servers. Once the DPM agent is deployed, you must configure protection groups and select data sources to protect.

Master It

1. What SharePoint data sources can DPM protect?
2. What DPM licenses do you need to protect SharePoint servers?
3. Can you protect older SharePoint versions with DPM; if so, what licenses do you need?

Recover protected SharePoint servers. Protecting the data is only half of the job; you also need to be able to recover it.

Master It

1. To where can you restore SharePoint data?
2. What types of SharePoint data can you recover?
3. What additional steps must you take to enable item-level recovery?

Chapter 10: Protecting Virtual Servers


Overview
There was a very consistent creation of a virtual reality, and eventually it collided with our old-fashioned, ordinary reality.

Hans Blix

Acquiring enough servers to run the various applications we need is one of the biggest administration challenges we face. As soon as we get new servers, though, we usually are faced with the problem of making sure those servers are adequately utilized. How many times have you had the joy of a conversation like the following?

Manager: We need to deploy this new application!
Administrator: We need a new server for it.
Manager: We don't have the budget for a new server.
Administrator: Well, we need to put it somewhere.
Manager: What about that server? It doesn't do much.
Administrator: Um, no; that's our firewall.
Manager: I don't see the problem.
Administrator: It's a really bad idea to put applications on a firewall machine.
Manager: I'm not really feeling what you mean when you say "bad idea."
Administrator: It's a huge security risk to put anything on there. The firewall protects us from the Internet.
Manager: Oh, is that all?
Administrator: Is that all? What do you mean by that? Are you insane?!?
Manager: You had me worried for a minute! You're a smart administrator; you can keep us safe. Just put the application on that firewall thingy.
Administrator:
Manager: And while you're at it, the SAN administrator tells me that our Exchange server uses way too many disks on our SAN array. He suggested that he could just create a big RAID-5 volume to share between SQL and Exchange and reuse a lot of the disks currently tied up by Exchange. What do you think?

Administrator: I think you need to find a new administrator.

If this conversation sounds completely over the top to you, congratulations! You've somehow managed to avoid a common frustration for administrators and we are more than slightly envious. Although this conversation isn't taken verbatim, it is pieced together from actual situations from our experiences. Everyone else, you're nodding along with us at this point; you've been there, done that, and have the white hair to prove it. Although most of us are not in such dire straits that we have to deploy applications to a firewall server (but if you are in that place, we feel your pain), there's no denying that there's a strong push to get the most out of your servers. Microsoft's ideal, of course, is one server per function, but they bow to the realities faced by us customers and tell us which of their products can be run on the same hardware. More importantly, they tell us where we shouldn't do it, mostly by saying they won't support it. When they won't support it, they usually have an excellent reason. There are many places in the typical enterprise where servers are not being fully utilized (sometimes dramatically so) even after room for future growth is taken into account. Many administrators view virtualization (the ability to run one or more virtual machines, or VMs, on shared underlying hardware) as the best way to take full advantage of the hardware they already have on hand. The big question of virtualization, of course, is what services do you run virtualized? This early in the virtualization era, the debate still rages far and wide, depending on many factors such as the kind of hardware you have, whether you're using a SAN, and which virtualization solution you're using. The advances in virtualization technology are changing the rules of how applications should be deployed, which in turn causes a ripple effect through bodies of expertise such as security and data protection.
Even though we've been aggressive users of Microsoft Virtual Server (MSVS) and Virtual PC (VPC) for many of the projects we work on, we've been cautiously feeling our way into the use of VMs in our production network environment. There are many applications that traditionally have been deployed on servers already performing other roles; these types of applications may make sense as VMs:

- Most enterprise-class antivirus solutions include a service for central management and update deployment. These services usually have light requirements, making them ideal for deployment in a VM.
- Many small business financial software packages have minimal requirements. Because this information is sensitive and needs isolation, however, these applications are good prospects for VM deployment.
- Some proprietary legacy applications require out-of-support operating systems such as Windows NT 4.0. Rather than support older hardware or find drivers, administrators can run these configurations in VMs.

VMs can provide solutions for many problems; however, the solution in itself presents several other challenges, not the least of which is how do we protect VMs and the data within them? The obvious option is to treat them exactly as if they were physical hosts: buy additional agents for each virtual machine and protect them with our normal backup solution. This option has the advantage of simplifying the complexity of our restore operations; on the other hand, it can reduce some of the cost savings that virtualization can provide by increasing the amount we pay for licenses. If you can host four applications on one server, splitting them each to their own VM means you're paying for four backup agents instead of one. Note, however, that if you're running Windows on those VMs and you're using MSVS, you're getting the operating systems included in the Windows license for the host. For more information, go to http://www.microsoft.com/presspass/features/2005/oct05/1010virtualizationlicensing.mspx. It would really be cool (it's probably not cool to say "cool" anymore, but what do we care?) if we could integrate our data protection regime with some sort of host virtualization service. If we're using MSVS, we can; DPM provides native support for protection of VMs when they are run under MSVS. We've already gained advantages just by using virtualization for strategically selected servers, but by using DPM for data protection, we gain several more advantages:

- You are protecting an entire known-good server configuration (operating system, system state, application, and data) all at once, regardless of which operating system or applications the virtual machine is running.
- You don't have to install the DPM protection agent (or any other backup agent) on the VMs, just the MSVS host machines, thereby reducing the cost of your DPM deployment.
- You can easily restore a virtual machine from DPM to any server that has the DPM protection agent installed. This gives you added flexibility in the event of a server outage or site disaster.
- Properly securing and isolating your application servers becomes easier; you only need to establish firewall exceptions for connections between the MSVS hosts and your DPM servers, instead of to your virtual machines.

In this chapter, you will learn to:


- Determine the prerequisites for installing the DPM protection agent on MSVS hosts
- Configure DPM protection for virtual machines on MSVS hosts
- Recover protected MSVS virtual machines

Considerations
Virtual machines scare a lot of administrators and confuse a lot of users. The concept seems pretty simple (you create a pretend server and treat it as if it's a real server) but it rapidly leads to a lot of resource management complications:

- How do you manage physical hardware on the host server and partition it among the virtual machines?
- How do you allocate disk volumes, especially when the virtual server software abstracts entire volumes to discrete sets of files on the host filesystems?
- How do you ensure adequate performance for each virtual machine while balancing the total load on the host?
- How much like a real server do you treat a virtual server when it comes to data protection strategies?

The challenges of protecting MSVS data are remarkably similar to those of protecting SQL Server data. Let's compare SQL databases and virtual machines, as shown in Table 10.1.
Table 10.1: Comparing MSVS to SQL Server

- Metadata: For a SQL database, stored at the instance level; describes one or more databases. For an MSVS virtual machine, stored at the host level; describes one or more virtual machines.
- Data format: A SQL database consists of at least one database file coupled with at least one log file. A virtual machine consists of a configuration file coupled with one or more types of virtual hard disk files.
- Backup methods: For both, either an offline backup to quiesce the files or a VSS-aware backup to capture a consistent snapshot.
The various data files that make up a virtual machine have specific relationships with each other. A differencing disk, for example, uses a combination of a read-only, baseline-image virtual hard-disk file (which could be shared among many machines) and a VM-specific difference virtual hard-disk file that contains all the deltas from the baseline-image file that VM has generated; this is the file that the VM will write to whenever the corresponding disk volume is written to within the VM. When you're backing up virtual machines, it is vital to ensure that all of these data files are captured (and, even more importantly, are in a consistent state) or else the data is worthless. Before you begin protecting your MSVS data with DPM, there are several areas you need to consider:

- Do your MSVS host machines meet the prerequisites for DPM protection?
- How do you need to prepare your MSVS host cluster nodes?
- Do you need to protect the system state of your MSVS host machines?

Let's examine these issues in more detail.
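Before digging in, the differencing-disk relationship described above can be sketched as a simple block overlay. This is a conceptual model only (the real VHD format is far more involved); it just shows why the baseline image and the difference file must be captured together and at the same instant.

```python
def read_block(base, diff, block_no):
    """Read one virtual-disk block: the differencing file overrides
    the read-only baseline image wherever the VM has written."""
    return diff.get(block_no, base.get(block_no, b"\x00" * 512))

# Baseline image (possibly shared by many VMs); the diff holds this
# VM's writes. Block contents here are toy placeholders.
base = {0: b"boot", 1: b"os-files"}
diff = {1: b"patched-os-files", 7: b"new-data"}

print(read_block(base, diff, 0))  # b'boot' (unchanged, from baseline)
print(read_block(base, diff, 1))  # b'patched-os-files' (overridden)
```

If a backup captures the diff from one moment and the base from another (or misses either file), reads through the overlay no longer reflect any state the VM was ever in, which is exactly the consistency problem described above.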


What About Virtual PC?

You may be wondering if you can use DPM to protect VMs that are being hosted by Microsoft Virtual PC. There are two answers: the short answer and the long answer. The short answer is "No." The long answer is still "No," but we'll at least take a stab at explaining why. We can think of several reasons why the DPM development team made this choice:

- Virtual PC supports being run on client operating systems such as Windows XP and Windows Vista. Now, this isn't a show-stopper by itself. You can run Virtual PC on Windows Server 2003, of course, but that's not where it's intended to run. If you've got a VM that's critical enough to protect with DPM, you should probably have it on a real server.
- Virtual PC is designed and intended to run as a user application, not as a service. That is, someone has to be logged on to the machine and the Virtual PC application has to be started (either manually or through some sort of logon automation such as a script or shortcut in the Startup folder) in order for Virtual PC VMs to be started. If you reboot the machine, the VMs are shut down until the next time the application is started. MSVS runs as a service, which means that it starts back up (and can restart VMs according to the configuration options) automatically.
- As far as we know, Virtual PC has no support for VSS, which is a critical and necessary component for DPM protection. Even though Windows XP and Windows Vista support a limited form of VSS (it can only create a single snapshot, not the multiple snapshots available on the server versions of Windows), that's only part of the VSS puzzle. Without a VSS writer, DPM has no way to ensure that the data files within Virtual PC are consistent.
- Even though the DPM protection agent could simply capture the virtual hard drive and virtual machine configuration files associated with a VM running under Virtual PC, the other problem is how to capture a consistent snapshot of the memory in the active VM. Virtual PC has no method for one program (like the DPM protection agent) to suspend a running VM and dump its memory to a file.

Prerequisites

Before we move on to the details of protecting and restoring MSVS data with DPM, you should ensure that your MSVS host machine meets the prerequisites. These requirements are shown in Table 10.2.
Table 10.2: Protected Server Software Requirements

- Application version: Microsoft Virtual Server 2005 R2 SP1 Standard Edition x86 or x64, or Microsoft Virtual Server 2005 R2 SP1 Enterprise Edition x86 or x64. VSS hotfix 940349 on Windows Server 2003.
- Virtual machine additions: Virtual Machine Additions version 13.813. This is the version supplied with MSVS 2005 R2 SP1; each virtual machine that will be protected must have its VM additions upgraded to this version.
- DPM license: One E-DPML for each MSVS host. One S-DPML or E-DPML for each virtual machine you will be protecting as a server, based on the workload it hosts.

In order for the DPM protection agent to protect MSVS data, your MSVS hosts must support the necessary VSS writer functionality. This means that you have to be running at least MSVS 2005 R2 SP1, which is the latest version at the time of writing. While it's not included in the table above, you should also remember that VSS support is not present in Windows 2000 Server; it was introduced in Windows Server 2003. You must, therefore, be running MSVS on some supported version of Windows Server 2003 with at least SP1. Both Windows Server 2003 and Windows Server 2003 R2 are supported; if you need to know which specific editions of Windows Server 2003 you need, you should see the documentation for MSVS to see what its requirements are. The DPM protection agent uses the VSS capabilities of Windows Server 2003 to take a complete snapshot of each protected virtual machine, as well as the MSVS host configuration information. This helps ensure that there is always a consistent view of the data files composing a virtual machine. You should also be aware that the VSS writer for MSVS can reach into virtual machines (thanks to the additional functionality of the Virtual Machine Additions) and determine if the virtual machine operating system also supports VSS.
If it does, the various applications and data files within the virtual machine are also quiesced (made inactive) using VSS, ensuring a total level of data consistency that is unavailable with any other product currently on the market. If that didn't make sense on first reading, let us assure you that this is exciting stuff. For example, let's assume that you're running a virtual Exchange mailbox server. You've placed your storage groups and mailbox databases on virtual hard drives (obviously, you're in a small shop and disk performance isn't much of an issue). When the DPM protection agent on your MSVS host uses VSS to take a snapshot of the Exchange virtual machine, here's what happens:

1. The MSVS VSS writer prepares to quiesce the data files in the VM.
2. The MSVS VSS writer sees that the Exchange VM operating system is VSS-aware, so passes the quiesce request into the VM operating system via the Virtual Machine Additions.
3. The VM operating system passes the quiesce request to all of the VSS-aware applications running on the VM (in this case, Exchange).
4. Exchange (on the VM) quiesces the storage group and mailbox database files.
5. The VSS system of the VM operating system takes a snapshot of all files and reports back via the Virtual Machine Additions that everything is ready.
6. The MSVS VSS writer quiesces the virtual machine files and takes a snapshot of them.
7. The VSS writer releases the quiesce request, passing notice back down into the VM.
8. The VM operating system passes notice to Exchange, which releases the data files.
9. The MSVS VSS writer tells the DPM protection agent where to find the snapshot it just took.
10. The DPM protection agent notes which blocks have changed and synchronizes them to the DPM server.

As we said before, this is cool stuff that no one else is currently doing as far as we're aware. As always, you should thoroughly read the DPM Planning Guide, as well as the DPM release notes, to identify any further issues or concerns that may affect the protection of your MSVS hosts.
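The key property of that sequence is its nesting: the guest's data files stay quiesced until after the host has snapshotted the VM's own files. A conceptual sketch using context managers (all component names here are illustrative, not real VSS writer names) makes the ordering explicit:

```python
from contextlib import contextmanager

events = []  # records the order in which things happen

@contextmanager
def quiesce(component):
    """Pause a component's writes for the duration of a snapshot.
    Exiting the block releases the component, mirroring the
    quiesce/release pairing in the VSS sequence above."""
    events.append(f"quiesce {component}")
    try:
        yield
    finally:
        events.append(f"release {component}")

# Guest application stays frozen across BOTH snapshots, so the
# host-side capture of the .vmc/.vhd/.vsv files is consistent with
# the guest-side state.
with quiesce("Exchange data files (inside VM)"):
    events.append("guest VSS snapshot")
    with quiesce("virtual machine files (on host)"):
        events.append("host VSS snapshot of .vmc/.vhd/.vsv")

print(events)
```

The nested `with` blocks encode why step 7 (the host releasing its quiesce) necessarily happens before step 8 (Exchange releasing its files): releases unwind in the reverse order of the quiesces.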
Clustered Configurations

Virtual Server 2005 R2 supports a configuration known as the host cluster; in this configuration, you can use the MSCS components to provide manually configured failover of specific resources used by MSVS virtual machines. MSVS is not natively cluster aware, however, so you must use specific scripts in conjunction with the configured cluster resources to ensure virtual machines are properly started and stopped during failovers. However, this is all irrelevant from a DPM point of view: because MSVS is not natively cluster-aware, neither is the DPM protection agent when protecting MSVS hosts and virtual machines. We hope this will change in future releases of MSVS and DPM, but for now, you will need to take steps to protect each node of an MSVS host cluster separately.
System State

DPM includes the ability to protect and recover the local system state for any protected server, in this case, the MSVS host. Because you can't use DPM to protect standalone servers, your MSVS host is going to be a member of an Active Directory domain, and its system state will include the following types of data at a minimum:

- Boot files
- The COM+ class registration database
- Registry hives

If you're running other functions on your MSVS host, you might want to look at Table 10.3 to see what other types of data may be stored in the system state.
Table 10.3: Data Contained in the System State

- Member server: Boot files. The COM+ class registration database. Registry hives.
- Domain controller: Active Directory (NTDS) files. The system volume (SYSVOL). Other applicable components.
- Certificate Services Certificate Authority: All Certificate Services data. Other applicable components.
- Cluster node: Cluster Service metadata. Other applicable components.

When you protect a VM in DPM, you don't get access to its system state as a separate data source as you do with the MSVS host. If you want to protect the system state of a VM, you need to treat it like any other server and install the DPM protection agent on it. You then have the option to protect the entire VM, just the various data sources on the server, or both. Here's the real question: when do you use DPM to protect system state? From our experience, we recommend that you do it all the time. System state is insanely easy to protect with DPM; it takes up comparatively little room on most file servers even before you factor in DPM's space-saving technologies. You never know when you're going to need it. If you're using some of the advanced protection and service continuation options you have when using DPM in conjunction with Virtual Machine Manager, keeping the system state protected is an essential part of your recovery strategy. Although the P2V capabilities of VMM are sufficient to protect the base operating system and program files, you'll need the system state to restore the virtual machine to the last known state. For this reason, you should protect the server's system state in the same protection group that you protect the rest of its data; this ensures that the entire server can be consistently restored to a known point in time.
Protected Data Sources

When you're protecting MSVS with DPM, you actually have multiple data sources you can select in the administration console (unlike other workloads such as Exchange Server or SQL Server):

- The actual virtual machines on the host. As we've mentioned before, there are two subtypes of virtual machines (those that are VSS-aware and those that are not); we'll talk about them in more detail in a moment.
- The MSVS host configuration information. While virtual machine configuration files store the machine-specific information, there is potentially a large amount of configuration data that affects virtual machine operation.

Let's examine how these types of data figure into your protection strategy.
VIRTUAL MACHINES

Protecting virtual machines seems to be deceptively simple, a lot simpler than protecting a real host:

- Copy the virtual machine configuration (.vmc) file.
- Copy each virtual hard drive (.vhd) file used by the virtual machine.

Voilà! You're done! Okay, maybe not; we did say "deceptively" simple. Each virtual hard drive file contains hundreds, perhaps thousands of individual files. Even if your average virtual server has only 50 or 60 of them open at any time, that's a lot of chances to corrupt data. Nobody wants that, so MSVS provides a method to ensure consistency in your virtual machine data:

1. The backup application pauses or halts the virtual machine to stop any pending write operations to hard drives and RAM; MSVS comes with an API that includes a hibernation function just for this purpose.
2. Once the virtual machine has been successfully paused, MSVS dumps the contents of its RAM to the transient save state file (.vsv).
3. MSVS then ensures that all pending write operations to the hard drives are completed. This can be complicated not only by the use of differencing hard drives, but by the Undo Drives feature.
4. MSVS makes sure that the virtual machine configuration file is in a consistent and up-to-date state.
5. The backup application copies all of the data files.
6. The backup application uses the MSVS API to restore the virtual machine to an active state.

The traditional process involves more than a pinch of custom scripting, combined with hacking these scripts into the backup jobs to ensure they are run before and after the job to hibernate and restore the virtual machines. Unless you go to a lot of effort, you end up having to modify these scripts every time you add or remove a virtual machine from the host server; you may also need to change your backup configuration. This level of overhead leads to missed backups or to a rigid process for adding and removing virtual machines. As we discussed previously, the introduction of VSS into the mix makes this all potentially even more complicated (but more powerful) at the same time. Oddly, when DPM and MSVS are able to utilize VSS, they mask the difficulties for you, making data protection of virtual machines even easier.
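The hibernate/copy/resume cycle in those six steps can be sketched as a small orchestration routine. Everything here is a mock: `FakeVM` and its `save_state`/`start` methods are hypothetical stand-ins for the real MSVS API, and the files are throwaway placeholders for the .vmc and .vhd files.

```python
import pathlib
import shutil
import tempfile

class FakeVM:
    """Hypothetical stand-in for an MSVS virtual machine handle."""
    def __init__(self, files):
        self.files = files
        self.running = True
    def save_state(self):       # steps 1-2: pause VM, dump RAM to .vsv
        self.running = False
    def start(self):            # step 6: bring the VM back up
        self.running = True

def backup_vm(vm, dest):
    """Hibernate the VM, copy its data files, then resume it.

    Mirrors steps 1-6 above; the finally clause restarts the VM even
    if a copy fails, so it is never left hibernated by a bad backup.
    """
    vm.save_state()
    try:
        for f in vm.files:      # steps 3-5: copy .vmc, .vhd, .vsv
            shutil.copy2(f, dest)
    finally:
        vm.start()

# Demonstration with throwaway files standing in for VM data files.
src = pathlib.Path(tempfile.mkdtemp())
dest = pathlib.Path(tempfile.mkdtemp())
files = [src / "exch01.vmc", src / "exch01.vhd"]
for f in files:
    f.write_text("fake contents")

vm = FakeVM(files)
backup_vm(vm, dest)
print(sorted(p.name for p in dest.iterdir()))  # ['exch01.vhd', 'exch01.vmc']
print(vm.running)  # True
```

Notice that the VM's file list is hard-wired into the script's input: this is the maintenance burden described above, since every VM added to or removed from the host means updating what the backup job copies.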

VSS-Aware Virtual Machines

The best-case situation, of course, is if your virtual machine is VSS-aware. In practical terms, this means that the virtual machine must be running one of four operating systems:

Windows Server 2003

Windows XP

Windows Vista

The forthcoming Windows Server 2008

However, it's not enough just to have the right operating system; you must also have the right version of the Virtual Machine Additions (VMA). Each version of MSVS comes with its own corresponding version of the VMA, which is intended to be installed on the Windows virtual machines you run on that host. We are continually amazed by how many people run virtual machines without bothering to install the VMA. The VMA isn't just a convenience feature; it provides very real and necessary driver updates that allow the virtual machine to achieve a higher level of performance under MSVS. As of MSVS 2005 R2 SP1, it also provides the mechanism by which MSVS can query the virtual machine and pass along the requests from the MSVS VSS writer to the VSS-aware applications within the virtual machine.

We've seen plenty of virtual machines whose "current" version of the VMA was installed two or three revisions of MSVS ago; although the host MSVS instance had been upgraded to the latest version along the way, the VMA in the virtual machines had not. These virtual machines got better performance than bare virtual machines, but they would usually have gotten even better performance (and DPM support for VSS) had someone taken the time to upgrade them.

The bottom line is, if you don't have the latest version of the VMA on your virtual machines, upgrade them. Not only will it make your DPM-based protection strategy work better, it might gain you an unlooked-for performance boost.
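A quick inventory pass can flag the stale-VMA situation described above. This is a hedged Python sketch: the version numbers and the inventory dictionary are invented for illustration, and a real check would query the VMA version from each guest rather than hard-coding it.

```python
# Hedged sketch: flagging virtual machines whose Virtual Machine
# Additions (VMA) lag behind the host's current VMA version.
# All version numbers here are made up for illustration.

HOST_VMA_VERSION = (13, 820)  # hypothetical current VMA version on the host

def needs_vma_upgrade(guest_vma_version):
    """True if the guest's VMA is older than the host's matching VMA."""
    return guest_vma_version < HOST_VMA_VERSION

inventory = {
    "sql01": (13, 820),   # current: gets VSS support via the MSVS VSS writer
    "file02": (13, 206),  # stale: works, but slower I/O and no recursive VSS
    "dc03": None,         # no VMA at all: worst performance, no VSS
}

for name, version in sorted(inventory.items()):
    if version is None or needs_vma_upgrade(version):
        print(f"{name}: upgrade VMA")
    else:
        print(f"{name}: up to date")
```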
Non-VSS-Aware Virtual Machines

For all other virtual machines (including those that are running non-Microsoft operating systems such as Linux), DPM is reduced to using the same strategy as other backup applications: hibernate the virtual machine, protect the individual file-level components, and finally bring the machine back to active status. Even here, though, DPM makes the process far simpler than traditional backup applications:

DPM automatically enumerates which virtual machines are actively configured on the MSVS host. You don't have to modify your scripts to try to keep up or write elaborate routines to try to automatically capture new virtual machines and stop protecting ones that have been removed.

Block-level backup reduces the amount of time needed to copy the hibernated data files. Instead of copying all of the data, DPM only needs to copy the changed blocks. Microsoft claims that this is typically a two- to three-minute process for the average virtual machine, and we see no reason to doubt it. That's a dramatically reduced level of outage.
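The idea behind that block-level copy is simple: compare the current .vhd against the last replica block by block and transfer only what differs. This Python sketch is purely conceptual; the tiny block size and hash-comparison scheme are ours, not DPM's actual on-disk format.

```python
# Conceptual sketch of block-level copying: only blocks whose content
# changed since the last replica are transferred. Block size and the
# hash comparison are illustrative, not DPM's actual mechanism.

import hashlib

BLOCK_SIZE = 4  # tiny blocks so the example is easy to follow

def blocks(data):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old, new):
    """Return (index, block) pairs that differ between two byte strings."""
    old_hashes = [hashlib.sha256(b).digest() for b in blocks(old)]
    changed = []
    for i, block in enumerate(blocks(new)):
        h = hashlib.sha256(block).digest()
        if i >= len(old_hashes) or old_hashes[i] != h:
            changed.append((i, block))
    return changed

replica = b"AAAABBBBCCCCDDDD"
current = b"AAAAXXXXCCCCDDDD"  # only the second block changed

delta = changed_blocks(replica, current)
print(delta)  # only block 1 needs to cross the wire
```

With realistic block sizes, a multi-gigabyte .vhd where only a small fraction of blocks changed transfers in minutes rather than hours, which is where the two- to three-minute claim comes from.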

DPM handles hibernation and protection on a machine-by-machine basis. You will need to choose between using a separate backup job for each virtual machine (so that you can minimize the downtime for vital virtual machines) and using simple, shared pre- and post-execution scripts that hibernate all of the virtual machines at once and don't restore any until all have been backed up.

Even if you're living in the non-VSS world, DPM will still save you time, effort, and disk space.
VIRTUAL SERVER DATA

The typical MSVS host hides a variety of seemingly minor but nevertheless important configuration data. You may not think about how your virtual networks are configured, to pick an example, but you'll notice them when they're gone. For your information, by default each virtual network you configure has a corresponding configuration file stored in the All Users profile, typically something like C:\Documents and Settings\All Users\Documents\Shared Virtual Networks. It's just as important to capture this host-level configuration data as it is to capture your virtual machine data, which is why DPM exposes it as a separate protection target. Protecting this information is like protecting system state data: you may not need it often, but when you do, you'll be very thankful to have it.
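If you want to see what host-level virtual network configuration exists on a host, a short script can list the files in that folder. This is an illustrative Python sketch; the .vnc extension and the default path are what we would expect on a Virtual Server 2005 host running Windows Server 2003, so verify both in your own environment before relying on them.

```python
# Quick sketch: enumerating the host-level virtual network
# configuration files mentioned above. Both the .vnc extension and the
# default path are assumptions to verify on your own MSVS hosts.

from pathlib import Path

DEFAULT_VNET_DIR = Path(
    r"C:\Documents and Settings\All Users\Documents\Shared Virtual Networks"
)

def find_virtual_networks(vnet_dir=DEFAULT_VNET_DIR):
    """Return the virtual network configuration files under vnet_dir."""
    if not vnet_dir.is_dir():
        return []
    return sorted(vnet_dir.glob("*.vnc"))

for cfg in find_virtual_networks():
    print(cfg.name)
```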
APPLICATION-LEVEL BACKUP

There's one other option that you might want to consider: don't back up the virtual machine from MSVS. Instead, you can always install the DPM agent on the virtual machine and protect it just as you would any other server. This approach isn't quite as simple as just protecting the whole virtual machine in one fell swoop, but it does offer some benefits. How do you know if this is the right approach? That depends on the nature of the data on the virtual machine:

If losing the data on the machine would be a larger inconvenience than losing the entire machine, you may want to protect the server directly. Examples include SQL Server databases or Exchange mailbox databases.

If losing the entire machine would put a serious hitch in your giddyup, or if the machine carries data that is hard to protect using DPM, it's a good candidate for virtual machine protection. Examples of this category include virtual machines that provide critical infrastructure services such as DHCP.

(Yes, people do run SQL Server databases and Exchange mailbox databases in virtual servers. It's a great way to cut down hardware costs in smaller environments that don't need high-end performance, but we do mean "smaller.") Of course, nothing stops you from going wild and using both approaches. We can think of several situations where that kind of double coverage would be justified, and we're willing to bet you can too.

Backup Procedures
Few things could be simpler than backing up virtual machines with DPM, as long as they're running in MSVS. It's a good thing you installed DPM back in Chapter 2, "Installing DPM." However easy it is, though, you need to be familiar with the steps involved in protecting and restoring your virtual machines. Because DPM doesn't support clustered MSVS configurations, that's one layer of complexity you don't have to worry about. If you're upgrading from DPM 2006, you're really going to appreciate how much simpler protecting MSVS has gotten. You don't have to fuss with scripts to pause the virtual machine files so that DPM can pick them up.

There are two basic steps to protecting standalone MSVS servers with DPM:

1. Install the protection agent on the protected MSVS hosts.

2. Configure protection by assigning virtual machines to a protection group.

But first, we'll review how to install the protection agent on your MSVS hosts.
Installing the Protection Agent

You might remember these steps from Chapter 2, unless, of course, you skipped it in your quest to get your virtual machines protected. If you've already installed the agent on your MSVS hosts, you can skip this section; if not, we've got you covered. Check out the following steps:

1. Open the DPM Administrator console, navigate to the Management tab, and select the Agents subtab.

2. Click Install in the Actions pane.

3. From the left pane, select the servers you want to protect, as shown in Figure 10.1. Click Add.

Figure 10.1: Choosing servers for agent install 4. When all of the servers you want to protect are in the right pane, click Next.

5. Enter the credentials for a user with administrative rights on the selected servers, as shown in Figure 10.2. Click Next.

Figure 10.2: Enter the credentials for the agent install 6. Once the agent install has been completed, you will not be able to protect your servers until they have been restarted. Choose whether you want the servers to reboot now or later, as shown in Figure 10.3. Click Next.

Figure 10.3: Choose the restart method 7. A Summary screen will appear, as shown in Figure 10.4, showing the choices you have made. Click Install to proceed with the agent install, or click Back to change your options.

Figure 10.4: The Protection Agent Install summary 8. The final screen will display the progress of the agent install. You can click Close, and the current status and progress will be displayed in the Agents subtab. Once the protected MSVS host reboots and DPM verifies the connection with the agent, you will see the list of data sources that DPM can protect. Although you need to install the agent on all nodes in a file server cluster in order to get full protection, as soon as you reboot the first node in the cluster you will see the resources available on it. You may need to install the agent and reboot the cluster nodes in multiple sessions to prevent disruption of services for your users.
PROTECTING VIRTUAL MACHINES

You can add MSVS virtual machines to an existing protection group or create a new protection group. The following process assumes that you're creating a new protection group. If you want to add virtual machines to an existing protection group, open the protection group and select the virtual machines you want to add.

To create a new protection group for your MSVS virtual machines:

1. Open the DPM Administrator console, navigate to the Protection tab, and click Create Protection Group in the Actions pane.

2. In the Welcome screen shown in Figure 10.5, click Next.

Figure 10.5: The Create New Protection Group Welcome screen 3. In the Select New Group Members screen, expand the MSVS host, expand the Microsoft Virtual Server 2005 node, and select the virtual machines to include in the protection group by checking the boxes next to them, as shown in Figure 10.6.

Figure 10.6: Selecting the virtual machines to protect 4. When you have selected the virtual machines you want to protect, click Next. 5. Indicate whether or not this group will use short-term protection and the associated method, as well as whether or not to use long-term protection (if you have a tape drive or library attached to your DPM server), as shown in Figure 10.7.

Figure 10.7: Selecting a data protection method 6. Once you have chosen the protection methods, click Next. 7. Unless you have chosen not to provide short-term protection for your protection group, the next screen is where you indicate how long short-term data is retained in DPM, as well as the synchronization frequency and the recovery point schedule, as shown in Figure 10.8.

Figure 10.8: Specify short-term goals 8. To change the schedule for the recovery point creation, click the Modify button. Here, you can change the frequency by adding times and checking days of the week for the selected operation to occur, as shown in Figure 10.9. When you are finished, click OK.

Figure 10.9: Modify the recovery point creation schedule 9. Back in the Short-Term Goals screen, click Next. 10. In the Review Disk Allocation screen, shown in Figure 10.10, you'll see that DPM will have already recommended a default allocation from the storage pool based on the amount of data being protected as well as the short-term goals you specified.

Figure 10.10: The Review Disk Allocation screen 11. To change the amount of storage pool space allocated for your protection group, click Modify. Here you can change the amount of space allocated for replicas and recovery points, as shown in Figure 10.11.

Figure 10.11: The Modify Disk Allocation screen 12. Back in the Review Disk Allocation screen, click Next. 13. Unless you have chosen not to provide long-term protection for your protection group, the next screen is where you configure DPM's long-term tape retention strategy, as shown in Figure 10.12.

Figure 10.12: Specify the long-term protection goals 14. To change the long-term protection objectives, click Customize. You can establish a multiple-tier strategy in units of days, weeks, months, or years. You can also specify what happens if more than one of the scheduled backups happens at the same time, as shown in Figure 10.13. When you have finished making your selections, click OK.

Figure 10.13: Select the long-term objectives 15. In the Modify Long-Term Backup Schedule screen, set the schedule for each level of your tape rotation (see Figure 10.14). Note that the options will differ depending upon the tape rotation scheme you have chosen.

Figure 10.14: Modify the backup schedule for your objectives 16. To change the days on which long-term backups occur, click Modify. Select the appropriate day and time for each backup, as shown in Figure 10.14. When you have finished making your changes, click OK. 17. Click Next. 18. In the Select Library And Tape Details screen, choose the library to use, the number of drives from the library, integrity checking, and compression and encryption options, as shown in Figure 10.15. When you have chosen the appropriate settings, click Next.

Figure 10.15: The Select Library And Tape Details screen 19. In the Choose Replica Creation Method screen, select the method by which replicas will be created, as well as when the first one should be created, as shown in Figure 10.16. Click Next.

Figure 10.16: The Choose Replica Creation Method screen 20. In the Summary screen shown in Figure 10.17, you will be presented with a summary of all of the settings you have selected for the protection group. If everything looks good, click Create Group; otherwise, click Back to make any necessary changes.

Figure 10.17: The Summary screen We told you it was easy; now you're protecting your MSVS hosts and virtual machines with DPM. Now that that's done, let's restore those virtual machines. The good news is that with DPM it's almost as easy to restore your protected data as it is to protect it in the first place.
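To make the short-term goals from the walkthrough concrete, here is a small Python sketch that expands a retention range and a set of daily recovery point times into the individual recovery points kept on disk. The dates, times, and five-day retention range are example values of our own, not DPM defaults.

```python
# Illustrative sketch of the short-term goals chosen in the protection
# group wizard: a retention range plus a daily recovery point schedule.
# All numbers here are examples, not DPM defaults.

from datetime import datetime, timedelta

def recovery_points(start, days, times):
    """Expand a daily recovery point schedule over the retention range."""
    points = []
    for day in range(days):
        date = start + timedelta(days=day)
        for hour, minute in times:
            points.append(date.replace(hour=hour, minute=minute))
    return points

# Example: 5-day retention, recovery points at 08:00 and 20:00.
schedule = recovery_points(datetime(2008, 3, 3), days=5, times=[(8, 0), (20, 0)])
print(len(schedule))  # 10 recovery points kept on disk
print(schedule[0], schedule[-1])
```

This also shows why the Review Disk Allocation step matters: more recovery points per day or a longer retention range multiplies the number of points DPM must keep in the storage pool.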

Restore Procedures
As with other workloads, restoring virtual machines with DPM is almost sinfully easy. (Don't worry, though, it's not actually bad for you.) You have several options for your restore destination:

You can recover the virtual machine to the original MSVS host. This is probably going to be your typical option.

You can recover the virtual machine to an alternative location, such as a file server. DPM copies the entire file set to the selected location; the destination host must have the DPM agent installed.

You can choose to write a copy of the virtual machine files to tape. Although this may not initially seem useful, it turns out to be handy in many electronic discovery or regulatory compliance scenarios.

Time to recover some virtual machines!


Restoring to the Original Instance

Recovering a virtual machine to its original host is one of the most common types of recovery you'll perform in a production environment. To recover a virtual machine to its original host, use the following steps:

1. Open the DPM Administrator console and navigate to the Recovery tab.

2. In the Protected Data pane, expand the available data sources and select the data source that you want to recover, as shown in Figure 10.18.

Figure 10.18: Selecting a VM to recover 3. On the Review Recovery Selection screen, ensure that you have chosen the correct items to recover, as shown in Figure 10.19. When you are satisfied with your selections, click Next.

Figure 10.19: Review your recovery selection 4. On the Select Recovery Type screen shown in Figure 10.20, select the Recover To Original Instance option and click Next.

Figure 10.20: The Select Recovery Type screen 5. On the Specify Recovery Options screen shown in Figure 10.21, choose your desired recovery options:

o Network bandwidth: To adjust the network bandwidth used by the restore process, click Modify. In the new window shown in Figure 10.22, specify a maximum usable amount of bandwidth for work hours and nonwork hours, and then click OK.

Figure 10.21: The Specify Recovery Options screen

Figure 10.22: Throttling network bandwidth

o Email notifications: You can enable email notifications and specify one or more recipients.

6. On the Summary screen shown in Figure 10.23, review your choices. If you need to make any corrections, click Back to move back through the configuration options. When you are satisfied, click Recover.

Figure 10.23: The Summary screen 7. DPM displays a Status window for the recovery operation. Instead of keeping this window open, you may close it and track the recovery progress in the DPM Administrator console. When the recovery operation completes, the virtual machine as captured in the selected recovery point will be restored to its original location on the protected MSVS host machine.

If the virtual machine already exists on the host before you begin recovery, it will be replaced with the recovered version. Be careful when using this option!
Restoring to an Alternative Location

Recovering a virtual machine to its original instance is useful when you're rebuilding a production server that's bitten the dust. Recovering virtual machines to an alternative location can be useful for a variety of other scenarios:

Testing recovery procedures

Verifying data integrity

Performing a migration to a new MSVS host

Recovering to an alternative instance to get a critical virtual machine online as quickly as possible, when rebuilding the original server will take too long

To recover virtual machines to an alternative instance, use the following steps:

1. Open the DPM Administrator console and navigate to the Recovery tab.

2. In the Protected Data pane, expand the available data sources and select the data source that you want to recover, as shown in Figure 10.18.

3. On the Review Recovery Selection screen, ensure that you have chosen the correct items to recover, as shown in Figure 10.19. When you are satisfied with your selections, click Next.

4. On the Select Recovery Type screen shown in Figure 10.20, select the Copy To A Network Folder option and click Next.

5. In the Specify Destination screen, click the Browse button and select an appropriate location (see Figure 10.24). Once you have selected the location, click OK. Back in the Specify Destination screen, click Next.

Figure 10.24: Choosing a location

6. On the Specify Recovery Options screen shown in Figure 10.25, choose your desired recovery options:

o Restore security: Choose whether DPM will overwrite the NTFS permissions of the recovered files or allow the permissions at the restore location to be applied.

o Network bandwidth: To adjust the network bandwidth used by the restore process, click Modify. In the new window shown in Figure 10.22, specify a maximum usable amount of bandwidth for work hours and nonwork hours, and then click OK.

o Email notifications: You can enable email notifications and specify one or more recipients.

Figure 10.25: Specify the recovery options

7. Click Next.

8. On the Summary screen shown in Figure 10.23, review your choices; if you need to make any corrections, click Back to move back through the configuration options. When you are satisfied, click Recover.

9. DPM displays a status window for the recovery operation. Instead of keeping this window open, you may close it and track the recovery progress in the DPM Administrator console.

When the recovery operation completes, the version of the virtual machine captured in the recovery point will be restored to the network folder you selected. Remember that this instance can be on any DPM-protected machine.
Copy to Tape

With this option, you can create an on-tape copy of your virtual machine from any selected recovery point. This option sounds even crazier for virtual machine data than it does for other workloads such as SQL Server (where it actually makes sense). The 900-pound gorilla behind this feature is the growing use of electronic discovery queries and audit requests in regulatory compliance scenarios. As with recovering other types of data with this option, you don't have the ability to filter the data according to arbitrary criteria; you get the whole virtual machine.

Use the following steps to copy a recovered virtual machine to tape:

1. Open the DPM Administrator console and navigate to the Recovery tab.

2. In the Protected Data pane, expand the available data sources and select the data source that you want to recover, as shown in Figure 10.18.

3. On the Review Recovery Selection screen, ensure that you have chosen the correct items to recover, as shown in Figure 10.19. When you are satisfied with your selections, click Next.

4. On the Select Recovery Type screen, shown in Figure 10.20, select the Copy To Tape option and click Next.

5. In the Specify Library screen, as shown in Figure 10.26, select the tape device to use (if you have more than one), customize your tape label, specify your desired compression and encryption options, and click Next.

Figure 10.26: The Specify Library screen

6. On the Specify Recovery Options screen shown in Figure 10.21, choose your desired recovery options:

o Email notifications: You can enable email notifications and specify one or more recipients.

7. On the Summary screen shown in Figure 10.23, review your choices; if you need to make any corrections, click Back to move back through the configuration options. When you are satisfied, click Recover.

8. DPM displays a Status window for the recovery operation. Instead of keeping this window open, you may close it and track the recovery progress in the DPM Administrator console.

When the recovery operation completes, you'll have a copy of the virtual machine on tape. While it may be easier to copy the virtual machine onto some other media, such as an external hard drive, by using the option to recover to a network folder, remember that the DPM protection agent must interact with NTFS-formatted volumes. Because some external devices don't support NTFS, you may find it easier to use tapes to avoid these types of drive formatting issues.

The Bottom Line


Determine the prerequisites for installing the DPM protection agent on MSVS hosts. You need to ensure that your protected MSVS hosts are running the necessary versions of the Windows operating system and service packs and that they are configured according to DPM's requirements.

Master It

1. Perform a survey of your MSVS hosts to ensure that they are compatible with the DPM protection agent:

o What version of Windows Server and service pack are you running on the hosts you want to protect?

o Does your MSVS version and configuration meet the DPM requirements?

2. What requirements does a virtual machine need to meet in order for DPM to be able to protect it with a recursive VSS backup, and what benefit does this provide?

3. If a virtual machine does not meet the requirements for a recursive VSS backup, how does DPM protect it?

Configure DPM protection for virtual machines on MSVS hosts. Once the DPM agent is deployed, you must configure protection groups and select data sources to protect.

Master It

1. What MSVS data sources can DPM protect?

2. What DPM licenses do you need to protect MSVS hosts?

3. What criteria may indicate that you should protect a virtual machine by installing the DPM agent on it directly instead of protecting it through MSVS?

Recover protected MSVS virtual machines. Protecting the data is only half of the job; you also need to be able to recover it.

Master It

1. To where can you restore virtual machines?

2. What targets can you recover?

Chapter 11: Protecting Workstations


Overview
Do you think he'll notice we gave him an etch-a-sketch? Scott Adams, "Dilbert"

DPM is some seriously cool technology, but it's not the coolest computing technology out there. If you want the coolest computers, you have to go to Hollywood. We want some of the desktop computers we've seen on the silver screen; those things can do anything, including backing up an entire secret government database onto a single floppy. That's cool! How is DPM supposed to compete with that?

Unfortunately, in the real world, desktop computers are simply familiar, everyday tools, just another part of the job. Our business literally is technology; we use computers for everything: maintaining communications, managing projects, developing source code, preparing presentations, editing whitepapers, and even writing the occasional blog post (not to mention books). That's a lot of data, and despite our best efforts as technology experts, not all of it always makes it onto our servers. This can sometimes be a good thing, though; Ryan's SMS package for installing Microsoft Bob really doesn't need to take up valuable space.

You may have come up with the clever idea of using DPM to protect your enterprise desktops, or at the very least, using it to protect key strategic desktops (like the CEO's desktop, or more importantly, the CEO's executive assistant's desktop). The good news is that, yes, you can in fact do this marvelous thing and use DPM to protect workstation data with the same ease with which you're now protecting your servers. Here's the big question, though: Do you really want to?

Before you answer that one, stop and think for a minute. The history of computing is filled with several cyclic trends, but one of the most obvious is the back-and-forth pendulum swing between centralized resources and distributed computing. Initial computers were big mainframes that took up entire rooms and buildings; they were shared out to multiple users, usually by running computational jobs in batch mode.
Ah, the fun of punch cards. (Bonus points to you if you ever had to stripe your punch card deck so that when the operator dropped it, they could reassemble it in order. Extra bonus points if you wrote each card with GOTO statements so that the deck could be loaded and run in any order. We respect that kind of crazy.) The next innovation was timesharing using dumb terminals and serial lines. The advent of the personal computer slowly shifted the balance to personal desktops with their own copies of the software they needed. Add in networks, network operating systems, client-server architectures, thin clients, and it becomes pretty clear that modern computing is navigating a careful and twisty course between the two extremes of "everything must be centralized" and "everything must be local." There are, of course, good arguments for both positions, and if you're pragmatists like we are, you realize that the answer lies somewhere in the middle (and one organization's answer won't be right for the next organization). However, by its very nature, a data protection

solution such as DPM necessarily favors the centralized approach more than it favors the localized approach; otherwise, you'd be attaching tape drives to all of your servers and teaching your users how to run their own backups. As a result, the bias with DPM is already leaning toward using and protecting centralized resources. For many organizations, using DPM to protect workstations actually would be a counterproductive move in terms of time and administrative overhead. We've worked at many places with strict policies that no data left on a workstation could ever be considered secure:

Many desktop support teams use imaging technologies to deploy new desktops. In these organizations, the user's Windows profile, Outlook profile and local cache, and My Documents folders are usually remapped to locations on network file servers. This has the side benefit of allowing application troubleshooting to be a simple matter of wiping the machine and re-imaging it; 15 minutes later, you have a pristine installation.

In other organizations, many desktops are shared between users. In call centers, for example, users may not even sit at the same workstations every day. If they want to see their files from one workday to the next, they'd better make sure they're on a network file server somewhere.

Under these conditions, deploying the DPM protection agent to workstations probably would be extremely counterproductive; it's another step that would have to be redone every time a workstation was re-imaged, with the corresponding involvement from the data protection team. There are ways that it can be made to work, such as deploying the DPM agent using Group Policy Objects and mandating that all restores of workstation data go to staging points on servers. The point, however, is that before you blindly run ahead and use DPM to protect workstations, you need to seriously stop and think about how your organization deploys and manages its desktops, how your users are expected to handle their critical data, and whether DPM protection actually fits. If you go through all these assessments and decide that, yes, some of your workstations do need to be protected with DPM, well, at least you know you're at the right place. In this chapter, you will learn to:

Determine the prerequisites for installing the DPM protection agent on workstations

Configure DPM protection for workstations

Recover protected workstation data

Considerations
In many ways, protecting workstations with DPM is exactly like protecting file servers. This shouldn't be a surprise; Windows XP and Vista are closely related to Windows Server. Before you begin protecting your workstations with DPM, there are several areas you need to consider:

Do your workstations meet the prerequisites for DPM protection?

Do you need to protect the system state of your workstations?

Let's examine these issues in more detail.


Protecting Portable Computers

When we talk about workstations, we should note that we're usually talking about protecting desktop client computer systems. There are, however, absolutely no technical reasons why you can't use DPM to protect notebook, tablet, or other mobile computers that run the supported versions of Windows. This isn't to say that it's necessarily a good idea, mind you; it all depends on how you use your mobile systems. The key is whether or not they stay online on a regular basis. DPM gets twitchy when it can't contact protected computers consistently, especially when the computers are making changes to their protected data sources and storing updates that the DPM agent can't then synchronize back to the DPM server. What do we mean by "twitchy"? How does having to perform consistency checks on your replicas before you can reestablish synchronization sound to you? Yeah, we didn't think much of it either. So, here's the rule of thumb: if the portable computer doesn't meet DPM's assumptions for constant network connectivity when it's powered on, then it's probably not a good idea to use DPM to protect its data. This means that DPM isn't the right choice for your typical Sales or Marketing road warrior, who is on the road a fair chunk of the time outside of your company firewall. There's little benefit to be gained here. On the other hand, if you have a population of users who use their portable computers pretty consistently within the corporate network, over either wired or wireless network connections, then DPM may work for them. The best way to find out if it's going to work is to test it in your lab.
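As a back-of-the-envelope way to apply that rule of thumb, you could compare how long a machine is powered on against how long it spends on the corporate network. The function and the 75 percent cutoff below are purely illustrative; DPM has no such setting, and your own lab testing should drive the real decision.

```python
# Back-of-the-envelope helper for the connectivity rule of thumb above.
# The 75% threshold is our own illustrative cutoff, not a DPM setting.

def dpm_candidate(hours_on, hours_on_corp_network, threshold=0.75):
    """True if the machine is on the corporate network often enough."""
    if hours_on == 0:
        return False
    return (hours_on_corp_network / hours_on) >= threshold

print(dpm_candidate(40, 38))  # office laptop, mostly docked: True
print(dpm_candidate(40, 10))  # road warrior: False
```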

Prerequisites

Before we move on to the details of protecting and restoring your workstation data with DPM, you should ensure that your workstations meet the prerequisites. These requirements are shown in Table 11.1.
Table 11.1: Protected Workstation Requirements

Operating system: Windows Vista Business Edition, Windows Vista Enterprise Edition, Windows Vista Ultimate Edition, or Windows XP Professional Edition with at least SP2.

DPM License: The S-DPML for each protected workstation.

Volumes and partitions: All protected volumes and partitions must be formatted with NTFS. VSS (and therefore DPM) cannot protect a volume or partition formatted in FAT or FAT32. All protected volumes and partitions must be at least 1GB in size; this requirement is imposed by VSS.

Make special note of the restrictions on protected volumes and partitions. These limitations are fundamental limitations of the underlying VSS technology used to make shadow copies of the protected data.

If you have protected volumes where you are using mount points (which, we must admit, is probably not a common configuration on workstations), you must ensure that your volume configuration meets DPM's requirements. Mount points are a special type of NTFS reparse point and are the only type supported by DPM; they permit you to have an NTFS volume mounted as a folder on another NTFS volume instead of as a separate drive letter. When it detects that a protected volume is using mount points, DPM will change its behavior slightly, just as it does with file servers. If a mount point is included in a protection group, DPM will prompt you to specify whether you want to include the reparse target in the protection group. If you say no, you must include the target volume separately. Note, however, that the reparse point itself is not replicated by DPM regardless of how you answer. If you suffer a complete loss of the protected volume, you must first manually re-create the reparse point and relink it with the target volume before you can recover the data.

Do note, however, that DPM does not support nested mount points. That is, if you are protecting a volume with a mount point, the target volume of that mount point cannot also contain a mount point. If you've got volumes with this kind of design, you have two choices: you can forgo protection for all target volumes below the second (and subsequent) mount points, or you can redesign how your volumes are mounted and presented to ensure that they can all be protected. The option you choose will depend both on the value of the data on the affected volumes and on the expense and inconvenience of performing any necessary reconfigurations to your data volumes and folder structures.
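The nested mount point restriction is easy to check for once you have an inventory of which volumes are mounted where. This Python sketch uses a hypothetical volume/mount-point model of our own devising, purely to illustrate the rule.

```python
# Sketch of the nested mount point restriction described above: DPM can
# protect a volume that contains a mount point, but the target volume
# of that mount point must not itself contain one. The volume model
# here is hypothetical, for illustration only.

def has_nested_mount_points(volume, mounts):
    """True if any mount target under `volume` itself contains a mount.

    `mounts` maps a volume name to the volumes mounted inside it.
    """
    for target in mounts.get(volume, []):
        if mounts.get(target):
            return True
    return False

# D: has a mount point whose target (E:) contains another mount point
# (F:), so D: cannot be fully protected as configured.
mounts = {"C:": [], "D:": ["E:"], "E:": ["F:"], "F:": []}
print(has_nested_mount_points("D:", mounts))  # True
print(has_nested_mount_points("E:", mounts))  # False
```

A volume that fails this check needs one of the two remedies above: drop protection below the second mount point, or flatten the mount hierarchy.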
Why Doesn't DPM Replicate Mount Points?

If DPM is making my life easier, why doesn't it replicate the actual mount points? The answer is somewhat complicated, but we'll try to make it simple: it has to do with the nature of reparse points. As designed, reparse points are an advanced NTFS feature. They are used by relatively few people:

Exchange administrators use mount points for configuring high-end Exchange clustering. In this configuration, NTFS mount points allow you to maintain the Exchange performance best practice of keeping the database and transaction log files for each storage group on separate volumes without running out of drive letters.

The Distributed File System (DFS) feature uses junction points (another type of reparse point), allowing multiple volumes and file shares spread over a number of machines to be viewed by clients as a single namespace.

Because reparse points take requests for one volume namespace and seamlessly transform them into another volume namespace, Microsoft believes that administrators should always know where reparse points are in use and which volumes they are targeting. DPM does not replicate reparse point information in order to avoid the situation where the reparse target information has changed while the local relative path of the recovered data has not. This allows you to adapt the configuration of your mount points during recovery operations where you may not always be able to use the original volumes, yet still retrieve your data.

System State

DPM includes the ability to protect and recover the local system state for any protected server or workstation. Table 11.2 lists the types of data included in the system state for workstations.

Table 11.2: Data Contained in the System State

Server Role: Member workstation

System State Data: Boot files, the COM+ class registration database, and registry hives

So here's the real question: when do you use DPM to protect system state? From our experience, we recommend that you do it all the time. System state is insanely easy to protect with DPM; it takes up comparatively little room on most workstations, even before you factor in DPM's space-saving technologies, and you never know when you're going to need it. Granted, you're less likely to need to restore a workstation wholesale, but if it's your boss's or CEO's workstation, you'll be the hero.

Protected Data Sources

You need to decide which workstation resources you want to protect with DPM. While most people tend to think of workstation data in terms of folders and documents, DPM allows you to specify the following types of items as separate data sources to be included in protection groups:

Entire disk volumes. DPM doesn't care whether the underlying volume is an entire disk or just a partition of a disk, nor does it care whether it is mounted as a drive letter or as a folder using an NTFS mount point. When you select a volume, all files and folders on that volume are selected (with a few exceptions, discussed later).

Individual folders on a disk volume. As with a disk volume, when you select a folder for protection, all files and folders within the selected folder are also selected for protection.

SMB/CIFS file shares. Instead of defining protection based on volumes or folders, you can protect named shares you have defined on the workstation.

Unless you know that you have just a few specific locations to protect, our recommendation (if you're going to protect desktops at all) is simply to protect the entire operating system volume. You can always restore a smaller amount of data (or allow the user to do so if you enable End-User Recovery, as discussed in Chapter 4, "Using the DPM Management Shell"), and it gives you a nice backup if something happens. If you do decide to protect only specific folders or file shares, take another few minutes to think about why you're protecting workstation-level resources in the first place. After all, if you expect users to move their data to a specific magic location for it to be protected, why shouldn't that location be on a networked file server instead of the local system, where misbehaving applications and penguin bowling Flash games can interfere?

A data source can belong to only a single protection group; once you select a resource in one protection group, it automatically becomes unavailable for selection in any existing or new protection group. Note, however, that you don't have to select all of the data sources on a workstation in the same protection group; you can define multiple protection groups to protect different data sources on the same workstation, each with its own protection policies. The caveat is that all of those protection groups must be on the same DPM server; if you have multiple DPM servers in your organization, you can't add data sources from a single workstation into protection groups on multiple DPM servers. The reality, though, is that most workstations have a single hard drive with a single partition, and you're most likely to protect either the entire partition or a few selected folders.

When you select a resource to protect, any child items within it are automatically selected. This makes it easy to protect an entire volume or folder hierarchy; you only have to select the top-level item.
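The one-group-per-data-source rule can be modeled with a toy sketch. This is purely illustrative Python, not DPM's actual data structures; it just shows the exclusivity constraint that the Administrator console enforces for you when it grays out already-protected resources.

```python
class ProtectionGroups:
    """Toy model of DPM's rule that a data source may belong to only
    one protection group on a given DPM server."""

    def __init__(self) -> None:
        self._owner = {}  # data source path -> protection group name

    def add(self, group: str, source: str) -> None:
        current = self._owner.get(source)
        if current is not None and current != group:
            raise ValueError(
                f"{source} is already protected by group {current!r}")
        self._owner[source] = group

groups = ProtectionGroups()
groups.add("Laptops", r"C:\Users")       # fine
groups.add("Executives", r"D:\Projects") # different source, fine
try:
    groups.add("Executives", r"C:\Users")  # already owned by "Laptops"
except ValueError as e:
    print(e)
```

The same idea explains why the caveat about multiple DPM servers exists: the exclusivity check lives per server, so nothing stops two servers from each claiming the same workstation unless you plan around it.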
If you want to protect only some of the data beneath the file hierarchy of a selected resource, you can define file exclusions. The way exclusions work is straightforward: clear the checkboxes of the child items that you don't want DPM to protect. The parent items remain selected and will be protected by the DPM agent, but the items whose checkboxes you've cleared will not be synchronized by the agent. Note, however, that this means you can't restore the data in them from DPM, and because a resource can be a member of only a single protection group, you can't protect it with DPM in any other way.

If the data you want to exclude from protection is scattered through the file hierarchy on the protected resource but shares one or more file extensions, you can also define extension-based exclusions. Certain types of data are automatically excluded from DPM protection:

NTFS hard links give a single physical file more than one entry in the volume's file system metadata, so multiple files within the folder hierarchy on a given volume point to the same physical file. Although these links are common on Unix file servers, NTFS provides support for them only to enable POSIX applications that rely on this functionality; very few (if any) native Windows applications require them. If hard links are present in a resource you want to protect, DPM will alert you and the resource will not be protected. These types of links aren't common on Windows servers and are even less likely to be found on workstations, unless you've installed Services for Unix or some other POSIX subsystem, but it's good to be prepared.

All types of NTFS reparse points except for mount points, as discussed earlier in this chapter. Reparse points are used to provide a variety of advanced functionality. If any reparse points other than mount points are present in a resource you want to protect, DPM will alert you and the resource will not be protected.

DPM will not protect a Recycle Bin system folder, a System Volume Information system folder, or Windows paging files. If any of these folders or files are present in the selected resource, they will be silently skipped; however, the rest of the resource will continue to be protected.

Volumes that are not formatted with NTFS will be skipped by DPM. In most cases, they won't even be available for selection.
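The combined effect of explicit exclusions, extension-based exclusions, and the automatic exclusions can be summarized as a single yes/no decision per file. This Python sketch illustrates the rules above; it is not the agent's actual logic, and the folder names in the auto-excluded sets are our own approximations.

```python
AUTO_EXCLUDED_DIRS = {"$recycle.bin", "recycler", "system volume information"}
AUTO_EXCLUDED_FILES = {"pagefile.sys"}

def is_protected(path, excluded_subtrees=(), excluded_extensions=()):
    """Decide whether a file under a selected resource would be
    synchronized, applying the exclusion rules described above."""
    parts = [p.lower() for p in path.split("\\")]
    # Automatic exclusions: special system folders and paging files.
    if any(p in AUTO_EXCLUDED_DIRS for p in parts[:-1]):
        return False
    if parts[-1] in AUTO_EXCLUDED_FILES:
        return False
    norm = path.lower()
    # Explicit exclusions: child items whose checkboxes were cleared.
    if any(norm.startswith(sub.lower().rstrip("\\") + "\\")
           for sub in excluded_subtrees):
        return False
    # Extension-based exclusions.
    if any(norm.endswith("." + ext.lower().lstrip("."))
           for ext in excluded_extensions):
        return False
    return True

print(is_protected(r"C:\Data\report.docx"))                             # True
print(is_protected(r"C:\Data\movie.avi", excluded_extensions=["avi"]))  # False
print(is_protected(r"C:\pagefile.sys"))                                 # False
```

Note that the first two rules are always in force, while the last two only apply where you have configured exclusions for the protection group.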

Protecting Windows Vista

Windows Vista is the newest client operating system from Microsoft. Vista offers many enhancements that make it an attractive choice for many enterprises, including what we find most compelling: the large array of security improvements. However, no gift comes without a price tag; this enhanced functionality in turn means more work for administrators who want to deploy and integrate Vista into their environments. This extra work affects you if you want to protect Vista workstations with DPM. Before you can run out, install the DPM protection agent on your Vista workstations, and get that continuous data protection love flowing, you must first consider the following issues:

The Remote Registry service must be enabled on any Vista machine you want to protect. By default, this service is disabled because it represents a potential security threat.

The Windows Firewall is enabled by default on Vista workstations. You must create Windows Firewall exceptions to allow the DPM agent to communicate with the DPM server.

You can make both of these modifications in a variety of ways. If you have only a few Vista workstations to manage, you can make the changes manually from the Control Panel on each Vista machine. We, however, recommend using a Group Policy Object to define all of the policy settings and then applying it to the affected Vista workstations. See the Firewalls section in Chapter 12, "Advanced DPM," for a list of the specific ports DPM needs to communicate on.
Protecting Encrypted Data

Every time you turn around these days, it seems there's yet another news item about a laptop with sensitive data that disappeared or was stolen. These incidents are embarrassing at the very best and potentially very costly (both to the organization and to the people whose data was just exposed). As a result, many organizations are looking for ways to protect data on portable machines, and that means using some sort of encryption on the data:

The Encrypting File System (EFS) feature has been available for several versions of Windows, allowing users to encrypt files or folders as they choose. EFS is implemented entirely in software, so it can be used on any Windows machine that supports it.

With Windows Vista, we now have BitLocker. BitLocker is a boot-level technology that protects the entire hard drive and relies on special hardware built into the system. BitLocker allows a user to encrypt an entire volume so that the data on that volume cannot be extracted by a third party who gains physical access to the drive. This prevents malicious third parties from reading the data by attaching the drive to another system, bypassing the need for credentials to log on to the original machine (a typical method of data extraction in the past).

So, now that the data is secure, it still needs to be protected. If it makes sense for you to use DPM to protect a laptop machine, then by all means use DPM. Fortunately, using DPM to protect either EFS- or BitLocker-secured data requires no extra steps:

Because a BitLocker-protected drive is accessible to the operating system once it has been booted, the DPM protection agent can access all of the files on the volume and protect them in the normal fashion.

As for EFS-protected files, DPM simply protects the files as they are (encrypted). It's up to the administrator or user to ensure that the proper key is available for decryption in the event that the data needs to be recovered.

Backup Procedures
Now that we've helped you assess your workstations and prepare them for DPM protection, let's move on to actually protecting workstation data in DPM. As we've not yet seen a way to create a clustered workstation configuration, you are spared the added complication of having to choose between a standalone or clustered configuration. Protecting workstation data is extremely simple, just like protecting a file server. There are two basic steps to protecting workstations with DPM:
1. Install the protection agent on the protected workstations.
2. Configure protection by assigning resources to a protection group.
Let's start by reviewing how to install the protection agent on your workstations.

Installing the Protection Agent

We already covered the general steps for installing the DPM protection agent in Chapter 2, so if you've already installed the agent on your workstations, you're good to go. If you haven't, here's a recap:
1. Open the DPM Administrator console, navigate to the Management tab, and select the Agents subtab.
2. Click Install in the Actions pane.
3. From the left pane, select the workstations you want to protect, as shown in Figure 11.1, and click Add.

Figure 11.1: Choosing the workstations for an agent install

4. When all of the workstations you want to protect are in the right pane, click Next.
5. Enter the credentials for a user with administrative rights on the selected workstations, as shown in Figure 11.2, and click Next.

Figure 11.2: Enter the credentials for agent install

6. Once the agent install has completed, you will not be able to protect your workstations until they have been restarted. Choose whether you want the workstations to reboot now or later, as shown in Figure 11.3, and click Next.

Figure 11.3: Choose the restart method

7. A Summary screen will appear, as shown in Figure 11.4, indicating the choices you made. Click Install to proceed with the agent install, or click Back to change your options.

Figure 11.4: The Protection Agent Installation Summary screen

8. The final screen will display the agent install progress. You can click Close; the current status and progress will be displayed in the Agents subtab. Once the protected workstation reboots and DPM verifies the connection with the agent, you will see the list of data sources that DPM can protect.

Protecting Workstation Resources

You can add workstation resources to an existing protection group or create a new protection group. The following process assumes that you're creating a new protection group; if you want to add workstation resources to an existing protection group, all you need to do is open the protection group and select the specific workstation resources you want to add. To create a new protection group for your workstation resources:
1. Open the DPM Administrator console, navigate to the Protection tab, and click Create Protection Group in the Actions pane.
2. In the Welcome screen shown in Figure 11.5, click Next.

Figure 11.5: The Create New Protection Group Welcome screen

3. In the Select New Group Members screen, expand the workstations you want to protect, and select the data sources on those workstations to include in the protection group by checking the boxes next to the data sources, as shown in Figure 11.6.

Figure 11.6: Select the data sources to protect

4. When you have selected all of the data sources for the protection group, click Next.
5. Choose whether this group will use short-term protection and the associated method, as well as whether to use long-term protection (if you have a tape drive or library attached to your DPM server), as shown in Figure 11.7.

Figure 11.7: Selecting the protection method

6. Once you have chosen the protection methods, click Next.
7. Unless you have chosen not to provide short-term protection for your protection group, the next screen is where you decide how long short-term data is retained in DPM, as well as the synchronization frequency and the recovery point schedule, as shown in Figure 11.8.

Figure 11.8: Short-term recovery goals

8. To change the schedule for either the recovery points or the express full backup, click the appropriate Modify button. You can change the frequency by adding times and checking days of the week for the selected operation to occur, as shown in Figure 11.9. When you are finished, click OK.

Figure 11.9: Changing settings for recovery points

9. Back in the Short-Term Goals screen, click Next.
10. In the Review Disk Allocation screen, you'll see that DPM has already recommended a default allocation from the storage pool based on the amount of data being protected as well as the short-term goals you specified.
11. To change the amount of storage pool space allocated for your protection group, click Modify. You can change the amount of space allocated for replicas and recovery points (Figure 11.10) or, on the Protected Server tab, the space used on the protected server for the change journal (Figure 11.11).

Figure 11.10: The Review Disk Allocation screen

Figure 11.11: Modifying disk allocation

12. Back in the Review Disk Allocation screen, click Next.
13. Unless you have chosen not to provide long-term protection for your protection group, the next screen, shown in Figure 11.12, is where you configure DPM's long-term tape retention strategy.

Figure 11.12: Customizing long-term protection goals

14. To change the long-term protection objectives shown in Figure 11.12, click Customize. You can establish a multiple-tier strategy in units of days, weeks, months, or years. You can also specify what happens if more than one of the scheduled backups happens at the same time, as shown in Figure 11.13. When you have finished making your selections, click OK.

Figure 11.13: The Customize Recovery Goal screen

15. To change the days on which long-term backups occur, click Modify. Select the appropriate day and time for each backup, as shown in Figure 11.14. When you have finished making your changes, click OK.
16. Click Next.
17. In the Select Library And Tape Details screen, choose the library to use, the number of drives from the library, integrity checking, and compression and encryption options, as shown in Figure 11.15. When you have chosen the appropriate settings, click Next.

Figure 11.14: Modifying the times for long-term backups

Figure 11.15: The Select Library And Tape Details screen

18. In the Choose Replica Creation Method screen, select the method by which replicas will be created, as well as when the first one should be created, as shown in Figure 11.16. Click Next.

Figure 11.16: The Choose Replica Creation Method screen

19. In the Summary screen shown in Figure 11.17, you will be presented with a summary of all of the settings you have selected for the protection group. If everything looks good, click Create Group; otherwise, click Back to make any necessary changes.

Figure 11.17: The Create New Protection Group Summary screen

That's it! You're protecting your workstations with DPM.
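The multiple-tier, long-term goals you configure in step 14 behave much like a classic grandfather-father-son tape rotation. The sketch below is illustrative Python with made-up tier definitions, not DPM's actual scheduler; it shows the usual tie-break in tiered schemes when several scheduled backups coincide (the tier with the longest retention wins).

```python
from datetime import date

# Hypothetical tier definitions: (name, "is it due on this date?", retention in days)
TIERS = [
    ("daily",   lambda d: True,              14),
    ("weekly",  lambda d: d.weekday() == 4,  90),   # Fridays
    ("monthly", lambda d: d.day == 1,        365),  # 1st of the month
]

def tier_for(d: date) -> str:
    """When several scheduled backups coincide, run only the one with
    the longest retention (the usual tie-break in tiered schemes)."""
    due = [(name, keep) for name, pred, keep in TIERS if pred(d)]
    return max(due, key=lambda t: t[1])[0]

print(tier_for(date(2008, 2, 1)))   # 1st of the month and a Friday -> 'monthly'
print(tier_for(date(2008, 2, 8)))   # a Friday -> 'weekly'
print(tier_for(date(2008, 2, 12)))  # an ordinary Tuesday -> 'daily'
```

Thinking about your retention goals in these terms before you walk through the wizard makes the Customize Recovery Goal screen much less mysterious.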

Restore Procedures
Restoring workstation data in DPM is pretty much the same as restoring other types of data; it is, after all, effectively just simple file data. When you are recovering workstation data from DPM, you have several options for where you can restore the data:

You can recover the data to the original location.

You can recover the data to an alternative location, such as a new volume or folder on the original workstation or even another machine. The recovery machine must also have the DPM agent installed.

You can choose to write a copy of the data to tape. This may not initially seem useful; however, it can be valuable in many electronic discovery or regulatory compliance scenarios.

Let's examine these scenarios in detail.


Recovery to the Original Location

This option works exactly the way it sounds: DPM restores the data from the recovery point right back to the same data source from which it came. Because of the potential to overwrite the data currently there, you'll need to tell DPM how you want to handle conflicts between the recovered data and any data that may currently be in place. To restore data to the original location, use the following procedure:
1. Open the DPM Administrator console and navigate to the Recovery tab.
2. In the Protected Data pane, expand the available data sources and select the data source that you want to recover.

3. Select the desired recovery point from the list provided, as shown in Figure 11.18, and click Recover.

Figure 11.18: Selecting a recovery point

4. On the Review Recovery Selection screen, ensure that you have chosen the correct items to recover, as shown in Figure 11.19. When you are satisfied with your selections, click Next.

Figure 11.19: Review the recovery selection

5. On the Select Recovery Type screen shown in Figure 11.20, select Recover To The Original Location, Recover To An Alternate Location, or Copy To Tape.

Figure 11.20: Select the recovery type

6. On the Specify Recovery Options screen shown in Figure 11.21, choose your desired recovery options:
o Existing version recovery behavior: Select Create Copy to make a copy of existing data when the recovered data conflicts with existing data, Skip to not restore data when it conflicts with existing data, or Overwrite to replace the existing data with the recovered data.
o Restore security: You can specify whether to use the security settings as they currently exist at the recovery destination or apply the settings from the recovery point (if they differ).
o You can enable email notifications and specify one or more recipients.
7. Back on the Specify Recovery Options screen, click Next.
8. On the Summary screen shown in Figure 11.22, review your choices. If you need to make any corrections, click Back to move back through the configuration options. When you are satisfied, click Recover.

Figure 11.21: Specify the recovery options

Figure 11.22: The Summary screen



DPM will display a Status window for the recovery operation, as shown in Figure 11.23. You may close the Status window and track the progress of the operation in the DPM Administrator console.

Figure 11.23: Recovery progress in the status window

When the recovery operation completes, the version of the data captured in the recovery point will be restored to its original location on the protected workstation. Depending on the recovery behavior you selected, you may have a mixture of older data and current data.
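The three "existing version" behaviors from the wizard can be expressed as one small decision function. This Python sketch models the decision only; the copy-naming convention shown is our own illustration, not DPM's actual scheme.

```python
import os

def resolve_conflict(dest_path: str, behavior: str) -> str:
    """Return where a recovered file should be written, given the
    wizard's conflict behavior; '' means the file is not restored."""
    if not os.path.exists(dest_path):
        return dest_path              # no conflict, restore in place
    if behavior == "overwrite":
        return dest_path              # replace the existing data
    if behavior == "skip":
        return ""                     # leave the existing data alone
    if behavior == "create_copy":
        # Illustrative naming only: append a counter until free.
        root, ext = os.path.splitext(dest_path)
        n = 1
        candidate = f"{root} (restored {n}){ext}"
        while os.path.exists(candidate):
            n += 1
            candidate = f"{root} (restored {n}){ext}"
        return candidate
    raise ValueError(f"unknown behavior: {behavior}")
```

Seen this way, the "mixture of older data and current data" warning above is just what you get when Skip or Create Copy leaves some current files untouched while others are replaced or duplicated.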
RECOVERY TO AN ALTERNATIVE LOCATION

Recovery to an alternative location is useful when you want to recover an older version of your data, or even create a second copy of the current recovery point, without overwriting or otherwise modifying the data on the protected workstation. The alternative location can be on the same workstation or another machine. To restore data to an alternative location, use the following procedure.

1. Open the DPM Administrator console and navigate to the Recovery tab.
2. In the Protected Data pane, expand the available data sources and select the data source that you want to recover.
3. Select the desired recovery point from the list provided, as shown in Figure 11.18, and click Recover.
4. On the Review Recovery Selection screen, ensure that you have chosen the correct items to recover, as shown in Figure 11.19. When you are satisfied with your selections, click Next.
5. On the Select Recovery Type screen shown in Figure 11.24, select the Recover To An Alternate Location option and click Browse.

Figure 11.24: Selecting an alternative recovery location

6. Expand the server list, select the location for your recovery, and click OK.
7. On the Select Recovery Type screen, the recovery path you have chosen will appear. Click Next.
8. On the Specify Recovery Options screen shown in Figure 11.21, choose your desired recovery options:
o Existing version recovery behavior: Select Create Copy to make a copy of existing data when the recovered data conflicts with existing data, Skip to not restore data when it conflicts with existing data, or Overwrite to replace the existing data with the recovered data.
o Restore security: You can specify whether to use the security settings as they currently exist at the recovery destination or apply the settings from the recovery point (if they differ).
o You can enable email notifications and specify one or more recipients.

9. To adjust the network bandwidth used by the restore process, click Modify. In the new window, specify a maximum usable amount of bandwidth for work hours and non-work hours, as shown in Figure 11.21. Click OK.
10. On the Summary screen, review your choices; if you need to make any corrections, click Back to move back through the configuration options. When you are satisfied, click Recover.
11. DPM will display a Status window for the recovery operation, as shown in Figure 11.23. You may close the Status window and track the progress of the operation in the DPM Administrator console. When the recovery operation completes, the version of the data captured in the recovery point will be restored to the alternative location you selected.
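The work-hours throttling in step 9 amounts to picking a bandwidth cap by time of day. Here is a minimal sketch; the hours and rates are placeholders, not DPM defaults.

```python
def allowed_bandwidth(hour: int,
                      work_mbps: float = 10.0,
                      offhours_mbps: float = 100.0,
                      work_start: int = 8,
                      work_end: int = 18) -> float:
    """Return the bandwidth cap (in Mbps) for a restore running at the
    given hour, mirroring the work/non-work split in the wizard."""
    if work_start <= hour < work_end:
        return work_mbps      # be gentle while users are working
    return offhours_mbps      # open the throttle overnight

print(allowed_bandwidth(10))  # during work hours -> 10.0
print(allowed_bandwidth(23))  # off hours -> 100.0
```

The practical point: a large restore kicked off at 4 p.m. may crawl until the work window closes, so schedule big recoveries with the throttle settings in mind.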
COPY TO TAPE

With this option, you can create an on-tape copy of your data source from any selected recovery point. That seems somewhat pointless, doesn't it? After all, your data is already backed up on disk (or on tape, if it's been long enough); why do you need another copy on tape? Even if you can't think of a reason now, don't discount the option. Many administrators need the ability to create tape copies of their data, usually to comply with electronic discovery queries or satisfy audit requests in regulatory compliance scenarios. Note that you don't have the ability to filter the data according to arbitrary criteria; you just get a straight dump of your selected data source from the selected recovery point. To copy data to tape, use the following procedure:
1. Open the DPM Administrator console and navigate to the Recovery tab.
2. In the Protected Data pane, expand the available data sources and select the data source that you want to recover.
3. Select the desired recovery point from the list provided, as shown in Figure 11.18, and click Recover.
4. On the Review Recovery Selection screen, ensure that you have chosen the correct items to recover, as shown in Figure 11.19. When you are satisfied with your selections, click Next.
5. On the Select Recovery Type screen, select the Copy To Tape option and click Next.
6. On the Specify Library screen shown in Figure 11.25, select the appropriate primary and copy tape libraries. Provide a label for your tape and choose any desired compression and encryption options. Click Next.

Figure 11.25: The Specify Library screen

Figure 11.26: Specify the recovery options

7. On the Specify Recovery Options screen shown in Figure 11.26, you can enable email notifications and specify one or more recipients. Click Next.
8. On the Summary screen, review your choices; if you need to make any corrections, click Back to move back through the configuration options. When you are satisfied, click Recover.
9. DPM will display a Status window for the recovery operation, as shown in Figure 11.23. You may close the Status window and track the progress of the operation in the DPM Administrator console. When the recovery operation completes, you'll have a second copy of the data on tape.

The Bottom Line


Determine the prerequisites for installing the DPM protection agent on workstations. You need to ensure that your protected workstations are running the necessary versions of the Windows operating system and service packs and are configured according to DPM's requirements.

Master It
1. Perform a survey of your workstations to ensure that they are compatible with the DPM protection agent:
o What version of Windows (and service packs) are you running on the workstations you want to protect?
o Do your volume, partition, and share configurations meet the DPM requirements?
2. If your workstations use EFS or BitLocker, can you protect the workstation data with DPM?
3. What data will DPM capture as part of the workstation system state?

Configure DPM protection for workstations. Once the DPM agent is deployed, you must configure protection groups and select data sources to protect.

Master It
1. What workstation data sources can DPM protect?
2. Can DPM handle NTFS reparse points, and if so, is any special handling required?
3. How does DPM handle nested mount points?
4. What DPM licenses do you need to protect workstations?

Recover protected workstation data. Protecting the data is only half of the job; you also need to be able to recover it.

Master It
1. To where can you recover workstation data?
2. What are the differences between recovering data to the original location and to an alternative location?
3. How do you handle conflicts between current versions of data and earlier versions you are restoring?

Chapter 12: Advanced DPM


Overview
New capabilities emerge just by virtue of having smart people with access to state-of-the-art technology.
Robert E. Kahn

Troubleshooting systems and software when they're not working correctly is a large part of our duties as systems administrators. We can't remember how many times we've been stuck in front of a machine that has been misbehaving, at the end of our creative ropes, having just watched the last idea we could think of fail to solve the problem. We're willing to bet that most of you have been in that situation too. (If you haven't been there yet, you will be if you stay in the field long enough.)

In these situations, there's a natural tendency to silently retry a procedure that didn't work the last time while we say to ourselves, the servers, and whatever spirits or entities may be listening, "Please, just work this time!" While your exact words may vary, the intent is the same; this is the IT equivalent of throwing a Hail Mary pass on that last "fourth down and hopeless" play to somehow, against all odds, get the touchdown and win the game.

We have a somewhat uncomfortable insight to share about these situations: they often arise when we've been trying to make some technology fit into our networks in ways that most people (including the original designers) wouldn't think to use it. The truth is that every network environment we've ever seen has featured at least one major quirk, some oddity of deployment that exists (and causes pain) because of one or more of the following reasons:

Budgetary constraints that create deployment challenges. Sometimes, even when you know the ideal way to deploy a given technology and want to do it, it costs too much for the benefits that you'd gain. Practical IT is a constant balancing act between doing things "right now" versus doing them the "right way" (if there ever is truly one right way to accomplish a given deployment). If the money isn't there, administrators have to come up with creative solutions to the problems they face.

Legacy hardware or software applications that must be worked around or accommodated. In one environment Devin was in, all of the major data processing for the primary business activity happened on a single specialized mainframe that was several years past its prime. However, the cost to re-create the custom application that ran on this mainframe and performed the calculations would have been prohibitive. As a result, this system dominated the planning for any new additions to the network.

Political or organizational boundaries that complicate planning. Although IT stretches through all layers of most organizations, it is common to find multiple groups that handle their own IT needs and deployments instead of one central group that handles IT across the environment. In these circumstances, you can acquire some interesting network design artifacts as you try to assemble a coherent set of capabilities from these isolated silos of connectivity.

There are other reasons, of course, but we think you get the point; nobody has a perfect network deployment, if such a thing even exists outside of network diagrams. Not even vendors' test labs have "perfect" deployments; it's really no wonder that you don't have one either.

So, what does all of this have to do with DPM? Quite simply, this is your jumping-off point beyond what we have the ability to teach you about DPM in this book. This chapter will point you toward a variety of advanced issues, techniques, and practices that will help you get even more out of the DPM deployment in your network. Some of these topics may be questions that have occurred to you as you've been reading through the other chapters, and others might not have occurred to you before seeing them here. We sincerely hope the following discussion helps you deal with any DPM deployment issues that may come up before you need to bring in clergy of your choice to your server room to appease the servers. In this chapter, you will learn to:

Finish your DPM deployment
Protect your DPM servers
Identify and manage DPM-related networking issues

Finishing Your Deployment


One of the most common implementation problems plaguing most environments is the failure to complete a software installation and fully integrate the new technology with the existing applications in the network. Systems administrators often get so focused on getting the new application installed and working that they get distracted (or just plain forget) and never finish the job with all the little final details. Then they're assigned to their next task, and the forgotten jobs lurk out of sight and out of mind until the day something goes wrong and blows up in their faces. What types of finishing touches are we talking about? Small things such as:

Establishing email notifications
Properly sizing disk partitions
Creating performance baselines
Tuning the system performance
Hardening the DPM servers

Not spending the time to accomplish these tasks will eventually come back to haunt you, often in the form of having to take time out of an already too-busy schedule to configure these elements as a separate project. Worse, if you don't make time to attend to these details during the original deployment, someone else may not realize that the capabilities are there and decide that it would be better to bring in a third-party solution to handle a task that is built in and should already have been in use.
Establishing Email Notifications

In each of the workload-oriented chapters, we've given the steps necessary to configure email notifications when you create a new protection group. Although we gave these steps as optional (after all, you don't need to turn on notification emails for a protection group in order for DPM to begin protecting the data sources in that protection group), you would be very wise to do so. You've already proven that you're wise by reading this book (wiser still if you bought it!), so make this step a part of your process for creating new protection groups.

When you create email notifications, the natural temptation is to list a handful of user email addresses. Resist this temptation. Instead, select a suitable distribution group or mail-enabled security group and use that as the recipient; create a new one if necessary. Then, document this group for your account provisioning team and let them know who needs to be a member of it. Depending on the scope of the group, you may even be able to make its direct membership consist of other groups in Active Directory, such as Domain Admins or custom groups that you've created to help people do their jobs. Taking the few extra seconds to create a new group will one day save someone some effort.

The advantage of using groups over individual recipients is that you don't have to make any configuration changes to existing protection groups when you have staff turnover. The new personnel simply get added to all of the same groups as the person they replaced, and they automatically start getting all of the notification email. If you have a custom group that controls which users are allowed to log on to your DPM servers, you can easily use this same group for notifications, ensuring that when you create a new DPM administrator, they won't miss any notifications.
Manually Sizing Disk Partitions

DPM supports adding disks that already have data on them to the disk pool. Provided there is some amount of unallocated space, DPM will use it. This means that if you have a 100GB disk with a 50GB partition on it and the rest of the space unallocated, adding the disk to the storage pool increases the size of the pool by 50GB. When a basic disk is added to the storage pool, DPM will convert it to a dynamic disk. This is true of any disk, be it iSCSI, local, direct-attached, and so on. Any disk that is seen as a block-level nonremovable device in Disk Manager (in other words, any disk that can be made a dynamic disk) can be used.
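The arithmetic above is simple, but it's worth sanity-checking before you add a partially used disk to the pool. A quick sketch (a hypothetical helper, not part of DPM's tooling):

```python
def pool_space_added(disk_size_gb, existing_partitions_gb):
    """Return the unallocated space DPM can claim when this disk joins the pool."""
    used = sum(existing_partitions_gb)
    if used > disk_size_gb:
        raise ValueError("partitions exceed disk size")
    return disk_size_gb - used

# The example from the text: a 100GB disk carrying a 50GB partition
# contributes only the remaining 50GB to the storage pool.
print(pool_space_added(100, [50]))
```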

Dynamic Disks and iSCSI

Some of you may be jumping up and down, thinking that you've caught us in an error. "Wait!" we hear you cry. "You can't use iSCSI volumes as dynamic disks!" Ah, you clever, clever people. You've been using the free Microsoft iSCSI Initiator software, haven't you? Well, you're right and you're wrong. This limitation does in fact exist, but it's not a fundamental limitation of iSCSI; it is a limitation in the current Microsoft iSCSI Initiator client software. When using this software, you can actually use iSCSI disks as dynamic disks; however, Microsoft does not recommend that you do so, and for a good reason. When you first connect to the iSCSI target LUN, you can open the Disk Manager MMC console and happily convert that basic disk to a dynamic disk. The problem comes when you reboot your server. When Windows initializes disks and disk connections, basic disks are handled much earlier than dynamic disks. Connections to basic disks will be restored; connections to dynamic disks cannot be guaranteed. Just for a moment, think about how bad it would be to have DPM start up and not be able to find its storage pool disks! If you're using some other iSCSI initiator software, however, you may not have this limitation; check with your initiator software vendor. Neither iSCSI nor DPM cares one way or the other. You may also, for this reason, want to invest in one of the hardware-based iSCSI HBAs; by handling these functions in hardware, they often avoid many of the timing issues.

Once the disk space is allocated to the storage pool, DPM makes it available for protection groups. When space is allocated to a protection group, DPM creates a volume on one of the dynamic disks in the storage pool that corresponds to the size specified during protection group creation. Let us for a moment consider the disk configuration of a machine with three physical disks. Disk 1, the system disk, has no available space and has not been added to the storage pool. Disk 2 has two volumes: a volume that was created prior to the disk being added to the DPM storage pool, and a second volume for use by DPM. Disk 3 was added as an empty disk with no partitions; the entire disk has been converted to a dynamic disk and used by DPM.

Note that it's a very bad idea to manually delete any disk volumes that are being used by DPM! Don't do it. If you need to stop protection on a group, first use the DPM Administrator console and do it the right way; then you can remove the disk volume from DPM control. Doing it this way takes longer and involves more steps, but it also prevents the DPM protection database from falling into an inconsistent state with respect to the actual data replicas to which DPM has access. It also keeps you from breaking the protection groups for every replica that was stored on the now-absent volume; these protection groups can become corrupt, requiring you to jump through extra hoops to remove them from the active DPM configuration before you can rebuild them and restart protection. Having to do this invalidates any of the backups you made to this point, which pretty much makes having them an exercise in futility.

If you want DPM to share disk space with a partition that is not going to be part of the storage pool, all you have to do is add the disk itself. Simple, right? Not so fast; that's all that's involved on the administrative side of things, but as with everything else, there is some fine print to consider: your expected I/O load.
If you have a single physical disk that hosts both part of the storage pool and another I/O-intensive application, such as the SQL Server database used by DPM, you are definitely setting yourself up for nasty performance issues. In a lab environment, this type of shared disk configuration may be acceptable for functionality testing; in the real world, you should avoid it if at all possible. Another possible issue with this configuration is that if the disk fails, you lose not only part of your recovery data but whatever was on the other partition as well.

Another consideration for disks on DPM is whether or not to use RAID. Let's say you have a DPM server with a ten-disk array on it. What level of RAID (if any) should you use? It all comes down to your SLA. If you are using long-term protection, and your SLA states that you will be able to recover data to within a week of an outage, RAID is not that important. However, if you guarantee in your SLA that data will be available from the previous night's backup (a typical SLA clause), you want to make sure that you are using some sort of RAID fault tolerance. We recommend that if you have two or more disks in the storage pool, you place them into some sort of RAID configuration to provide fault tolerance. We also recommend that you use at least two drives, so that mirroring (RAID 1) is an option. You can use a RAID 5 volume to get the maximum disk space out of your hardware, but be aware that in most RAID implementations you suffer degraded write performance because of the overhead imposed by the RAID 5 parity checksum process. High-end solutions such as RAID 1+0 (or RAID 10) may be overkill for the amount of actual disk I/O your DPM server is experiencing.

For a moment, let's assume that you need a lot of storage to protect all of your data sources; for the sake of argument, six or more drives are in your storage pool. The next question that comes to mind is this: is it better to put them all in one big array or to use multiple smaller arrays? The answer, of course, lies in the particular requirements of your organization. However, do remember that DPM manages all the details of its storage pools by itself. Some typical reasons for arrays with large numbers of drives include:

To provide a contiguous amount of storage for a truly stupendous file or set of files, such as a very large database. With DPM this is not necessary; it will happily use multiple smaller volumes and manage the storage itself. It does not need a single contiguous region of free space to protect a data source, but it won't hurt if you want to allocate it in that fashion.

To provide a high amount of drive I/O by using a high-end configuration, such as RAID 1+0, to increase the number of available spindles. Again, with DPM this usually isn't the requirement you need to worry about; simple capacity with decent performance is more than sufficient.

We tend to favor using multiple smaller RAID arrays over one monster RAID 5 or RAID 1+0 array; let DPM manage where the data is kept. We just want to give it the disks it needs and move on to our next task!

Ryan likes RAID 5 arrays because they usually deliver acceptable performance for typical operations while providing the highest degree of efficiency from the standpoint of raw disk capacity. Devin prefers RAID 1 mirror pairs because they don't tend to degrade I/O subsystem performance nearly as badly when one of the drives fails, nor do they take as long to rebuild and get back out of degraded mode when you replace the failed drive.

Which configurations you actually use in your environment, however, is up to you and depends on the particular requirements of your deployment.
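The capacity trade-offs behind Ryan's and Devin's preferences can be illustrated with the idealized formulas for each RAID level (a sketch only; real controllers reserve additional overhead for metadata and hot spares):

```python
def usable_capacity(raid_level, drives, drive_gb):
    """Idealized usable capacity in GB for common RAID levels."""
    if raid_level == 0:           # striping: all capacity, no redundancy
        return drives * drive_gb
    if raid_level == 1:           # mirror pairs: half the raw capacity
        if drives % 2:
            raise ValueError("RAID 1 pairs require an even drive count")
        return drives * drive_gb // 2
    if raid_level == 5:           # one drive's worth of capacity lost to parity
        if drives < 3:
            raise ValueError("RAID 5 needs at least three drives")
        return (drives - 1) * drive_gb
    if raid_level == 10:          # striped mirrors: half the raw capacity
        if drives < 4 or drives % 2:
            raise ValueError("RAID 10 needs an even count of four or more drives")
        return drives * drive_gb // 2
    raise ValueError("unsupported RAID level")

# Ten 500GB drives: RAID 5 keeps 4500GB usable, RAID 10 only 2500GB,
# which is why RAID 5 wins on raw capacity efficiency.
```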

Striped RAID Volumes and DPM: Just Say No

We will give you one hard and fast rule: never, ever, ever use RAID 0 (simple striping) for your DPM storage pool. You'll have awesome read and write performance for your DPM server, so it might be tempting to consider, especially if you're protecting a large number of data sources. Remember, though, that a RAID 0 array offers absolutely no redundancy whatsoever; the minute one of the drives in your RAID 0 array fails, you've lost it all.

Multi-Forest and Multi-Domain Considerations

Many medium-sized and large companies have more than one Active Directory domain, if not more than one Active Directory forest, in their environment. For these people, we have great news: DPM 2007 will absolutely protect any domain-joined machine that is a member of the same domain as the DPM server; it will also protect a domain-joined machine in another domain, provided there is a two-way trust between the two domains.

Let's talk for a moment about trust relationships between domains. When you install your first Active Directory domain controller, you create a single forest with a single domain. As you add new domains into this forest, trust relationships are automatically created for them. If you remember domain trusts from the Windows NT days, they had two undesirable characteristics:

They were one-way. If domain A trusted domain B, domain B did not in turn automatically trust domain A unless a separate trust relationship was also configured. In Active Directory, new domains in the forest automatically have two-way trust relationships created with their parent domains. If the domain is the start of a new domain tree, then the roots of the trees have two-way trust relationships with the other domain trees.

They were intransitive. If domain A trusted domain B, and domain B trusted domain C, domain A did not automatically trust domain C. Again, in Active Directory this all changes; if your forest's root domain is domain.tld and you have the child domains foo.domain.tld and bar.foo.domain.tld, each domain has an explicit two-way trust relationship with its immediate parent and child domains. However, because both domain.tld and bar.foo.domain.tld trust foo.domain.tld, they also trust each other.
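The transitivity described above amounts to reachability over a chain of two-way trust links. A simplified model (it ignores trust types, selective authentication, and SID filtering) makes the foo.domain.tld example concrete:

```python
def trusts(domain_a, domain_b, trust_pairs):
    """True if domain_a trusts domain_b via chained two-way trusts."""
    # Build an undirected adjacency map from the two-way trust pairs.
    adj = {}
    for x, y in trust_pairs:
        adj.setdefault(x, set()).add(y)
        adj.setdefault(y, set()).add(x)
    # Breadth-first search outward from domain_a.
    seen, frontier = {domain_a}, [domain_a]
    while frontier:
        nxt = []
        for d in frontier:
            for n in adj.get(d, ()):
                if n not in seen:
                    seen.add(n)
                    nxt.append(n)
        frontier = nxt
    return domain_b in seen

# The forest from the text: each child trusts its immediate parent, so
# domain.tld and bar.foo.domain.tld end up trusting each other transitively.
pairs = [("domain.tld", "foo.domain.tld"),
         ("foo.domain.tld", "bar.foo.domain.tld")]
```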

For those of you with a single-forest deployment, this keeps things simple. By default, all domains within a forest trust each other via these automatically configured two-way transitive trusts. Although you can protect all of the domains in a forest from DPM servers in just one of those domains, you may not want to or be able to; a lot depends on your security considerations and network topology.

If you have multiple forests and want to protect data sources in another forest using DPM, you don't have a lot of options. DPM 2007 does not support protecting resources in another forest; you can't use a forest-level trust to allow DPM servers in one forest to protect servers in another forest. Likewise, you can't use a shortcut trust to create a direct trust relationship between two domains in two forests and expect DPM to work. We hope that support for this scenario will come in future versions of DPM, but for now, you have to deploy DPM servers in each forest you want to protect.
Protecting Other Workloads

The DPM 2006 release showed some impressive capabilities and technologies but fell short in a few critical areas, such as the lack of native tape support and the inability to natively protect any data sources other than file server resources. DPM 2007 certainly addresses those concerns, but you may be asking yourself, "How do I use DPM to protect this application I really need?" The answer depends on the underlying technologies in use. The first question you need to answer is this: who makes the application?
OTHER MICROSOFT APPLICATIONS

If the application you're trying to protect is a Microsoft application, then the chances are good that it relies on a combination of Microsoft technologies such as SQL Server and IIS. Your job is to figure out which data sources the application is using and how best to protect those data sources in DPM. If you're already protecting these applications using some backup software, then you already have an easy way to find this information. Simply list the data sources you have to protect in your current backup solution and match them to the available workloads natively supported by DPM. Using this method, you can easily protect the necessary components of any Microsoft application, just as you would with any other backup solution. In Table 12.1, we've provided a listing of other common Microsoft applications and the data sources you need to protect with DPM.
Table 12.1: Protecting Other Microsoft Applications

Great Plains, Live Communications Server, Microsoft Operations Manager, Microsoft Project Server, Office Communications Server, System Center Configuration Manager, System Center Operations Manager, and Systems Management Server: Protect the relevant SQL databases using a SQL Server database data source. Dump the IIS metabase and configuration to disk. Protect file systems and dumped IIS data using a file-based data source. Optionally, protect the system partition, application data, and system state on protected servers for easy restoration.

Internet Security and Acceleration Server: Dump the ISA configuration to disk. Protect the dumped configuration data using a file-based data source; this requires your ISA server to be domain-joined. Optionally, protect the system partition, application data, and system state on protected servers for easy restoration.

When you are devising custom protection regimes, there are a number of issues you should be aware of that will ease your protection process. Many of these data sources require you to take some action, such as creating dump files, so that file-based data sources can be used. These dump files should be created on a regular basis using some sort of scheduling mechanism, such as the built-in Scheduled Tasks component that comes with Windows Server. As you create dump files from data sources, you may not stop to think about what filename to use. One common best practice in these situations is to embed a date and timestamp in the filename so you can easily distinguish between the dump files from different days. While

this practice is good for other backup applications, it isn't really needed by DPM. In fact, there are two problems it creates that may make it unsuitable for use with DPM:

The first problem is that it creates a new copy of the data dump file every time the scheduled task runs. Over time, these dump files add up and waste disk space. Unless you really enjoy writing a script that rotates through these dump files and removes the older ones, simply reuse the same filename in each invocation and let DPM's recovery points be your versioning system.

The second problem is with your synchronization schedules. Since your data source changes only once a day (we presume), there's no point in setting the protection agent's synchronization to a small interval. Instead, set up your recovery point schedule to update the data source only after the dump file has been created. This is the best use of network bandwidth and disk space.

Because of the specialized synchronization requirements, you will definitely want to place all data sources for one application in the same protection group.
THIRD-PARTY APPLICATIONS

If the application you're trying to protect isn't a Microsoft application but does use Microsoft technologies such as SQL Server and IIS, then you're still going to have a relatively easy time protecting it. As with the applications we talked about in Table 12.1, you need to figure out which data sources the application is using and then protect those data sources in DPM. As an example, consider a typical blog application built on ASP.NET. These web-based applications usually have a data tier built on top of a database such as SQL Server. They also have a web tier that consists of files placed on the IIS server. Protecting this application, therefore, requires protecting two data sources:

Protect the database tier as a SQL Server database data source on the database server.
Protect the web tier as a file-based data source on the IIS server.

Depending on your application, you may need to use application-specific utilities to export or dump configuration files to the filesystem, where you can then protect them in DPM as file server data sources. You also need to consider whether you need to protect the system state on your application servers. If your application uses a third-party data source, you'll probably have to use the backup methods included with those products to create a dump file. You can then protect these dump files as file server data sources with DPM. As with our advice for Microsoft applications, you definitely want to keep all data sources in an application in the same protection group.
PHYSICAL TO VIRTUAL PROTECTION

This option is extremely intriguing: by combining DPM 2007 with Microsoft Virtual Server 2005 R2 SP1 and the new Microsoft System Center Virtual Machine Manager (VMM), you can actually provide advanced data continuation protection for physical servers. Here's how it works:

Use the Physical to Virtual (P2V) protection feature of VMM to create virtual machine copies of your physical production hosts. These virtual machine copies have the same Windows SID and Active Directory membership information as the physical host; they can be activated and restored much more quickly than you can rebuild the physical hosts in the event of a widespread emergency that causes the loss of an entire site. You only need to perform this protection once a week to capture the important application and operating system file changes.

Use the DPM protection agent on the physical production server to protect the system state. When you switch over to using the virtual machine copy, you have this system state to restore to the virtual machine, capturing any changes that happened since the last P2V snapshot was taken.

Use the DPM protection agent on the physical production server to protect your production data, just as you would normally. When you switch over to using the virtual machine copy, you can restore this data to the virtual machine and continue to offer service to your users.

Like DPM, VMM also offers Windows PowerShell integration. This allows you to easily create custom scripts that manage all aspects of switching over to your virtual machine copies in the event of an emergency. You get a tested, automated failover scenario that provides quick recovery and restoration of essential services. Going into the details of how to achieve this configuration is outside the scope of this book, as we don't cover VMM at all. However, Microsoft has already begun producing whitepapers on this topic; we feel confident that they will continue to offer more advanced guidance on this option.
Protecting DPM

There's a large debate over how effective clustering really is. Most of the discussion and argumentation on this subject centers on a single issue, which Ryan states like this: "It does no good to have redundant servers if your storage device represents a possible single point of failure." We'd like to point out that the same holds true for your protection method. By deploying DPM, you now have this wonderful deployment that protects all of your critical servers more easily and faster than ever before. In the process, many of us forget that this functionality turns our data protection system into a critical server as well. To be complete, you need to protect your DPM deployment.

How you do this depends on your environmental needs. In some organizations, a central department manages all backup and restore operations. If you've got a large enterprise with a combination of Windows servers, Unix servers, and a mix of other server operating environments, you probably have some sort of high-priced, high-end heterogeneous backup application. On the other hand, you may have an all-Windows environment in which DPM is going to be your new protection application. In either case, you need to take adequate measures to protect DPM itself.

TWO-TIER DPM DEPLOYMENTS

We strongly recommend that you deploy two tiers of DPM servers in your organization: the first tier to protect your production servers and data sources and the second tier to protect the first-tier DPM servers. Microsoft calls the first-tier servers primary servers and the second tier, logically enough, secondary servers. Now, don't panic as you start to think of the runaway hardware and storage costs; a secondary server that is used to protect other DPM servers doesn't need to have nearly as much disk storage as the primary servers do. Using a two-tier deployment gives you a few advantages:

You can use a secondary server to protect both the database and the replica data on the primary server, giving you several levels of additional protection for your primary servers.

You can deploy a primary server in your branch offices and then have a central secondary server pull the protected data into your central datacenter. This deployment makes a great scenario for site recovery, especially in a geographically diverse organization.

You have a secondary source for recovering data. A secondary server recognizes that the data on the primary server comes from production data sources and keeps that information intact. When you want to recover the data from the secondary server, you can recover it directly to the protected data sources from which it originally came.

You can separate your short-term disk storage and long-term tape storage policies by tier. The primary servers can provide disk-based short-term local recovery; the secondary servers can then provide centralized tape storage once the data is replicated to them.

In order to protect a primary server with a secondary server, you first need to install the DPM protection agent on the primary server. Once you've done that, you also need to ensure that the SQL Server VSS Writer service is enabled on the SQL Server that holds the primary server's DPM databases. In an out-of-the-box installation, these databases are held on the SQL Server instance that was installed on the primary server, and the VSS writer is disabled by default. To enable it, follow this procedure:

1. Open the Services control panel (Start > Administrative Tools > Services).
2. Find the SQL Server VSS Writer service, right-click it, and select Properties.
3. Make sure that the Startup Type is Automatic.
4. If the service is not running, click Start.
5. When the service is configured, click OK to close the Service property sheet.
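The same change can be scripted for repeatable deployments. On a default installation, the short service name for the SQL Server VSS Writer is SQLWriter, but verify it with `sc query` on your own systems before relying on it. A small sketch that assembles the commands (the trailing space after `start=` is required by `sc`):

```python
def vss_writer_commands(service="SQLWriter"):
    """Build the commands that set the VSS writer service to automatic and start it."""
    return [
        f"sc config {service} start= auto",  # sc syntax: space after 'start=' is mandatory
        f"sc query {service}",               # confirm the current state
        f"net start {service}",              # start it if it isn't running
    ]

for cmd in vss_writer_commands():
    print(cmd)
```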

When you select a primary server as your data source, you will see the following data sources on the primary server:

The SQL Server databases used by DPM, if they are in the default instance of SQL Server on the primary server. If not, you will need to add these databases from the appropriate server.

The disk volumes configured on the primary server. This allows you to protect the operating system and application files.

The protected replicas that are present on the primary server. This allows you to protect the replica data that has been synchronized from the production data sources.

You should never protect a DPM SQL Server database from the DPM server that uses that database. There is, however, one exception to this rule: you can protect a DPM SQL Server database from the DPM server using that database as long as you are copying the database to tape. So, what are the minimum data sources you should protect on a primary server? At the very least, protect these data sources:

The SQL Server databases
The \Program Files\Microsoft Data Protection Manager\DPM\Config folder
The \Program Files\Microsoft Data Protection Manager\DPM\Scripting folder

By protecting the above data sources and no others, you can at least rebuild a protected primary server as long as the drive volumes in the storage pool are still intact. For additional recovery coverage and options, you should protect the other data sources as well. As always, these data sources should be included in the same protection group, just as with any other protected application. Finally, you can't deploy more than two tiers of DPM servers in DPM 2007; you get primary servers and secondary servers, and that's it. You also can't use two DPM servers to protect each other. However, we know of no limitation (other than disk and bandwidth, of course) that would prevent you from using one secondary server to protect multiple primary servers.
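The deployment rules above (no mutual protection, no third tier, multiple primaries per secondary allowed) can be checked with a small validator. This is an illustrative sketch, not a DPM tool; it models only DPM-to-DPM protection relationships:

```python
def valid_dpm_topology(protects):
    """protects: list of (secondary, primary) pairs; True if the layout is legal."""
    pairs = set(protects)
    for sec, pri in pairs:
        # Two DPM servers may not protect each other.
        if (pri, sec) in pairs:
            return False
    secondaries = {s for s, _ in pairs}
    primaries = {p for _, p in pairs}
    # A protected DPM server that also protects another DPM server
    # would create a third tier, which DPM 2007 does not support.
    if secondaries & primaries:
        return False
    return True

# One secondary protecting several primaries is explicitly allowed.
print(valid_dpm_topology([("sec1", "pri1"), ("sec1", "pri2")]))
```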
PROTECTING DPM WITH ANOTHER APPLICATION

Another option that works well for heterogeneous environments is to protect your DPM server with an existing backup solution. This option is definitely attractive to organizations with large, enterprise-scale heterogeneous backup solutions. In such an instance, you probably want to use the same solution that you currently use to protect non-Microsoft operating systems to also protect DPM, while using DPM as a convenient protection aggregation point for your Windows servers. You may be wondering, "If I already have an enterprise-level backup solution, why should I bring in DPM just to protect my Windows machines?" That's a good question, and there are three main answers we're aware of:

As we've spent the rest of the book trying to persuade you, DPM gives you significantly simplified management options and better recovery processes than just about any other product with which we're familiar. Combine that with continuous data protection, and this is a natural way to protect Windows servers.

Often, Windows servers in a larger environment are built around silos of specific applications: Exchange servers, for example, provide corporate-wide messaging while most of the other servers run other operating systems. By deploying DPM, these administrators now have a level of immediate backup and restore that's under their control. They get the ease and speed of disk-based restores, allowing them to meet their SLAs, but they can still integrate into the corporate-wide tape-based solution.

If you're large enough to negotiate one of the SA/EA price agreements with Microsoft, you should be able to acquire the necessary DPM licenses at a significantly lower cost than licenses for most competing enterprise-level products.

In order to protect DPM with another backup application, you should first look for an agent that is specifically designed to be DPM-compatible. Often (but not always), enterprise backup applications with DPM-aware agents will help mask the potential complexity of backing up DPM servers. If your application isn't DPM-aware, it should at the very least be VSS-aware. If your backup software supports VSS, you can protect the DPM server configuration and replica data by protecting the following targets:

- By default, the \Program Files\Microsoft Data Protection Manager\DPM\DPMDB folder contains the DPM SQL Server database: DPMDB2007.mdf.
- By default, the \Program Files\Microsoft Data Protection Manager\Prerequisites\MS$DPMV2Beta2$\Data folder contains the DPM SQL Server Reporting Services database: ReportServer.mdf.
- The protected replica data is stored under the \Program Files\Microsoft Data Protection Manager\DPM\Volumes\Replica folder. Each separate protected data source is under a separate NTFS mount point, so you can pick and choose specific replicas to protect if you don't want to back them all up.

When you protect DPM in this manual fashion, you must absolutely ensure that the application you're using makes no changes to the data that you're protecting. If it updates any of the file metadata at all (flipping archive bits, updating file access times, or other common operations), you risk corrupting your protected replicas, and you'll have to perform a full consistency check to sort out the resulting problems. Most backup applications offer an option that prevents these modifications; make sure it's enabled.
Backing Up DPM with a Non-VSS-Aware Application

DPM relies heavily on VSS, both to protect production data on your servers and to create and manage replicas in the storage pool. If your backup application can't use VSS, you've got a much harder process ahead of you. DPM includes a little-known utility by the name of DPMBackup.exe, a name that makes DPM 2006 administrators cringe in fear. The purpose of the DPMBackup.exe utility is, quite simply, to make dumps of vital DPM data available to non-VSS-aware applications. It is essentially a manual interface to the various VSS snapshots and replicas on the DPM server. When you run this command, it performs two tasks:

- It creates dump files of the DPM SQL Server databases. These dump files are created in a well-known location that is separate from the live databases.
- It creates a separate set of mount points that contain backup shadow copies of the latest replicas of each protected data source. These mount points are organized by machine name under the \Program Files\Microsoft Data Protection Manager\DPM\Volumes\ShadowCopy\ folder.

The basic theory is simple: configure your backup job to run the DPMBackup.exe utility as a pre-execution task, and then back up the locations just listed. In practice, though, this makes your backup task run significantly slower. DPMBackup.exe takes a while to run, because it has to enumerate, create a shadow copy for, and mount every single protected replica on the server, whether or not you're interested in backing it up. Although you can pick and choose which of the resulting mount points you actually copy, you have no control over which ones are created; it's all or nothing.

The process of integrating DPMBackup.exe into your current backup application regime is long and tedious, and it requires specific knowledge of your application that we can't have; as a result, we can't give you the specific steps here. We can, however, urge you to read the DPM Operations Guide, which contains specific procedures for this process. You should also consult with your backup application vendor to see if they have a specific list of processes to follow when protecting DPM servers. We also urge you to simply upgrade to a better application that provides VSS support on Windows servers. Trust us; this will save you a lot of pain and agony in the long run. If you think that backups are bad using this method, you really don't want to see how restore operations work!

As we mentioned, DPM 2006 administrators probably hold a special place in their hearts for this tool. Recall that DPM 2006 didn't have native tape-handling capabilities. Unless you had one of a couple of very specific DPM-aware backup applications, you had to perform backups through the DPMBackup.exe utility with DPM 2006.

Network Issues
Depending on your environment, the amount of network traffic needed and generated by DPM may raise some eyebrows among the people in charge of the switches and routers. The key to a successful DPM deployment is integration: you want your DPM deployment to be convenient, reliable, and as transparent as possible.
Managing Network Traffic

Exactly duplicating production conditions in a lab is nearly impossible for most administrators. We feel the best approach is to gather information from your current infrastructure and then, in your lab environment, duplicate the amount of data you'll be protecting. You can measure this traffic and add it to your baseline to get an idea of the impact. Regardless of how comfortable you are with your network capacity, it is always a good idea to first roll out DPM in your production environment without protecting any of your servers. Once it's rolled out, begin protection during off hours. This will allow you to monitor the network traffic and get a better feel for how replication traffic may or may not impact your environment during peak hours. The main reason we emphasize minimizing impact is that many users have become cynical and jaded due to hearsay and bad past experiences. When informed of a new service being implemented, most users will expect days or weeks of problems before it all works as expected. Correct deployment of new services goes a long way toward setting the right expectations and garnering appreciation.
NETWORK INTERFACE AFFILIATION

Some organizations have dedicated subnets, VLANs, or physical networks to handle backups. If you have a separate network for backups, the protected server needs to be able to communicate on that network. For this reason, a second physical interface may be necessary (we say "may" here because of the possibility of using VLANs; see the next section for more info). If this is the case, we recommend that you follow these guidelines:

- Do not specify a DNS server for the backup network.
- Do not specify a WINS server for the backup network.
- Do not specify a default gateway for the backup network. By leaving the default gateway setting empty, any packets that cannot be directly routed via interfaces on the host will not be sent. Having more than one default gateway on a server can be the cause of serious and hard-to-fix networking issues on any host.

If your DPM server has a second interface on the backup network, as well as one on the regular network, there's an interesting problem: how do you make the traffic go where you want? There are two basic approaches to this problem:

- Use hosts files to hardwire network names to IP addresses.
- Use DPM's built-in backup network functionality.

Using Hosts Files

The first approach is to handle it via entries in your hosts file. For example, if your regular network is 192.168.0.0/24 and your backup network is 172.16.0.0/24, the DPM server will automatically register its name with the DNS server on the regular subnet. You can make the protected servers reach the DPM server over the backup network by adding an entry to each protected server's hosts file that lists the DPM server's backup network IP address. Be aware, however, that by overriding server names and IP addresses in this fashion, you are doing so universally on the affected server. You have to provide two separate hostnames in order to distinguish your networks, and applications have to use the correct name in order to get the correct network. Sometimes you don't have the option to select which name an application uses (as is the case with DPM, which looks up names in Active Directory). As a result of these difficulties, this approach is less than optimal for DPM; it may work fine for SQL Server hosts, but DPM is a different beast.
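To make that concrete, here's what such a hosts file entry might look like on a protected server, reusing the example subnets above; the hostnames and addresses are hypothetical:

```
# %SystemRoot%\System32\drivers\etc\hosts on each protected server.
# Second hostname for the DPM server, resolving to its backup-network NIC;
# the regular name (on 192.168.0.0/24) stays registered in DNS as usual.
172.16.0.10     dpm-backup.example.com
```

Any application told to use dpm-backup.example.com would then send its traffic over the 172.16.0.0/24 backup subnet.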
Using the Backup Network Feature

DPM has the built-in ability to use a preferred backup network specified by an administrator, which is a much simpler way of utilizing a segregated network for backups. For all of you readers who are afraid of the command line, there's some bad news: this feature is available only via the DPM Management Shell. Don't panic, though; the command is as simple as can be. The Add-BackupNetworkAddress cmdlet (for more about cmdlets and PowerShell, see Chapter 4, "Using the DPM Management Shell") takes care of this functionality for you. This cmdlet requires the following parameters:

- The DPM server you're modifying.
- The IP address of the network to use for protection operations. This can be either a single IP address or an IP subnet in CIDR notation.
- A sequence number. This is used when you've specified multiple addresses or networks; it tells DPM in which order to use them.

Let's look at a couple of quick examples. If your DPM server is DPM-SRV01, your production network is 10.10.0.0/24, and your backup network is 192.168.150.0/24, you'd enter the following command in the DMS session on DPM-SRV01 (or on whichever management workstation you've installed the DMS):
Add-BackupNetworkAddress -Address 192.168.150.0/24 -DPMServername DPM-SRV01 -SequenceNumber 1

To tell DPM to use the production network as a secondary preference, you would then use this command:
Add-BackupNetworkAddress -Address 10.10.0.0/24 -DPMServername DPM-SRV01 -SequenceNumber 2

MANAGING BANDWIDTH

When you're trying to protect servers over wide-area network (WAN) connections or other slow networks, you need to manage your bandwidth. Isn't it good to know that DPM gives you the tools you need to keep control?
Jumbo Frames

If you're lucky enough to have a dedicated network for backups, consider using jumbo frames. Enabling jumbo frames on an interface raises the MTU, enabling larger packets to be sent over the wire with less overhead from the underlying Ethernet media. The end result is that your network spends more time passing data and less time passing packet headers, which helps you get even better performance out of high-end technologies such as Gigabit Ethernet.

If you do decide to use jumbo frames, keep in mind that they carry a consequence: because a jumbo frame is larger than the default MTU of 1,500 bytes, you need to ensure that every network interface, switch, and device attached to the network also supports jumbo frames, and that they all support the same maximum frame size. For example, our servers have a variety of Gigabit Ethernet network interfaces; some of them support a maximum jumbo frame size of 7,000 bytes, while others support a maximum size of 9,000 bytes. However, some of our switches support an MTU of only 7,000 bytes. If we ever try to use those switches for our backup network, we will have to set every network interface on the backup network to an MTU of 7,000 bytes in order to take advantage of jumbo frames. If we don't, some devices will issue packets larger than 7,000 bytes, causing retransmissions that bring the whole network to a crawl. Ever seen a Gigabit Ethernet network pass data more slowly than a 10Mb Ethernet network? It's not a pretty sight.

To avoid this unpleasantness, do your homework on your equipment to match MTU settings, and don't forget to test in the lab environment. You may find other surprises; most switches support only a single device-wide MTU setting. Another caveat to keep in mind is that while many network interface cards support both jumbo frames and tagged VLANs, we have yet to see a single device that supports jumbo frames over virtual interfaces.
If you are using tagged VLANs and VLAN trunking to provide multiple networks for your DPM servers, you might want to revisit that architecture and plan to add a dedicated physical network interface to each server participating in the network. The difference between running with jumbo frames and without can make a world of difference to your restore times, which is when you need speed the most.
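To see why frame size matters, here's a rough back-of-the-envelope calculation of our own (not from the DPM documentation) showing how much of each Ethernet frame actually carries application data, assuming plain Ethernet with minimal IPv4 and TCP headers:

```python
# Rough payload efficiency of a TCP stream over Ethernet at various MTUs.
ETH_OVERHEAD = 14 + 4        # Ethernet header plus frame check sequence
IP_TCP_OVERHEAD = 20 + 20    # minimal IPv4 header plus minimal TCP header

def payload_efficiency(mtu):
    """Fraction of each frame's bytes that carry application data."""
    payload = mtu - IP_TCP_OVERHEAD
    return payload / (mtu + ETH_OVERHEAD)

for mtu in (1500, 7000, 9000):
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} payload")
```

The percentage gain looks modest, but jumbo frames also mean far fewer frames, and thus far fewer per-packet interrupts, for the same amount of replica data.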

Calculating Usage

Calculating network usage is an important and often ignored aspect of running a network (not to mention optimizing a DPM server). A monitoring tool that is compatible with your switching infrastructure can give you some good information, including average utilization, peak usage, and so on. This is important information to have, but we recommend you take it a step or two further. Most managed switches have a feature called port mirroring, which allows you to replicate all traffic from one switch port to another. If you attach a machine running a network monitor to a mirrored port, you can pull usage information for a single machine. We recommend this so that you can get an idea of DPM's network utilization as a percentage of the whole; it can also be useful in identifying traffic bottlenecks to a single host. Another method is to use either the Network Monitor (NetMon) or Performance Monitor (PerfMon) applications on the DPM server to capture information on peak and average usage. In PerfMon, this can be accomplished by using the Bytes Total/sec counter and monitoring during synchronization and express full backup periods.
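Once you've captured the Bytes Total/sec counter, converting it to a link utilization percentage is simple arithmetic. This sketch is our own illustration, and the 30 MB/s sample figure is purely hypothetical:

```python
def utilization_pct(bytes_per_sec, link_mbps):
    """Approximate link utilization from PerfMon's Bytes Total/sec counter."""
    bits_per_sec = bytes_per_sec * 8          # counter reports bytes, links are rated in bits
    return 100.0 * bits_per_sec / (link_mbps * 1_000_000)

# e.g. a sustained 30 MB/s during an express full backup on Gigabit Ethernet:
print(f"{utilization_pct(30_000_000, 1000):.0f}% of a 1 Gb/s link")
```

Comparing that percentage against your baseline tells you at a glance whether synchronization traffic is likely to crowd out other users of the link.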
Throttling

In other chapters, we tell you that you can throttle a protected server (and yes, we've met many a server we wished to throttle, but we could never find the neck). Throttling means limiting the amount of bandwidth that DPM can use to protect the data sources on that server. DPM sets throttling options on a per-protected-server basis, so you can accommodate multiple subnets with differing bandwidth utilization and requirements. Throttling uses the Quality of Service (QoS) Packet Scheduler that comes with Windows Server (make sure it is installed on both the DPM server and the protected server before you try to use throttling). The QoS Packet Scheduler is a group of components that can differentiate traffic flows so that higher-priority traffic receives preferential treatment; it also contains the elements necessary to define a specific limit on bandwidth usage based on traffic type. Obviously, if you're not experiencing any network slowness, you won't be concerned with throttling. But if you need it, how do you know how much bandwidth to allow DPM to use? If you throttle it down too much, synchronizations may not complete in a timely fashion. The answer is (you've probably guessed it) to try it out in a test environment. Set up an identical payload, and simulate the circumstances of your production environment. At this point, you can take two different approaches:

- Put nothing but DPM and the payload in the test environment. You can then determine about how much data is transferred with each synchronization and plan your throttling accordingly.
- Get some real-time statistics of traffic on your production network and use a packet generator to simulate the conditions. Then use trial and error to figure out how much bandwidth to allow DPM to use.
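Whichever approach you take, one quick sanity check helps: estimate how long each synchronization will take under a given throttle. This is our own arithmetic with purely hypothetical numbers, not a DPM formula; if the result is longer than your synchronization interval, the throttle is set too low:

```python
def sync_minutes(changed_mb, throttle_mbps):
    """Minutes needed to ship `changed_mb` of changed data at `throttle_mbps`."""
    seconds = (changed_mb * 8) / throttle_mbps   # megabytes -> megabits
    return seconds / 60

# e.g. 900 MB of churn per synchronization under a 16 Mb/s throttle:
print(f"{sync_minutes(900, 16):.1f} minutes per synchronization")
```

At a 30-minute synchronization frequency, anything approaching 30 minutes here means the next synchronization will start before the last one finishes.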

Encryption

It seems that every time you check the news, there's another story about data theft. Usually it comes in the form of a stolen laptop, but if you rotate your tapes to an offsite location, the opportunity arises for one or more to go walking. Because tapes are a removable and highly portable medium, protecting the data on them becomes necessary.
THE EFFECT OF THE WINDOWS SERVER 2003 SP2 IP STACK CHANGES

When Windows Server 2003 SP2 was released, it came with an extra goody for all the good systems administrators: the Scalable Networking Pack (SNP). One of the major functions of this feature is to help Windows offload CPU-intensive packet processing onto specially equipped network cards, freeing up the CPU cycles normally associated with these functions. If that doesn't make sense to you, think about it in the same fashion that the graphics processing units on modern high-end graphics cards take over the heavy-duty rendering tasks for your gaming machine. You may have heard of, or already be using, a cryptographic accelerator card, which performs the same kind of processor offloading for cryptographic libraries.

The SNP allows a new generation of network interface cards (supported by the proper drivers) to offload additional computational tasks that might otherwise be the source of performance bottlenecks on machines that handle many simultaneous connections, such as servers. As a result, the SNP can help you increase throughput on the network interface cards in your servers, removing performance bottlenecks and increasing your potential scalability.

In order to support the SNP features, Windows requires your network card miniport driver to be redesigned to meet the specifications of NDIS version 6.0; this is the same version of NDIS that is present in Windows Vista and Windows Server 2008. NDIS 6.0 gives us a lot of desirable functionality; in addition to offloading some of the packet processing to the NIC, it also allows the driver to utilize multiple CPUs (in systems that have them) to spread the network processing load evenly over the system instead of bogging down a single CPU.

Of course, there is a downside: some issues arise from interactions between the SNP and bugs in the drivers for existing popular server-level network interface chipsets. Some of the issues reported include:

- You cannot create a Remote Desktop Protocol (RDP) connection to the server.
- You cannot connect to shares on the server from a computer on the local area network.
- You can connect to websites hosted on the server or on the Internet only if they use a Secure Sockets Layer (SSL) connection; you cannot connect over an unencrypted HTTP connection.
- You experience slow network performance.
- You cannot create an outgoing FTP connection from the server.
- You experience intermittent RPC communication failures.
- You cannot run the Configure E-mail and Internet Connection Wizard successfully.
- You find that Microsoft Internet Security and Acceleration (ISA) Server blocks RPC communications.
- You cannot browse Internet Information Services (IIS) virtual directories.

This goes to prove two points:

Always test new software and upgrades in a lab environment.

Always keep up to date on hardware-specific updates.

However, we live in the real world, where sometimes things don't work as we want. If you've installed SP2, you're running into problems, and you need to disable the SNP for a quick fix, there's a way. Simply open a command prompt on the affected server and enter the following command:
netsh int ip set chimney DISABLED
This turns off the SNP offloading feature and should give you the breathing room you need to get updated drivers tested and loaded onto your servers. Once you've done that, don't forget to re-enable SNP offloading:
netsh int ip set chimney ENABLED
There are several Registry parameters that you can tune to affect how the SNP performs its offloading; Microsoft has helpfully documented them in Knowledge Base article 912222, "The Microsoft Windows Server 2003 Scalable Networking Pack release," available online at http://support.microsoft.com/kb/912222. As always with Registry edits, be cautious and fully test any changes you make.

DPM provides the option to encrypt data whenever tape is used as the storage medium (be it short or long term). In order to encrypt the data on a tape, you need to have a certificate issued to the DPM server. You can use any of the following types of X.509-compliant certificates:

- Self-signed certificates
- Certificates from a trusted third-party certificate authority (CA)
- Certificates from an internal PKI deployment, such as the Windows Certificate Services CA feature of Windows Server 2003 or comparable PKI offerings from other vendors

For DPM to encrypt data to tape, the certificate must reside in the DPMBackupStore. So if you generate a cert from a trusted root CA in your network, this is where you must place it. You can use the Certificates MMC snap-in console to view, manage, and import the certificates on your server. By default, this console is not preconfigured on your computer; you must open the MMC, add the console, and select the local machine account to manage. Self-signed certificates are a little easier. You can accomplish the whole process with a utility that is included in the .NET Framework SDK (which you can find at http://download.microsoft.com) called MakeCert.exe. The syntax is as follows:
Makecert.exe -r -n "CN=MyCertificate" -ss DPMBackupStore -sr localmachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 -e <expiration date in mm/dd/yy format>

You will need to supply your own information for values of the CN and expiration date fields. If you want to encrypt DPM connections over the wire, you should use the built-in IPSec policies offered by Windows. Offering specific steps for enabling IPSec policies for DPM is outside the scope of this book, but we can point you to the Microsoft TechNet IPSec website. This site contains a variety of resources to help you understand and deploy IPSec in your network. You can find this site at http://technet.microsoft.com/en-us/network/bb531150.aspx.
Importing Certificates in Windows

As we mentioned previously, Microsoft failed to include a predefined shortcut to the MMC Certificates console. We could speculate about why they decided to omit one, but we'll spare you and cut to the point: this console is extremely useful, so you're going to have to know how to load it. Here's how you do it:

1. Open the Microsoft Management Console. You can do this by running the mmc.exe executable from either a command prompt or the Start > Run prompt.
2. In the MMC, select the File > Add/Remove Snap-in menu option.
3. Select Certificates from the list of snap-ins and press the Add button to add the Certificates snap-in to the active console instance.
4. Select the appropriate account management option: User Account if you're managing the private certificate stores for the user you're currently logged on as, Service Account if you're managing the certificate stores associated with a specified service account, or Computer Account to manage the certificate stores that are available for all accounts running on the computer, including the local machine account. For DPM, you probably want to pick the Computer Account option.
5. If you selected Service Account or Computer Account in step 4, click the Next button. On the next screen, select Local Computer; this should be suitable for most uses. Click the Next button.
6. If you selected Service Account in step 4, click the specified service account in the list.
7. Click the Finish button.
8. Click the Close button to close the snap-in chooser, and click OK to return to the console.

Your selected snap-in should now be loaded in the console. We recommend that you save this custom MMC console for future use, perhaps in the C:\Documents and Settings\All Users\Start Menu\Programs\Administrative Tools folder. Now that you have your own certificate management console, let's talk quickly about how to import certificates.
In order for the DPM service processes to be allowed to access the certificate and associated private key, you usually want to manage certificates for the computer account.

Let's talk for a minute about the characteristics of the certificate you'll need for use with DPM. The biggest point we can make is that your certificate needs to be packaged with the associated private key:

- If you don't have the private key, you can only use the certificate to authenticate and decrypt files that have been signed and encrypted with that certificate and its associated private key. In order to encrypt the backup files, DPM needs both the certificate and the private key.
- The private key is created when you create the original certificate request; it is never sent to the issuing CA. Instead, the CA issues the certificate based on a hash of the key.
- If you're moving the certificate from another machine, it must be exported with the private key. Somewhat counterintuitively, whether or not you can export the private key depends entirely on how it was first imported or installed on the computer. During the first installation of a certificate and private key on a computer, you need to check the option that allows Windows to export the private key at a later date. If you don't, you won't be able to export the private key, and you'll need to use the original certificate bundle file that you received from the CA; if you don't have a copy of that, you're out of options, and it's time to request another certificate from the CA (with any corresponding cost). We've been bitten by this in the past, so get in the habit of saving your certificate bundles in a known, protected, secure location.

The certificate bundle must be in the PKCS#12 format. For Windows machines, this means either a .p12 or .pfx file extension. This format is required in order to safely store both the certificate and the private key in the same file, and it uses a password to protect the private key (remember, anyone who has the certificate and the private key can sign and encrypt files as if they were you). The PKCS#12 format has another advantage, though: it can also be used to store an entire certificate chain, such as the certificates used by the intermediate and root CAs that were part of issuing your certificate.

Now that you know what kind of certificate you'll need, here's how to import it for use with DPM:

1. Open your saved Certificates console.
2. Navigate to Certificates (Local Computer) > Personal > Certificates. Note that if you don't have any certificates present under the computer account, the Certificates folder won't be present yet. Just use the Personal folder for now; when you've successfully imported a certificate, the Certificates folder will be created automatically and your certificate will be placed in it.
3. Right-click the folder and select the All Tasks > Import option. Click the Next button to begin the Certificate Import Wizard.
4. Click the Browse button and navigate to the PKCS#12 file containing the certificate you're importing. Click the Open button, and then click the Next button.
5. Enter the password used to protect the certificate bundle file.
6. Do not check the Enable Strong Private Key Protection option; it requires you to manually input a password every time DPM attempts to access the key. Obviously, this would be a bad thing.

7. You can, if you wish, check the Mark Key As Exportable option, although you should think about it before you do. With this option selected, anyone who can access the computer account certificate stores will be able to re-export the certificate and key. If you're expecting to move the certificate and private key from this machine within a short time, you'll need this option enabled. Otherwise, if you ever need this certificate and private key pair on another machine, you'll need to re-import the certificate from the file bundle, so get in the habit of saving that file bundle! We recommend that you just develop good habits and don't use this option.
8. Click the Next button.
9. Unless you have a reason to override the store placement choices, leave the Automatically Select The Certificate Store Based On The Type Of Certificate option selected. Click the Next button.
10. Review your options, and then click the Finish button to import the certificate.
11. Refresh your view in the Certificates console. Ensure that the certificate is visible under Certificates (Local Computer) > Personal > Certificates.
12. To verify that the certificate chain is properly installed, double-click the certificate and click the Certification Path tab. Ensure that the Certificate Status field says "The certificate is OK."

For more guidance on using certificates with Windows Server 2003, we recommend the Microsoft TechNet website "Microsoft Windows Server 2003 Security Services," available online at http://technet2.microsoft.com/windowsserver/en/technologies/featured/gensec/default.mspx.

Firewalls

Frequently, you will need to use DPM across a firewall. To ensure that it works, you must allow the traffic defined in Table 12.2.
Table 12.2: DPM Port Requirements

DCOM (TCP 135, plus dynamic secondary connections; to the host): DPM communicates with the protection agent via DCOM calls, and the agent also responds with DCOM calls.

TCP (TCP 5718-5719; bidirectional): DPM uses these ports as a data channel. Both DPM and the agent machine use these ports to initiate operations such as synchronization and recovery. DPM communicates with the agent coordinator on port 5718 and with the agent on port 5719.

DNS (UDP 53; from all hosts to the DNS server): DPM and its clients use DNS for name resolution.

Kerberos (UDP/TCP 88; outbound): DPM uses Kerberos to authenticate the connection endpoint.

LDAP (TCP/UDP 389; outbound): DPM uses LDAP to query Active Directory.

NetBIOS (UDP 137-138 and TCP 139; bidirectional): Used between DPM, clients, and domain controllers for miscellaneous operations.

If you're using host-based firewalls, be sure to define these ports on your servers and client machines as well. You should deploy these firewall settings to your servers using any appropriate management technology. We recommend that you create and use an Active Directory Group Policy Object since your DPM-protected machines must already be joined to a domain.
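If you want to verify that the fixed TCP ports from Table 12.2 are actually reachable through the firewall, a small probe script can help. This is our own sketch, not a DPM tool; 192.0.2.10 is a placeholder address you'd replace with your DPM server's name or IP, and the technique checks only fixed TCP ports, not the dynamic DCOM range or UDP traffic:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Fixed TCP ports from Table 12.2; 192.0.2.10 is a placeholder address.
for port in (135, 5718, 5719):
    state = "open" if port_open("192.0.2.10", port) else "blocked"
    print(f"port {port}: {state}")
```

Run it from a protected server toward the DPM server (and vice versa for the bidirectional ports) to confirm the firewall rules before you start troubleshooting the agent itself.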

The Bottom Line


Finish your DPM deployment. Installing DPM and establishing protection groups isn't the end; there are several tasks you need to perform to ensure that your DPM environment continues to run in good health.

Master It

1. What notification mechanism does DPM offer on a protection group basis? What is the best practice for using this mechanism?
2. When you are adding RAID arrays to the storage pool, what factors determine whether you add multiple small arrays or one large one?
3. DPM converts the disks it uses in its storage pool to dynamic disks. Can DPM use iSCSI LUNs in its storage pool?

Protect your DPM servers. DPM protects the critical data on your production servers; you should in turn protect the data on your DPM servers.

Master It

1. Can you protect DPM servers with other DPM servers? If so, what configurations are supported?
2. What features should a third-party backup application support in order to protect DPM?
3. What is the purpose of the DPMBackup.exe utility?

Identify and manage DPM-related networking issues. When you use DPM, you move a lot of data over your network; it's good to have control over it.

Master It

1. What is the advantage of using a separate backup network? What mechanisms can you use with DPM to use a separate backup network?
2. How do jumbo frames affect DPM protection?
3. What types of certificates can you use to encrypt DPM data? Does this encryption cover network connections?

Appendix A: The Bottom Line


Chapter 1: Data Protection Concepts
Understand general data protection concepts. Understanding the concepts that apply to any data protection scenario makes it easier for you to identify the challenges you face in your environment.

Master It

1. Name the common factors affecting the design of traditional backup and restore solutions.
2. What are the two common storage technologies used for backup and restore? What are two advantages and disadvantages of each technology?
3. Describe how D2D2T (disk-to-disk-to-tape) works.
4. Name the two replication strategies and explain how they differ.
5. Describe the three levels of replication.

Solution

1. The common factors include bandwidth, capacity, cost, location, metadata, reliability, security, service level agreements, and speed; however, you may have additional concerns in your environment.
2. The two most common storage technologies are tape and disk; other removable media such as CDs and DVDs are used by individual users or smaller companies, but they do not scale well to a network environment. See Table 1.2 for the comparison of disk and tape.
3. Initial backups from production resources are made to a temporary holding area located on disk storage and held for a short period of time; any restore requests made during this time can be quickly and easily fulfilled from this storage area, taking advantage of disks' random access characteristics and speed. Once the defined time has passed, the data is transferred to tape and removed from the holding area. Subsequent restoration requests are handled from tape.
4. Synchronous replication creates multiple copies of the data as it is being written to its primary source, such as RAID-1 mirroring. Asynchronous replication creates the replication copy after the primary copy has been written, usually by reading the data from the primary copy.
5. Byte-level replication copies data at the individual byte level; it requires specialized, expensive hardware, but it transmits only the exact data that has changed.
File-level replication copies files that have changed, even if only a single byte has been updated; they are slow and inefficient. Block-level replication copies only disk blocks or database pages that have been updated; as most disk blocks are only 512 characters, this level offers a good compromise between speed and bandwidth. Distinguish new concepts introduced by DPM. DPM presents a whole new way of thinking about data protection, but it introduces several new concepts to master. Master It

1. List which of the following members can be included in the same protection group: a shared volume on a file server, a virtual machine on Virtual Server, a SQL database, a SharePoint farm, and an Exchange storage group.
2. Missy, an Exchange administrator, has two mailbox databases for which she needs to design separate protection policies. To do this, she must put them into separate protection groups. What must she first do in order to permit this configuration?
3. Tom, a SQL Server administrator, has two SQL databases that he needs to protect with DPM. How many protection groups does he need to protect them?
4. You are protecting your department's file server and have it as a member of a protection group defined to synchronize every 30 minutes and create recovery points at 7:00 AM, 3:00 PM, and 11:00 PM. At 3:07 PM, your manager saves changes to an important spreadsheet on the file server. At 3:31 PM, his secretary saves changes to the spreadsheet, but her version of the file is corrupted. Up until what time will you be able to recover his saved version before it is overwritten? (Hint: reread the "Why Do I Need Both Synchronization Frequency and Recovery Points?" sidebar.)

Solution

1. All of them. You can mix and match any type of resource in a protection group, as long as DPM can protect it.
2. DPM permits protection of Exchange resources only at the storage group level. In order to place two mailbox databases into separate protection groups, they must be members of separate storage groups. Missy should move one of the databases into a new storage group; then she can proceed with her DPM configuration.
3. Tom can place both SQL databases into the same protection group as long as they can be covered by the same protection policy.
4. You will be able to recover his saved version until 4:00 PM. The 3:30 PM synchronization captured his version of the file and wrote it to the "Latest" replica; this replica will in turn be overwritten by the 4:00 PM synchronization, because 3:30 PM is not a recovery point. He's lucky, though; if his secretary had saved her version just one minute earlier, the 3:30 PM synchronization pass would have captured her corrupted version instead.

Identify the components in the DPM architecture. While DPM attempts to mask the complexity of its protection operations, you still need to know the underlying components of your DPM deployment.

Master It

1. Name the tiers of the DPM application.
2. Does DPM require the use of a separate tape backup solution?

Solution

1. DPM has three tiers: the DPM server, the protection agent, and the (optional) third-party tape backup application. Optionally, DPM's required SQL Server instance can be located on a separate SQL Server installation.
2. No, it does not; DPM includes integrated tape-handling capabilities. However, if you already have an existing enterprise backup application, you can configure it to back up the replicas on the DPM server, thereby gaining all of DPM's benefits while integrating with your existing solution.
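The timing arithmetic in the protection-group question above can be sketched as a quick model: a file version captured by a synchronization pass survives on the "Latest" replica only until the next pass overwrites it, unless the capturing pass coincides with a scheduled recovery point. This is an illustrative model with invented helper names, not DPM code; it assumes synchronization passes fall on even multiples of the interval, as in the example.

```python
from datetime import time

def recoverable_until(saved_at, sync_minutes, recovery_points):
    """Model of DPM short-term protection timing: return the time at
    which a version saved at `saved_at` is overwritten on the "Latest"
    replica, or None if a recovery point preserves it."""
    t = saved_at.hour * 60 + saved_at.minute            # minutes past midnight
    capture = ((t // sync_minutes) + 1) * sync_minutes  # next sync captures it
    overwrite = capture + sync_minutes                  # following sync overwrites it
    rp_minutes = {rp.hour * 60 + rp.minute for rp in recovery_points}
    if capture in rp_minutes:
        return None  # the capturing pass was a recovery point; version is kept
    return time(overwrite // 60 % 24, overwrite % 60)

rps = [time(7, 0), time(15, 0), time(23, 0)]
# Manager saves at 3:07 PM: captured at 3:30 PM, overwritten at 4:00 PM.
print(recoverable_until(time(15, 7), 30, rps))  # 16:00:00
```

Had the manager saved before 3:00 PM, the capturing pass would have been the 3:00 PM recovery point and the version would persist (the None branch).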

Chapter 2: Installing DPM


Determine the prerequisites for installing the DPM server components. The first step of installing DPM into your organization is to ensure that your DPM server is running the necessary versions of the Windows operating system, service packs, and hotfixes. Master It Perform a survey of your Windows environment to ensure that you have the necessary hardware and software to install DPM:

What version of Windows Server and service pack will you be running on the DPM server?
Does your DPM server meet the hardware requirements, including storage configuration?
What Active Directory forest and domain is the DPM server a member of? Is it in the same forest as the servers it will protect?

Solution Review the guidance in this chapter with your answers in hand and determine whether your server meets the requirements. Make note of any that don't meet the requirements, and create a plan to get them up to the minimum specifications. Prepare the server for use in your test lab, as well as any infrastructure servers such as your Active Directory domain controllers. Ensure that all servers have the appropriate service packs and hotfixes applied.

Determine the prerequisites for installing the DPM protection agent. The next step for installing DPM is to ensure that your protected servers are running the necessary versions of the Windows operating system and service packs.

Master It Perform a survey of your Windows environment to ensure that your protected servers are compatible with the DPM protection agent:

What version of Windows Server and service pack are you running on the protected servers?
Does the workload version and architecture meet the requirements?
What Active Directory forest and domain is the server a member of?

Solution Review the guidance in this chapter with your answers in hand and determine whether your servers meet the requirements. Make note of any that don't meet the requirements, and create a plan to get them up to the minimum specifications.

Duplicate your production servers and workstations for use in your test lab. Ensure that all servers have the appropriate service packs and hotfixes applied. You may want to peek ahead to Chapters 6 through 11 to see more details on the specific requirements for each type of protected workload.

Add disk volumes to the DPM storage pool. Storage on your DPM server is a critical part of your protection strategy. Although DPM's block-based replication and use of VSS help reduce the amount of space it requires, you still need to give DPM an adequate amount of disk space to ensure that you can create the number of recovery points and synchronization schedules you need to protect your data.

Master It

1. Of the following forms of storage, which ones can DPM use and which ones can it not use:
o Direct attached storage
o iSCSI volumes
o Network attached storage volumes
o Storage area network volumes
2. How does DPM require volumes for the storage pool to be configured in Disk Manager?

Solution

1. Direct attached storage, storage area network volumes, and iSCSI volumes can be used for the DPM storage pool, provided they can be used as dynamic disks. Network attached storage devices that present their storage as SMB/CIFS shares cannot be used by DPM; they must support a block-level protocol, typically iSCSI, to be DPM-compatible.
2. The volumes must show up as separate volumes in Disk Manager. They can be partitions or whole disk volumes.

Deploy the DPM protection agent to protected servers. Once the DPM server is configured, you must ensure that the DPM protection agent is deployed to the servers whose data you wish to protect. This agent ensures that the server resources are seen by DPM and can be protected.

Master It

1. Where do you deploy the DPM protection agent?
2. Is a reboot required to install the DPM protection agent on a protected server?

Solution

1. The DPM protection agent is deployed from the DPM Administrator Console. This allows DPM to install the agent executables on the target server, validate the credentials and connection that will be used to protect the server, and enumerate the resources on the server that can be protected with DPM.
2. Yes, a reboot is required. DPM can reboot the server for you as part of the agent installation or allow you to do it manually. However, the agent is not considered installed and verified until the server is rebooted and DPM can validate the connection to the agent, so no resources can be protected until after this is complete.

Configure a DPM protection group. The final part of preparing DPM to protect data is to create the protection groups. A protection group allows you to specify one set of protection policies and apply them to multiple protected resources. You should use as few protection groups as you need, but enough to ensure that all of your policy requirements are met.

Master It

1. What two protection methods does DPM provide in a protection group?
2. What options does DPM give you for creating an initial replica of your protected data?

Solution

1. DPM allows you to define short-term protection to disk using the DPM storage pool. If you have a suitable tape drive or library attached to the DPM server, you can also define long-term protection.
2. You can allow DPM to create the initial replica either now or at a scheduled time. Optionally, you can choose to manually transfer the replica using other media; DPM cannot protect data until the initial replica is present.
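The storage-pool rules above boil down to one predicate: DPM can use any storage that Windows sees as a local block device that can be made a dynamic disk (direct attached, SAN, iSCSI), and cannot use storage exposed only as SMB/CIFS shares. A toy classifier restating that rule (the category names are my own, not any DPM API):

```python
# Storage types DPM 2007 can use for its storage pool, per the rules
# above: the device must present block-level storage that Windows can
# treat as a dynamic disk. NAS exposed only via SMB/CIFS does not qualify.
BLOCK_LEVEL = {"direct-attached", "san", "iscsi"}
FILE_LEVEL = {"nas-smb", "nas-cifs"}

def dpm_pool_eligible(storage_type: str) -> bool:
    """Return True if this kind of storage can join the DPM storage pool."""
    if storage_type in BLOCK_LEVEL:
        return True
    if storage_type in FILE_LEVEL:
        return False
    raise ValueError(f"unknown storage type: {storage_type}")

for s in ("direct-attached", "iscsi", "nas-smb"):
    print(s, dpm_pool_eligible(s))
```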

Chapter 3: Using the DPM Administration Console


Navigate the DPM GUI. Before you can master DPM, you need to be familiar with its primary administrative interface. Although DPM offers both a graphical interface and a command-line interface, the primary interface that most administrators will use and be comfortable with is the GUI. You should know the different components of the GUI.

Master It

1. What standard Windows technology does the DPM Administrative console use?
2. What are the major areas of the DPM Administrative console?
3. What is the function of the Navigation pane?

Solution

1. The DPM Administrative console is a snap-in for the Microsoft Management Console (MMC). By using the MMC framework, the DPM Administrative console offers a consistent management framework that allows experienced Windows administrators to immediately begin to use DPM.
2. The Navigation pane and the Actions pane.
3. The Navigation pane shows the objects and options available for you to configure depending on which type of operation you are performing.

Name the major areas of functionality in the DPM GUI. The DPM Administrator console allows you to perform a variety of management tasks and functions for your DPM deployment.

Master It

1. How many main tabs or functionality groups are there in the DPM Administrative console?
2. In which section would you see the status of any ongoing protection jobs?
3. In which section would you discover new servers on which to deploy the DPM protection agent?

Solution

1. The DPM Administrative console contains five main functionality areas. From left to right, these areas or tabs are:
o Monitoring
o Protection
o Recovery
o Reporting
o Management
2. You can see the status of any ongoing protection jobs in the Jobs subtab of the Monitoring tab.
3. You can discover new servers to protect in the Agents subtab of the Protection tab.

Describe the purpose of the Actions pane. The Actions pane is a key part of the DPM Administration console.

Master It

1. What is the function of the Actions pane?
2. What options are always available in the Actions pane?

Solution

1. The Actions pane provides a context-sensitive area to communicate what tasks or actions are available with the selected object. As the user navigates around the GUI and selects different objects, the Actions pane will always contain just those actions that are relevant to the user's current selection.
2. The Actions pane will always show the View, Help, and Options links.

Chapter 4: Using the DPM Management Shell


Explain the relationship between Windows PowerShell and the DPM Management Shell. Windows PowerShell is a new technology that is just starting to be seen in the 2007 wave of Microsoft products. Understanding how PowerShell relates to the DPM Management Shell will help you learn the underlying technology and master the DPM command-line management interface more quickly, as well as let you leverage your experience with DPM and other PowerShell-enabled products.

Master It

1. What version of Windows PowerShell does DPM 2007 use?
2. How is the DPM Management Shell implemented?

3. How many cmdlets are included in the DPM Management Shell? Do these replace the cmdlets offered in Windows PowerShell?

Solution

1. DPM 2007 requires Windows PowerShell 1.0 as a prerequisite component. DPM 2007 will install Windows PowerShell 1.0 during the product installation.
2. The DPM Management Shell is implemented as a Windows PowerShell snap-in. Just as the MMC GUI allows snap-ins to provide custom functionality within a familiar framework, Windows PowerShell allows snap-ins to provide custom cmdlets for working with a particular application's tasks and data.
3. DPM 2007 includes 81 cmdlets in the DPM Management Shell. These cmdlets are offered in addition to the native cmdlets offered by the baseline Windows PowerShell, giving DPM administrators the ability to perform DPM tasks in the same scripts in which they perform other system management activities.

Describe the main benefits that PowerShell offers over regular scripting technologies. Microsoft already provides a wide variety of scripting technologies, such as the Windows Scripting Host. Knowing the advantages that PowerShell provides will help you get the most benefit from the DPM Management Shell.

Master It

1. How does PowerShell integrate with the .NET Framework?
2. Describe the PowerShell pipeline. How does it differ from the pipeline capabilities offered by traditional scripting environments?

Solution

1. Windows PowerShell 1.0 is built on top of the .NET Framework version 2.0. All PowerShell cmdlets and objects share the same data types, properties, and methods as .NET objects. PowerShell scripts can invoke .NET objects and assemblies, and .NET applications can call PowerShell scripts. By integrating with .NET, PowerShell allows administrators and developers to work more closely together, reducing duplication of effort and increasing the efficiency of administrative tasks and actions.
2. The PowerShell pipeline allows the output of one cmdlet to be seamlessly provided as input to the next cmdlet; this chaining can be performed multiple times. Unlike conventional pipelines, which operate by passing text strings between commands, PowerShell pipelines pass entire objects. This approach eliminates the need for glue code and commands.
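The object-pipeline point can be illustrated outside PowerShell. In a traditional text pipeline, every stage must re-parse the previous stage's string output; in an object pipeline (as when one PowerShell cmdlet's objects flow to the next, e.g. Get-Process | Stop-Process), each stage receives structured values directly. A rough Python analogy of the two styles, with invented sample data:

```python
# Text pipeline: each stage parses and re-serializes strings ("glue code").
lines = ["alpha 3", "beta 12", "gamma 7"]
big = [l.split()[0] for l in lines if int(l.split()[1]) > 5]

# Object pipeline: stages pass whole objects, so no re-parsing is needed
# and every property remains available downstream.
class Job:
    def __init__(self, name, size):
        self.name, self.size = name, size

jobs = [Job("alpha", 3), Job("beta", 12), Job("gamma", 7)]
big_objs = [j.name for j in jobs if j.size > 5]

print(big, big_objs)  # ['beta', 'gamma'] ['beta', 'gamma']
```

Both styles reach the same answer here, but the text version would break as soon as the output format changed, while the object version depends only on the object's properties.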

Chapter 5: End-User Recovery


Prepare Active Directory for end-user recovery. The first step of enabling end-user recovery is to prepare Active Directory by upgrading the directory schema with the necessary extensions. Because the potential ramifications of schema extensions are serious, you must ensure that you handle this process appropriately. Master It

Perform a survey of your Active Directory deployment with an eye toward deploying the EUR schema extensions:

What version of Windows Server are your domain controllers running? What service pack level is applied on them?
Which domain controller holds the Schema Master FSMO role? If it is Windows Server 2000, have schema upgrades been enabled?
Is your DPM server in the same domain and site as the Schema Master domain controller, or will you need to run the process from a separate machine?
Is your administrative account a member of the Schema Admins group? If not, which accounts are?

Solution Review this chapter with your answers in hand and determine which procedure you must perform to enable end-user recovery in your environment. Duplicate your production network configuration in your lab and test your process.

Deploy the VSS client and hotfix to your users. You have a variety of options for pushing the VSS EUR client and DPM VSS hotfix to your users' computers.

Master It Review the EUR VSS client and DPM VSS hotfix deployment options. Which methods can you use in your environment? Weigh the merits of each option.

Solution For the most part, your choice of deployment method will depend on what you already use in your environment. If you already use one of these techniques (such as SMS or logon scripts), you will probably use the same method for the EUR VSS client.

Chapter 6: Protecting File Servers


Determine the prerequisites for installing the DPM protection agent on file servers. You need to ensure that your protected file servers are running the necessary versions of the Windows operating system and service packs and are configured according to DPM's requirements. Master It 1. Perform a survey of your file servers to ensure that they are compatible with the DPM protection agent: o What version of Windows Server and service pack are you running on the file servers you want to protect? o Do your volume, partition, and share configurations meet the DPM requirements? 2. If your file servers are part of a Distributed File System (DFS) namespace, how should you effectively protect the file server data with DPM?

3. Given a file server that has no other roles, what data will DPM capture as part of the system state? How does this differ from a cluster node system state?

Solution

1. Review this chapter with your answers in hand and determine whether your servers meet the requirements. Duplicate your production servers for use in your test lab.
2. Microsoft recommends that you protect only a single copy of each DFS root or link with DPM. You will need to protect it by the local server name; DPM does not support protecting data through the DFS namespace, although it does integrate DFS with end-user recovery.
3. A file server is a member server; the system state includes the boot files, COM+ registration, and Registry hives. A clustered-node system state includes those components as well as the MSCS metadata.

Configure DPM protection for standalone and clustered file servers. Once the DPM agent is deployed, you must configure protection groups and select data sources to protect.

Master It

1. What file server data sources can DPM protect?
2. Can DPM handle NTFS reparse points, and if so, is any special handling required?
3. How does DPM handle nested mount points?
4. What DPM licenses do you need to protect standalone servers? What DPM licenses do you need to protect clustered servers?

Solution

1. DPM can protect volumes, folders, and file shares on file servers.
2. DPM will not protect data sources that contain NTFS reparse points. Mount points are the exceptions; DPM will protect the volume that is the target of the mount point, but you must manually re-create the mount points during restore operations.
3. DPM does not support multiple layers of mount points.
4. You need one S-DPML for every standalone file server you protect, or for every cluster node you protect that you don't want DPM to handle in a cluster-aware fashion. You need one E-DPML for every cluster node if you want DPM to detect it is part of a cluster.

Recover protected file server data. Protecting the data is only half of the job; you also need to be able to recover it.

Master It

1. To where can you recover file server data?
2. How do you handle conflicts between current versions of data and earlier versions you are restoring?
3. What are the differences between recovering data to a standalone server and a cluster?

Solution

1. You can recover to the original location, to an alternative location, or you can create a copy of the recovered data on tape. If you recover to an alternative server, it must have the DPM protection agent installed.
2. You can choose to replace, overwrite, or skip. You can also recover to an alternative location and avoid the whole issue entirely.
3. There are effectively no differences; DPM handles them transparently.
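The conflict choices in answer 2 amount to a restore-merge policy applied file by file. A minimal sketch of how such a policy behaves, assuming three behaviors (overwrite, skip, and a keep-both copy); this is illustrative only, not DPM's implementation:

```python
# Illustrative restore-conflict policy: merge restored files into a live
# file set according to the chosen behavior for files that already exist.
def resolve(existing: dict, restored: dict, policy: str) -> dict:
    merged = dict(existing)
    for path, content in restored.items():
        if path not in existing or policy == "overwrite":
            merged[path] = content          # no conflict, or overwrite wins
        elif policy == "skip":
            continue                        # keep the current live version
        elif policy == "copy":
            merged[path + ".restored"] = content  # keep both versions
        else:
            raise ValueError(f"unknown policy: {policy}")
    return merged

live = {"a.txt": "new"}
backup = {"a.txt": "old", "b.txt": "old"}
print(resolve(live, backup, "skip"))       # {'a.txt': 'new', 'b.txt': 'old'}
print(resolve(live, backup, "overwrite"))  # {'a.txt': 'old', 'b.txt': 'old'}
```

As answer 2 notes, recovering to an alternative location sidesteps the conflict question entirely, since nothing at the target already exists.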

Chapter 7: Protecting Exchange Servers


Determine the prerequisites for installing the DPM protection agent on Exchange servers. You need to ensure that your protected Exchange servers are running the necessary versions of the Windows operating system, service packs, and Exchange Server software.

Master It

1. Perform a survey of your Exchange servers to ensure that they are compatible with the DPM protection agent:
o What version of Windows Server, Windows service pack, and Exchange Server are you running on the Exchange servers you want to protect?
o What storage groups, mailbox databases, and public folder databases are configured on your Exchange servers? Which ones need to be protected?
2. Given an Exchange server that has no other roles, what data will DPM capture as part of the system state? How does this differ from a cluster node system state?

Solution

1. Review this chapter with your answers in hand and determine whether your servers meet the requirements. Duplicate your production servers for use in your test lab.
2. An Exchange server is usually a member server. The system state includes the boot files, COM+ registration, and Registry hives. A clustered Exchange node system state includes those components as well as the MSCS metadata.

Configure DPM protection for standalone and clustered Exchange servers. Once the DPM agent is deployed, you must configure protection groups and select data sources to protect.

Master It

1. What highly available Exchange Server configurations can DPM protect?
2. What is the difference between synchronization and express full backups of Exchange storage groups in DPM?
3. What DPM licenses do you need to protect standalone servers? Clustered servers?

Solution

1. DPM can protect standalone Exchange Server 2003 and Exchange Server 2007 servers. Additionally, DPM supports clustered configurations that use the MSCS components: Exchange 2003 clusters and Exchange 2007 SCC and CCR clusters.
You can also protect Exchange 2007 servers that are using LCR and SCR, but these configuration options don't affect DPM directly.

2. Synchronization performs a regularly scheduled replication of the changed database blocks and transaction log entries to the DPM server, comparable to an incremental backup on a conventional backup system. An express full backup, on the other hand, allows the DPM server to create a new VSS replica of the mailbox databases in the storage group. Because Exchange data is contained within both the database files and the transaction logs, all protection activities take place on the storage group.
3. You need one E-DPML for every standalone or clustered Exchange server you protect with DPM.

Recover protected Exchange resources. Protecting your Exchange data is only half of the job; you also need to be able to recover the data at the appropriate level of granularity.

Master It

1. At what level can you restore Exchange data?
2. To what locations can you recover Exchange data?
3. What are the differences between recovering data to a standalone server and a cluster?

Solution

1. Exchange data exists at multiple levels: storage groups, mailbox or public folder databases, mailboxes, folders, and message items. The relationships between these data types are well-known, allowing DPM to safely handle restoring single databases or even individual mailboxes. Because DPM protects entire storage groups, the DPM server can perform any necessary transaction log replay to ensure that you recover only the items you have selected from the recovery point.
2. You can recover to the original server, to an alternative server, to a network folder, or you can create a copy of the recovered data on tape. If you recover to an alternative server, it must have the DPM protection agent installed. When you recover to an alternative server, it can be either standalone or clustered, regardless of how the source was configured.
3. There are effectively no differences; DPM transparently handles restoring Exchange data to either a standalone machine or cluster node.
The only difference is in selecting your restore target.
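The synchronization vs. express full distinction above parallels incremental vs. full backups: synchronization ships only the blocks that changed since the last baseline, while an express full re-baselines the whole replica from a fresh snapshot. A toy model (the class and method names are invented, not DPM internals):

```python
# Toy model of a DPM-style replica: a baseline copy of the source's
# blocks plus the incremental changes accumulated since that baseline.
class Replica:
    def __init__(self, blocks):
        self.blocks = list(blocks)   # last express-full baseline
        self.log = []                # changed blocks shipped since baseline

    def synchronize(self, source):
        """Ship only blocks that differ from the baseline (incremental)."""
        changed = [(i, b) for i, b in enumerate(source)
                   if b != self.blocks[i]]
        self.log.append(changed)
        return len(changed)

    def express_full(self, source):
        """Re-baseline: a fresh snapshot-style copy of the whole source."""
        self.blocks = list(source)
        self.log.clear()

src = ["A", "B", "C"]
r = Replica(src)
src[1] = "B2"
print(r.synchronize(src))  # 1 changed block shipped
r.express_full(src)
print(r.log)               # []
```

The same model applies to the SQL Server discussion in the next chapter, where synchronization likewise ships changed blocks and log entries and an express full creates a new replica of the database.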

Chapter 8: Protecting SQL Servers


Determine the prerequisites for installing the DPM protection agent on SQL Server machines. You need to ensure that your protected SQL Server machines are running the necessary versions of the Windows operating system, service packs, and SQL Server software. Master It 1. Perform a survey of your SQL Server machines to ensure that they are compatible with the DPM protection agent: o What version of Windows Server, Windows service pack, and SQL Server are you running on the SQL Server machines you want to protect? o What instances are installed on your SQL Server machines? Which ones need to be protected?

2. Given a SQL Server machine that has no other roles, what data will DPM capture as part of the system state? How does this differ from a cluster node system state?

Solution

1. Review the guidance in this chapter with your answers in hand and determine whether your servers meet the requirements. Duplicate your production servers for use in your test lab.
2. A SQL Server machine is a member server; the system state includes the boot files, COM+ registration, and Registry hives. A clustered SQL Server node system state includes those components as well as the MSCS metadata.

Configure DPM protection for standalone and clustered SQL Server machines. Once the DPM agent is deployed, you must configure protection groups and select data sources to protect.

Master It

1. What types of SQL Server instances can DPM protect?
2. What is the difference between synchronization and express full backups of SQL Server databases in DPM?
3. What DPM licenses do you need to protect standalone servers? What DPM licenses do you need to protect clustered servers?

Solution

1. DPM can protect both default instances and named instances. These instances can be located on either standalone or clustered SQL Server configurations.
2. Synchronization performs a regularly scheduled replication of the changed database blocks and transaction log entries to the DPM server, comparable to an incremental backup on a conventional backup system. An express full backup, on the other hand, allows the DPM server to create a new VSS replica of the database.
3. You need one E-DPML for every standalone SQL Server machine you protect, or for every SQL Server cluster node you protect. If you install the DPM protection agent on one node in a cluster, you should install it on all of them.

Recover protected SQL Server databases. Protecting your databases is only half of the job; you also need to be able to recover them.

Master It

1. To where can you recover SQL databases?
2. At what level can you restore SQL data?
3. What are the differences between recovering data to a standalone server and a cluster?

Solution

1. You can recover to the original instance, to an alternative instance, to a network folder, or create a copy of the recovered data on tape. If you recover to an alternative server, it must have the DPM protection agent installed. When you recover to an

alternative instance, it can be either standalone or clustered, regardless of how the source was configured. 2. SQL data consists of multiple tables, columns, rows, and other objects within the database. The relationships between these data can be complicated and exist only within the application using the database. Because of this, there is no way for DPM to safely handle units of data smaller than a complete database. If you only need to recover specific tables or rows, you can restore the database to an alternative instance and then manually recover the data you need. 3. There are effectively no differences; DPM transparently handles restoring a database to either a standalone machine or cluster node. The only difference is in selecting your restore target.

Chapter 9: Protecting SharePoint Servers


Determine the prerequisites for installing the DPM protection agent on SharePoint servers. You need to ensure that your protected SharePoint servers are running the necessary versions of the Windows operating system and service packs and are configured according to DPM's requirements. Master It 1. Perform a survey of your SharePoint servers to ensure that they are compatible with the DPM protection agent: o What version of Windows Server and service pack are you running on the SharePoint servers you want to protect? o Does your SharePoint version and configuration meet the DPM requirements? 2. What additional process do you need to perform on a SharePoint server after installing the DPM protection agent? 3. How does protecting your WSS 2.0 or SPS 2003 deployments differ from protecting WSS 3.0 or MOSS 2007 deployments? Solution 1. Review this chapter with your answers in hand and determine whether your servers meet the requirements. Duplicate your production servers for use in your test lab. 2. You must register and enable the WSS Writer service. 3. DPM 2007 natively protects WSS 3.0 and MOSS 2007 deployments because all of their data is kept in the corresponding SQL Server databases. Earlier versions of SharePoint did not capture all of the configuration data in the database; therefore, you must run server-side scheduled backup processes to create data dump files that can then be captured by DPM. Configure DPM protection for SharePoint servers. Once the DPM agent is deployed, you must configure protection groups and select data sources to protect. Master It 1. What SharePoint data sources can DPM protect? 2. What DPM licenses do you need to protect SharePoint servers?

3. Can you protect older SharePoint versions with DPM; if so, what licenses do you need? Solution 1. DPM will protect an entire SharePoint farm. 2. You need one E-DPML for every SharePoint web front-end server on which you install the DPM protection agent. 3. Yes, by following the steps in KB 915181. If you do not protect the WSS SQL Server database directly, you only need an S-DPML on the SharePoint machine. Recover protected SharePoint servers. Protecting the data is only half of the job; you also need to be able to recover it. Master It 1. To where can you restore SharePoint data? 2. What types of SharePoint data can you recover? 3. What additional steps must you take to enable item-level recovery? Solution 1. You can recover to the original site, to an alternative site, or to a network location, or you can create a copy of the recovered data on tape. If you recover to an alternative server, it must have the DPM protection agent installed. 2. You can recover an entire farm, a database, a site, or an individual list or document. 3. You must first create a recovery farm for DPM to use to stage your recovered data. This recovery farm can be a single WSS 3.0 deployment.

Chapter 10: Protecting Virtual Servers


Determine the prerequisites for installing the DPM protection agent on MSVS hosts. You need to ensure that your protected MSVS hosts are running the necessary versions of the Windows operating system and service packs and that they are configured according to DPM's requirements. Master It 1. Perform a survey of your MSVS hosts to ensure that they are compatible with the DPM protection agent: o What version of Windows Server and service pack are you running on the file servers you want to protect? o Does your MSVS version and configuration meet the DPM requirements? 2. What requirements does a virtual machine need to meet in order for DPM to be able to protect it with a recursive VSS backup, and what benefit does this provide? 3. If a virtual machine does not meet the requirements for a recursive VSS backup, how does DPM protect it? Solution

1. Review this chapter with your answers in hand and determine whether your servers meet the requirements. Duplicate your production servers for use in your test lab. 2. The virtual server must be running MSVS 2005 R2 SP1 or later. Additionally, the virtual machine must be running Windows XP, Windows Server 2003, Windows Vista, or Windows 2008, and it must have the latest version of the Virtual Machine Additions installed. 3. DPM uses the MSVS API to hibernate the machine. It then uses VSS and block-based replication to copy the changed blocks and bring the virtual machine back to an active state as quickly as possible. This method results in a small amount of downtime. Configure DPM protection for virtual machines on MSVS hosts. Once the DPM agent is deployed, you must configure protection groups and select data sources to protect. Master It 1. What MSVS data sources can DPM protect? 2. What DPM licenses do you need to protect MSVS hosts? 3. What criteria may indicate that you should protect a virtual machine by installing the DPM agent on it directly instead of protecting it through MSVS? Solution 1. DPM can protect individual virtual machines as a unit; all of the data files that comprise a virtual machine are protected. 2. You need one E-DPML for every MSVS host you protect. 3. If you're concerned only about specific workload data and not the entire host configuration, you may want to protect the virtual machine and application data directly instead of from the MSVS host level. Recover protected MSVS virtual machines. Protecting the data is only half of the job; you also need to be able to recover it. Master It 1. To where can you restore virtual machines? 2. What targets can you recover? Solution 1. You can recover to the original MSVS host, recover to an alternative folder location, or create a copy of the recovered data on tape. If you recover to an alternative location, it must have the DPM protection agent installed. 2. You can recover virtual machines. 
Additionally, you can recover the virtual server configuration to the original MSVS host.

Chapter 11: Protecting Workstations


Determine the prerequisites for installing the DPM protection agent on workstations. You need to ensure that your protected workstations are running the necessary versions of the Windows operating system and service packs and are configured according to DPM's requirements.

Master It

1. Perform a survey of your workstations to ensure that they are compatible with the DPM protection agent:
   o What version of Windows (and service packs) are you running on the workstations you want to protect?
   o Do your volume, partition, and share configurations meet the DPM requirements?
2. If your workstations use EFS or BitLocker, can you protect the workstation data with DPM?
3. What data will DPM capture as part of the workstation system state?

Solution

1. Review this chapter with your answers in hand and determine whether your workstations meet the requirements. Duplicate your workstation configurations for use in your test lab.
2. Yes, you can protect workstations that are protected by EFS or BitLocker. EFS-encrypted data will still be encrypted when it is restored.
3. For system state purposes, DPM treats a workstation like a member server; the system state includes the boot files, COM+ registration, and Registry hives.

Configure DPM protection for workstations. Once the DPM agent is deployed, you must configure protection groups and select data sources to protect.

Master It

1. What workstation data sources can DPM protect?
2. Can DPM handle NTFS reparse points, and if so, is any special handling required?
3. How does DPM handle nested mount points?
4. What DPM licenses do you need to protect workstations?

Solution

1. DPM can protect volumes, folders, and file shares on workstations.
2. DPM will not protect data sources that contain NTFS reparse points. The exception is mount points; DPM will protect the volume that is the target of the mount point, but you must manually re-create the mount points during restore operations.
3. DPM does not support multiple layers of mount points.
4. You need one S-DPML for every workstation you protect.

Recover protected workstation data. Protecting the data is only half of the job; you also need to be able to recover it.

Master It

1. To where can you recover workstation data?
2. What are the differences between recovering data to the original location and an alternative location?
3. How do you handle conflicts between current versions of data and earlier versions you are restoring?

Solution

1. You can recover to the original location, recover to an alternative location, or create a copy of the recovered data on tape. If you recover to an alternative server, it must have the DPM protection agent installed.
2. Recovery to the original location can potentially cause data to be overwritten; recovery to an alternative location can be to any DPM-protected server or workstation.
3. You can choose to replace, overwrite, or skip. You can also recover to an alternative location and avoid the whole issue entirely.

Chapter 12: Advanced DPM


Finish your DPM deployment. Installing DPM and establishing protection groups isn't the end; there are several tasks you need to perform to ensure that your DPM environment continues to run in good health.

Master It

1. What notification mechanism does DPM offer on a protection group basis? What is the best practice for using this mechanism?
2. When you are adding RAID arrays to the storage pool, what factors determine whether you add multiple small arrays or one large one?
3. DPM converts the disks it uses in its storage pool to dynamic disks. Can DPM use iSCSI LUNs in its storage pool?

Solution

1. When you create or modify a protection group, you can set up email notifications. As a best practice, the notification address should be a mail-enabled security group or distribution group rather than specific individual recipients.
2. The primary factors to consider are ease of management, the effect on I/O performance, and the time it takes to rebuild an array in the event of a disk replacement.
3. Yes, as long as the iSCSI initiator or HBA being used allows iSCSI LUNs to be dynamic disks. The freely downloadable Microsoft iSCSI Initiator software does not permit this configuration and, therefore, cannot be used to add storage pool disks.

Protect your DPM servers. DPM protects the critical data on your production servers; you should in turn protect the data on your DPM servers.

Master It

1. Can you protect DPM servers with other DPM servers? If so, what configurations are supported?

2. What features should a third-party backup application support in order to protect DPM?
3. What is the purpose of the DPMBackup.exe utility?

Solution

1. You can use a two-tier configuration to protect DPM primary servers with secondary servers. You can use only two tiers, and you cannot use two servers to protect each other in a circular fashion.
2. At a minimum, a third-party backup application should support VSS backups. Ideally, the application agents will be specifically designed to support DPM.
3. This utility creates static dump files and mount points of all the DPM databases and replicas on the server. These files and mount points can then be used with non-VSS-aware backup applications, and God have mercy on your souls.

Identify and manage DPM-related networking issues. When you use DPM, you move a lot of data over your network; it's good to have control over it.

Master It

1. What is the advantage of using a separate backup network? What mechanisms can you use with DPM to use a separate backup network?
2. How do jumbo frames affect DPM protection?
3. What type of certificates can you use to encrypt DPM data? Does this encryption cover network connections?

Solution

1. A separate backup network removes potentially sensitive information from the public network used by servers and clients. You can use HOSTS file entries, multiple adapters, tagged VLANs, and the backup network feature of DPM to ensure that DPM protection traffic is segregated onto a specific network.
2. Jumbo frames, commonly used with fast networking technologies such as Gigabit Ethernet, increase the maximum size of packets that can be sent over the network. By lowering the amount of per-packet overhead that must be sent, jumbo frames allow applications to use more of the total bandwidth for the actual data, thereby increasing network utilization. The effects can often be dramatic.
3. You can use any X.509-compliant certificate: one from a third-party CA, one from an internally deployed PKI such as Windows Server 2003 Certificate Services, or a self-signed certificate. When a certificate is used to encrypt data, it only encrypts data written to tape. To encrypt DPM network connections, you should use IPSec policies.
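The per-packet overhead argument above can be made concrete with a little arithmetic. This is an illustrative sketch, not a DPM feature: it assumes plain Ethernet framing (14-byte header plus 4-byte FCS) carrying IPv4 and TCP headers without options, and it ignores the preamble and interframe gap.

```python
# Rough payload efficiency of standard vs. jumbo Ethernet frames.
# Assumes 14-byte Ethernet header + 4-byte FCS, 20-byte IPv4 header,
# and 20-byte TCP header (no options); preamble/interframe gap ignored.
ETH_OVERHEAD = 14 + 4     # header + frame check sequence
IP_TCP_OVERHEAD = 20 + 20

def payload_efficiency(mtu: int) -> float:
    """Fraction of each frame's bytes that carry application data."""
    payload = mtu - IP_TCP_OVERHEAD
    frame = mtu + ETH_OVERHEAD
    return payload / frame

standard = payload_efficiency(1500)  # about 0.962
jumbo = payload_efficiency(9000)     # about 0.994
```

The efficiency gain per frame looks modest, but jumbo frames also cut the per-packet processing load on both endpoints to roughly one-sixth, which is where much of the "dramatic" effect comes from.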

Appendix B: Setting Up a Lab Environment


Overview
We have the technology.
Oscar Goldman, The Six Million Dollar Man

Over the course of the book, you may have become tired of our constant badgering about having a proper lab. Just so you know, we're not stopping now. If you're reading this appendix, we hope that you've caved in and are looking into setting up a lab as you work your way through the rest of the book. Our objective in this appendix is threefold:

To outline what you'll need to address in your lab and help you determine the simplest and most cost-effective configuration you'd need in your test lab

To show you how to set up a simple environment, based as much as possible on Microsoft Virtual Server 2005 R2 SP1 (MSVS) or, if you absolutely have to, Microsoft Virtual PC (VPC)

To put together an environment that you can use to familiarize yourself with DPM 2007

We're going to make some simplifying assumptions along the way, such as that you're using a single network instead of performing DPM operations over a second network. Feel free to expand on our plans here.

Hardware and Software Requirements


One of the big questions you need to answer is whether you're going to run your lab entirely on physical hardware or use a virtual machine (VM) architecture. VMs have their advantages and disadvantages, but they work well for the purposes of this configuration. We're not trying to build a lab that you can do full performance testing in, after all; we're just trying to give you an environment in which you can learn DPM and experiment. It's easy to break a test lab; using VMs, it can be a lot easier to repair it again. Human nature being what it is, we usually learn a new technology best once we've broken it and had to fix the resulting mess. However, if you have the spare hardware to run your test lab, who are we to argue? The following sections give the hardware and software requirements for both approaches.
Using Physical Machines

If you have the physical hardware, you might as well take advantage of it. The best option, if you can swing it, is to use the same type of hardware configurations that you are planning to put into production. This gives you an opportunity not only to learn the basics of how to install and use DPM but also to become familiar with the response times and machine-specific quirks you're likely to encounter. Table B.1 shows the minimum hardware requirements, machine names, and operating system editions to get your test lab up and running (although you should check these requirements against Chapter 2, "Installing DPM," to make sure that you haven't overlooked anything). We've even thrown in a standard IP address scheme for no extra charge.
Table B.1: Lab Hardware

Server Name   IP Address       Hardware                     Operating System
DC            192.168.150.10   800MHz processor, 256MB RAM  Windows Server 2003 Standard
DPM           192.168.150.20   1GHz processor, 1+GB RAM     Windows Server 2003 Standard
FS            192.168.150.60   1GHz processor, 512+MB RAM   Windows Server 2003 Standard
FSClus1       192.168.150.61   1GHz processor, 512+MB RAM   Windows Server 2003 Standard
FSClus2       192.168.150.62   1GHz processor, 512+MB RAM   Windows Server 2003 Standard
Exch          192.168.150.70   x64 processor, 1+GB RAM      Windows Server 2003 Standard
ExchClus1     192.168.150.71   x64 processor, 1+GB RAM      Windows Server 2003 Standard
ExchClus2     192.168.150.72   x64 processor, 1+GB RAM      Windows Server 2003 Standard
SQL           192.168.150.80   1GHz processor, 1+GB RAM     Windows Server 2003 Standard
SQLClus1      192.168.150.81   1GHz processor, 1+GB RAM     Windows Server 2003 Standard
SQLClus2      192.168.150.82   1GHz processor, 1+GB RAM     Windows Server 2003 Standard
SharePoint    192.168.150.90   2GHz processor, 1+GB RAM     Windows Server 2003 Standard
VMSRV         192.168.150.100  2GHz processor, 2+GB RAM     Windows Server 2003 Standard
XP            192.168.150.110  1GHz processor, 512+MB RAM   Windows XP
Vista         192.168.150.111  1GHz processor, 1+GB RAM     Windows Vista Business, Enterprise, or Ultimate
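If you script any part of your lab build, a few lines of Python can sanity-check an addressing plan like Table B.1 before you configure any machines. The host list below is an abbreviated subset of the table, and `check_plan` is our own illustrative helper, not part of any Microsoft tooling.

```python
import ipaddress

# A few entries from the Table B.1 addressing plan (abbreviated).
LAB_SUBNET = ipaddress.ip_network("192.168.150.0/24")
LAB_HOSTS = {
    "DC": "192.168.150.10",
    "DPM": "192.168.150.20",
    "FS": "192.168.150.60",
    "Exch": "192.168.150.70",
    "SQL": "192.168.150.80",
    "VMSRV": "192.168.150.100",
}

def check_plan(hosts: dict, subnet: ipaddress.IPv4Network) -> list:
    """Return a list of problems: out-of-subnet or duplicate addresses."""
    problems = []
    seen = {}
    for name, addr in hosts.items():
        ip = ipaddress.ip_address(addr)
        if ip not in subnet:
            problems.append(f"{name}: {addr} is outside {subnet}")
        if addr in seen:
            problems.append(f"{name}: {addr} duplicates {seen[addr]}")
        seen.setdefault(addr, name)
    return problems

assert check_plan(LAB_HOSTS, LAB_SUBNET) == []
```

A check like this is most useful when you later add cluster IP resources to the same /24, because those addresses must not collide with the host addresses in the table.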

While it is preferable from a performance standpoint to run your lab on dedicated hardware for each role, we all know real life doesn't always work that way. In these cases, Microsoft Virtual Server (or the virtualization software of your choice) comes to your rescue, allowing you to test your DPM configuration without first having to spend a fortune on the hardware.
Using Virtualization

You Can't Always Be Virtual

Even in a virtual DPM test environment, there are two hosts that cannot be run as virtual machines: the Virtual Server host and the DPM server itself. Here's why:

The Virtual Server host has to be run on a physical machine because MSVS cannot be installed and run on a virtual machine hosted by MSVS or VPC. It's not our fault; you can blame Microsoft if you're so inclined, although if you think about the potentially drastic and negative drawbacks of an infinite series of matryoshka-like virtual servers, we're confident you can see why this restriction makes sense.

We confess: we overstated the case for the DPM server by a tiny bit. You have to run DPM on a physical host only if you intend to attach tape hardware and use it. MSVS doesn't yet give you the capability to take external hardware devices and attach them to a virtual machine, so while DPM in a VM will work, you won't have tape functionality at all. You're also likely to experience much slower performance, due to the virtualized disk layers for your DPM storage pool disks.

Just be aware that in a virtual setup, you will suffer some level of I/O performance loss. Thankfully, in Virtual Server 2005 R2 SP1, you can take disk volumes on the host and map them as virtual hard drives, allowing you to pass through a disk volume to a VM with minimal performance loss. You should definitely consider this configuration for the disk storage pool if you're running the DPM server as a VM. While you might be tempted to use an iSCSI configuration for the storage pool, we don't recommend it at all for two reasons:

The virtual network interface in MSVS and VPC doesn't support the full throughput of Gigabit Ethernet (1,000Mbps). In our testing, we were lucky to get even poor Fast Ethernet performance (100Mbps) out of it. There are things you can do to tweak this (such as dedicating a host adapter to the DPM VM's iSCSI adapter only), but you will probably find (as we did) that performance was painfully slow, even for testing.

In an MSVS/VPC VM, you'll almost certainly use the Microsoft iSCSI Initiator software. If you do, you'll run into the same dynamic disk issues we talk about in Chapter 12, "Advanced DPM."

There are ways to get around these problems (such as not using MSVS as your virtual server environment), but they're outside the scope of this book. If you're experienced in another virtualization environment, don't think that you need to be hobbled by MSVS; by all means, feel free to use the virtualization software that you already know.
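To see why the virtual NIC's throughput ceiling hurts, compare rough transfer times for a hypothetical 50GB initial replica at Fast Ethernet versus Gigabit rates. The 80 percent link-efficiency figure is our own assumption for illustration; real numbers depend on protocol overhead, disk performance, and the virtualization layers discussed above.

```python
def transfer_hours(size_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Hours to move size_gb over a link whose usable rate is a fraction
    of the nominal link speed (protocol overhead, virtualization, etc.)."""
    bits = size_gb * 8 * 1000**3            # decimal gigabytes to bits
    usable_bps = link_mbps * 1000**2 * efficiency
    return bits / usable_bps / 3600

fast_ethernet = transfer_hours(50, 100)     # about 1.4 hours
gigabit = transfer_hours(50, 1000)          # about 0.14 hours (roughly 8 minutes)
```

Even this optimistic estimate shows a tenfold difference, which is why we steer you toward pass-through disks rather than iSCSI over the virtual NIC.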

To meet these requirements, you'll need at least two physical machines configured as follows:

2GHz or faster processors. The more processors or cores you have, the better; multiple virtual machines can take advantage of multiple-core or multiple-processor configurations.

At least 4GB of RAM on one server; at least 2GB on the other. We recommend having a minimum of 4GB of RAM for both DPM and MSVS, but the total you need (especially for the Virtual Server host) depends on how many VMs you plan to run.

At least 60GB of free disk space on each server, preferably on a second physical disk to improve I/O. You can add even more disks to further improve disk I/O performance.

We assume that you understand and are comfortable with the rudimentary steps of creating and managing virtual machines.

Setting Up Your Virtual Machines


On the machine with 4GB of RAM, install your choice of: Windows Server 2003 Standard or Enterprise Edition (either x86 or x64 is fine) with SP1 or later; Windows XP Professional x64 Edition; or Windows Vista Business, Enterprise, or Ultimate x64 Edition. The x64 versions of XP and Vista are necessary to take full advantage of the 4GB of RAM. This machine should not be joined to your test domain. On this host, install Virtual Server 2005 R2 as described in the following section, "Setting Up Your Lab Virtual Server." You'll then create your virtual machines, as described in the sections that follow.
Setting Up Your Lab Virtual Server

Your virtual server is going to be the centerpiece of your lab environment if you don't have dedicated hardware to fulfill all of the roles. It is important to ensure that your VMs are bound to the proper NIC on the server and that IP settings are correct. We've run into a few interesting issues with VMs in the past. The design we've laid out here should prevent any of them from popping up, but keep the following in mind:

Your NIC drivers should be up to the latest revision. Check the manufacturer's website; it will often have a more recent driver than Microsoft Update.

Allow space on your hard drives for growth of the VMs. If you're only planning to use your lab environment for testing DPM, that should include at least a 20 percent buffer.
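Applying the 20 percent buffer is simple arithmetic, but a small helper keeps you honest as the VM count grows. The .vhd sizes below are placeholders for illustration, not recommendations.

```python
# Hypothetical maximum .vhd sizes in GB for the VMs on one host.
vhd_sizes_gb = [16, 16, 16, 10, 10]

def disk_needed_gb(sizes, buffer=0.20):
    """Total disk space to reserve, with a fractional growth buffer."""
    return sum(sizes) * (1 + buffer)

print(round(disk_needed_gb(vhd_sizes_gb), 1))  # about 81.6 GB
```

Run a calculation like this per physical host before you commit to a disk layout, and remember that fixed-size disks (required later for the cluster VMs) claim their full size up front.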

Installing Virtual Server is a very straightforward process, but we've included instructions here for anyone who may not be familiar with it.

1. Start the Virtual Server setup. In the opening screen, click Install Microsoft Virtual Server 2005 R2 SP1 (see Figure B.1).

Figure B.1: The opening Setup screen

2. In the next screen, click the bullet to accept the license agreement and click Next (see Figure B.2).

Figure B.2: The License Agreement

3. In the Customer Information screen, enter the appropriate information and click Next (see Figure B.3).

Figure B.3: The Customer Information screen

4. In the Setup Type screen, click Next (see Figure B.4).

Figure B.4: The Setup Type screen

5. In the Configure Components screen, click Next (see Figure B.5).

Figure B.5: The Configure Components screen

6. In the next screen, click Next (see Figure B.6).

Figure B.6: Firewall exceptions

7. In the Ready To Install screen, click Install (see Figure B.7).

Figure B.7: The Ready To Install screen

8. In the Setup Complete screen, click Finish (see Figure B.8).

Figure B.8: The Setup Complete screen


Creating a Baseline Virtual Machine

To build a baseline virtual machine (VM), create a new virtual machine and install Windows Server 2003 Enterprise Edition on it. Once you have set it up, shut down the VM and remove it from the Virtual Server list. Do not delete the .vhd. This .vhd, with the installation you just performed, will serve as the parent for the differencing disks used by each of the VMs we'll create. This will save both time and disk space. The following steps should be completed to create all of the machines named in Table B.1, except for VMSRV and DPM:

1. In the Virtual Server administration website, under the Virtual Disks section on the left, click Create Differencing Virtual Hard Disk (see Figure B.9).

Figure B.9: Create a differencing virtual hard disk

2. In the Differencing Virtual Hard Disk screen, enter a name for the new hard disk and provide the information on the .vhd that is the parent (see Figure B.10). Click Create.

Figure B.10: Tying the differencing disk to the parent

3. In the Virtual Server administration website, under the Virtual Machines section on the left, click Create (see Figure B.11).

Figure B.11: Create a virtual machine

4. In the Create Virtual Machine screen (see Figure B.12), enter a name for the VM (it's easiest to give it the same name as the machine name). Assign memory according to Table B.2.

Table B.2: Virtual Machine Memory Requirements

VM Name      Memory
DC           256MB
SQL          667MB
SQLClus1     667MB
SQLClus2     667MB
Exch         667MB
ExchClus1    667MB
ExchClus2    667MB
FS           667MB
FSClus1      667MB
FSClus2      667MB
XP           1000MB
Vista        1000MB
SharePoint   2000MB

5.

Figure B.12: Configure a virtual machine

Once you have created all these VMs, start up the domain controller machine (named DC, whether real or virtual) and create a single Active Directory forest and domain. When that is done, leave DC running. As a reminder, DC and DPM are the only hosts that will always need to be on. If you have the space in your lab to keep all of the other machines up and running, great! However, we know that test resources are often tight, so we've scoped the requirements for the other machines and made sure they are given enough memory so that the machines relevant to a chapter can all be run simultaneously while the others are shut down or in a saved state. Before you start working with each application host, you should ensure that it has the proper name and IP address. Then, join it to your test domain.
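You can sanity-check a chapter's VM lineup against the 4GB Virtual Server host using the Table B.2 figures. The 512MB host-OS reserve below is our own assumption for illustration; adjust it for your hardware.

```python
# Memory (MB) per VM, from Table B.2 (subset shown).
VM_MEMORY_MB = {
    "DC": 256, "FS": 667, "FSClus1": 667, "FSClus2": 667,
    "Exch": 667, "SQL": 667, "XP": 1000, "Vista": 1000,
}
HOST_RAM_MB = 4096
HOST_OS_RESERVE_MB = 512   # assumed headroom for the MSVS host itself

def scenario_fits(running_vms) -> bool:
    """True if the chosen VMs fit in host RAM after the host reserve."""
    needed = sum(VM_MEMORY_MB[vm] for vm in running_vms)
    return needed + HOST_OS_RESERVE_MB <= HOST_RAM_MB

# Example: a file-server clustering scenario.
print(scenario_fits(["DC", "FSClus1", "FSClus2"]))  # True
```

If a scenario doesn't fit, put the VMs you aren't using into a saved state rather than trimming individual memory assignments below the Table B.2 figures.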

Setting Up Your Lab File Servers


You are probably familiar with file server setup; however, we are including the instructions so you can make sure your setup matches ours and your results match ours exactly. Once you are familiar with all the processes, you should modify your setup to more realistically reflect the type of environment you'll be supporting.
Standalone Configuration

The standalone file server is the most common server configuration you're likely to encounter. File services were some of the original network applications, and we feel confident that people will continue to use networks to share files long past the time we retire from this business.

1. Ensure that the file server has Windows Server installed, with at least two partitions.
2. On the file server, click Start, right-click My Computer, and click Manage (see Figure B.13).

Figure B.13: Opening the Computer Management screen

3. In the Computer Management screen, click Disk Management in the left pane. Right-click on unallocated space in any disk and click New Partition (see Figure B.14).

Figure B.14: Create a new partition

4. The Welcome To The New Partition Wizard screen will appear. Click Next (see Figure B.15).

Figure B.15: The Welcome To The New Partition Wizard screen

5. In the Select Partition Type screen, select either Primary Partition or Extended Partition and click Next (see Figure B.16).

Figure B.16: Select the partition type

6. In the Specify Partition Size screen, enter the appropriate size for the partition. (The default is to use all available space; see Figure B.17.) Click Next.

Figure B.17: The Specify Partition Size screen

7. In the Specify Drive Letter Or Path screen, accept the default drive letter and click Next (see Figure B.18). If you want to test restorations that include recovering mount points, enter the path to an empty folder on the C:\ drive and mount the partition there. For more information, see the Microsoft KB article at http://support.microsoft.com/kb/307889/en-us.

Figure B.18: Specify the drive letter

8. In the Format Partition screen, check the box to perform a quick format and click Next (see Figure B.19). Optionally, you can create another partition and format it as FAT32 to illustrate that it is not a protectable data source.

Figure B.19: Format the partition

9. The Completing The New Partition Wizard screen will appear with a summary of your choices. Click Finish.
10. Back in the Disk Management screen, right-click on your new partition and click Explore (see Figure B.20).

Figure B.20: Your new partition

11. In the Windows Explorer window that opens, right-click in the blank space in the right pane, click New, and click Folder (see Figure B.21).

Figure B.21: Create a new folder

12. Name your folder Data. Right-click on it and click Sharing And Security (see Figure B.22).

Figure B.22: Select Sharing And Security

Figure B.23: Sharing a folder

13. In the Data Properties screen, click the Share This Folder bullet and click OK. Put some files in the folder. This will be your protected data.
Clustered Configuration

Clustering: some people love it; some hate it. If Ryan had his way, everything would be clustered. He likes the ability to fail over a clustered service to another box so he can perform maintenance without interrupting service. Devin, on the other hand, feels that it adds an extra layer of complexity, creating more possibilities for things to go wrong. Over the time we've known each other, we've determined that it's an argument along the lines of the whole "my OS is better than your OS" frame of mind. If you have the skills and think clustering is a good idea, go with it. If not, don't. After all, you're the one who has to support it.

CREATING THE SHARED DISK

Traditional failover clustering relies on some sort of shared storage in order to work properly. In Microsoft Virtual Server (MSVS), you can create a virtual equivalent, using a shared SCSI bus, that enables you to test the functionality. Note, however, that MSVS introduces some limitations that you must work around in this configuration and that would not be present in a real-world configuration. We've included the steps to do this here:

1. In the Virtual Server Administration website, click on one of the clustered VMs and click Edit Configuration (see Figure B.24).

Figure B.24: VM Configuration

2. In the lower half of the screen, click the SCSI Adapters link. On the next page, click the Add SCSI Adapter button (see Figure B.25).

Figure B.25: Add a SCSI adapter

3. In the SCSI Adapter Properties screen, ensure that Share SCSI Bus For Clustering is checked, and take note of the SCSI adapter ID (see Figure B.26). The second cluster host will need to use a different address.

Figure B.26: Adding a shared SCSI adapter

4. Repeat the previous steps for the virtual machine that is the other cluster host.
5. Create two fixed-size virtual hard disks and name them quorum and data. They should be 4GB and 10GB, respectively. (For clustering VMs, you have to use fixed-size virtual hard drive files.)
6. In the Configuration screen for the first cluster node, click the Disks link.
7. In the Virtual Hard Disk Properties page, add two disks and attach them to the SCSI bus so that the first disk you add is the quorum and the second is data (see Figure B.27).

Figure B.27: Disk configuration

You only have to create one quorum and one data disk. All of the cluster-joined VMs can use them because you'll never have more than one cluster running at a time. Next, you need to create the cluster group.

1. On FSCLUS1, click Start > All Programs > Administrative Tools and click Cluster Administrator.
2. Right-click on FSCLUSTER in the left pane, click New, and click Group (see Figure B.28).

Figure B.28: Creating a new group

3. In the New Group window, enter ClusteredShare in the name box and click Next (see Figure B.29).

Figure B.29: Naming the group

4. In the Preferred Owners window, hold down the Ctrl key, click both nodes, and click Add (see Figure B.30). Click Finish.

Figure B.30: Adding preferred owners

5. A dialog box will appear stating that the group was created successfully. Click OK.
6. In the left pane, right-click on the ClusteredShare group, click New, and click Resource (see Figure B.31).

Figure B.31: Creating a new resource

7. In the New Resource window, enter ClusteredShareIP in the name box, and choose IP Address from the Resource Type dropdown (see Figure B.32). Click Next.

Figure B.32: The New Resource window

8. The Possible Owners screen will appear. Click Next.
9. The Dependencies screen will appear. Click Next.
10. In the TCP/IP Address Parameters screen, enter 192.168.150.20 for the IP Address and 255.255.255.0 for the subnet mask (see Figure B.33). Click Finish.

Figure B.33: The TCP/IP Address Parameters screen

11. A dialog box will appear stating that the resource was created successfully. Click OK.
12. In the left pane, right-click on the ClusteredShare group, click New, and click Resource (see Figure B.31).
13. In the New Resource window, enter ClusteredShareDisk in the name box and select Physical Disk from the Resource Type dropdown (see Figure B.34). Click Next.

Figure B.34: Creating a disk resource

In the Possible Owners window, click Next. In the Dependencies window, click ClusteredShareIP in the left pane, and click Add (see Figure B.35). Click Next.

1. In the Disk Parameters screen, ensure that disk F: is selected and click Finish.
2. A dialog box will appear stating that the resource was created successfully. Click OK.
3. In the right pane of the Cluster Administrator screen, right-click ClusteredShareDisk and click Bring Online.
4. In the left pane, right-click on the ClusteredShare group, click New, and click Resource (see Figure B.31).
5. In the New Resource screen, enter ClusteredShareName in the name box, and select Network Name from the Resource Type dropdown (see Figure B.36). Click Next.

Figure B.35: Adding an IP dependency

Figure B.36: Adding a network name resource

6. In the Possible Owners screen, click Next.
7. In the Dependencies screen, select ClusteredShareIP from the left pane and click Add (see Figure B.37). Click Next.

Figure B.37: Adding an IP dependency

8. In the Network Name Parameters screen, enter FSCLUS in the name box, and check the DNS Registration Must Succeed box (see Figure B.38). Click Finish.

Figure B.38: Network Name Parameters

9. A dialog box will state that the resource was created successfully. Click OK.
10. In the left pane, right-click on the ClusteredShare group, click New, and click Resource (see Figure B.31).
11. In the New Resource window, enter ClusteredShareFileShare in the name box, and select File Share from the Resource Type dropdown (see Figure B.39). Click Next.

Figure B.39: Creating a file share resource

12. In the Possible Owners screen, click Next.
13. In the Dependencies screen, click ClusteredShareDisk in the left pane and click Add (see Figure B.40).

Figure B.40: Adding a disk dependency

14. In the File Share Parameters screen, enter Data in the Share name box and F:\Data in the Path box (see Figure B.41). Click Finish.

Figure B.41: The File Share Parameters screen

15. A dialog box will appear stating that the resource was created successfully. Click OK.
16. In the left pane of the Cluster Administrator window, right-click on the ClusteredShare group and click Bring Online.

Setting Up Your Lab Exchange Servers


Exchange servers tend to be some of the most important data sources in an organization, so you'll want to test them in your lab to see how well they are protected. Fortunately, you don't need to configure every possible aspect of an Exchange organization to test its mailbox protection.
Standalone Configuration

If you are in a smaller environment, you probably have a single Exchange server that handles all of the roles in your organization. This is true whether you have Exchange 2003 or 2007. The methods for protection are the same for both; we're including the steps for setting up Exchange 2007 because you may be less familiar with that process than with 2003:

1. Start the Exchange Server setup. At the opening screen, click the Install Microsoft Exchange link (see Figure B.42).

Figure B.42: Setting up Exchange 2007

2. In the Exchange Server 2007 setup wizard Introduction screen, click Next (see Figure B.43).

Figure B.43: The Introduction screen

3. In the License Agreement screen, click the bullet to accept the EULA and click Next (see Figure B.44).

Figure B.44: The License Agreement

4. In the Error Reporting screen, click Next (see Figure B.45).

Figure B.45: The Error Reporting screen

5. In the Installation Type screen, ensure that Typical Exchange Server Installation is selected and click Next (see Figure B.46).

Figure B.46: The Installation Type screen

6. When the readiness checks have completed, click Install (see Figure B.47). Note that because you are using the 32-bit version of Exchange 2007, you will receive warnings that this version is not supported for production use. These are expected; you can safely ignore them. (Unless, of course, you're trying to put these machines into production use. Don't do that!)

Figure B.47: Readiness Checks

7. In the Completion screen (Figure B.48), click Finish.

Figure B.48: The Completion screen


Clustered Configuration

Many medium-size or larger organizations employ clusters for their Exchange mailbox servers. If this is true in your organization, you are probably familiar with the setup process for Exchange 2003. Most people will not be familiar with the steps for Exchange 2007, so we include them here:

1. Open Cluster Administrator and right-click Groups. Click New Group (see Figure B.49).

Figure B.49: Create a new group

2. In the New Group window, enter ExchClus (see Figure B.50). Click Next.

Figure B.50: Naming the group

3. In the Preferred Owners window, add both nodes (see Figure B.51) and click Next.

Figure B.51: The Preferred Owners window

4. Right-click on the ExchClus group and click New Resource (see Figure B.52).

Figure B.52: Create a new resource

5. In the New Resource window, enter ExchDisk for the name and change the resource type to Physical Disk (see Figure B.53).

Figure B.53: Adding a physical disk

6. In the Possible Owners window, ensure that both nodes are possible owners and click Next (see Figure B.54).

Figure B.54: The Possible Owners screen

7. In the Dependencies window, click Next (see Figure B.55).

Figure B.55: The Dependencies window

8. In the Disk Parameters window, ensure that the drive letter you specified for the data disk is selected and click Finish (see Figure B.56).

Figure B.56: The Disk Parameters window 9. Right-click on the ExchClus group and click Bring Online (see Figure B.57).

Figure B.57: Bringing the group online 10. Start the Exchange Server setup. At the opening screen, click the Install Microsoft Exchange link (see Figure B.58).

Figure B.58: Set up Exchange 11. In the Exchange Server 2007 Setup screen, click Next (see Figure B.59).

Figure B.59: The Introduction screen 12. In the License Agreement screen, click the bullet to accept the EULA and click Next (see Figure B.60).

Figure B.60: License Agreement 13. In the Error Reporting screen, click Next (see Figure B.61).

Figure B.61: The Error Reporting screen 14. In the Installation Type screen, ensure that Custom Exchange Server Installation is selected and click Next (see Figure B.62).

Figure B.62: The Installation Type screen 15. In the Server Role Selection screen, choose Active Clustered Mailbox Role and click Next (see Figure B.63).

Figure B.63: The Server Role Selection screen 16. In the Cluster Settings screen, choose the Single Copy Cluster option, enter ExchClus for the Clustered Mailbox Server Name, enter 192.168.150.40 for the Clustered Mailbox Server IP Address, and specify M:\Data for the database files (see Figure B.64). Click Next.

Figure B.64: The Cluster Settings screen 17. When the readiness checks have completed, click Install (see Figure B.65).

Figure B.65: Readiness Checks 18. In the Completion screen (Figure B.66), click Finish.

Figure B.66: The Completion screen

Setting Up Your Lab SQL Servers


SQL Server, like Exchange, tends to be a critical service in most environments. Fortunately, for testing the functionality of DPM, a simple setup with a blank database will do for our purposes.
Standalone Configuration

As with Exchange and file servers, the standalone configuration is the most common for SQL Server. DPM protects both SQL Server 2000 and 2005; the method is the same for both, so we include the steps for 2005.

1. Ensure that SQLClus1 and SQLClus2 have Windows Server installed.

2. Start SQL Server setup on SQLClus1. At the End User License Agreement screen, check the box to accept the EULA (see Figure B.67) and click Next.

Figure B.67: The EULA

3. In the Microsoft SQL Server 2005 Setup screen (see Figure B.68), click Install.

Figure B.68: SQL Server Setup

4. When the prerequisites have installed (see Figure B.69), click Next.

Figure B.69: Prerequisites are installed

5. In the Welcome screen, click Next (see Figure B.70).

Figure B.70: The SQL Server Installation Welcome screen

6. When the system configuration check has completed (see Figure B.71), click Next.

Figure B.71: The System Configuration Check

7. Enter the appropriate information in the Registration Information screen (see Figure B.72) and click Next.

Figure B.72: The Registration Information screen

8. In the Components To Install screen, select SQL Server Database Services (see Figure B.73) and click Next.

Figure B.73: Select the components to install

9. In the Instance Name screen (see Figure B.74), click Next.

Figure B.74: The Instance Name screen

10. In the Service Account screen, enter sqlservice for the account and pass@word1 for the password. Enter contoso for the domain (see Figure B.75). Click Next.

Figure B.75: Service account information

11. In the Authentication Mode screen (see Figure B.76), click Next.

Figure B.76: The Authentication Mode screen

12. In the Collation Settings screen (see Figure B.77), click Next.

Figure B.77: The Collation Settings screen

13. In the Error And Usage Report Settings screen (see Figure B.78), click Next.

Figure B.78: The Error And Usage Report Settings screen

14. In the Ready To Install screen (see Figure B.79), click Install.

Figure B.79: The Ready To Install screen

15. The Setup Progress screen will let you know when the installation has completed successfully (see Figure B.80); click Next.

Figure B.80: The Setup Progress screen

16. In the Completing Microsoft SQL Server 2005 Setup screen, click Finish.
Clustered Configuration

Clustered SQL Server databases are typically found in larger environments with SLAs that don't allow for much downtime. These environments often host multiple databases on one server. The setup steps are as follows:

1. Ensure that SQL1 has Windows Server installed.

2. Open the Cluster Administrator and right-click Groups. Select New Group (see Figure B.81).

Figure B.81: The Cluster Administrator

3. In the New Group screen, enter SQLCluster (see Figure B.82) and click Next.

Figure B.82: The New Group screen

4. In the Preferred Owners screen, select SQLClus1 and SQLClus2 and then click Add (see Figure B.83). Click Next.

Figure B.83: The Preferred Owners screen

5. In the Cluster Administrator, right-click SQLCluster, point to New, and click Resource (see Figure B.84).

Figure B.84: Click New Resource

6. In the New Resource screen, enter SQLDisk for the name and select Physical Disk from the Resource Type dropdown (see Figure B.85).

Figure B.85: Select the Physical Disk resource

7. In the Possible Owners screen (see Figure B.86), click Next.

Figure B.86: The Possible Owners screen

8. In the Dependencies screen (see Figure B.87), click Next.

Figure B.87: The Dependencies screen

9. In the Disk Parameters screen, ensure that disk S: appears (see Figure B.88) and click Finish.

Figure B.88: The Disk Parameters screen

10. In the Cluster Administrator, right-click on SQLCluster and click Bring Online (see Figure B.89).

Figure B.89: Bring the clustered group online

11. Start SQL Server setup. At the End User License Agreement screen, check the box to accept the EULA (see Figure B.90) and click Next.

Figure B.90: The EULA

12. In the Microsoft SQL Server 2005 Setup screen (see Figure B.91), click Install.

Figure B.91: The SQL Server Setup screen

13. When the prerequisites have installed (see Figure B.92), click Next.

Figure B.92: Prerequisites are installed

14. In the Welcome screen, click Next (see Figure B.93).

Figure B.93: The SQL Server Installation Welcome screen

15. When the system configuration check has completed (see Figure B.94), click Next.

Figure B.94: The System Configuration Check screen

16. Enter the appropriate information in the Registration Information screen (see Figure B.95) and click Next.

Figure B.95: Enter the registration information

17. In the Components To Install screen, select SQL Server Database Services and Create A SQL Server Failover Cluster (see Figure B.96) and click Next.

Figure B.96: Select the components to install

18. In the Instance Name screen (see Figure B.97), click Next.

Figure B.97: The Instance Name screen

19. In the Virtual Server Name screen, enter SQLClus (see Figure B.98) and click Next.

Figure B.98: The Virtual Server Name screen

20. In the Virtual Server Configuration screen, enter 192.168.150.30 for the IP address and click Add (see Figure B.99). Click Next.

Figure B.99: The Virtual Server Configuration screen

21. In the Cluster Group Selection screen, select SQLCluster. Ensure that drive S: appears in the Data Files box (see Figure B.100) and click Next.

Figure B.100: The Cluster Group Selection screen

22. In the Cluster Node Configuration screen (see Figure B.101), click Next.

Figure B.101: The Cluster Node Configuration screen

23. In the Remote Account Information screen, enter pass@word1 for the password (see Figure B.102) and click Next.

Figure B.102: The Remote Account Information screen

24. In the Service Account screen, enter sqlservice for the account and pass@word1 for the password. Enter contoso for the domain (see Figure B.103). Click Next.

Figure B.103: Enter the service account information

25. In the Domain Groups For Clustered Services screen, enter contoso\sqlaccts in each box (see Figure B.104) and click Next.

Figure B.104: The Domain Groups For Clustered Services screen

26. In the Authentication Mode screen (see Figure B.105), click Next.

Figure B.105: The Authentication Mode screen

27. In the Collation Settings screen (see Figure B.106), click Next.

Figure B.106: The Collation Settings screen

28. In the Error And Usage Report Settings screen (see Figure B.107), click Next.

Figure B.107: The Error And Usage Report Settings screen

29. In the Ready To Install screen (see Figure B.108), click Install.

Figure B.108: The Ready To Install screen

30. The Setup Progress screen will let you know when the installation has completed successfully (see Figure B.80); click Next.

31. In the Completing Microsoft SQL Server 2005 Setup screen, click Finish.

Setting Up Your Lab SharePoint Servers


Microsoft Office SharePoint 2007 is being adopted much more widely than any of its predecessors. Its capabilities and flexibility are turning it into a mission-critical application in businesses of all sizes. Yet it is a new product, and many people are not yet familiar with its setup (much less its nuances). To set up SharePoint 2007, follow these steps:

1. Start SharePoint setup and enter your product key when prompted (see Figure B.109).

Figure B.109: The SharePoint product key 2. In the EULA screen, check the box to accept the license agreement and click Next (see Figure B.110).

Figure B.110: The EULA 3. In the Installation Type screen, click Basic (see Figure B.111).

Figure B.111: The Installation Type screen 4. The Installation Progress screen will appear (see Figure B.112).

Figure B.112: The Installation Progress screen 5. When installation has completed, leave the Run The Sharepoint Products And Technologies Configuration Wizard Now checkbox checked (see Figure B.113) and click Close.

Figure B.113: Complete installation by configuring SharePoint 6. When the Products And Technologies Configuration Wizard opens, click Next (see Figure B.114).

Figure B.114: The SharePoint Products And Technologies Configuration Wizard 7. A prompt will inform you that some services have to be reset. Click Yes (see Figure B.115).

Figure B.115: Service restart warning 8. A progress indicator will display the actions being performed (see Figure B.116).

Figure B.116: Configuration tasks 9. When the wizard completes, click Finish (see Figure B.117).

Figure B.117: The Configuration Successful screen

Appendix C: A Collection of DPM Best Practices


Overview
In theory there is no difference between theory and practice. In practice there is.
Commonly attributed to Yogi Berra

Throughout this book, we've talked about or alluded to things you should or shouldn't do. Because having all of this guidance in one place can be handy, we've collected it here in this appendix for easy reading. We've sorted everything into the following topics:

Installation and architecture best practices
End-user recovery best practices
File server best practices
Exchange server best practices
SQL server best practices
SharePoint server best practices
Virtual server best practices
General best practices

Without further ado, here are the distilled results of our wisdom, such as it is.

Installation and Architecture Best Practices

Before installing DPM to a server or deploying the DPM protection agent to a production server you want to protect, create and use a checklist. You can make one of your own or use one or more of the checklists in this appendix.

When choosing a hardware platform for your DPM server, use up-to-date hardware that will provide good performance wherever possible:

o RAM. The minimum recommended RAM for a full DPM install is 1GB, the lowest amount of RAM you'll ever want on a DPM server because of the SQL Server instance DPM uses. If you have a large amount of data or a large number of hosts to protect, you will find that more RAM is the single greatest performance boost you can give your DPM server.

o Processor. Again, the minimum requirements don't quite cut it; remember that the typical DPM installation includes SQL Server, which likes to have a healthy amount of CPU resources available. If the combined speed and number of your processors aren't adequate, you'll give the server a rollercoaster ride of processor utilization.

o Network. DPM can throttle bandwidth to help you prevent bandwidth bottlenecks, but too much throttling will keep your protection jobs from finishing. Realistically, you'll want to use a gigabit-speed network with server-grade network interface cards and decent switches. Monitor how much the data sources on your protected servers change and adjust the throttling to match. You may also want to look into using a separate backup network.

o Hard drives. The DPM protection process will be limited by the slowest link of the chain. Keep a close eye on the transfer speeds of the hard drives and RAID arrays on both your DPM servers and protected servers; on slow disks with heavy I/O load, you may not be able to transfer data at a high enough rate. If that's the case, your network speed will not make up the difference.

Use some basic Performance Monitor counters to check the CPU, RAM, and disk utilization levels for your DPM server. Begin your reports with the unloaded server to establish the baseline, and make sure to take regular reports to see how much each new set of protected data sources affects your server; this helps you plan for future growth.

Never install DPM on any server that will not be dedicated solely to DPM (with one exception that we'll talk about in the next bullet point). In particular, you should avoid installing DPM on servers with other functions such as:

o Domain controllers. DPM may not let you install on a domain controller, but you can bypass that by running dcpromo on the box after DPM is installed. However, you do not want to do this. Running dcpromo after installing major applications such as DPM and SQL Server goes directly against Microsoft guidelines and puts your system into an unsupported configuration. Besides, system resources are precious to a DPM server, and you'd be putting two critical eggs in one basket by forcing DPM to contend with the domain controller processes.

o MOM or SCOM. Yes, we know that DPM is part of the System Center family of products. This does not, however, mean that they should be installed on the same server. If DPM shares the same server as any version of Operations Manager, you can threaten the health of both applications. High resource utilization often prevents processes from running, and it may interfere with any notifications that Operations Manager wants to send out to tell you that there's a problem.

o Exchange Server. Just like SQL Server or a domain controller, Exchange Server is a memory-hungry process. It needs that memory to perform adequate caching so that your users can read their spam. While that cache size is limited under Exchange 2003, that's because its maximum memory size is likewise limited by 32-bit Windows. Under Exchange 2007 and 64-bit Windows, Exchange's cache will use as much memory as it can. If that's not enough to convince you, think about your drive I/O: the sheer amount of hard drive activity that would occur in this situation makes us shudder.

o SQL Server. Yes, we know this seems funny considering that DPM relies on a SQL Server instance that is usually placed on the DPM server during the installation routine. However, when DPM installs its instance in this fashion, it won't play well with any other SQL Server instance of any version or edition, including the MSDE or SQL Server Express Edition. Even if you're using an external SQL Server instance to store the DPM database, you'll only buy trouble by putting SQL Server and DPM together yourself.

The one time that you want to place DPM on the same server with other applications is when you're doing physical-to-virtual disaster recovery. In this instance, you build a recovery server that has DPM 2007, Microsoft Virtual Server 2005 R2 SP1 or greater, and Microsoft System Center Virtual Machine Manager all on the same hardware. This recovery server captures virtual machine images of your protected production servers for use during disaster recovery scenarios.

Storage requirements for DPM are very flexible. Any drive that appears as a block-level device and supports conversion to a dynamic drive may be used. This means that the following will not work:

o External USB drives and other removable drives. If you have any question about whether a device falls under this restriction, see how it appears in the Disk Manager MMC console; if you can't turn the drive into a dynamic drive, you can't use it with DPM.

o Laptops, tablets, and other portables. We know that most of you wouldn't think of deploying DPM to a laptop; we actually tried it, though, as part of our test lab. The reason we mention it here is that when Windows identifies any machine as a "portable computer," one of the changes it makes is to disable its ability to convert basic drives to dynamic drives. So, even if you wanted to set this up in a lab environment for testing, it won't work. Even if it could, almost all laptop drives run at 5,400RPM, which is extremely slow. You could protect a Commodore 64 or two at that rate.

o Network attached storage devices. By NAS devices, we specifically mean any devices that don't offer the ability to use block-level protocols such as iSCSI as a connection method. If your device can only be accessed via SMB/CIFS file shares, it isn't compatible with DPM.

In some cases, your desired levels of resource utilization will require you to install DPM to use an external SQL Server for its databases. In this configuration, do not attempt to have a DPM server protect the external SQL Server that also hosts its data. If the SQL Server machine fails, there will be no recovery path. Instead, set up a second standalone DPM server to protect the SQL Server databases.

At some point during your DPM deployment planning (and definitely before you finish the deployment!), take the time to set up a lab environment. This allows you to test your configuration and get comfortable with the DPM procedures; it is also the only real way to ensure that the data replication traffic on your production servers will not negatively impact your network. It will take some time to get all the metrics right for your network via bandwidth throttling, so test, test, and test again.

If you have multiple locations with low-bandwidth links connecting them to other locations, you should consider deploying a separate DPM server for each location. Bandwidth throttling can only accomplish so much, and having the protection local will make both backup and recovery operations go more quickly.
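To make the throttling guidance above concrete, here is a back-of-the-envelope sketch (our own, not part of DPM) for estimating how long a replication window needs to be. The 80% effective-utilization factor and the sample numbers are assumptions; substitute your own measurements from the lab.

```python
def transfer_hours(changed_gb, link_mbps, throttle_fraction=1.0):
    """Estimate hours needed to replicate changed data over a
    (possibly throttled) network link, assuming roughly 80% effective
    link utilization after protocol overhead."""
    effective_mbps = link_mbps * throttle_fraction * 0.8
    seconds = (changed_gb * 8 * 1024) / effective_mbps  # GB -> megabits
    return seconds / 3600

# 200 GB of daily churn over a gigabit link throttled to 50%:
print(round(transfer_hours(200, 1000, 0.5), 1))  # prints 1.1
```

If the result doesn't fit comfortably inside your synchronization window, either loosen the throttle, move the traffic to a separate backup network, or deploy a DPM server closer to the data.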

End-User Recovery Best Practices

Before you extend the Active Directory schema to enable EUR, create a lab environment with a copy of your live AD forest. Test the process to ensure that you're not going to break anything.

When you are extending the Active Directory schema to enable EUR, if possible you should perform the operation from the domain controller that is the current holder of the Schema Master FSMO role in order to remove any undesired lag. If you can't perform the process from this domain controller, you must perform this operation as network-close as you can get: another machine on the same subnet and in the same Active Directory site. You definitely don't want to perform the process from a workstation in a branch office that is hanging off of a saturated 256Kbps link away from your Schema Master domain controller. If this domain controller isn't in your main site, you may want to first move the Schema Master FSMO role to a new domain controller in a better-connected site to help reduce replication latency for schema updates.

User education is the single greatest piece of preventative maintenance that administrators can perform, yet it seems to be woefully neglected in a majority of environments. In some organizations, weekly emails with helpful hints work well; in others, such as when you have a call center to support, your users don't tend to have time to read these emails while on shift. In these situations, try to schedule short classes for small groups of people so that the impact on their productivity is minimal. It does no good to give your users the ability to recover documents on their own if they don't know how (or even that it's possible).

When you create protection groups in an environment that is planning to enable (or has already enabled) EUR, be sure to configure the recovery point creation schedule to match the amount of use each data source sees. If users frequently update documents in a location, schedule additional recovery point creation times. If the data source is not so heavily modified, once or twice a day should suffice. It's important to balance the need to recover current data with disk and server capacity. Don't waste valuable recovery point space by scheduling multiple recovery points during times that users won't be making changes to data, such as creating recovery points both at the end of the business day and at the beginning. Creating a single recovery point at the end of the day captures changes that users made during the previous day; creating another recovery point when the bulk of the files have not changed can just produce confusion for your users.
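The scheduling advice above can be illustrated with a small sketch. This is a hypothetical helper of our own, not a DPM feature, and it assumes for simplicity that data changes only during business hours; adjust the window to match your environment.

```python
def prune_recovery_points(hours, business_hours=(8, 18)):
    """Keep recovery point times (24-hour clock) scheduled during
    business hours, plus at most one off-hours point -- additional
    off-hours points capture no new changes under our assumption
    that data changes only during the workday."""
    start, end = business_hours
    during = [h for h in hours if start <= h <= end]
    off = [h for h in hours if h < start or h > end]
    return sorted(during + off[:1])

# A 22:00 point right after an 18:00 point adds nothing; drop it:
print(prune_recovery_points([7, 12, 18, 22]))  # prints [7, 12, 18]
```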

File Server Best Practices


When configuring protection for a file server, ensure that it meets the requirements in Table 6.2 before you try to install the DPM protection agent.

If your file servers are clustered, the following apply:

o Ensure that the protection agent is installed on all cluster nodes that are possible owners of the protected resources.

o Whether you use a quorum disk or majority node set, keep your quorum in the default cluster group, and don't add anything to it. It is separate by default and should stay that way.

o Keep your configuration simple; just as you should install DPM on a dedicated server, you may not want to make your clustered file server pull double-duty. This simplifies your recovery scenarios and steps.

o If a failover (manual or automatic) happens while DPM is performing an express full backup, DPM will mark the protected data sources as being in an inconsistent state. You will need to perform a manual consistency check to allow DPM to begin protecting those data sources again.

File server data update volume varies widely depending on the type of data and the nature of your users. Choose your retention range wisely for your protected data. If you choose to retain data for five days but only perform an express full backup every seven days, you could end up with changed data that doesn't make it into long-term protection.

Keep track of your reparse points. If you protect a volume with reparse points, you'll have to manually re-create them in the event of a recovery.

Protect the file server's system state data unless you have a compelling reason not to do so.

When you're protecting a file server's system state, it is best to do so in the same protection group with the protected data. This ensures that all server-related data can be reliably recovered from a specified point in time.
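The retention-versus-express-full interaction described above lends itself to a quick sanity check. The helper below is a hypothetical illustration of ours, not a DPM feature:

```python
def at_risk_days(retention_days, express_full_interval_days):
    """Days of changes that could age out of the retention range
    before the next express full backup captures them (0 means the
    schedule is safe)."""
    return max(0, express_full_interval_days - retention_days)

print(at_risk_days(5, 7))  # prints 2: two days of changes at risk
print(at_risk_days(7, 1))  # prints 0: daily express fulls are safe
```

In short, keep your retention range at least as long as the interval between express full backups.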

Exchange Server Best Practices


When configuring protection for an Exchange server, ensure that it meets the requirements in Table 7.2 before you try to install the DPM protection agent.

If your Exchange servers are clustered, the following apply:

o Ensure that the protection agent is installed on all cluster nodes that are possible owners of the protected resources.

o Whether you use a quorum disk or majority node set, keep your quorum in the default cluster group, and don't add anything to it. It is separate by default and should stay that way.

o Keep your configuration simple; just as you should install DPM on a dedicated server, you may not want to make your clustered Exchange server pull double-duty. This simplifies your recovery scenarios and steps.

o If you are running an Exchange 2007 CCR configuration, the replication process must be stopped before a restore can take place.

o If a failover (manual or automatic) happens while DPM is performing an express full backup, DPM will mark the protected data sources as being in an inconsistent state. You will need to perform a manual consistency check to allow DPM to begin protecting those data sources again.

DPM can protect both Exchange 2003 and 2007, but it cannot restore data from one version to the other.

Protect the Exchange server's system state data unless you have a compelling reason not to do so. When you're protecting an Exchange server's system state, it is best to do so in the same protection group with the protected data. This ensures that all server-related data can be reliably recovered from a specified point in time.

SQL Server Best Practices


When configuring protection for a SQL Server machine, ensure that it meets the requirements in Table 8.2 before you try to install the DPM protection agent.

Ensure that the SQL Server VSS Writer is installed and configured on protected SQL Server standalone machines and cluster nodes.

If your SQL Server machines are clustered, the following apply:

o Ensure that the protection agent is installed on all cluster nodes that are possible owners of the protected resources.

o Whether you use a quorum disk or majority node set, keep your quorum in the default cluster group, and don't add anything to it. It is separate by default and should stay that way.

o Keep your configuration simple; just as you should install DPM on a dedicated server, you may not want to make your clustered SQL Server machines pull double-duty. This simplifies your recovery scenarios and steps.

o If a failover (manual or automatic) happens while DPM is performing an express full backup, DPM will mark the protected data sources as being in an inconsistent state. You will need to perform a manual consistency check to allow DPM to begin protecting those data sources again.

Protect the SQL Server machine's system state data unless you have a compelling reason not to do so. When you're protecting a SQL Server machine's system state, it is best to do so in the same protection group with the protected data. This ensures that all server-related data can be reliably recovered from a specified point in time.

SharePoint Server Best Practices


When configuring protection for a SharePoint server farm, ensure that it meets the requirements in Table 9.2 before you try to install the DPM protection agent.

When protecting SharePoint 2007 servers, keep in mind the load on the protected web front-end server. If you have a large SharePoint server farm and the front-end servers are near capacity, consider deploying an extra front-end server so that the DPM protection agent activity doesn't cause your front-end servers to become overloaded.

The recovery farm is an essential element for site-level and lower recovery. While we recommend that you deploy your recovery farm using a virtual machine with a simple installation of Windows SharePoint Services 3.0, keep in mind that the performance of the virtual machine will directly affect your recovery times. If you do use a virtual machine for your recovery farm, you should protect the virtual server that hosts the recovery farm virtual machine.

Virtual Server Best Practices

When configuring protection for a Microsoft Virtual Server (MSVS) host, ensure that it meets the requirements in Table 10.2 before you try to install the DPM protection agent.

Remember that DPM does not directly support the use of MSVS host clusters in the same way that it supports other types of application clusters. If your MSVS machines are in a host cluster configuration, the following apply:

o Ensure that the protection agent is installed on all cluster nodes that are possible owners of the protected resources.

o Whether you use a quorum disk or majority node set, keep your quorum in the default cluster group, and don't add anything to it. It is separate by default and should stay that way.

o Keep your configuration simple; just as you should install DPM on a dedicated server, you may not want to make your clustered MSVS host pull double-duty. This simplifies your recovery scenarios and steps.

Protect the MSVS host's system state data unless you have a compelling reason not to do so. When you're protecting an MSVS host's system state, it is best to do so in the same protection group with the protected data. This ensures that all server-related data can be reliably recovered from a specified point in time.

General Best Practices

Test your recovery procedures on a regular basis, both in your lab and in production. One of the best ways to do this on a production network is to periodically recover protected data to an alternative location.

DPM includes the ability to perform bare-metal recoveries for your protected servers using the System Recovery Tool. You should perform a bare-metal recovery for each server type from time to time. Using the System Recovery Tool can also be a great way to keep your lab environment updated.

Avoid the "one protection group per server" mentality. Protection groups are designed to allow you to easily protect data that shares the same protection needs. If your file server and Exchange data have the same protection needs, then by all means go ahead and place both resources together in the same protection group. Likewise, you may have a SQL Server database and an internal website that rarely get updated; it makes perfect sense to place these two data sources together in a separate protection group. The best way to use protection groups is by grouping your data protection needs, not trying to use them to segregate your data by source or type.

Use the reporting features of DPM. The reporting features can help you identify when you need to expand your disk pool, as well as provide tape utilization, protection, and recovery information. These reports also serve another purpose; they work well for convincing the decision makers that either:

o More disk space needs to be acquired for protection, or

o Policies need to be put in place to keep server overuse in check (quotas, written policies, etc.).

If you archive your data to tape according to a rotation scheme such as Grandfather/Father/Son, you're probably keeping some of the media for permanent archiving. If this is the case, store it in a secure, remote location.

Run, do not walk, to set up email alert notifications on every protection group you create. Data protection is your insurance policy; you definitely want to know if there's a problem with your insurance policy.
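If you want to reason about a tape rotation scheme before committing media to it, a small sketch can help. The following is a simplified Grandfather/Father/Son labeling of our own invention (it uses a fixed 28-day cycle for the monthly tape; real schemes often promote the last weekly tape of the calendar month instead):

```python
def gfs_label(day):
    """Label the backup tape for day number `day` (day 0 is the first
    backup day): the last tape of each 4-week cycle is a grandfather,
    the last tape of each week is a father, and the rest are sons."""
    if day % 28 == 27:  # end of the 4-week cycle
        return "grandfather"
    if day % 7 == 6:    # end of each week
        return "father"
    return "son"

print([gfs_label(d) for d in range(7)])  # six sons, then a father
print(gfs_label(27))                     # prints grandfather
```

Under a scheme like this, the grandfather tapes are the ones you would retain for permanent archiving and store off-site.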

Appendix D: Checklists
I love it when a plan comes together.
Colonel Hannibal Smith

These checklists are intended to be starting points for you to use in your environment.

DPM Planning

Verify your Active Directory domain functional level.

Determine the data owners.

Estimate your bandwidth utilization; compare the estimate to your available bandwidth.
o Determine whether your estimated traffic should be on a separate backup network.
o If using a Gigabit Ethernet network, determine whether your estimated traffic would benefit from enabling jumbo frames.

Write out the specifications for the necessary hardware. Allocate existing hardware resources. If necessary, get bids on new hardware and begin the acquisition process.

Identify the data to be protected. Identify different data types that have similar requirements and design your protection groups accordingly.
o Determine short-term protection media.
o Determine data retention policies.
o Determine short-term protection goals.
o Determine long-term protection goals.
o Determine disk allocation.

Identify primary role owners:
o Who will be in charge of protecting data?
o Who gets contacted via email alerts?
o Who handles tape rotation and storage?
o Who will fulfill other miscellaneous roles?

Determine software versions on servers to be protected.
o On those servers that don't meet the software requirements, determine whether upgrades or migration will be the simpler strategy.
o Plan upgrades for the affected servers.
o Plan data migration efforts for the affected servers.

Determine whether to support end-user recovery.

Draw up a diagram of your DPM deployment:
o Include the IP address, DNS name, and physical location of your DPM server.
o Ensure that these resources are available.
o Perform any necessary configuration, such as DNS, network switch ports, and KVM switch ports.

DPM Deployment and Installation

In a lab environment, test your planned install against a restored and isolated version of your network.
  o Install DPM.
  o Test protection and recovery of your data.
  o Test bandwidth utilization; compare this with your estimates and make adjustments where necessary.
  o Test alerting features.
  o Test "bare metal" recovery.
Install DPM in your production environment.

DPM Configuration Checklist


If you plan to use end-user recovery, extend the Active Directory schema.
Specify the SMTP server to use for email alerts.
Enable and test email alerts.
If you use MOM or SCOM in your environment:
  o Install the DPM Management Pack.
  o Publish active alerts.
Configure notifications.
Set the Auto Discovery schedule.
Schedule reporting and specify email recipients for reports.
Add disks to the disk pool.
Ensure that any attached tape libraries appear in the Libraries subtab.
Install the VSS hotfix and the DPM protection agent on all servers to be protected.

DPM End-User Recovery Checklist

Test extending the schema:
  o Identify the Schema Master domain controller in your test network.
  o Determine whether you can perform the schema extension from this machine.
  o If you cannot use the Schema Master domain controller, identify another machine on the same subnet and in the same Active Directory site.
  o From your chosen workstation, perform the schema extensions.
  o Validate that the extensions have been loaded.
  o Wait for Active Directory replication.
If you plan to use end-user recovery, extend the Active Directory schema:
  o Identify the Schema Master domain controller in your production network.
  o Determine whether you can perform the schema extension from this machine.
  o If you cannot use the Schema Master domain controller, identify another machine on the same subnet and in the same Active Directory site.
  o From your chosen workstation, perform the schema extensions.
  o Validate that the extensions have been loaded.
  o Wait for Active Directory replication.
If you plan to support end-user recovery, plan training for the users.
Determine how to deploy the Shadow Copy Client package and hotfix:
  o Remember that Vista workstations already have the client and hotfix installed.
  o For Windows XP workstations, determine which method you will use depending on the resources and applications you have in your production environment.

DPM Protection Group Checklist


Create protection groups according to the plan you created. For each protection group:
  o Enter the short-term media type.
  o Enter the retention policy.
  o Specify short-term protection goals.
  o Specify long-term protection goals.
  o Enter disk allocation.
  o Modify bandwidth throttling, if necessary.
When you have created a protection group, ensure that the initial replica is created successfully.
Repeat for each protection group.
When all protection groups are created, check reporting at the next scheduled reporting interval to ensure that it's working properly.
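The "Enter disk allocation" step is easier if you arrive with a rough number already in hand. The Python sketch below is an illustrative estimate only; the growth and overhead factors are our assumptions, not DPM's internal allocation formula, so treat the wizard's own recommendation as your baseline:

```python
# Rough sizing sketch for the "Enter disk allocation" step (illustrative only;
# the growth and overhead factors below are our assumptions, not DPM's own
# allocation formula -- let the wizard's recommendation be your baseline).

def estimate_allocation_gb(data_gb, daily_churn_gb, retention_days,
                           growth_factor=1.2, overhead_factor=1.1):
    """Return a rough replica/recovery-point/total allocation estimate.

    Replica volume: protected data plus headroom for growth.
    Recovery point volume: daily churn accumulated over the retention
    range, plus a small overhead allowance.
    """
    replica_gb = data_gb * growth_factor
    recovery_point_gb = daily_churn_gb * retention_days * overhead_factor
    return {
        "replica_gb": round(replica_gb, 1),
        "recovery_point_gb": round(recovery_point_gb, 1),
        "total_gb": round(replica_gb + recovery_point_gb, 1),
    }

# Example: a 500 GB file share, ~10 GB changed daily, 14-day retention.
print(estimate_allocation_gb(500, 10, 14))
# {'replica_gb': 600.0, 'recovery_point_gb': 154.0, 'total_gb': 754.0}
```

Comparing a few such estimates against your disk pool before running the wizard tells you early whether the pool needs to grow.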

Wrap Up Checklist

Perform a test recovery of your production data to your lab environment.
Test "bare metal" recovery of a production server to equipment in your lab environment.
Test end-user recovery.
Verify that bandwidth usage is as expected.
Use Performance Monitor on protected servers to verify that protection activity does not impact them severely.
Test recovery from tape in your lab environment.

List of Figures
Chapter 1: Data Protection Concepts
Figure 1.1: Centralized backups on mainframes
Figure 1.2: Backup scenario 1: a single file server
Figure 1.3: Backup scenario 2: a database server farm
Figure 1.4: A common tape rotation
Figure 1.5: Disk to disk to tape
Figure 1.6: ITG's Exchange 2003 backup solution
Figure 1.7: The VSS snapshot process
Figure 1.8: Synchronous replicas
Figure 1.9: Asynchronous replicas
Figure 1.10: Byte-level replicas
Figure 1.11: File-level replicas
Figure 1.12: Block-level replicas
Figure 1.13: A typical DPM solution
Figure 1.14: A single domain DPM deployment
Figure 1.15: A complex DPM deployment

Chapter 2: Installing DPM


Figure 2.1: The DPM installation screen
Figure 2.2: The license agreement
Figure 2.3: The Welcome screen
Figure 2.4: Prerequisites check in progress
Figure 2.5: Summary of the prerequisites check
Figure 2.6: Enter the user and company information
Figure 2.7: Select a SQL server instance
Figure 2.8: Choose an existing SQL instance
Figure 2.9: Provide the SQL service account password
Figure 2.10: Choose whether to use Microsoft Update
Figure 2.11: Customer Experience Improvement Program preferences
Figure 2.12: Installation summary
Figure 2.13: Installation progress
Figure 2.14: Add disks to the storage pool
Figure 2.15: Status of storage pool disks
Figure 2.16: Choose the servers on which to install the agent
Figure 2.17: Enter the credentials to install the agent
Figure 2.18: Choose the server restart method
Figure 2.19: Agent installation summary
Figure 2.20: Agent installation progress
Figure 2.21: The Create Protection Group Welcome screen
Figure 2.22: Select the protection group members
Figure 2.23: Select the data protection method
Figure 2.24: Select the short-term protection details
Figure 2.25: Change the recovery point schedules
Figure 2.26: The disk allocation recommendation
Figure 2.27: Modify the disk allocation
Figure 2.28: Choose the replica creation method
Figure 2.29: Protection group summary

Chapter 3: Using the DPM Administration Console


Figure 3.1: Opening the DPM Administration console
Figure 3.2: The Options window
Figure 3.3: The Auto Discovery tab
Figure 3.4: The SMTP Server tab
Figure 3.5: The Notifications tab
Figure 3.6: The Alert Publishing tab
Figure 3.7: The Customer Feedback tab
Figure 3.8: The Monitoring tab
Figure 3.9: Grouping in the Alerts subtab
Figure 3.10: The Jobs subtab
Figure 3.11: The Filter screen
Figure 3.12: The Protection tab of the Filter screen
Figure 3.13: The Other tab of the Filter screen
Figure 3.14: The Protection tab
Figure 3.15: The Recovery tab
Figure 3.16: The Search subtab
Figure 3.17: The Reporting tab
Figure 3.18: Report filtering options
Figure 3.19: A Disk Utilization Report
Figure 3.20: A Protection Report
Figure 3.21: A Recovery Report
Figure 3.22: A Status Report
Figure 3.23: A Tape Management Report
Figure 3.24: A Tape Utilization Report
Figure 3.25: Changing a report generation schedule
Figure 3.26: The Management tab
Figure 3.27: The Disks subtab
Figure 3.28: The Libraries subtab

Chapter 4: Using the DPM Management Shell


Figure 4.1: The DPM Management Shell
Figure 4.2: The DPM Management Shell initial prompt
Figure 4.3: The Get-DpmCommand cmdlet
Figure 4.4: Piping the Get-DpmCommand to more

Chapter 5: End-User Recovery


Figure 5.1: The Options pane
Figure 5.2: The Configure Active Directory box
Figure 5.3: Confirm the changes
Figure 5.4: Change notification
Figure 5.5: Update confirmation
Figure 5.6: Synchronization notice
Figure 5.7: Confirmation warning for schema extension
Figure 5.8: Enter the machine name
Figure 5.9: Enter the FQDN
Figure 5.10: Enter the DNS domain name
Figure 5.11: The schema update in progress
Figure 5.12: The VSS Client Welcome screen
Figure 5.13: The VSS Client EULA
Figure 5.14: The VSS Install Confirmation screen
Figure 5.15: The Hotfix Installation screen
Figure 5.16: The Hotfix EULA
Figure 5.17: The Hotfix installation confirmation
Figure 5.18: Select the file to recover
Figure 5.19: The Previous Versions tab
Figure 5.20: Recovery choice confirmation
Figure 5.21: A successful recovery
Figure 5.22: Select a folder to recover
Figure 5.23: Select a folder to restore
Figure 5.24: Confirm the recovery choice
Figure 5.25: The folder recovery is successful
Figure 5.26: Select the file to recover
Figure 5.27: Select the file version to recover
Figure 5.28: Confirm the recovery choice
Figure 5.29: The file recovery is successful
Figure 5.30: Select the document to recover
Figure 5.31: Select the version to recover
Figure 5.32: Confirm the recovery choice
Figure 5.33: The file recovery is successful

Chapter 6: Protecting File Servers


Figure 6.1: Choosing servers for agent install
Figure 6.2: Enter credentials for agent install
Figure 6.3: Choose restart method
Figure 6.4: Protection agent install summary
Figure 6.5: The Create Protection Group Welcome screen
Figure 6.6: Selecting data sources to protect
Figure 6.7: Selecting the protection method
Figure 6.8: Short-term recovery goals
Figure 6.9: Changing settings for recovery points
Figure 6.10: Modifying the allocation for replicas and recovery points
Figure 6.11: Modifying the change journal space on the protected server
Figure 6.12: Customizing long-term protection goals
Figure 6.13: Modifying the times for long-term backups
Figure 6.14: The Library And Tape Details screen
Figure 6.15: The Replica Creation Method screen
Figure 6.16: The Create Protection Group Summary screen
Figure 6.17: Cluster nodes shown in the Management tab
Figure 6.18: Selecting clustered data sources to protect
Figure 6.19: The Summary screen
Figure 6.20: Selecting a recovery point
Figure 6.21: Review recovery selection
Figure 6.22: Select the recovery type
Figure 6.23: The Specify Recovery Options screen
Figure 6.24: Modifying network bandwidth throttling
Figure 6.25: The Summary screen
Figure 6.26: Recovery progress in the Recovery Status window
Figure 6.27: Selecting an alternative recovery location
Figure 6.28: Summary screen for recovering to an alternative location
Figure 6.29: The Specify Library screen
Figure 6.30: The Specify Recovery Options screen
Figure 6.31: Copy to tape summary

Chapter 7: Protecting Exchange Servers


Figure 7.1: Choosing servers for agent install
Figure 7.2: Enter the credentials for agent install
Figure 7.3: Choose the restart method
Figure 7.4: The Protection Agent Installation summary
Figure 7.5: The Welcome screen
Figure 7.6: Selecting a storage group in a standalone configuration
Figure 7.7: Selecting a storage group in a clustered configuration
Figure 7.8: Selecting a data-protection method
Figure 7.9: The Specify Exchange Protection Options screen
Figure 7.10: The Specify Short-Term Goals screen
Figure 7.11: Scheduling express full backups
Figure 7.12: The Review Disk Allocation screen
Figure 7.13: The Modify Disk Allocation screen
Figure 7.14: The Specify Long-Term Protection Goals screen
Figure 7.15: The Customize Protection Objective screen
Figure 7.16: The Modify Long-Term Backup Schedule screen
Figure 7.17: The Select Library And Tape Details screen
Figure 7.18: Choose a replica creation method
Figure 7.19: The Summary screen
Figure 7.20: Selecting a data source for recovery
Figure 7.21: The Review Recovery Selection screen
Figure 7.22: Select the recovery type
Figure 7.23: Select the recovery options
Figure 7.24: The Summary screen
Figure 7.25: Specify an alternative recovery destination
Figure 7.26: Specify the recovery options
Figure 7.27: Throttling bandwidth
Figure 7.28: Specify a library
Figure 7.29: Specify the recovery options
Figure 7.30: Selecting a database to recover
Figure 7.31: Review recovery selection
Figure 7.32: Select the recovery type
Figure 7.33: The Specify Recovery Options screen
Figure 7.34: The Summary screen
Figure 7.35: Specify the destination
Figure 7.36: Specify the recovery options
Figure 7.37: The Summary screen
Figure 7.38: Specifying a recovery storage group
Figure 7.39: The Specify Recovery Options screen
Figure 7.40: Specifying a network location
Figure 7.41: The Specify Recovery Options screen
Figure 7.42: Selecting a mailbox for recovery
Figure 7.43: The Review Recovery Selection screen
Figure 7.44: The Select Recovery Type screen
Figure 7.45: Specify the destination
Figure 7.46: Specify the recovery options

Chapter 8: Protecting SQL Servers


Figure 8.1: The SQL Server VSS Writer
Figure 8.2: Choosing servers for agent install
Figure 8.3: Enter credentials for agent install
Figure 8.4: Choose restart method
Figure 8.5: Protection agent install summary
Figure 8.6: The Create New Protection Group Welcome screen
Figure 8.7: Selecting databases to protect
Figure 8.8: Selecting a data-protection method
Figure 8.9: The Specify Short-Term Goals screen
Figure 8.10: Changing the express full backup schedule
Figure 8.11: Modifying disk allocation
Figure 8.12: Specifying long-term goals
Figure 8.13: Specifying long-term goals
Figure 8.14: Modifying the backup schedule for your objectives
Figure 8.15: Selecting library and tape details
Figure 8.16: The Choose Replica Creation Method screen
Figure 8.17: The Summary screen
Figure 8.18: Selecting clustered databases to protect
Figure 8.19: Selecting a database to recover
Figure 8.20: The Review Recovery Selection screen
Figure 8.21: Choosing a recovery type
Figure 8.22: Selecting the recovery state of the database
Figure 8.23: Selecting recipients for job notifications
Figure 8.24: The Summary screen
Figure 8.25: The Specify Alternate Database And Instance For Recovery screen
Figure 8.26: Browsing for an alternative instance and database
Figure 8.27: Recovering to an Alternate Instance Summary screen
Figure 8.28: The Specify Destination screen
Figure 8.29: Browsing for a network location
Figure 8.30: The Specify Recovery Options screen
Figure 8.31: Modifying network bandwidth throttling
Figure 8.32: The Recover To Network Location Summary screen
Figure 8.33: The Specify Library screen
Figure 8.34: The Copy To Tape Summary screen

Chapter 9: Protecting SharePoint Servers


Figure 9.1: Choosing servers for agent install
Figure 9.2: Enter the credentials for the agent install
Figure 9.3: Choose the restart method
Figure 9.4: The Protection Agent Install summary
Figure 9.5: Selecting content databases
Figure 9.6: Selecting a protection method
Figure 9.7: Select a retention range
Figure 9.8: Modifying recovery point frequency
Figure 9.9: Review disk allocation
Figure 9.10: Modifying disk allocation
Figure 9.11: Specify long-term goals
Figure 9.12: Customize protection objectives
Figure 9.13: Setting the schedule for tape backups
Figure 9.14: Select the library and tape options
Figure 9.15: Choose the replica creation method
Figure 9.16: The Summary screen
Figure 9.17: Starting a farm recovery
Figure 9.18: The Review Recovery Selection screen
Figure 9.19: Select the recovery type
Figure 9.20: The Specify Library screen
Figure 9.21: Specify the recovery options
Figure 9.22: Throttling network bandwidth
Figure 9.23: The Summary screen
Figure 9.24: Select a site to recover
Figure 9.25: The Review Recovery Selection screen
Figure 9.26: Selecting the site recovery type
Figure 9.27: Specify the recovery farm details
Figure 9.28: Specify the recovery farm
Figure 9.29: Specify the recovery options
Figure 9.30: Throttling network bandwidth
Figure 9.31: The Summary screen
Figure 9.32: Specify a recovery farm and target site
Figure 9.33: Selecting an item to recover
Figure 9.34: Select the recovery type
Figure 9.35: Specify the recovery farm details
Figure 9.36: Specify a recovery farm
Figure 9.37: Specify the recovery options
Figure 9.38: Throttling network bandwidth
Figure 9.39: The Summary screen
Figure 9.40: Specify the recovery farm and target site
Figure 9.41: Specify the recovery farm
Figure 9.42: Specify the recovery options
Figure 9.43: Throttling network bandwidth
Figure 9.44: The Summary screen

Chapter 10: Protecting Virtual Servers


Figure 10.1: Choosing servers for agent install
Figure 10.2: Enter the credentials for the agent install
Figure 10.3: Choose the restart method
Figure 10.4: The Protection Agent Install summary
Figure 10.5: The Create New Protection Group Welcome screen
Figure 10.6: Selecting the virtual machines to protect
Figure 10.7: Selecting a data protection method
Figure 10.8: Specify short-term goals
Figure 10.9: Modify the recovery point creation schedule
Figure 10.10: The Review Disk Allocation screen
Figure 10.11: The Modify Disk Allocation screen
Figure 10.12: Specify the long-term protection goals
Figure 10.13: Select the long-term objectives
Figure 10.14: Modify the backup schedule for your objectives
Figure 10.15: The Select Library And Tape Details screen
Figure 10.16: The Choose Replica Creation Method screen
Figure 10.17: The Summary screen
Figure 10.18: Selecting a VM to recover
Figure 10.19: Review your recovery selection
Figure 10.20: The Select Recovery Type screen
Figure 10.21: The Specify Recovery Options screen
Figure 10.22: Throttling network bandwidth
Figure 10.23: The Summary screen
Figure 10.24: Choosing a location
Figure 10.25: Specify the recovery options
Figure 10.26: The Specify Library screen

Chapter 11: Protecting Workstations


Figure 11.1: Choosing the workstations for an agent install
Figure 11.2: Enter the credentials for agent install
Figure 11.3: Choose the restart method
Figure 11.4: The Protection Agent Installation Summary screen
Figure 11.5: The Create New Protection Group Welcome screen
Figure 11.6: Select the data sources to protect
Figure 11.7: Selecting the protection method
Figure 11.8: Short-term recovery goals
Figure 11.9: Changing settings for recovery points
Figure 11.10: The Review Disk Allocation screen
Figure 11.11: Modifying disk allocation
Figure 11.12: Customizing long-term protection goals
Figure 11.13: The Customize Recovery Goal screen
Figure 11.14: Modifying the times for long-term backups
Figure 11.15: The Select Library And Tape Details screen
Figure 11.16: The Choose Replica Creation Method screen
Figure 11.17: The Create New Protection Group Summary screen
Figure 11.18: Selecting a recovery point
Figure 11.19: Review the recovery selection
Figure 11.20: Select the recovery type
Figure 11.21: Specify the recovery options
Figure 11.22: The Summary screen
Figure 11.23: Recovery progress in the status window
Figure 11.24: Selecting an alternative recovery location
Figure 11.25: The Specify Library screen
Figure 11.26: Specify the recovery options

Appendix B: Setting Up a Lab Environment


Figure B.1: The opening Setup screen
Figure B.2: The License Agreement
Figure B.3: The Customer Information screen
Figure B.4: The Setup Type screen
Figure B.5: The Configure Components screen
Figure B.6: Firewall exceptions
Figure B.7: The Ready To Install screen
Figure B.8: The Setup Complete screen
Figure B.9: Create a differencing virtual hard disk
Figure B.10: Tying the differencing disk to the parent
Figure B.11: Create a virtual machine
Figure B.12: Configure a virtual machine
Figure B.13: Opening the Computer Management screen
Figure B.14: Create a new partition
Figure B.15: The Welcome To The New Partition Wizard screen
Figure B.16: Select the partition type
Figure B.17: The Specify Partition Size screen
Figure B.18: Specify the drive letter
Figure B.19: Format the partition
Figure B.20: Your new partition
Figure B.21: Create a new folder
Figure B.22: Select Sharing And Security
Figure B.23: Sharing a folder
Figure B.24: VM Configuration
Figure B.25: Add a SCSI adapter
Figure B.26: Adding a shared SCSI adapter
Figure B.27: Disk configuration
Figure B.28: Creating a new group
Figure B.29: Naming the group
Figure B.30: Adding preferred owners
Figure B.31: Creating a new resource
Figure B.32: The New Resource window
Figure B.33: The TCP/IP Address Parameters screen
Figure B.34: Creating a disk resource
Figure B.35: Adding an IP dependency
Figure B.36: Adding a network name resource
Figure B.37: Adding an IP dependency
Figure B.38: Network Name Parameters
Figure B.39: Creating a file share resource
Figure B.40: Adding a disk dependency
Figure B.41: The File Share Parameters screen
Figure B.42: Setting up Exchange 2007
Figure B.43: The Introduction screen
Figure B.44: The License Agreement
Figure B.45: The Error Reporting screen
Figure B.46: The Installation Type screen
Figure B.47: Readiness Checks
Figure B.48: The Completion screen
Figure B.49: Create a new group
Figure B.50: Naming the group
Figure B.51: The Preferred Owners window
Figure B.52: Create a new resource
Figure B.53: Adding a physical disk
Figure B.54: The Possible Owners screen
Figure B.55: The Dependencies window
Figure B.56: The Disk Parameters window
Figure B.57: Bringing the group online
Figure B.58: Set up Exchange
Figure B.59: The Introduction screen
Figure B.60: License Agreement
Figure B.61: The Error Reporting screen
Figure B.62: The Installation Type screen
Figure B.63: The Server Role Selection screen
Figure B.64: The Cluster Settings screen
Figure B.65: Readiness Checks
Figure B.66: The Completion screen
Figure B.67: The EULA
Figure B.68: SQL Server Setup
Figure B.69: Prerequisites are installed
Figure B.70: The SQL Server Installation Welcome screen
Figure B.71: The System Configuration Check
Figure B.72: The Registration Information screen
Figure B.73: Select the components to install
Figure B.74: The Instance Name screen
Figure B.75: Service account information
Figure B.76: The Authentication Mode screen
Figure B.77: The Collation Settings screen
Figure B.78: The Error And Usage Report Settings screen
Figure B.79: The Ready To Install screen
Figure B.80: The Setup Progress screen
Figure B.81: The Cluster Administrator
Figure B.82: The New Group screen
Figure B.83: The Preferred Owners screen
Figure B.84: Click New Resource
Figure B.85: Select the Physical Disk resource
Figure B.86: The Possible Owners screen
Figure B.87: The Dependencies screen
Figure B.88: The Disk Parameters screen
Figure B.89: Bring the clustered group online
Figure B.90: The EULA
Figure B.91: The SQL Server Setup screen
Figure B.92: Prerequisites are installed
Figure B.93: The SQL Server Installation Welcome screen
Figure B.94: The System Configuration Check screen
Figure B.95: Enter the registration information
Figure B.96: Select the components to install
Figure B.97: The Instance Name screen
Figure B.98: The Virtual Server Name screen
Figure B.99: The Virtual Server Configuration screen
Figure B.100: The Cluster Group Selection screen
Figure B.101: The Cluster Node Configuration screen
Figure B.102: The Remote Account Information screen
Figure B.103: Enter the service account information
Figure B.104: The Domain Groups For Clustered Services screen
Figure B.105: The Authentication Mode screen
Figure B.106: The Collation Settings screen
Figure B.107: The Error And Usage Report Settings screen
Figure B.108: The Ready To Install screen
Figure B.109: The SharePoint product key
Figure B.110: The EULA
Figure B.111: The Installation Type screen
Figure B.112: The Installation Progress screen
Figure B.113: Complete installation by configuring SharePoint
Figure B.114: The SharePoint Products And Technologies Configuration Wizard
Figure B.115: Service restart warning
Figure B.116: Configuration tasks
Figure B.117: The Configuration Successful screen

List of Tables
Chapter 1: Data Protection Concepts
Table 1.1: Common Issues Affecting Backup Design
Table 1.2: Comparing Tape and Disk Backups
Table 1.3: Volume Shadow Copy Service Components
Table 1.4: DPM Replication Strategies
Table 1.5: Elements That Define a DPM Protection Group

Chapter 2: Installing DPM


Table 2.1: DPM Server Hardware Requirements
Table 2.2: DPM Server Software Requirements
Table 2.3: Protected Server Software Requirements

Chapter 3: Using the DPM Administration Console


Table 3.1: Types of Jobs in DPM
Table 3.2: Actions in the Protection Tab
Table 3.3: Tape Management Actions

Chapter 4: Using the DPM Management Shell


Table 4.1: DMS Cmdlet Verbs
Table 4.2: DMS Cmdlet Objects
Table 4.3: DMS Cmdlets

Chapter 5: End-User Recovery


Table 5.1: EUR in DPM Supported Operating Systems and Patch Locations

Chapter 6: Protecting File Servers


Table 6.1: Advanced File Server Technologies Supported by DPM
Table 6.2: Protected Server Software Requirements
Table 6.3: Data Contained in the System State

Chapter 7: Protecting Exchange Servers


Table 7.1: Exchange Components
Table 7.2: Protected Exchange Server Software Requirements
Table 7.3: Data Contained in the System State

Chapter 8: Protecting SQL Servers


Table 8.1: Protected Server Software Requirements
Table 8.2: Data Contained in the System State

Chapter 9: Protecting SharePoint Servers


Table 9.1: Protected Server Software Requirements
Table 9.2: Data Contained in the System State

Chapter 10: Protecting Virtual Servers


Table 10.1: Comparing MSVS to SQL Server
Table 10.2: Protected Server Software Requirements
Table 10.3: Data Contained in the System State

Chapter 11: Protecting Workstations


Table 11.1: Protected Workstation Requirements
Table 11.2: Data Contained in the System State

Chapter 12: Advanced DPM


Table 12.1: Protecting Other Microsoft Applications
Table 12.2: DPM Port Requirements

Appendix B: Setting Up a Lab Environment


Table B.1: Lab Hardware
Table B.2: Virtual Machine Memory Requirements

List of Sidebars
Chapter 1: Data Protection Concepts
For Experienced Backup Administrators Using Windows Backup
Labeling Your Backup Media
More about VSS
What Is a Physical Disk Volume, Anyway?
Why Do I Need Both Synchronization Frequency and Recovery Points?
Protecting Non-Windows Servers with DPM?

Chapter 2: Installing DPM


The Release Notes
Sharing DPM with Other Applications
What About Network Attached Storage?
Overriding the Default Storage Allocation

Chapter 3: Using the DPM Administration Console


REAL WORLD SCENARIO: Which Port Do I Use?

Chapter 4: Using the DPM Management Shell


A Historical Perspective
A Matter of Style?

Chapter 5: End-User Recovery


Upgrading Your Active Directory Schema
Deploying the EUR Client Using Group Policy
End-User Recovery Limitations

Chapter 6: Protecting File Servers


Why Doesn't DPM Replicate Mount Points?
Clustering Types
A Note for DPM 2006 Users

Chapter 7: Protecting Exchange Servers


Clustering Types

Database Recovery Limitations

Chapter 8: Protecting SQL Servers


What About the Itanium?
Clustering Types

Chapter 9: Protecting SharePoint Servers


Why Can't I Recover a Site to a Network Folder or Tape Copy?

Chapter 10: Protecting Virtual Servers


What About Virtual PC?

Chapter 11: Protecting Workstations


Protecting Portable Computers
Why Doesn't DPM Replicate Mount Points?

Chapter 12: Advanced DPM


Backing Up DPM with a Non-VSS-Aware Application
Jumbo Frames
The Effect of the Windows Server 2003 SP2 IP Stack Changes
Importing Certificates in Windows

Appendix B: Setting Up a Lab Environment


You Can't Always Be Virtual