
Building a Windows Server 2008 Infrastructure

Greg Shields


Introduction to Realtime Publishers


by Don Jones, Series Editor

For several years now, Realtime has produced dozens and dozens of high-quality books that just happen to be delivered in electronic format, at no cost to you, the reader. We've made this unique publishing model work through the generous support and cooperation of our sponsors, who agree to bear each book's production expenses for the benefit of our readers.

Although we've always offered our publications to you for free, don't think for a moment that quality is anything less than our top priority. My job is to make sure that our books are as good as, and in most cases better than, any printed book that would cost you $40 or more. Our electronic publishing model offers several advantages over printed books: You receive chapters literally as fast as our authors produce them (hence the "realtime" aspect of our model), and we can update chapters to reflect the latest changes in technology.

I want to point out that our books are by no means paid advertisements or white papers. We're an independent publishing company, and an important aspect of my job is to make sure that our authors are free to voice their expertise and opinions without reservation or restriction. We maintain complete editorial control of our publications, and I'm proud that we've produced so many quality books over the past years.

I want to extend an invitation to visit us at http://nexus.realtimepublishers.com, especially if you've received this publication from a friend or colleague. We have a wide variety of additional books on a range of topics, and you're sure to find something that's of interest to you, and it won't cost you a thing. We hope you'll continue to come to Realtime for your educational needs far into the future.

Until then, enjoy.

Don Jones

Table of Contents

Introduction to Realtime Publishers

Chapter 1: Introduction & Installation of Windows Server 2008
    Five Years Later, a New Windows OS
    The Intent of this Guide
    Ten Elements of Server 2008
        Chapter 1: Introduction & Installation of Windows Server 2008
        Chapter 2: Server Manager
        Chapter 3: Active Directory Design & Domain Controller Management
        Chapter 4: File Servers & Storage Management
        Chapter 5: Server Core
        Chapter 6: Managing & Customizing Group Policy
        Chapter 7: Introduction to Terminal Services
        Chapter 8: Advanced Topics in Terminal Services
        Chapter 9: Securing Servers & the Domain
        Chapter 10: Windows Failover Clustering
    Why Should You Upgrade?
        Componentization
        Security
        Manageability
    Introducing Windows Server 2008
        Windows Server Editions
        Hardware Requirements and Limitations
        Supported Upgrade Paths
        Licensing and Activation
    Installing Server 2008
        Manual Installations
        Initial Configuration Tasks
        Windows Preinstallation Environment
        Automated Installations
        Scripted Installations with the Windows System Image Manager
        Image-Based Installations with ImageX
        Windows Deployment Services
    Summary

Chapter 2: Server Manager
    Introducing Server Manager
        Capabilities Become Roles, Role Services, and Features
            Roles
            Role Services
            Features
        Componentization and Security
        Navigating the Server Manager GUI
    Adding New Components
        Example: Adding the DHCP Server Role
    Server Manager by Command Line
    Server Manager Components
        Event Viewer
        Reliability and Performance Monitor
        Task Scheduler
        Windows Server Backup
        Disk Management
    Server Manager Consolidates Management Activities

Chapter 3: Active Directory Design & Domain Controller Management
    A Good AD Design Solves Many Problems
    Understanding the AD and Domain Controllers
        The AD Forest
        The AD Domain
        Domain Controllers
        Flexible Single Master Operation Roles
        Functional Levels
        Sites
        Organizational Units
        Domain Name Service
    Best Practices in AD Design
    Installing Domain Controllers
        Installing the DNS Server Role
        Promoting a Member Server with DCPROMO
        Promoting Additional Domain Controllers
    Upgrading Domain Controllers
        Updating the Schema
        Promoting a Member Server with DCPROMO
        Relocating FSMO Roles
        Demoting and Rebuilding Domain Controllers
        Relocating FSMO Roles
        Functional Levels
    Read-Only Domain Controllers
    AD Backup and Restore
        Backing Up the AD Database
        Restoring Individual AD Objects
        Restoring Full Domain Controllers
    AD Is a Central Part of Your Windows Infrastructure

Chapter 4: File Servers & Storage Management
    The Role of the File Server
    Basic and Advanced Folder Sharing
    Installing the File Services Role
        Share & Storage Management
        Access-Based Enumeration
    File Services Role Services
        Distributed File System Namespaces
        Distributed File System Replication
        File Server Resource Manager
            Quota Management
            File Screening Management
            Storage Reports Management
        Services for Network File System
        Windows Search Service
        Windows Server 2003 File Services
    Properly Managing Storage Eliminates Critical Downtime

Chapter 5: Server Core
    What Exactly Is Server Core?
    Positioning Server Core in Your Environment
    Installing Server Core
    Configuring Server Core
        Initial Configuration
        Customizing Server Core
        Installing Roles, Role Services, and Features
        Installing Active Directory Domain Services
        Server Core + BitLocker + RODC = A Secure Branch Office
    Other Powerful Tools for Managing Server Core
    Server Core Command-Line Crib Sheet
    A Compelling New and Different Way for Windows Server 2008

Chapter 6: Managing & Customizing Group Policy
    The Benefits of Centralized Management with Group Policy
    Navigating the GPMC
        Creating a Simple GPO
        Applying That Simple GPO
        Applying Multiple GPOs
    Administrative Templates and the Group Policy Central Store
        Network Location Awareness
        Starter GPOs
        GPO and GPO Settings Comments
        GPO Filters
        Scripting the GPMC
    Group Policy Preferences
    Group Policy's Centralized Control Enhances Your Ability to Manage Your Infrastructure

Chapter 7: Introduction to Terminal Services
    What Exactly Is Terminal Services?
    Introducing Windows Server 2008's Terminal Services Role
        Terminal Server
        TS Licensing
        TS Web Access
        TS Gateway
        TS Session Broker
    The Remote Desktop Client
    Installing the Terminal Server Role Service
    Installing the TS Licensing Role Service
    Managing Terminal Services
        Server Manager
        Installing Applications
        Managing User Profiles
    Printing with Terminal Services
    Summary

Chapter 8: Advanced Topics in Terminal Services
    Advanced New Functionality for Terminal Services in Windows Server 2008
    Deploying Applications with Terminal Services
        TS RemoteApps
        TS RemoteApp Distribution Options
            RDP File Distribution
            Local Desktop Installation
            Hosting via TS Web Access
    TS Web Access
        Installing and Using TS Web Access
        Configuring TS Web Access
    TS Gateway
        Installing TS Gateway
        Configuring TS Gateway
        Configuring Terminal Services for TS Gateway
    TS Session Broker
        Installing and Configuring TS Session Broker
    Terminal Services in Windows Server 2008 Narrows the Gap

Chapter 9: Securing Servers & the Domain
    Windows Server 2008 Incorporates New and Improved Security Features
        Componentization
        Security Configuration Wizard
        Windows Service Hardening
        Fine-Grained Password Policies
        User Account Control
        Windows Firewall with Advanced Security
        BitLocker Drive Encryption
    Successfully Managing UAC
        Group Policy and UAC
        Common UAC Implementations
    Introducing the Windows Firewall with Advanced Security
        Three Profiles
        Inbound & Outbound Rules
        Connection Security Rules
        Centralized Management & Group Policy
    Two Common Uses of the Windows Firewall with Advanced Security
        Example 1: Securing Laptops While off the Domain
        Example 2: Simple Domain Isolation
    Installing and Managing BitLocker
        Prerequisites & Installation
        Installing BitLocker Without a TPM
        BitLocker and Group Policy
    Windows Server 2008 Is Microsoft's Most Secure OS to Date

Chapter 10: Windows Failover Clustering
    Understanding Windows Failover Clustering
        Reasons to Use WSFC
        Reasons Not to Use WSFC
        Components and Prerequisites
        Cluster Validation
    Cluster Quorum Models
        Node Majority
        Node and Disk Majority
        No Majority: Disk Only
        Node and File Share Majority
    Installing WSFC
        Configuring Networking
        Configuring the Shared Storage
        Validate and Create the Cluster
        Post-Installation Quorum Reconfiguration
    Managing WSFC
        Adding a Cluster Service
        Managing Resources and Dependencies
        Failover
        Failback
    Geoclustering
    Clustering Brings High Availability

Download Additional eBooks from Realtime Nexus!

Copyright Statement

© 2008 Realtime Publishers, Inc. All rights reserved. This site contains materials that have been created, developed, or commissioned by, and published with the permission of, Realtime Publishers, Inc. (the "Materials") and this site and any such Materials are protected by international copyright and trademark laws.

THE MATERIALS ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. The Materials are subject to change without notice and do not represent a commitment on the part of Realtime Publishers, Inc. or its web site sponsors. In no event shall Realtime Publishers, Inc. or its web site sponsors be held liable for technical or editorial errors or omissions contained in the Materials, including without limitation, for any direct, indirect, incidental, special, exemplary or consequential damages whatsoever resulting from the use of any information contained in the Materials.

The Materials (including but not limited to the text, images, audio, and/or video) may not be copied, reproduced, republished, uploaded, posted, transmitted, or distributed in any way, in whole or in part, except that one copy may be downloaded for your personal, noncommercial use on a single computer. In connection with such use, you may not modify or obscure any copyright or other proprietary notice.

The Materials may contain trademarks, services marks and logos that are the property of third parties. You are not permitted to use these trademarks, services marks or logos without prior written consent of such third parties.

Realtime Publishers and the Realtime Publishers logo are registered in the US Patent & Trademark Office. All other product or service names are the property of their respective owners.

If you have any questions about these terms, or if you would like information about licensing materials from Realtime Publishers, please contact us via e-mail at info@realtimepublishers.com.

[Editor's Note: This eBook was downloaded from Realtime Nexus, The Digital Library. All leading technology guides from Realtime Publishers can be found at http://nexus.realtimepublishers.com.]

Chapter 1: Introduction & Installation of Windows Server 2008


When you think about Windows Server 2008, what comes to mind?

- A new look and feel, more like Vista and less like Windows XP.
- An improved Server Manager administrative console that aggregates most administrative tasks into a single interface.
- A focus on componentization, breaking apart what we used to think of as "Windows" into multiple, installable components.
- An improved installation routine, moving all the questions ahead of the actual installation process.
- A new Initial Configuration Tasks wizard that consolidates many of the initial server personalization activities under a single console.
- An improved event viewer with better viewing and management of system events and the ability to send event information from machine to machine.
- A new task scheduler that takes away much of the guesswork in creating scheduled tasks.
- An improved Performance Monitor that takes total server reliability into account.
- A host of improvements to Windows Backup, including capabilities for bare-metal restoration.
- An enhanced Disk Management subsystem, finally granting administrators the ability to shrink volumes in addition to expanding them.
- An integrated memory diagnostics tool that checks hardware RAM for physical errors.
- A segregation of typical server elements into Roles, Role Services, and Features for better separation of duties.
- An upgrade to Internet Information Services, adding centralized manageability and a reduced attack surface area.
- A new File Services Role, formalizing the traditional service of serving files.
- A set of long-desired improvements to Terminal Services, enhancing virtually every component of remote application support.
- A set of new and improved network security capabilities, like Network Access Protection and Server and Domain Isolation, that protect a network's vulnerable interior.
- A set of enhancements to Active Directory (AD), improving domain controllers, adding the new Read-Only Domain Controller role, and adding greater overall directory services resiliency.
- An added capability for native hardware virtualization right out of the box, as well as the ability to eliminate the graphical user interface (GUI) altogether.

Anything else? All this and more are among the new and improved features of Microsoft's latest Windows Server operating system (OS). Get ready, because Windows Server 2008 is here, and within this guide, we'll talk about how best to use this new server OS to build a Windows infrastructure.


Figure 1.1: Server 2008's logon screen, like much of its GUI, more closely resembles Vista than Windows XP.

Five Years Later, a New Windows OS


With all of the new and improved capabilities discussed in the intro, we're still just scratching the surface of Windows Server 2008, the first new Microsoft server OS to be released in 5 years. Windows Server 2008 stands to be Microsoft's best OS yet. That statement might sound like hyperbole, but considering the previous feature list combined with what you'll find throughout this guide, there are a number of compelling features that are likely to drive you to implement Server 2008 quickly. If you haven't taken a look at Windows Server 2008, take the time to download and install the software in a test environment.

Server 2008 shares much of its codebase with Windows Vista, so you'll notice immediately upon installation the similarities between the Server 2008 and Vista GUIs. Though Server 2008 doesn't natively include the high-gloss, resource-intensive Aero interface, its components haven't moved around much between these two iterations. Because of this, you'll find that making the jump from Vista to Server 2008 is much easier than making the jump from Windows XP to Vista.

The truly compelling features of Server 2008, at least according to Microsoft's marketing, are its improved security and manageability, along with a solid foundation for business workloads.

Server 2008 is the first server OS whose entire code development process was completed under the banner of Microsoft's Trustworthy Computing Initiative. As part of this initiative, Server 2008's code was created with established secure-coding frameworks and security best practices throughout its architecture and development. The result is a more secure OS, a reduction in its attack surface, and a reduced exposure to the outside world.

Also greatly enhanced is this OS's capacity for management. You'll find new and better mechanisms for managing your servers, such as Server Manager and the Remote Server Administration Tools. You'll experience improved and better-centralized control over server and workstation configuration through an enhanced Group Policy engine. You'll even find yourself learning more about command-line management through scripting languages like PowerShell and new installation options like Server Core. A major focus of this OS upgrade is on making Windows easier to manage, which reduces its total cost of ownership (TCO) and speeds your ability to enact change and fix problems when necessary.

The Intent of this Guide


Within the next 10 chapters, we'll be talking at length about all these new capabilities. We'll link each of them to an understanding of how they fit best within your infrastructure. Some of these new features will change Windows administration as we know it. Others may see a lot of hype in the market but really aren't all that compelling. Some have yet to shine but hold potential for the future.

As the title of this guide suggests, the intent is to provide you with the knowledge you need to build a Windows Server 2008 infrastructure. That means my target as author is to assist you in learning the parts of Server 2008 that are important to accomplishing that goal. At the same time, I hope to illustrate the features from which you might want to steer clear or merely put on the back burner for a little while. As you can see in the introduction to this chapter, there's a lot in Server 2008 to digest. Figuring out which pieces will integrate best is critical to successfully deploying it within your computing environment.

Another of my intents with this guide is to revisit topics that have become established dogma within Windows administrator circles. Due to the feature sets and capabilities we've been using over the past 5 years, some topics, such as domain design, application delivery, and vulnerability management, have become established standard operating procedure for IT everywhere. With the introduction of Server 2008, some of those known facts change a little. Some get completely thrown out the window. Some remain mostly the same, but with a few changes to your operating procedure. My goal with this guide is to help you understand where you might want to update your thinking when building and administering your Windows infrastructure.

Throughout all of this, we'll be taking a look at some of the biggest new and improved elements that make up Server 2008. We'll peel back the layers of this new OS to expose some of the most exciting new capabilities you'll want to implement immediately within your infrastructure. Most importantly, we'll learn the steps necessary to get it up and running as well as manage it over the long term.

Ten Elements of Server 2008

This guide is broken into ten chapters, ordered so that we begin with the information most critical to getting you started. That first set of information will help you get Server 2008 into your environment while at the same time understanding its core management and capabilities. Once we understand its basic structure, we'll move on to the specific feature sets in the order you're likely to want to implement them.

After chapters on installation and core management, we'll continue with a chapter on AD and domain controllers. Then we move to the no-longer-unexciting File Services role with some much-needed conversation on the topic of storage management. We'll then duck into Server Core, Microsoft's svelte new GUI-less OS role. You'll learn why Server Core is compelling and where you might consider implementing it in your network. Our next stop is centralized manageability with Group Policy. The Group Policy engine, as well as individual policies and troubleshooting, becomes a lot easier with Server 2008. Following this are two chapters on Terminal Services, quite possibly Server 2008's greatest addition. There, we'll talk about installing and managing the new Terminal Services as well as advanced topics associated with application delivery and transport security. Following is a chapter on system security that will include a discussion of the Windows Firewall with Advanced Security, an oft-ignored functionality in Server 2003 and Windows XP. With Server 2008, the Windows Firewall gains much improved management capabilities that make it easier to use and maintain. We conclude the guide with a discussion of clustering with Windows Server Failover Clustering. Clusters are an excellent way to improve your overall service uptime, and the new clustering capabilities in Server 2008 make them easier than ever before to configure and administer. Let's take a focused look now at these upcoming chapters and what you can expect from each.

Chapter 1: Introduction & Installation of Windows Server 2008

In this, our first chapter, we'll introduce the guide as well as set up the structure of the upcoming chapters. Focused on the new installation features, we'll discuss each of the three potential mechanisms for getting Server 2008 installed onto physical hardware. These range from the traditional manual installation through scripted installations and conclude with rapid deployment options such as Windows Deployment Services and the Windows Automated Installation Kit. We'll also discuss the new Initial Configuration Tasks wizard that speeds the deployment of new servers.

Chapter 2: Server Manager

Once the OS is installed onto hardware and the initial configuration is complete, we've got to somehow manage that installation. In Server 2008, much of that management is consolidated into a new tool called Server Manager. This tool centralizes many of the traditionally segregated management consoles into a unified location. We'll discuss the separation of a server's responsibilities into Roles, Role Services, and Features, and learn about the uses of each. We'll then dive deep into some of the components you'll see in Server Manager, like Event Viewer, the Reliability & Performance Monitor, Task Scheduler, and Disk Management, among others.
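Chapter 2 explores this in depth, but as a quick taste of how Roles surface outside the GUI: the same machinery is scriptable through Server Manager's command-line counterpart, servermanagercmd.exe. The sketch below is illustrative rather than definitive; the DHCP identifier is an assumption on my part, so confirm exact names with the -query switch on your own build.

    rem List every Role, Role Service, and Feature along with its install state
    servermanagercmd.exe -query

    rem Install a role by its identifier (DHCP shown as an assumed example)
    servermanagercmd.exe -install DHCP

    rem Remove the same role when it is no longer needed
    servermanagercmd.exe -remove DHCP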

Chapter 3: Active Directory Design & Domain Controller Management

I've argued with many that the low-hanging fruit for early Server 2008 implementations will be an environment's domain controllers. As domain controllers are typically few in number and don't often include many third-party tools, they make an excellent starting point for building your Server 2008 infrastructure. In this chapter, we'll discuss some of the new thinking when it comes to AD domain design. We'll then dive into the domain controller creation process itself, talking about the enhancements to DCPROMO as well as the new Read-Only Domain Controller option intended for branch offices. We'll conclude with a discussion of AD backups and restores as well as the new fine-grained password policies that allow you to create multiple password policies within a single domain.

Chapter 4: File Servers & Storage Management

File servers are in some ways the least respected servers in the environment. Quietly humming away in the corners of your data center, these servers require little administration other than the occasional permission change or group update. But these servers are at the core of your Windows infrastructure. Storing and protecting users' files and folders is one of the most critical tasks of the IT department. In Server 2008, this hasn't gone unnoticed. What has been a traditionally organic process of serving up files and folders is now formalized into the File Services Role. This role comes equipped natively with advanced capabilities like Network File Services, Distributed File System, the File Server Resource Manager, and iSCSI support. This chapter is dedicated to educating you on all these new capabilities as well as how best to administer them.

Chapter 5: Server Core

If you listen to the press, Server Core is a hot topic that'll completely change the way we administer certain classes of Windows servers. More like a Linux box than a Windows box, Server Core's user interface is minimal to the point of non-existence. If you've thus far been a fan of command-line management, you'll immediately understand the benefits of this new server role. Conversely, if you've relegated yourself to administration through the graphical interface only, using Server Core will incur a steep learning curve. Server Core supports only a limited set of roles, with further limitations within those roles. This chapter will show you how to install and manage it as well as detail where it has the potential for best use as part of your Windows infrastructure.


Chapter 6: Managing & Customizing Group Policy

Group Policy is one of Microsoft's greatest centralized management toolsets. Using Group Policy alone, you can configure and otherwise lock down the configuration of the servers and clients on your network. But Group Policy has always included complexities as well. Documenting your configured Group Policy settings has always been challenging. Too many policies have traditionally led to SYSVOL bloat. Figuring out just the correct policy to apply, and later troubleshooting that policy when it doesn't apply, has historically been difficult. All of these and more are topics that get a lot easier in Server 2008. In this chapter, we'll discuss the changes to Group Policy that fix many of these long-held issues, as well as a few more, like multiple local Group Policy Objects (GPOs) and Group Policy Preferences, that extend policy application even further.

Chapter 7: Introduction to Terminal Services

Terminal Services is arguably Server 2008's most compelling and exciting feature set. At the very least, it's my personal favorite. In this, the first of our two chapters on Terminal Services, we'll talk about why that's the case. With a host of new capabilities, some that administrators have been waiting for since Server 2003 R2, this chapter will show you those that you'll want to implement immediately. We'll discuss the use of features previously available only in Citrix Presentation Server, like TS RemoteApps, TS Web Access, and TS Gateway. We'll also discuss the best ways to manage your Terminal Servers, including profiles, licensing, and resource management.

Chapter 8: Advanced Topics in Terminal Services

Continuing our two-chapter series on Terminal Services, Chapter 8 will dig deep into some of the advanced topics in Terminal Services. Specifically dealing with the complexities of application delivery, we'll talk about the best ways to get server-based applications deployed to your users. Whether that is through TS Web Access or via directly installed application links, you'll appreciate the new ways to make apps available to your users. We'll also discuss at length the process of securing the Remote Desktop Protocol (RDP) through TS Gateway, as well as implementing a high-end but low-cost load-balancing capability for your Terminal Servers using TS Session Broker.


Chapter 9: Securing Servers & the Domain

Security is one of the major focuses of this OS release. Thus, no guide on Server 2008 is complete without a comprehensive chapter on securing servers and your Windows domain. In Chapter 9, we'll revisit the concept of componentization and how it is realized in Server 2008. We'll discuss some of the new and improved security features enjoyed by this new OS, like the Security Configuration Wizard, User Account Control, Windows Service Hardening, and BitLocker Drive Encryption. We'll also talk at length about the Windows Firewall with Advanced Security, a new feature shared with Windows Vista. Adding Server 2008 to your infrastructure gives you critical management capabilities that make administering this no-added-cost security product easy.

Chapter 10: Windows Failover Clustering

Lastly, we'll conclude our guide with Chapter 10 on Windows failover clustering. Server 2008's release includes a number of big improvements to clustering with the Windows Server Failover Clustering feature. As many organizations have not yet used Windows-based clustering due to previous problems or the high cost of implementing previous versions, this chapter will include comprehensive instruction on building, administering, and best fitting clusters into your Server 2008 infrastructure.

Why Should You Upgrade?


I'll admit that I'm personally excited about the Server 2008 release. Though initial sales of Windows Vista have been slower than expected, there is a high likelihood that Server 2008's adoption will be a different event entirely. That is a big statement, but the reasons and decisions for implementing a server OS are different from those for a desktop OS upgrade. Let's take a look at some of the pros and cons associated with upgrading both types of OSs and gain an understanding of why I believe you'll move quickly to adopting Server 2008 in your infrastructure.


Windows Server 2008

Reasons to Upgrade:
- Smaller number of instances
- Instance purposes are well-known
- Server OS considered more reliable
- Fewer applications = fewer conflicts
- Enhanced security
- Vendor-tested driver sets
- Known/fewer hardware configurations

Reasons Not to Upgrade:
- Time and cost
- Concern for application conflicts
- Potential for outage
- Interconnectedness between services

Windows Vista

Reasons to Upgrade:
- Better security, especially for laptops
- Improved search capability
- Improved (though different) GUI

Reasons Not to Upgrade:
- Time and cost
- More instances across the network
- More applications = more conflicts
- Hardware age and upgrade requirements
- Driver incompatibility
- Fear (from administrators)
- Fear (from users)

Table 1.1: A list of reasons to consider before upgrading servers and workstations.

Of those listed, a few considerations deserve particular highlighting as key reasons Server 2008 adoption is likely to be fast. First, organizations generally deploy a much smaller number of servers than workstations, so there are fewer instances overall that require upgrading. Also notable is that server OSs can be upgraded in a stepwise fashion with much less impact on the environment than a wholesale upgrade of desktops. When desktops are upgraded, certain benefits, such as the potential application of Group Policies, are not realized until the entire set is upgraded. Servers don't suffer from this limitation.

Second, the hardware in servers is typically better known than that in desktops. Servers' resource use is also better known and often undersubscribed: servers typically operate in normal conditions with known workloads, and industry-wide, these workloads average about 7% utilization, which means there are plenty of available resources to support the enhanced needs of an OS upgrade. As for the hardware itself, server-class hardware is usually more rigorously tested with driver sets than desktop and laptop hardware, so its driver sets tend to be more mature.

Specific to the features that will drive a quick adoption, we can group them into three rough categories:

- Componentization means a more stable OS instance.
- Enhanced security means a better posture for preventing failures in your environment.
- Enhanced manageability means that your job gets easier.

Let's look at each of these in turn.


Componentization

We'll talk at length about Server 2008's componentization in Chapter 2 on Server Manager and again in Chapter 9 on security. But for now, know that Server 2008's codebase has been reconfigured to segregate each of the traditional activities supported by a server. Each activity has been isolated into a Role, a Role Service that supports a Role, or a separate Feature. Further separation has been done by restricting each component's executables and supporting files. When a component is not installed onto the server, that component's files and registry keys are not even present on the server.

Fewer files and registry keys mean less chance for conflict on that server. It also means there is no chance that a security exploit could make use of this code. Reducing the total file count and registry size also means less responsibility for patching that server: if your server doesn't include a component that has a known vulnerability, you don't need to patch that server.

Security

The tenet of componentization dovetails perfectly with the improved security posture you'll see in Server 2008. Server 2008's code has been reworked to reduce the instances of user-level code operating at the kernel level, which further isolates kernel code from the potential for external attack.

Back in Server 2003, Microsoft released a tool called the Security Configuration Wizard (SCW). This tool could lock down a Windows server to just the components necessary to perform its daily activities. For example, if the server is a DNS server, the SCW would lock down all the unnecessary ports while keeping open those necessary for DNS functionality. The SCW got little attention from most administrators, in my opinion due to its complexity. In Server 2008, much of the early work done with the SCW has been rolled up into the individual Roles, Role Services, and Features themselves. When you install a Role Service, only the elements necessary for that Role Service to function are opened. This goes further down the "default off" path started with Windows XP SP2 and Server 2003.

In addition, the behavior of services has been updated to restrict their access. Services in Server 2008 can be assigned a distinct SID, and that SID can be assigned individual access permissions across the machine, and indeed across multiple machines. Service privileges have been limited to only the necessary access, and fewer services run with full SYSTEM privileges, reducing the ability of external attacks to use running services as a launch pad for doing damage.

Other security features baked into the system include Address Space Layout Randomization, which prevents exploits from attacking system code based on a known location in RAM, and improved Kernel Patch Protection, which helps prevent "ghostware" such as rootkits from updating kernel code to their own nefarious ends. All of these combine to give Server 2008 the smallest out-of-the-box attack surface of any Microsoft server OS to date.
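The per-service SIDs just described are visible with nothing more than the built-in sc.exe. A minimal sketch, using the DNS Client service (dnscache) purely as an example name:

    rem Display the SID that Windows derives from a service's name
    sc showsid dnscache

    rem Query whether the service runs with a restricted or unrestricted service SID
    sc qsidtype dnscache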


Manageability

Many of the security features of Server 2008 operate under the covers, so it is within the manageability features that you, the administrator, will visibly see the most changes. Server Manager is one example, aggregating dozens of previously segregated management consoles into a single interface. Another is the Remote Server Administration Tools, which bring much of the functionality of Server Manager to your management workstation. Group Policy's move to the new ADMX file format for administrative templates, along with its new Group Policy processing engine, means a greater likelihood of successful policy application.

Continuing along this theme, a vastly improved Event Viewer means that problems are more easily recognized; using it, data from multiple systems can be analyzed on the same screen. AD itself is reconfigured as a restartable service, which improves your ability to fix it when it has problems (a quick sketch of what this enables appears below). Even IIS gets a new management console and rights schema, which means your Web site administrators no longer require Administrator rights to the entire system to do their job.

Why should you upgrade? Simply because the experience of managing your Windows infrastructure becomes more stable, more secure, and ultimately more manageable once you do.
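As promised, here is a quick sketch of the restartable AD DS service at work. On a Server 2008 domain controller, directory maintenance that previously required a reboot into Directory Services Restore Mode can now be performed by stopping the service in place; treat this as an illustration, not a complete maintenance procedure.

    rem Stop Active Directory Domain Services in place; dependent services stop with it
    net stop ntds /y

    rem Perform offline maintenance here (for example, an ntdsutil database compaction)

    rem Bring the directory back online without a reboot
    net start ntds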

Introducing Windows Server 2008


Let's move away a bit from my grandiose predictions of your adoption of Server 2008 to a more down-to-earth discussion of its specific capabilities. Server 2008 comes in eight flavors, though you're likely to use only two or three of these editions in most places within your infrastructure. In this section, we'll talk about the various editions you'll need to know about as well as their hardware and licensing requirements.

Windows Server Editions

As stated earlier, Windows Server 2008 comes in eight editions. The differences among editions can be relatively slight, though the cost differences between some editions can be large, so be aware of the kind of server you need before you make a purchase. If you don't require the additional functionality that Enterprise Edition offers, you might not want to pay the multi-thousand-dollar difference between it and Standard Edition. If you don't plan to use Hyper-V, the new Microsoft virtualization engine, you might not want to purchase it.
That being said, the cost difference between Server 2008 with and without Hyper-V at the time of this writing is only $28. So for an extra thirty-odd bucks, it might be worth the cost.


In Table 1.2, I've included the list of editions currently announced as of this writing, along with a short description of each edition and how it compares with its neighbors.
Standard Edition: Standard Edition is the core of each of the other editions. It supports nearly all functionality of Server 2008 with the exception of the capabilities listed under Enterprise Edition. Be aware that Server Core is not considered an edition but instead an installation option; purchasing any version of Server 2008 nets you the ability to install it as a Server Core instance.

Enterprise Edition: Enterprise Edition adds to Standard Edition the ability to address more processors and RAM, a benefit for systems like database servers that require quantities of memory or processors beyond those supported in Standard Edition. Enterprise Edition also adds Windows Server Failover Clustering, the ability to hot-add memory, and additional AD features such as Active Directory Federation Services and the advanced pieces of Active Directory Certificate Services. Enterprise Edition lifts restrictions on use for standalone DFS roots as well as RRAS and Terminal Services Gateway connections. In a very compelling move, Enterprise Edition bestows special virtual server license benefits: for every physical instance of Enterprise Edition licensed in an infrastructure, four virtual instances can be hosted on the same machine.

Datacenter Edition: Datacenter Edition is intended to be the workhorse edition for workloads with the greatest needs in the environment. It provides no extra Roles over and above Enterprise Edition, but it does allow a much greater amount of resources to be addressed by the system, scaling to 32 processors for x86 systems and 64 processors for x64 systems. Datacenter Edition also adds the ability to hot-replace memory as well as hot-add and replace processors. One of Datacenter's most compelling benefits also has to do with virtual server licenses: for every physical instance of Datacenter Edition licensed in an infrastructure, an unlimited number of virtual instances can be hosted on the same machine.

Windows Server for Itanium-Based Systems: This special edition of Server 2008 is designed for installation only to systems with Itanium processor architectures. It is similar in resource capabilities to Datacenter Edition but is limited to very specific Roles and Features, supporting only the Web Server (IIS) and Application Server Roles.

Web Server Edition: Also limited in functionality, Web Server Edition can install only the Web Server (IIS) Role and is intended for use exclusively as a Web server. Consequently, it has a much lower price point than the other editions. Its resource limits are the same as Standard Edition's.

11

Chapter 1

Windows Server Edition

Description All three of the most common editions Standard, Enterprise, and Datacenter can all be purchased without Microsofts native hardware virtualization support, called Hyper-V. The regular editions of these three support the use of Microsofts hardware virtualization. These slightly less expensive editions do not include that support.

Table 1.2: Windows Server editions and explanations of the benefits of each.

Hardware Requirements and Limitations

As times change, so do hardware requirements. With every new edition of an OS, it seems as if more resources are necessary to power that OS. Server 2008 is no exception. However, it can be argued that the hardware requirements for Server 2008 have less to do with a greater need for resources by the OS and more to do with a right-sizing of what constitutes the true minimum requirement. You be the judge. For all editions, the following minimum and recommended hardware resources are suggested:

Processor: 1GHz minimum for x86 processors; 1.4GHz minimum for x64 processors. 2GHz or faster recommended for all architectures.

Memory: 512MB RAM minimum. 2.0GB RAM or greater recommended.

Available Disk Space: 10GB minimum. 40GB or greater recommended. Less space is required for Server Core installations.
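If you want a quick read on whether an existing machine clears these bars before you repurpose it, the OS's stock command-line tools will tell you. A minimal sketch, assuming an English-language build (the findstr strings below match systeminfo's English output and would need adjusting for localized systems):

systeminfo | findstr /C:"Processor(s)" /C:"Total Physical Memory"
dir C:\ | findstr /C:"bytes free"

The first command reports processor count and installed RAM; the second shows free space on the volume you intend to install to.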

Table 1.3 shows the upper limits on those same resources for each edition.
Edition              Max x86 Processors   Max x64/IA64 Processors   Max x86 RAM   Max x64/IA64 RAM
Standard Edition     4                    4                         4GB           32GB
Enterprise Edition   8                    8                         64GB          2TB
Datacenter Edition   32                   64                        64GB          2TB
Itanium Edition      N/A                  64                        N/A           2TB
Web Edition          4                    4                         4GB           32GB

Table 1.3: Windows Server Editions and physical resource maximums.



Supported Upgrade Paths

Though the safest way to upgrade any network service is usually through a complete reinstallation of the OS, it is often beneficial to undergo an in-place upgrade for some services. One example of a service that is typically upgraded at least once involves your domain controllers. This is done so that AD itself can be upgraded as part of the process. There are some limitations as to which OSs and editions can be upgraded. Those limitations are listed in Table 1.4.
From Windows Server 2003 R2 Standard Edition (Gold, SP1, or SP2) or Windows Server 2008 Standard Beta 3: you can upgrade to Windows Server 2008 Standard Edition or Windows Server 2008 Enterprise Edition.

From Windows Server 2003 R2 Enterprise Edition (Gold, SP1, or SP2) or Windows Server 2008 Enterprise Beta 3: you can upgrade to Windows Server 2008 Enterprise Edition.

From Windows Server 2003 R2 Datacenter Edition (Gold, SP1, or SP2) or Windows Server 2008 Datacenter Beta 3: you can upgrade to Windows Server 2008 Datacenter Edition.

Table 1.4: Windows Server versions and editions and upgrade capabilities.

Be aware that upgrading to a Server Core installation is not supported from any OS or edition. Server Core installs a wholly different set of files to build its OS. Thus, even though it is considered an installation option, it behaves much like a completely different OS.

Licensing and Activation

If you're used to using the same license keys for installing multiple copies of Windows Server, you'll sorely miss this capability in Server 2008. The licensing mechanisms, which Microsoft titles Volume Activation 2.0, have grown less user-friendly with the upgrade to Server 2008. License keys entered for a particular server must be activated, either over the Internet or through another mechanism such as calling the Microsoft Activation Center.
Due to this change in how Windows is licensed, be cautious when you're creating demonstration, test, and evaluation machines. If you activate the license for a short-lived evaluation machine, that activation goes against your total count of activations. In many cases, if you're planning to use that evaluation machine for only a short time, such as when you're testing out the things you learn in this guide, consider not activating it. You'll need to rebuild the server every so often once the evaluation period expires. But as you'll learn later in this chapter, the process to build a new Windows server is very quick and easy. Trust me on this. Before really internalizing this new way of operating with evaluation systems, I found myself contacting the Microsoft Activation Center a few times to get an activation struck from my permanent record. Let's just say that that process can be quite painful.
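If you do run evaluation machines unactivated, you can watch the remaining grace period from the command line. A sketch using the built-in slmgr.vbs script; both switches exist in Vista and Server 2008, though the rearm works only a limited number of times and wants a reboot afterward:

cscript C:\Windows\System32\slmgr.vbs -dli
cscript C:\Windows\System32\slmgr.vbs -rearm

The -dli switch displays the current license state, including the time remaining; -rearm resets the grace timer.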


Two options are available for the infrastructure of license assignment. Depending on the needs of your organization and the type of licensing arrangement you have with Microsoft, you will be assigned one of these two possible options:

Multiple Activation Key (MAK): With MAKs, a single key can be used for multiple systems. The use of each key must be activated either through the Internet or another form of contact with the Microsoft Activation Center. A specific number of activations is enabled for any particular MAK; activations over the limit assigned to the key will be rejected at the point of activation. With MAKs, it is also possible to activate multiple computers at the same time using the Volume Activation Management Tool. This tool allows a group of computers to be activated at the same time through a single connection or call to the Microsoft Activation Center.

Key Management Service (KMS): A KMS key is used along with a central activation service that is hosted within your infrastructure. KMS activations are typically used in environments with large numbers of needed activations and the need to locally manage the count of activations. In this configuration, the key is used only to enable the KMS service. The KMS service then manages the activation of servers within the infrastructure. Computers that make use of KMS must be reactivated every 6 months to maintain licensing requirements. Reactivation is done automatically by the KMS server for all activated systems on the network. In order to use a KMS service, a minimum of 25 workstations or 5 servers must be on the network.

Because of this, any server or workstation that is activated through a KMS must be in network contact with the KMS server at least once during every 6-month period. This can be a problem for servers that are activated using KMS and then relocated to another network unreachable by the KMS service.
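For reference, both activation styles can also be driven by hand with the same slmgr.vbs script mentioned earlier. A sketch in which the product key and KMS host name are invented placeholders (1688 is the default KMS port):

cscript slmgr.vbs -ipk AAAAA-BBBBB-CCCCC-DDDDD-EEEEE
cscript slmgr.vbs -skms kms01.company.local:1688
cscript slmgr.vbs -ato

The -ipk switch installs a key; -skms (KMS clients only) points the machine at a specific KMS host rather than letting it discover one; -ato then performs the activation.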

So there you have it. You now have everything you need to start installing Server 2008 onto your available hardware. You will still need the media, of course.

Installing Server 2008


Now that we've come to understand the requirements that Server 2008 is going to place onto our hardware, let's take some time to analyze the different possible ways that we can get it installed. Traditionally, the most often used mechanism for completing an installation has been the manual drop-in-the-disk-and-go method. This installation mechanism has long been the most time-consuming and painful way to install an OS, but it is usually the most successful. It's also arguably the easiest.

Here's why. Microsoft's installation routine in previous versions had a tendency to prompt for input throughout its process. In the beginning, the installation would request information about the drive to install to and the type of formatting for the partition. Later on, it would prompt for questions about time zones and keyboard layouts. Even further down the line would be questions about networking configuration. The end result is that for a manual installation, you're effectively required to babysit it to completion. I myself feel like I've lost a few years of my life waiting for server installation timers to count to zero. I'm sure you have as well.


Manual Installations

I tell this story because Server 2008 changes this manual installation process in a highly exciting way. Simply put, with the manual installation routine in Server 2008, all the questions required to complete the installation are asked either at the beginning of the process or once the process is fully complete. Take a look at Figure 1.2. You'll see the excessively simple Install now button that arrives once you boot the server from the installation media and after you answer the first three questions about language; time and currency format; and input method.

Figure 1.2: The excessively simple Install now button for Server 2008 manual installations.

Also note the line in the lower-left titled Repair your computer. We'll discuss its use in Chapter 2 when we talk about server backups and restores. But keep this location in mind for later, as it is from here that bare-metal restores and other repair operations can be launched.


Once you click to begin the installation, you'll be asked to set only a very few initial configurations:

Your product key

The edition of Server 2008

The end-user licensing agreement acceptance

Whether you want a clean installation or an upgrade installation

The disk you want the installation to target

Once you've completed those few steps, the installation begins. Now you can walk away from the machine.

Initial Configuration Tasks

Depending on whether you're installing the server with or without the Server Core option, the installation can take anywhere from a few minutes to considerably longer. Control is not returned until the installation is complete, the server has completed its post-installation reboot, and the system is ready for the initial login.
Remember that a server can be installed either with or without the Server Core option. Once installed in one configuration, the only way to switch to the other is through a complete reinstall. We'll talk in more detail about Server Core in Chapter 5.

Once this process completes, you'll first be prompted to set the password for the Administrator account. A message will appear on the screen that explains "The user's password must be changed before logging on the first time." What this effectively means is that the local setting User must change this password at next logon is set for the Administrator account as a component of the installation. Once this password is set, the system will automatically log you in as Administrator.



Figure 1.3: The Initial Configuration Tasks wizard appears immediately after the initial post-installation logon.

Once you've logged in, you'll be greeted with the Server 2008 desktop. Shortly after logon, a new wizard will appear called Initial Configuration Tasks that looks similar to the image in Figure 1.3. The first thing you'll notice is that, in order to eliminate all the mid-install questions, Microsoft has taken some license with the answers to those former questions. As an example, in order to complete the installation, the server requires an initial computer name. But rather than asking you for that information during the installation routine, with Server 2008, Microsoft enters placeholder information for you. See in Figure 1.3 that the initial computer name for this server is set to WIN-XN2FJBV8XT1.

The Initial Configuration Tasks wizard is broken into three sections, each of which includes necessary initial configurations that were previously done elsewhere using different management consoles. The wizard itself isn't so much a new console for making these changes. It is little more than a series of pointers to the standard wizards for each of the other configurations. Thus, for example, when you click the link to Configure networking, Initial Configuration Tasks brings forward the Network Connections control panel.


Most settings within the three sections are self-explanatory. You'll need to set the proper name for the computer and configure its networking, time, and time zone. For the second section, you'll need to enable automatic updates and download and install any available updates to patch the server. The third section discusses the installation of Roles and Features as well as enabling Remote Desktop or configuring the Windows Firewall. For the most part, leave the Roles and Features configuration alone; installing those is better done through Server Manager, which we'll talk about in the next chapter. Once you've completed the necessary sections in Initial Configuration Tasks, click Close.
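Incidentally, if you'd rather script away that placeholder computer name than click through the wizard, the netdom tool can rename the machine. A sketch, assuming netdom is present on your build (it ships with the AD DS toolset in Server 2008) and with a new name invented for the example:

netdom renamecomputer %COMPUTERNAME% /newname:SRV-W2008-01 /reboot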
One element available in Server 2003, though not with the same level of exposure, is the configuration of feedback to Microsoft. If you click the link for Enable automatic updating and feedback and then select Manually configure settings, you'll be greeted with a screen that allows you to configure update settings as well as feedback settings. In addition to configuring necessary software updates, this screen provides a single location where Windows Error Reporting and the Windows Customer Experience Improvement Program settings are configured. Be aware of how you configure these settings. Unlike much of the rest of the Server 2008 OS, these settings can enable the submission of information about system errors and your usage patterns back to Microsoft. If your corporate policy mandates that this information not be released, enabling participation in these programs can violate that policy. The programs are designed with the intent of collecting information about where Windows is having problems and how it is being used. According to Microsoft, personal information is not collected.

Windows Preinstallation Environment

Throughout this installation process, it should be evident that the blue installation screens of previous OSs are no longer present. This is due to the integration of the Windows Preinstallation Environment (WinPE) directly into the installation routine for Windows. WinPE is an installation tool that has been available for certain enterprise-level customers since the initial release of Windows XP, but its availability for all installations of Windows arrived only with the release of Windows Vista. The text-only blue installation screens of previous versions have been removed for a number of reasons (and not only for their lack of aesthetics), not the least of which are the requirements for certain drivers to include real-mode DOS-compatible drivers to support bootstrapping the installation routine. Many manufacturers have eliminated support for these types of drivers. Because of this, an installation bootstrapper that could provide the same level of OS support as Windows itself was necessary. WinPE has its roots in the Windows OS, and thus can support rich installation options like a full TCP/IP stack, mapped drives, and internal customization. It also works well with the automated deployment mechanisms we'll discuss in the next section.



Automated Installations

With Vista and Server 2008, a number of automated installation capabilities are possible for speeding the process of getting OSs applied to hardware. Two methods for doing an automated installation are commonly used, scripted and image-based installations, and both are supported through Microsoft toolsets:

Scripted installations. Scripted installations make use of the regular installation media used in manual installations. Added to that media is an unattended installation file that includes the answers to questions asked as part of the installation or through the Initial Configuration Tasks wizard. The benefit of scripted installations is that they tend to be device-independent. Because the standard media-based installation of Windows will locate and install most of the correct drivers required by the server, only the unattended installation script is necessary to answer questions about specific configuration parameters. Thus, the same unattended installation script that is used to install to an HP server can often be used to install to a Dell server or an IBM server.

Image-based installations. Scripted installations are great for heterogeneous environments, but they suffer from two issues: First, the scripts are challenging to develop, as we'll learn in a moment. Second, the time to complete a scripted installation can be lengthy. Depending on the level of customization needed, the time to complete an installation can consume as much or more time than a manual installation. Image-based installations solve this through the creation of a standard image, the copying of which to the target system automatically generates an OS instance. Because this standard image is an already-configured server, complete with patches, applications, and customizations, the single-step process of installing the image results in a fully realized server instance. With scripted installations, applications and patches over and above the initial OS installation must be applied after the initial installation is complete. The downside of image-based installations is that driver sets between hardware types are often different. Because images include the driver set of the source machine as part of the image, it is challenging and sometimes impossible to use an image from one hardware configuration on another.

Let's start by taking a look at the process to create an unattended installation file for a scripted installation.


Scripted Installations with the Windows System Image Manager

To create an unattended installation file, you'll first need to download and install the Windows Automated Installation Kit (WAIK) from the Microsoft Web site. This large download of nearly a gigabyte includes the necessary components to interrogate OS media, enumerate a list of configuration options, and ultimately create an unattended installation file. Part of the WAIK is the Windows System Image Manager (WSIM) tool. WSIM replaces what used to be called Setup Manager in previous versions and comprises the tools used to create your unattended installation files.
Be aware that the WAIK downloads with an .IMG extension. To install it, you will either need to burn it to a DVD or mount it using an ISO/IMG mounting tool. If your mounting tool doesn't support the .IMG extension, you can safely rename the extension to .ISO.

Once created, you will use that file along with the Server 2008 installation media to boot a candidate system. The file, which must be named autounattend.xml, is typically copied to FAT32-formatted removable media like a floppy disk or USB hard drive and inserted into the system along with the installation media as the system is booted. The system will read the installation media along with the instructions provided as part of the autounattend.xml file to complete the installation.

Figure 1.4: The Windows System Image Manager showing a configured setting for DNS suffix search order.


To create your autounattend.xml file, complete the following steps:


1. Download and install the WAIK. You'll want to install this onto a workstation and not the server you intend to build using the unattended installation file. The workstation you use can be your desktop.

2. Locate and copy install.wim. WIM, or Windows Imaging Format, is a file-based disk image format. These files are used for booting to WinPE and ultimately installing OSs. The install.wim file needed by WSIM is found on the Server 2008 media in the \sources folder and contains the metadata necessary to do an installation of Server 2008. Copy this file to your desktop.

3. Launch WSIM. Upon launching WSIM, you'll see a relatively busy screen similar to what is shown in Figure 1.4, though initially your screen will start with much less information. Each of the windows within WSIM links to the others. As an example, a distribution share (in the upper-left) can have multiple Windows Images (in the lower-left). You'll interrogate the possible settings of those Windows Images and configure the ones of interest using the middle and right panes. The answer file in tree format will show the configured settings. The right-most pane will include the options both available and set for a desired configuration.

4. Create a new distribution share. The first step within WSIM is to create a share that will contain your working folders as you create your unattended installation file. Do this by right-clicking Select a Distribution Share and clicking Create Distribution Share. Select an appropriate folder and click Open.

5. Select a Windows image or catalog file. Next, you'll need to locate and interrogate the install.wim file you copied from the Server 2008 media. Do this by right-clicking Select a Windows image or catalog file and choosing Select Windows image. Locate your install.wim file and click Open. WSIM will search the file for available installations and provide you with a list of those available. In our case, we'll install Windows Longhorn SERVERENTERPRISE; yours may be slightly different. An error will appear noting that no catalog file exists and asking if you want to create a new one. Select Yes to continue. The process to create the catalog file can take an extended period of time; WSIM at this point is discovering and cataloging all of the potential configuration options available.

6. Create a new answer file. In the center pane, right-click Create or open an answer file and select New answer file to create a new answer file to be stored in your distribution share. The terms answer file and unattended installation file are used interchangeably here. You'll notice that the answer file has seven components. Each of these components, called a pass, is one phase of the installation process. Depending on the configuration you intend to include, that configuration may be set in one of the seven possible passes.

7. Configure the settings for your unattended installation file. The lower-left pane will now include the list of options available for configuration in your answer file. Here's where the hard part begins. Discovering which of the options are necessary and interesting is a time-consuming process. You'll want to dig through the components to find configurations of interest. For example, if you want to set the DNS suffix search order, you can right-click x86_Microsoft-Windows-DNS-Client_6.0.6001.16659_neutral and select Add setting to Pass 4 specialize. Then highlight the new setting under 4 specialize and, in the upper-right pane, look for DNSDomain. There, you'll want to enter the correct value for the suffix search order. Continue setting your desired configurations as necessary.

8. Validate the answer file. Due to linkages between settings, some settings require others to also be set. Once you have completed setting those of interest to you, you'll want to validate the answer file to ensure that it includes the necessary components. Do this by clicking Tools and then Validate Answer File. Saving the answer file also completes a validation prior to the save.

9. Save the answer file. From the File menu, select Save Answer File. Save the file as autounattend.xml to use it for a server installation.

10. Copy the file to removable media and boot from the Server 2008 media. Once you've completed your autounattend.xml file, you can copy it to the root of your removable media and use it to boot a candidate system along with the Server 2008 DVD. If you've done everything properly, the installation should complete and a fully-configured server will be the result.
As has been stated, the sheer number of options available in WSIM complicates the process of creating an unattended installation script. Adding to this complexity are certain elements that are required for the script to function at all. Expect a trial-and-error process to find the correct set of configurations necessary for your environment. Specific information is available on the Internet about configurations that are required and others that are good suggestions to include.
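To give you a feel for what WSIM produces, here is a heavily trimmed sketch of an autounattend.xml that sets nothing but a computer name. The name is an invented example, and a real WSIM-generated file carries exact component version numbers and typically many more settings spread across several passes:

<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <!-- Pass 4 "specialize" runs after the image is on disk -->
  <settings pass="specialize">
    <component name="Microsoft-Windows-Shell-Setup"
               processorArchitecture="x86"
               publicKeyToken="31bf3856ad364e35"
               language="neutral"
               versionScope="nonSxS">
      <!-- Replaces the WIN-XXXXXXXXXXX placeholder name -->
      <ComputerName>SRV-W2008-01</ComputerName>
    </component>
  </settings>
</unattend>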


Image-Based Installations with ImageX

The process above discusses the creation of a scripted install. But in some cases what you really want is the replication of a fully-realized server image. Using that image, you can drop copies of your reference computer (often called the golden image or standard image) onto other machines in a single step. Also part of the WAIK is ImageX, a command-line tool used along with WinPE-bootable media to create and later deploy those images. ImageX's focus on the command line makes it a somewhat complex tool to use as well. Let's take a look at the steps you would use in creating and later deploying an image using ImageX and WinPE.

Create your reference computer. The first step is to create a computer whose configuration will become the image you will later deploy. For your reference computer, install the OS and any necessary patches. Also install and customize any applications that you want residing on your image. When you're finished, you can use SysPrep to reseal the installation and eliminate any personalization information that links your image to a particular computer. Do this by running the command C:\Windows\System32\sysprep\sysprep.exe /generalize /shutdown.

Create your bootable WinPE CD. Your next step will be to create a bootable CD with the WinPE OS. This CD will run an OS separate from the one installed to your reference computer to allow the image to be captured. The process to create this CD involves multiple steps:

o With the WAIK installed onto your desktop, from the command prompt change to the folder C:\Program Files\Windows AIK\Tools\PETools. Then enter the command copype.cmd x86 C:\{target folder}. This will create a bootable WinPE instance for an x86-architecture computer called winpe.wim in the target folder.

o This initial process does not include the executable for ImageX itself, so you'll need to mount the image you created in the previous step and add it. You can mount the boot image you just created with the command imagex.exe /mountrw C:\{target folder}\ISO\sources\boot.wim 1 C:\{target folder}\mount. Once you've mounted the image, navigate to C:\{target folder}\mount. From there, drill deeper into the \Windows\System32 folder and copy the imagex.exe file from your desktop computer to this location.

o If your target computers will require additional device drivers not on the standard Server 2008 media, you'll need to pre-stage them into the image's driver store. First, unpack the driver from any installation executables and find its .INF file. Then add it into the image with the command peimg.exe /inf:{path to driver's INF file} C:\{target folder}\mount\windows.

o Once complete with any configurations, you'll need to unmount the image and commit your changes with the command imagex.exe /unmount C:\{target folder}\mount /commit.

o Create a bootable ISO file with the folders you've created in C:\{target folder}. You can use any standard CD burning tool to do this. If you don't have one available, the WAIK includes a CD burning tool as well, called oscdimg.exe. To use this tool to create your ISO, use the command oscdimg.exe -bc:\{target folder}\etfsboot.com -n -o c:\{target folder}\ISO c:\{target folder}\winpe.iso.

o The result of the last command will be an ISO file named winpe.iso in C:\{target folder} that includes a bootable version of WinPE. Burn that ISO image to a CD.


Capture an image of your reference computer. To do this, boot the reference computer with the CD you just created. Once the CD has booted, you'll be greeted with the WinPE OS, which will look similar to Figure 1.5. You'll see there that WinPE is effectively a command-line-based OS. To capture an image of your reference computer with maximum compression, use the command imagex.exe /capture C: C:\w2008.wim "Windows 2008 Image" /verify /compress maximum. This will create a WIM image named w2008.wim in the root of your reference computer's file system. Note that because WinPE enjoys a full TCP/IP stack, it is possible to map a drive to store this image elsewhere if you wish; a sketch of that follows Figure 1.5.

Deploy your image. To deploy your captured image to another computer, boot that computer with the bootable CD created earlier and map a drive to the location where your WIM image is stored. Then, to deploy the image, use the command imagex.exe /apply {path to WIM image}\w2008.wim 1 C:.

Figure 1.5: WinPE's command-line-centric OS can be used for image-based installations.
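To make the network-capture option concrete, the following sketch would run inside WinPE on the reference computer; the server, share, and account names are placeholders of my own invention:

net use Z: \\deployserver\images /user:COMPANY\deploy
imagex.exe /capture C: Z:\w2008.wim "Windows 2008 Image" /verify /compress maximum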


Due to the CLI-centric nature of all those commands, the steps above probably look more complicated than they are in actuality. The most complex part of the process is creating the WinPE bootable CD. Once that is complete, the only commands necessary are those to gather and later deploy your images.
Remember too that the process used above to pre-stage drivers into the bootable CD's image can also be used for the image created from your reference computer. If you have target computers with other driver-set needs, you can pre-stage those drivers into the image you plan to deploy as well.

Windows Deployment Services

If you'd like to further automate the deployment of images once you've captured them, Microsoft provides a GUI-based tool that makes much of this relatively easy. This tool, Windows Deployment Services (WDS), arrives as an installable Role on your server. By installing the WDS Role to your server, you can make your images available across the network for installation anywhere. WDS supports Preboot Execution Environment (PXE) instructions, which can be used by target servers that support PXE booting to eliminate the need for bootable CDs.
Be aware that network equipment is often configured not to allow multicast traffic to pass through network subnet boundaries. As WDS multicast installations require this traffic to operate, you may need to ensure that your clients and server are both within the same subnet or make an exclusion within your network security policy for this kind of traffic.

After you've installed the WDS Role to your Server 2008 instance, you'll need to prepare that server for use. Do this by right-clicking the server name of interest in the WDS node of Server Manager and choosing Configure Server. The configuration wizard will ask for a path to store your system images as well as an initial configuration for PXE boot responses. Now you can add your images. When prompted, either supply a path to the images you created using the ImageX steps above or, if you'd rather install standard images from the Microsoft-supplied media, point the wizard to the \sources folder on your installation media. Once the images have been added, the screen will resemble Figure 1.6.



Figure 1.6: Windows Deployment Services showing the standard installation images from the Server 2008 DVD.
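The same preparation can be scripted with the wdsutil command-line tool that accompanies the Role. A sketch, under the assumption that D:\RemoteInstall and D:\install.wim are paths of your own choosing:

wdsutil /initialize-server /remInst:D:\RemoteInstall
wdsutil /add-image /imageFile:D:\install.wim /imageType:install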

The process to create a new over-the-network installation can be as easy as right-clicking Multicast Transmissions and selecting Create Multicast Transmission. You'll be prompted for a name for the transmission and to select the image you want to deploy as well as the type of multicast transmission you want to use. Two options are available for start criteria:

Auto-Cast. With Auto-Cast transmissions, the multicasting process starts immediately after the first client checks in. Any clients that arrive after the transmission has started will receive their missing pieces at the end of the transmission.

Scheduled-Cast. With Scheduled-Cast transmissions, you can configure either a number of clients or an amount of time at which you want the transmission to begin. This puts the transmission into a paused state until the correct number of clients checks in or the configured time arrives.
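wdsutil can create the transmission as well if you prefer to script it. A sketch with an invented friendly name; the image name must match one shown in the WDS console, and depending on your layout an /imageGroup parameter may also be required:

wdsutil /new-multicasttransmission /friendlyName:"Server 2008 Rollout" /image:"Windows Longhorn SERVERENTERPRISE" /imageType:install /transmissionType:autocast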

In order to use PXE booting to connect to the WDS server, the server's PXE response settings will need to be configured to listen for clients. This can be configured within the WDS properties pages. Clients that attempt a PXE boot should locate the WDS server, and after completing the boot process, you should see a screen similar to Figure 1.7. You'll see that this screen shows that the installation is being sourced from Windows Deployment Services.



Figure 1.7: Once the target server has completed its PXE boot and located the WDS server, the screen above will show that the impending installation is sourced from the WDS server.

There are a number of additional options available within WDS for the configuration of PXE and the deployment of images, not the least of which is the capability to integrate a scripted installation, as we discussed earlier in this chapter, with an image-based installation. By right-clicking any image listed within the WDS interface and choosing Properties, it is possible to associate an unattended installation file with an image if desired. This allows for the combined use of both types of automated installations.



Summary
So now you know some of the best ways, both manual and automated, to get Microsoft's newest OS onto available hardware. Depending on the needs of your organization, you may choose the easy-to-start manual method or you may opt for one of the automated methods. Any of the automated methods involves a greater startup cost in terms of the time needed to find the correct configurations you want to include as part of the automation. But once that initial work is done, it becomes easy and fast to replicate new servers across your network.

In our next chapter, we'll start looking at the management of Server 2008. We'll take a hard look at Server Manager, where all the Roles, Role Services, and Features live. It is also where much of the rest of this guide will spend its time as we go through the new features and capabilities of Server 2008.

The content of this chapter was written based on pre-release information, specifically the RC0 version of Microsoft Windows Server 2008.



Chapter 2: Server Manager


Getting an OS installed onto a piece of hardware is only the first step in building your Windows Server 2008 infrastructure. With Server 2008's improvements to the installation experience, that first step is shorter and less painful than ever before. Though this guide is all about building your infrastructure, the actual building part is really the easiest part. Faithfully managing and maintaining that infrastructure is where the real complexities arise. In this guide, it's my job to help you, the budding Server 2008 administrator, understand the needs and expectations associated with not only building but also properly supporting that environment.

In this chapter, we'll focus on one specific element new to Server 2008 that I suspect you'll come to appreciate: Server Manager. Server Manager represents a unification of a number of previously segregated administrative consoles used in Server 2003 and earlier. Consoles for DHCP, DNS, Active Directory, Group Policy, and many others that formerly had their own elements have now been combined into a single location for centralized management. I think you'll find the result quite handy.

But with any change to an operating procedure, there are also a few limitations of which you'll need to be aware. Though Server Manager goes far in unifying much of our administrative experience, it doesn't do so completely. There are still some administrative consoles that haven't made their way into Server Manager. Some are missing for specific reasons; some at first blush seem to be obvious omissions. If you're used to administration with Server 2003, one of the hardest parts of getting used to Server 2008, as with every new version of a Microsoft OS, is likely just learning where the new controls are.

Introducing Server Manager


Upon finishing the installation of your Server 2008 instance and completing the necessary configurations within Initial Configuration Tasks, closing that screen will automatically bring forward Server Manager. As Figure 2.1 shows, Server Manager looks much the same as any Microsoft Management Console (MMC). It is specifically designed as an update to the Computer Management console used in previous OSs:

We are presented with a left pane and tree view that can be expanded to show more or less detail for any particular node.

Clicking any node brings forward information about that node in the right pane.

Selecting View | Customize gives you the ability to customize the elements that are presented in the interface.

The Action menu bar item allows you to perform various actions, depending on which element is highlighted in the left pane. In every case, right-clicking the highlighted node in the left pane brings forward a context menu that is the same as clicking the Action menu bar item.

Clicking the File menu bar item gives you the ability to perform limited disk cleanup associated with Server Manager views.



Figure 2.1: Server Manager resembles the MMC views of Server 2003 but with some differences.

These capabilities are very similar to those of the MMC consoles we're used to seeing. This is because Server Manager itself can also be a snap-in to any custom MMC console. As with Computer Management, the elements within Server Manager are relatively static. With the exception of the Roles, Role Services, and Features we'll talk about in a moment, the elements available in this view won't change.
If at any point you accidentally close Server Manager and need to start it again, you can do so either through the Start menu under Administrative Tools or you can right-click the Computer icon and choose Manage. Note that in Server 2003 and earlier, this would have brought forward the Computer Management console. As stated earlier, if you want to include Server Manager as a component of an existing custom MMC console, this can be done from that console by choosing File | Add/Remove Snap-in and adding the Server Manager snap-in. If you want to view the old-style Computer Management console, it is still available. Access it through Administrative Tools | Computer Management.



Figure 2.2: A sample MMC console shown here to illustrate the differences between Server Manager and a traditional MMC console.

Capabilities Become Roles, Role Services, and Features

Server Manager's major use involves the capabilities that run atop a server instance. These capabilities are those responsible for running a Windows infrastructure, such as DHCP services, DNS services, file services, and all the other components necessary for your computing environment. Server 2008 approaches these capabilities in a much different way than previous OSs did.

With previous OSs, installing a capability to a Windows server involved the installation of that service through the Add/Remove Windows Components Control Panel applet. Some capabilities were already pre-installed with the installation of the OS; the executables for these capabilities were already present on the server to make their use easier. In some cases, executables for some capabilities relied on others for their operation. This was great from a usability standpoint but challenging from the standpoint of security.

Server 2008 reformulates how the installation and operation of traditional Windows capabilities are handled by the system. As you're looking at the Server Manager console, two things you'll immediately notice are the two nodes at the very top of the console window labeled Roles and Features. These two nodes represent two of the three categories in which capabilities can be held by a server, with the third category being Role Services. Let's talk for a minute about the three different categories and what they mean for the operation of our Windows infrastructure.


Roles

Roles are considered primary functions of a Windows computer, but their scope is limited to just that. The components that perform the majority of the work to fulfill a Role are categorized under Role Services. Roles can be considered the major category associated with the function you intend the server to fulfill. Figure 2.3 shows the Add Roles Wizard, where the available Roles as of Server 2008 RC1 are shown. Roles already installed onto the server are unavailable (grayed out).
Think of a Role as a capability that a server aspires to be.

Figure 2.3: Server Manager's Add Roles Wizard shows the available Roles that can be added to a server.



Role Services

Whereas Roles are those elements that describe the intended function of a server, there must be a set of components that enable that functionality to occur. Role Services are the elements that drive the functionality of each Role. Some Roles have no Role Services; others may have numerous ones. In every case, a Role Service is linked to a Role and provides some kind of functionality in support of that Role.

This is best illustrated with an example. If I want to make my server a Terminal Server, I would first need to add the Terminal Services Role. This tells the server that I intend that server to operate as a Terminal Server. In Server 2008, though, a Terminal Server can operate in multiple configurations. It may be a full-fledged Terminal Server that serves applications to users. It may be a Terminal Services Licensing server, providing licenses to other machines. It may operate as a TS Web Access server, a Web server front-end for Terminal Services (we'll talk more about this option in Chapters 7 and 8). By separating Role Services from Roles, I can configure my server to serve in any combination. Figure 2.4 shows an example of the Role Services that support the Terminal Services Role. Once the Role is installed, I can at any point install any or all of the Role Services I need to fulfill my needs for that particular server. By breaking down the traditional Terminal Services capability into more granular components, I can very narrowly define what I intend my server to do. If, later on, I need to modify those responsibilities, I can do so by changing which components are installed.
Think of Role Services as functions of a server that support a Role.



Figure 2.4: The Add Roles Wizard showing the Role Services available in support of the Terminal Services Role.

Features

As discussed earlier, any Role Service is specifically designed to support the functionality of a particular Role. But sometimes there are additional capabilities that don't necessarily link to Roles or Role Services. These additional capabilities are grouped into Features. Features can augment existing Roles and Role Services but are not directly linked to them. Features can also support added functionality that isn't necessarily categorized, at least by Microsoft, as a major capability of a server.

Like the others, this is best explained through an example. One Role that can be installed to a server is the Print Services Role. This Role installs the components necessary to enable serving printers from that server. The Role Services that link to the Print Services Role are the Print Server itself as well as the LPD Service and Internet Printing Role Services. In combination, these three support the full functionality of the Print Services Role.


Also available, but as a Feature, is the Internet Printing Client. Though this client is a necessary component for printing via the Internet Printing Protocol, it doesn't directly support the Print Services Role. Thus, it is labeled as a Feature. Another example is the Failover Clustering Feature. Though clustering is an important component for high availability in an environment, it alone does not provide a functionality that directly impacts the Windows computing environment. It is considered, again by Microsoft, an optional component that may or may not work in tandem with other functionalities. Thus, it too is labeled and made available as a Feature.
Think of Features as other capabilities of a server that may or may not support a Role.

Figure 2.5: A partial list of Features that can be installed to any Server 2008 instance.



Componentization and Security

At first look, all this separation of capabilities into Roles, Role Services, and Features may sound like a lot of unnecessary added complexity. Yes, there will be a period of re-acclimation to the location of Server 2008's pieces and parts. But this separation was done for a reason: segregating the capabilities of a server instance into as many granular components as possible increases that server's overall security posture. Two of the major tenets of Server 2008 have to do with improved security and componentization, with each relating to the other. Server 2008 is Microsoft's first OS coded completely within the guidelines of Microsoft's Trustworthy Computing Initiative, first announced back in 2002. This initiative involved Microsoft's decision to incorporate security best practices into all elements of its design and coding process.

By completely segregating each potential server component and creating a single engine for component installation, Server 2008 reduces the attack surface of the server. Components that are not installed via Server Manager are not present on the system; their files are not available to a would-be attacker. If a component is no longer necessary, its elements can be easily cleaned up and removed from the system by the engine. This reduces the possibility that a long-forgotten component could be used as a future attack vector. Ultimately, Server 2008's componentization makes it one of Microsoft's most secure OSs yet.

Also improved through the componentization of each server capability is the patching process. With Server 2003 and earlier OSs, patching was usually an all-or-nothing process. If a patch related to your version of Windows Server, you likely needed to install it on the off-chance that its related files or registry keys were present on your server. With Server 2008's componentization, you'll find that the need for patching is reduced. If a server patch released by Microsoft relates to a Role, Role Service, or Feature that is not installed onto your server, you can safely ignore it.

It's worth mentioning that Server 2008's componentization also reduces the potential for conflicts between server capabilities. As each capability is now atomic, and all prerequisites are known for the operation of a capability, the chance that a conflict may occur is reduced. A capability cannot be installed without its required prerequisites, which reduces the chance that a service is improperly installed. Fewer conflicts also mean greater server reliability, which improves your overall uptime.



Navigating the Server Manager GUI

If you browse around the Server Manager interface, you'll notice that many of the elements present are similar to ones we've seen in Server 2003's Computer Management console. Event Viewer, Device Manager, and Disk Management are all present just as they were in Computer Management. Other configuration items are there as well, though re-ordered under the headings of Diagnostics, Configuration, and Storage. We'll talk more about a few of these later on in this chapter.

For any Role installed to the server, if you highlight that Role in the left pane, you'll be greeted with a screen similar to the one shown in Figure 2.6. There we see a summary of the events and system services associated with that Role. This view is particularly handy when administering your Server 2008 instance, as it shows only the events and services directly related to the Role. This assists greatly with the troubleshooting process: rather than using Event Viewer to view all the events the server has created over a period of time, this view filters out just those that relate to the operation of the Role in question.

Figure 2.6: A partial view of Server Manager, showing just the events and system services associated with the Active Directory Domain Services Role.


Figure 2.6 shows a partial screen associated with the Active Directory Domain Services (ADDS) Role. Within the System Services location shown are the services, and their status, associated with this Role. When problems occur, we can from this single location start, restart, or shut down services as necessary to troubleshoot and fix problems. By clicking the Preferences link at the right of the screen, we can choose which services we want to monitor through this interface.

Also available, but not shown, are the Role Services that are installed and those not installed, as well as additional advanced tools that relate to the Role. In the case of the ADDS Role, the advanced tools include GUI and command-line tools such as ADSIEdit, netdom, dsadd, and nltest, among others. These are here as a visual reminder of the toolsets we can use in troubleshooting and resolving our server problems. If you've traditionally been a GUI-based administrator unaware of the command-line troubleshooting tools that exist, this listing of common toolsets will serve as a constant reminder of the tools that are available. Each Role also comes with a set of Resources and Support documentation that assists you with recommended configurations, tasks, best practices, and online resources that you can employ in building and administering your Windows infrastructure. At the bottom of, but not shown on, the screen in Figure 2.6 is that list of resources, which links to local and online help documentation.

Another useful place to visit is the root node for Server Manager itself. Clicking Server Manager's root node provides a list of information about the server, a summary of the installed components, and additional resources and support. Important here is that this is the location where Internet Explorer's Enhanced Security Configuration can be set for administrators and users. It also includes links to configuration screens for computer, network, security, and error reporting settings. Many of these settings are replicated from what you saw back in Initial Configuration Tasks, in case they need to be changed at a later time.
Be aware of one major limitation with Server Manager that has the potential to limit its usefulness within environments of multiple machines: Server Manager is not remoteable. Thus, Server Manager can be used only when logged in to the console of the server; an instance can manage only the machine on which it is currently running, never other server instances. Because of this limitation, you may find yourself using Server Manager less than you expected once you've completed your initial configuration. For remote management, Microsoft suggests the use of the Remote Server Administration Tools. This set of tools, an update to the Admin Pack (adminpak.msi) used with previous OSs, can be used against remote servers for administration. It can also be installed on desktop systems for administration from those locations as well.



Adding New Components


Using Server Manager, adding a new component to a Server 2008 instance is relatively easy. In comparison with previous versions of the OS, the process is improved because Server Manager integrates many of the initial configuration questions right into the installation. Thus, installing a Role or Role Service will automatically prompt for answers to many necessary initial configurations right within the interface. This eliminates the problem of forgetting certain configurations upon installation. Most Windows infrastructures make use of DHCP's automatic IP assignment capabilities, so as an example, let's take a look at the installation of the DHCP Server Role to get an idea of how this process works.

Example: Adding the DHCP Server Role

To install the DHCP Server Role, first right-click the Roles node in Server Manager and choose Add Roles. Skipping through the initial configuration screen, the resulting Add Roles Wizard will bring forward the list of possible Roles to install to the server. This list looks similar to the screen in Figure 2.3. Select the DHCP Server Role, and click Next. The left pane will change to include the list of actions that must be completed to properly install the Role. For the DHCP Server Role, this includes setting IPv4 and IPv6 DNS and WINS settings, among others.
As another huge benefit of Server Manager, if a component requires other components as prerequisites for its functionality, Server Manager will prompt you to install those components as well. You cannot install a component without also installing its necessary prerequisites.

The following screen will provide a set of information about the DHCP Server Role. Depending on the component you want to install, this screen can include a description of the component as well as links to additional information about the use and operation of the Role. Click Next to continue. In the case of a DHCP Server Role installation, the next seven screens request configurations needed by the DHCP server to properly start its functionality. Figure 2.7 offers one example of these seven screens. Here, Server Manager is requesting the parent domain along with the preferred and alternate DNS servers' IPv4 addresses. This information is necessary for the correct functionality of the DHCP Server Role.
If Role Services are available for a particular Role, Server Manager will include a screen showing a list of available Role Services. There you can install the required Role Services as part of the Role installation. If a Role Service requires initial configuration questions, they will be added to the list of questions required by the Role. In this example, the DHCP Server Role has no available Role Services, so none are presented.



Figure 2.7: The Add Roles Wizard requesting IPv4 DNS settings as part of the DHCP Server Role installation.

To complete the installation of the DHCP Server Role, continue through the initial configuration questions and choose Install. The Role will install its components from the Server 2008 media and complete. If the process completes successfully, a green check will appear with the text Installation Succeeded. Click Close to continue. Once the Role is installed, it can be managed through Server Manager in much the same way we have discussed up to this point. Additional configuration, such as scope options and reservations, can be done through Server Manager; a command-line sketch of the same follows.
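For those who prefer to script that follow-on configuration, the netsh dhcp context can create a scope and an address range. A sketch run on the DHCP server itself, with the network numbers and scope name invented for the example:

netsh dhcp server add scope 192.168.10.0 255.255.255.0 "Client LAN"
netsh dhcp server scope 192.168.10.0 add iprange 192.168.10.100 192.168.10.200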
Be aware that sometimes a restart of Server Manager is required upon completing an installation of a new component. This is sometimes necessary to see the configuration nodes associated with the newly installed component.



Server Manager by Command Line


Using Server Manager's GUI is an easy way to add and remove capabilities from a Server 2008 system. But at times you might want to script an installation or install multiple components through a batch file. When those needs arise, consider using Server Manager's command-line tool, servermanagercmd.exe. Unlike the full GUI version of Server Manager, the command-line version is used only for the enumeration, installation, and removal of components. You won't be reviewing Event Log entries or disabling Internet Explorer's Enhanced Security Configuration through this interface. To first enumerate the components currently installed to the server, use the command
servermanagercmd.exe -query

For the example server we've been discussing up to this point, the result will look a little like Listing 2.1.
C:\Users\Administrator>servermanagercmd -query

----- Roles -----
[ ] Active Directory Certificate Services [AD-Certificate]
    [ ] Certification Authority [ADCS-Cert-Authority]
    [ ] Certification Authority Web Enrollment [ADCS-Web-Enrollment]
    [ ] Online Responder [ADCS-Online-Cert]
    [ ] Network Device Enrollment Service [ADCS-Device-Enrollment]
[X] Active Directory Domain Services
    [X] Active Directory Domain Controller [ADDS-Domain-Controller]
    [ ] Identity Management for UNIX [ADDS-Identity-Mgmt]
        [ ] Server for Network Information Services [ADDS-NIS]
        [ ] Password Synchronization [ADDS-Password-Sync]
        [ ] Administration Tools [ADDS-IDMU-Tools]
[ ] Active Directory Federation Services
    [ ] Federation Service [ADFS-Federation]
    [ ] Federation Service Proxy [ADFS-Proxy]
    [ ] AD FS Web Agents [ADFS-Web-Agents]
        [ ] Claims-aware Agent [ADFS-Claims]
        [ ] Windows Token-based Agent [ADFS-Windows-Token]
[ ] Active Directory Lightweight Directory Services [ADLDS]
[ ] Active Directory Rights Management Services
    [ ] Active Directory Rights Management Server
    [ ] Identity Federation Support
[ ] Application Server [Application-Server]
    [ ] Application Server Foundation [AS-AppServer-Foundation]
    [ ] Web Server (IIS) Support [AS-Web-Support]
    [ ] COM+ Network Access [AS-Ent-Services]
    [ ] TCP Port Sharing [AS-TCP-Port-Sharing]
    [ ] Windows Process Activation Service Support [AS-WAS-Support]
        [ ] HTTP Activation [AS-HTTP-Activation]
        [ ] Message Queuing Activation [AS-MSMQ-Activation]
        [ ] TCP Activation [AS-TCP-Activation]
        [ ] Named Pipes Activation [AS-Named-Pipes]
    [ ] Distributed Transactions [AS-Dist-Transaction]
        [ ] Incoming Remote Transactions [AS-Incoming-Trans]
        [ ] Outgoing Remote Transactions [AS-Outgoing-Trans]
        [ ] WS-Atomic Transactions [AS-WS-Atomic]
[X] DHCP Server [DHCP]
[X] DNS Server [DNS]
[ ] Fax Server [Fax]
[ ] File Services
    [ ] File Server [FS-FileServer]
    [ ] Distributed File System [FS-DFS]
        [ ] DFS Namespaces [FS-DFS-Namespace]
        [ ] DFS Replication [FS-DFS-Replication]
    [ ] File Server Resource Manager [FS-Resource-Manager]
    [ ] Services for Network File System [FS-NFS-Services]
    [ ] Windows Search Service [FS-Search-Service]
    [ ] Windows Server 2003 File Services [FS-Win2003-Services]
        [ ] File Replication Service [FS-Replication]

Listing 2.1: A partial listing showing the results of the servermanagercmd.exe -query command.

The full list retrieved with this command is actually quite a bit longer; Listing 2.1 shows only a snippet of the output. Unlike the GUI, where Role Services appear only once their Role is installed, the command-line version provides a comprehensive list of every installable component you can add to your server instance. This makes it easy to locate the components you believe you need, as well as the prerequisites associated with those components. You'll see that an X appears next to each component that is installed; the text is also turned green to further highlight the installed state. Next to the friendly name of each component is the name to use when installing it from the command line. If you'd rather save this information to a file, you can do so by appending a filename with an XML extension to the end of the -query command. An example of doing this would be
servermanagercmd.exe -query result.xml
Note that when doing this, the result will not look like what is shown on the screen. Instead, it will arrive as an XML-formatted file.

Installing a new role is done using the command


servermanagercmd.exe -install <name>

where <name> is the name of the component to install. Removal works in much the same way; simply replace the -install switch with -remove. One important difference when installing components through Server Manager's command line is that, by default, none of the GUI-based configuration questions are asked as part of the installation. So, for example, if you want to install the Active Directory Certificate Services Role onto your server, you need only use the command
servermanagercmd.exe -install ad-certificate

The result will resemble Listing 2.2. You'll see that the installation succeeds without any further prompting for configuration; required configuration values are automatically populated as part of the default installation.
C:\Users\Administrator>servermanagercmd -install ad-certificate

109.7850.0: 0x80070002 (WIN32: 2)
109.7869.0: 0x80070002 (WIN32: 2)
109.7850.0: 0x80070002 (WIN32: 2)
401.335.0: 0x80070002 (WIN32: 2)
401.431.0: 0x80070002 (WIN32: 2)
401.1255.0: 0x80070002 (WIN32: 2): C:\Windows\CAPolicy.inf
454.244.0: 0x80004005 (-2147467259): AES-GMAC
454.334.0: 0x80004005 (-2147467259)
454.334.0: 0x80004005 (-2147467259)
454.334.0: 0x80004005 (-2147467259)
454.244.0: 0x80004005 (-2147467259): AES-GMAC
454.244.0: 0x80004005 (-2147467259): AES-GMAC
454.244.0: 0x80004005 (-2147467259): AES-GMAC
454.244.0: 0x80004005 (-2147467259): AES-GMAC
454.244.0: 0x80004005 (-2147467259): AES-GMAC
454.244.0: 0x80004005 (-2147467259): AES-GMAC
454.244.0: 0x80004005 (-2147467259): AES-GMAC
454.244.0: 0x80004005 (-2147467259): AES-GMAC
454.244.0: 0x80004005 (-2147467259): AES-GMAC
420.110.0: 0x80090014 (-2146893804): Microsoft Software Key Storage Provider
420.117.0: 0x80090014 (-2146893804): realtime-windowsserver-W2008A-CA
Start Installation...
[Installation] Succeeded: [Active Directory Certificate Services] Certification Authority.
<100/100>
Success: Installation succeeded.

C:\Users\Administrator>
Listing 2.2: Installing the Active Directory Certificate Services Role with servermanagercmd.exe does not require you to answer the installation questions posed by the GUI version.
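
Because servermanagercmd.exe returns control to the console as each operation finishes, its calls can be chained in an ordinary batch file, which is useful when building several identically configured servers. The following is a minimal sketch, not taken from the guide's example server: the component IDs shown (FS-FileServer and FS-DFS, visible in the -query listing earlier) are illustrative choices, and switches such as -allSubFeatures should be verified against servermanagercmd -help on your own build.

@echo off
rem Sketch: stage several components in one unattended pass.

rem Capture the starting inventory for later comparison.
servermanagercmd.exe -query before.xml

rem Install the File Server component, then DFS with its sub-components.
rem The IDs here are examples; confirm them in your own -query output.
servermanagercmd.exe -install FS-FileServer
servermanagercmd.exe -install FS-DFS -allSubFeatures

rem Capture the finishing inventory.
servermanagercmd.exe -query after.xml

Comparing before.xml and after.xml offers a quick record of exactly what the batch file changed on the server.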


Server Manager Components


Server Manager includes much more than just the administrative consoles for installed Roles, Role Services, and Features. It also includes a set of consoles used for common management tasks. Though you're likely familiar with many of these management consoles from Server 2003, others have new features and functionality worth mentioning. In this section, we'll discuss a few of the items you'll be using to manage your Server 2008 infrastructure.

Event Viewer

Event Viewer in Server 2008 gets a major facelift as compared with Server 2003. Its new engine is the same one used in Windows Vista, so if you're used to Vista's new Event Log, you'll be well prepared for using it in Server 2008. When setting the focus to the top-level node of the Event Log, you'll see in Server Manager's right pane an overview and summary of the events that have arrived in the Event Log over a period of time. This Summary of Administrative Events is particularly useful in determining the health of the server over time.

Figure 2.8: A partial view of Server Manager's Event Log node, showing the Summary of Administrative Events and Log Summary panes.

Digging deeper into this node provides more granular information about the types of events contributing to the total number. For any event of particular interest, it is possible within this interface to right-click the event and select View All Instances of this Event to create a filtered view of just those events. This removes the previously cumbersome activity of bringing up the entire Event Log and manually parsing through its contents in search of a single event of interest.


To explain this further, consider a server that is experiencing a problem. You may not necessarily know what the problem is or where it is located, but a problematic server often shows a larger (or sometimes smaller) than average number of events in the Event Log. As you can see in Figure 2.8, when the number of events over the past 24 hours or 7 days has changed significantly, that can be a good indication of a server that is experiencing problems. Once you've identified that a change in the rate of event creation has occurred, you can drill down into the events themselves to see which events are causing the change.

Another big problem with Server 2003 involved the inconsistent configuration of individual logs. When logs are not correctly configured, it is possible to lose events that would be critical in solving a problem or understanding a security situation. To remedy this problem, the Log Summary view is available (see Figure 2.8). There, the configurations for all logs on the server are listed in a single view, allowing you to see their settings from a single location and minimizing the potential for misconfiguration.

As with Vista, event logs in Server 2008 have been broken out into more granular categories. You'll see in Server Manager's interface that in addition to the Application, Security, and System event logs we're used to seeing in Server 2003, new event logs based on function are available under the Applications and Services node. Logs here are created and enabled based on the components installed to the server. Admin and Operational events, those most useful to systems administrators in troubleshooting, are available by default in the interface. More detailed, and thus more complicated to understand, Analytic and Debug logs are also available but must be enabled in the interface by clicking View | Show Analytic and Debug Logs.

The individual logs themselves have also been augmented with much-needed improvements. Date and time fields, separated in Server 2003, are now combined into a single column in Server 2008, which makes sorting by date/time of entry much easier. Event properties are now stored using XML, which provides for better programmatic manipulation using scripting and coding languages. An XML view that shows the raw data formatting associated with each log entry is available by clicking any particular event, selecting the Details tab, and choosing the XML view. Event tasks are also available that allow the administrator to create an action that occurs when an event arrives. By right-clicking any event in the event log and choosing Attach Task to this Event, the Create Basic Task Wizard appears. From this interface, it is possible to start a program or script, send an email, or display a message on the screen when events of interest arrive in the Event Log.
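
Because event properties are stored as XML, they can also be filtered with XPath expressions from the command line using the wevtutil utility included with Server 2008. As a brief sketch, the following query returns the three most recent instances of event ID 7036 (a routine Service Control Manager message) from the System log, rendered as readable text; the event ID is only an example, and switches should be confirmed with wevtutil /? on your own system:

wevtutil qe System /q:"*[System[(EventID=7036)]]" /f:text /rd:true /c:3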


Figure 2.9: Event tasks give the administrator the ability to easily create notifications or perform actions as events occur.

Especially useful for certain troubleshooting tasks is the ability to forward events from the Event Log of one server to another. Though not intended as an enterprise solution for log aggregation, this new feature allows events from multiple machines to be collected in a single location, making it much easier to understand the activity that occurs between multiple servers on the network. At minimum, two computers are necessary to create an event subscription. The forwarder computer is the server that sends its event information elsewhere; the collector computer is the server that collects events from other machines. With that understanding, setting up an event subscription requires multiple steps:

• Click the Subscriptions node on the collector computer. When prompted, start the Windows Event Collector service.
• From a command prompt on both computers, run the command
winrm quickconfig

Doing so enables the Windows Remote Management service and configures it for sending and receiving events.

• Proper authentication is necessary to allow events to transfer between computers. Provide it by adding the computer account of the collector computer to the Administrators group on the forwarder computer. This may require a reboot of both systems.
• On the collector computer, right-click the Subscriptions node and select Create Subscription. A screen will appear similar to Figure 2.10. Enter the necessary information here, including the name, destination log, subscription type and source computers, the events to collect, and any advanced settings.
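
The collector-side configuration can also be handled from the command line with the wecutil utility. The lines below are a minimal sketch; the subscription name shown is made up for illustration, and the switches should be confirmed with wecutil /? on your own system:

rem On the collector: configure the Windows Event Collector service.
wecutil qc

rem After the subscription is created in the GUI, list all subscriptions
rem and check the runtime status of one (a hypothetical name is used here).
wecutil es
wecutil gr "DNS Critical Events"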


Figure 2.10: Setting up an event subscription involves many steps, including defining the types of events of interest to be transferred between servers.

Once completed, the subscription will begin forwarding events from the forwarder computer to the collector computer as per the filter created in the subscription properties. For the example that Figure 2.10 shows, we are collecting only the events marked Critical from the DNS Server log on the forwarder computer.


Reliability and Performance Monitor

Reliability and Performance Monitor (RPM) is yet another improvement Server 2008 shares with Windows Vista. RPM combines the traditional Performance Monitor, used for years in all versions of the Windows OS, with the previously separate tool called Server Performance Advisor (SPA). With Server 2003, SPA was downloaded and installed to a server to extend Performance Monitor; it was used to diagnose performance problems by measuring resource use based on the type of workload installed to the server. Server 2008's RPM is broken down into four major elements. Depending on the type of monitoring desired for the system, any combination of elements can be used:

• Performance Monitor: Server 2008's Performance Monitor hasn't changed much from previous versions. The types of graphs and reports used to view data are the same, as is the look of the console. One area of improvement is the process used to add counters.
• Reliability Monitor: Reliability Monitor is an excellent tool for troubleshooting the effective stability of a system over time. When systems have problems, one of the most difficult parts of troubleshooting is finding the time period in which the problem may have started. Reliability Monitor breaks down major system events into five categories: Software (Un)Installs, Application Failures, Hardware Failures, Windows Failures, and Miscellaneous Failures. When any of these events occurs, a numerical index of the stability of the system is reduced; as periods of time pass with no events, that stability index increases. By reading the graph of stability over time, you get a visual indicator of when in the past a server began experiencing problems of all types. Once this is understood, you can drill down into specific events to learn more about the root cause of that reduction in stability.
• Data Collector Sets: An augmentation of traditional Performance Monitor, Data Collector Sets are aggregations of data, such as performance counters, event trace data, system configuration information, or performance counter alerts, that are collected together. Collection of data associated with any Data Collector Set can be run on demand or scheduled to occur at regular intervals. Comparing the reports associated with individual collections can assist the troubleshooting administrator with finding problems on the system. A small set of preconfigured Data Collector Sets is present on every Server 2008 system, and custom sets can be created within the interface. To start a collection interval, right-click a set and choose Start. The process can take a few minutes to complete. Once complete, select the appropriate report from the Reports node, as described in the next bullet point, to view the results of the collection.
• Reports: Reports are the results of an on-demand or scheduled collection of data from a Data Collector Set. Once a collection has completed, a report based on its findings will be available under the Reports node. Clicking that report provides a comprehensive set of statistics that assist in troubleshooting performance and other problems on a server. Figure 2.11 shows an example of a report associated with ADDS.


Figure 2.11: Starting the collection for a Data Collector Set results in a report that can be used to understand the performance and activities of a server.

Task Scheduler

Task Scheduler in Server 2008 gains a host of new capabilities, most of them specific to how tasks are triggered to start. We've already talked about one new category of trigger in our section on Event Viewer: tasks can be triggered by the creation of an event. This capability was available in Server 2003, but only through the command-line tool eventtriggers.exe; it is now integrated into the Task Scheduler GUI. In addition to event-based triggers, tasks can now be triggered at logon, at startup, on idle, at the creation or modification of another task, on connection or disconnection of a user session, or upon the locking or unlocking of the computer.

Possibly the greatest improvement is the ability to associate multiple triggers and actions with a task. Multiple triggers allow multiple conditions to initiate a task, which comes in handy when multiple schedules are needed for running the same task, a capability long desired in previous OS versions. Multiple actions enable the task to accomplish several functions upon triggering: sending an email, displaying a message on-screen, or running a program or script. Added conditions and settings are also provided that further granularize the level of control possible for individual tasks.
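
Several of these triggers are also exposed through the schtasks.exe command-line tool. As a hedged illustration, the following creates a task that runs a hypothetical script at every user logon; the task name and script path are examples only, and the available /sc values should be confirmed with schtasks /create /? on your own system:

schtasks /create /tn "LogonAudit" /tr "cscript.exe C:\Scripts\audit.vbs" /sc onlogon /ru SYSTEM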

Windows Server Backup

Windows Server Backup is a new element not found within previous versions' Computer Management. It is an upgrade to what used to be termed NTBackup and comes with a host of new capabilities that improve both the backup process and the chances of a successful restoration. To use Windows Server Backup, the Feature associated with it must first be added to the server. Do this in Server Manager by right-clicking the Features node and selecting Add Features. In the list of available features, choose Windows Server Backup Features, which automatically adds the Windows Server Backup sub-component. Command-line tools, which are based on and require Microsoft PowerShell, can also be added.

Once installed, the Windows Server Backup node in Server Manager can be used to initiate one-time or scheduled backups. Windows Server Backup integrates with the Volume Shadow Copy Service to back up all files on a system, even those that are open and in use. This makes restoration a much simpler and more effective process. Windows Server Backup is primarily a disk-to-disk backup utility; it no longer possesses the ability to back up servers directly to tape. Although this eliminates tape as a backup target, it enables the creation of automated offsite relocation of backups for disaster recovery. Windows Server Backup also includes disk usage management tools to prevent backups from filling available disk space and causing problems with storage systems. Optical media drives such as DVD drives and removable media are now supported for direct storage of backup files.

Using Windows Server Backup, bare-metal restorations are possible to the same server or to another server with a similar hardware configuration. This is done by inserting the installation media and choosing the Repair your computer option; in the resulting screen, select Windows Complete PC Restore. Chapter 1 discussed how the Windows Preinstallation Environment (WinPE) is now used for the installation of Windows. WinPE comes equipped with the proper networking capabilities to connect to resources elsewhere on the network, so when using Windows Complete PC Restore, it is possible to connect to backup files over the network to complete a bare-metal restore. All of this is done using Windows Complete PC Restore on the Server 2008 media.
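
Once the Feature is installed, the wbadmin.exe command-line tool is also available alongside the GUI. The lines below are a plausible sketch of a one-time backup of the C: volume to a secondary volume E:; the drive letters are examples, and the exact switches should be verified with wbadmin /? on your own build:

rem List disks that can serve as backup targets.
wbadmin get disks

rem Run a one-time backup of C: to the E: volume without prompts.
wbadmin start backup -backupTarget:E: -include:C: -quiet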


Figure 2.12: Booting from the Windows 2008 media and selecting the Repair your computer option brings forward Windows Complete PC Restore. This feature can be used for bare-metal restoration of a Server 2008 instance.

Disk Management

Our last component of Server Manager to keep an eye on is Disk Management. Although Disk Management arrives relatively unchanged from Server 2003, it includes one specific and highly desired new feature: the ability to resize disks on the fly. With previous OS versions, it was possible to extend non-system disks using the command-line tool DiskPart but never to shrink them. Using DiskPart to resize disks also involved dismounting the disk, which meant a loss of service during the resizing operation. With Server 2008, all disks can be resized through Disk Management without loss of service.

To resize a disk within Disk Management, simply right-click the volume of interest and select either Extend Volume or Shrink Volume. You will then be asked to select the amount of space to add to or remove from the disk. Figure 2.13 shows an example of the Shrink Volume screen used to select the new size of the disk after resizing.
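
The same resize operations can also be scripted with DiskPart when working without the GUI. The following is a small sketch, assuming a data volume D: with space to spare; the drive letter and size are examples:

rem resize.txt: shrink volume D: by roughly 10 GB (value is in MB),
rem then display the resulting volume layout.
select volume D
shrink desired=10240
list volume

Saved to a file, the script runs non-interactively with diskpart /s resize.txt.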


Figure 2.13: Shrinking a disk can be done natively within the Disk Management GUI.

Server Manager Consolidates Management Activities


Server Manager and the tools it supports can significantly improve the quality of service for your Server 2008 instances. Server 2008's componentization goes a long way toward improving system uptime while reducing the security attack surface of your servers. Server Manager's compilation of traditional tools from Computer Management, along with some new ones, also eases the process of managing and maintaining the servers that make up your Windows Server 2008 infrastructure.

The next chapter will move away from the management of individual servers in your infrastructure and focus on the directory that binds them together: AD. In Server 2008, we get a number of changes to how Domain Controllers are set up and managed as well as some updates to best practices for AD design in general. In Chapter 3, we'll talk about all this, including what's new with forest and domain functional levels, the new DCPROMO process, Read-Only Domain Controllers, and more. Domain Controllers are a great starting point for any migration to Server 2008. The improvements to management and reliability for AD and your Domain Controllers make them an excellent first step, not only for building your Windows Server 2008 infrastructure but also for upgrading from previous versions.

The content of this chapter was written based on pre-release information, specifically the RC1 version of Microsoft Windows Server 2008.


Chapter 3: Active Directory Design & Domain Controller Management


In our first two chapters, we've discussed the topics of server installation and management from the perspective of a single server. Chapter 1 dealt with the needs of installing an operating system (OS) onto a particular set of server hardware. Chapter 2 discussed the management needs of individual servers, specifically using the new Server Manager tool that arrives with the release of Server 2008. But in order to fully realize your Windows Server 2008 infrastructure, it is likely that you'll be installing multiple servers in your environment. When the number of computers in an environment grows much beyond one or two, a centralized mechanism for security, authentication, and authorization becomes necessary. With Windows systems, that centralized mechanism is Active Directory (AD).

AD is, at its core, a directory of objects, much like a phone book. The directory contains information about the computers, users, and other configuration objects that are useful to its users. AD is also a source of information control and security. As AD becomes the database of record for these objects, it also serves as the location where those objects identify themselves (identification), provide information that proves who they are (authentication), and request use of resources managed by AD (authorization). Thus, one of AD's major tasks is to provide the structure whereby resources such as files, folders, and registry keys, among others, are accessed in a controlled manner. AD provides a central location where security principals such as users and computers are assigned rights and privileges to access resources.

In this chapter, we'll take a high-level, introductory approach to describing the structure and function of AD as well as the process for installing AD into your computing environment. We'll discuss some best practices associated with the design of AD, and we'll conclude with one critical topic associated with the management of AD's Domain Controllers: ensuring proper backups and successful restores.


A Good AD Design Solves Many Problems


With its near-universal presence within business networks, it is likely that you've already come into contact with AD at some point. Since its introduction with Windows 2000, AD has become the standard for directory services in nearly all Windows-based environments. Thousand-page books have been written on the subject of AD, each including deep descriptions of its various components and internal workings. So in a short chapter like this, we must be judicious about the topics we tackle. That said, the intent of this chapter is to give you an overview of the structure, best practices, and installation steps associated with building AD as part of your Windows 2008 infrastructure.

AD is a complicated beast, with many configurations required to put it successfully in place. The good part about Microsoft's implementation of AD is that much of that configuration is automated and simple while retaining the ability to scale to a worldwide presence. Installing a simple AD with a single site and few internal objects is relatively easy and can involve only a few minimal steps. At the same time, installing a large-scale AD with multiple sites, worldwide reach, and large numbers of objects is also possible. Ultimately, the complexity of the environment dictates the level of customization required. As stated in the header for this section, a good AD design will solve, or at least prevent, many IT problems right from the get-go. By incorporating good design into your AD, it is less likely to cause problems with authentication or cost users access to necessary resources.
As with many things, with AD, the simplest design is often the best solution.

In the early years of AD, with the release of Windows 2000, many IT organizations saw an explosion of AD domains. Zealous IT administrators found themselves with a new tool to play with, and in many places, new AD domains were used as a way of marking out IT team responsibilities. Unfortunately, the widespread creation of AD domains was not, and is not, necessarily the best solution with the business and its users in mind. Users and the business want the greatest possible transparency in authentication and resource security, and accessing resources across domain boundaries can involve extra steps and headaches for users when not properly set up. Because of the issues that large domain structures can pose, after the initial period of domain hypergrowth, many IT administrators later found themselves consolidating domains into ever-smaller numbers.

Considering this, a good AD design provides the following benefits to the organization (Source: http://technet2.microsoft.com/windowsserver2008/en/library/23d96652-a0d9-4f70-9742-514110c99da61033.mspx?mfr=true):

• Simplified management of Microsoft Windows-based networks that contain large numbers of objects. Objects can be managed through a unified interface, which eliminates management duplication and enhances the ability to manage the environment with fewer human resources.
• A consolidated domain structure and reduced administration costs. As stated previously, when domain structures are consolidated into as few domains as possible, the overall operational expenses associated with managing those constructs are reduced.
• The ability to delegate administrative control over resources, as appropriate. When objects are available within a centralized directory, securing those objects and assigning rights and permissions to them becomes much easier.
• Reduced impact on network bandwidth. The correct network positioning of AD's Domain Controllers is critical to ensuring the lowest levels of network usage.
• Simplified resource sharing. Resource sharing information can be stored within AD's database, which allows users to easily search for and find the resources they need.
• Optimal search performance. Searching of managed objects is also improved when they are stored within a single database of record.
• Low total cost of ownership. In contrast with environments in which each computer is managed independently, centralized control improves administrators' ability to manage through policies and reduces the cost of operating the environment.

As you'll learn in the next few sections, creating the best user experience for the users of a Windows infrastructure is very much a function of creating AD based on best practices. When an AD design is properly created based on the needs of its users, those users will be able to easily authenticate, locate information and resources, and ultimately do their jobs.


Understanding AD and Domain Controllers


Without too many details, let's take a few minutes to discuss AD's major components. Understanding these components in relation to each other is critical to understanding how best to design your AD.

The AD Forest

An AD forest is a collection of one or more AD domains. These domains share a common logical structure, directory database schema and configuration, and scope of search. Domains, which we'll discuss next, are integrated into a forest when there is interest in sharing resources and respecting authentications between them. Whereas domains are commonly used as administrative boundaries, forests exist so that users across those boundaries can easily share information and work collectively. Because domains within a forest implicitly trust each other, groups that assign resource privileges can be shared across domain boundaries.

The AD Domain

Domains are considered partitions of an AD forest. They are the boundary of user and resource administration and are the construct in which the objects exposed to users reside. A domain is the location where user identities are stored and where object authentication occurs. Users log in to their respective domains and make use of objects within those domains. Once an object is authenticated, it can make use of approved resources within its domain as well as other resources as specifically identified in other domains within the forest.
Many domains can make up a forest, and each domain can belong to only one forest. A forest can contain either a single domain or multiple domains. Domains also can be linked to each other to create a hierarchy within the forest. The structure of how domains are arranged within a forest is usually based on intra-IT responsibility and the level of trust between IT organizations. It can also be set based on geographical considerations or along lines of business.

Domain Controllers

Each domain requires a minimum of one Domain Controller to perform the management functions associated with the domain. Each Domain Controller hosts a copy of the AD database and handles the responsibilities of authentication and authorization. To keep the AD database consistent across all Domain Controllers, each Domain Controller engages in replication with the other Domain Controllers in the domain. Domain Controllers replicate as peers, with no multiple-level hierarchies among the Domain Controllers within a domain. This replication process ensures that the database is loosely consistent between every Domain Controller within the domain and that any Domain Controller can be used as a source of authentication or resource authorization. Because of this replication, any Domain Controller can serve as the location where users log in to computers on the domain. As domains can be widely dispersed, even globally, this replication process allows users to use any Domain Controller anywhere to search for and work with resources.

Flexible Single Master Operation Roles

In moving AD to peer-to-peer replication, five specific functions were isolated as needing to run on a single, dedicated host. It was not possible to run these five functions in the same distributed fashion as the other components of AD with its peer-to-peer Domain Controller interrelations. These five functions are called the Flexible Single Master Operation (FSMO) Roles. They're flexible because any Domain Controller can fulfill the role. They're single master because, as discussed earlier, they cannot operate in a peer-to-peer fashion. The five FSMO Roles, as defined by Microsoft (Source: http://www.microsoft.com.nsatc.net/technet/prodtechnol/windows2000serv/reskit/distrib/dsfl_utl_pavr.mspx?mfr=true), are:

• Schema Operations Master: There is a single schema operations master role for the entire enterprise. This role allows the operations master server to accept schema updates. There are other restrictions on schema updates.
• Relative ID Master: There is one relative ID master per domain. Each domain controller in a domain has the ability to create security principals, and each security principal is assigned a relative ID. Each Domain Controller is allocated a small set of relative IDs out of a domain-wide relative ID pool. The relative ID master role allows the Domain Controller to allocate new subpools out of the domain-wide relative ID pool.
• Domain-Naming Master: There is a single domain-naming master role for the entire enterprise. The domain-naming master role allows the owner to define new cross-reference objects representing domains in the Partitions container.
• PDC Operations Master: There is one Primary Domain Controller (PDC) operations master role per domain. The owner of the PDC operations master role identifies which domain controller in a domain performs Windows NT 4.0 PDC activities in support of NT 4.0 Backup Domain Controllers (BDCs) and clients using earlier versions of Windows.
• Infrastructure Master: There is one infrastructure master role per domain. The owner of this role ensures the referential integrity of objects with attributes that contain distinguished names of other objects that might exist in other domains. Because AD allows objects to be moved or renamed, the infrastructure master periodically checks for object modifications and maintains the referential integrity of these objects.


Functional Levels

Each iteration of AD (starting with Windows 2000, through Windows Server 2003, and now with Windows Server 2008) has added functionality to AD's core set of features. Enabling that added functionality is done by setting the Forest Functional Level and Domain Functional Level to the correct level. Raising a functional level requires that all Domain Controllers within that level's scope (either the domain or the forest) be running at that level. Hence, in order for the Domain Functional Level to be raised to Windows Server 2008, all Domain Controllers within the domain must be running Server 2008; no Server 2003 Domain Controllers can remain in the environment.

Raising any functional level requires a manual switch to be flipped to enable the change. This separate step exists so that higher-level Domain Controllers can be introduced incrementally into a lower-level AD domain. As we'll explore later in this chapter, this allows for a rolling upgrade of Domain Controllers until all are at the updated OS. With Server 2008, there are actually no new features gained by upgrading the Forest Functional Level to Windows Server 2008. However, individual domains do gain new functionality:

• The SYSVOL replication engine is updated from the File Replication Service (FRS) to the new and more reliable Distributed File System (DFS).
• The Advanced Encryption Standard (AES), in both AES 128 and AES 256, is supported for the Kerberos protocol.
• Last interactive logon information is logged.
• Fine-grained password policies are enabled, which provide the ability to create multiple password policies within a single domain.


Sites

When domains grow geographically large, the replication process can also grow complex. Sites are an AD construct that allows users within a localized area of high network connectivity to authenticate and request resource access from Domain Controllers close by on the network. Sites are intended to be arranged by geographical area and are specifically linked to the network subnets that correspond to that geographical area. Three main elements are tied to site membership:

• Replication: Replication between Domain Controllers is configured based on their site membership. Domain Controllers within the same site are assumed to have high network connectivity between them, so their replication occurs more often with less consideration for network performance and capability. Domain Controllers in different sites replicate at longer intervals so as not to impact the network between them.
• Authentication: Clients that attempt to authenticate will first look to Domain Controllers within their local site. This ensures that clients can complete the login process as quickly as possible. When Domain Controllers are unavailable in the local site, clients can then seek a Domain Controller elsewhere to complete the process.
• Client Configuration: Using Group Policy, clients can be configured based on their site membership. This allows clients to receive necessary configurations based on the site in which they currently reside.

Organizational Units

Organizational Units (OUs) provide useful constructs to simplify the management of AD objects. OUs collect AD objects into groups so that policy-based configurations can be applied to those objects. The greatest use of OUs is for the assignment of Group Policies to the objects that reside within them. Any AD object can reside in only a single OU.
Chapter 6 will discuss OUs in detail.

Domain Name Service

Network resolution of the elements that make up AD is critical so that clients can find servers and Domain Controllers, and Domain Controllers can locate each other. Domain Name Service (DNS) is the network protocol that handles the resolution of servers and services. Using DNS, clients are able to locate Domain Controllers within the domain. Windows DNS and AD also make use of Service (SRV) records. These special records are used by clients to identify specific domain services and the servers that host those services. Because SRV records are hosted by DNS, and because DNS supports dynamic updates, Domain Controllers can update the location of available services on the fly by manipulating their presence in DNS resolution.
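
For illustration, an SRV record pairs a well-known service name with the host and port that provide it. The record below is a representative sketch using the example domain and server built later in this chapter; the TTL, priority, and weight values shown are arbitrary and will vary in practice:

_ldap._tcp.dc._msdcs.realtime-windowsserver.com. 600 IN SRV 0 100 389 w2008a.realtime-windowsserver.com.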
Microsoft's Web site includes hundreds of pages of additional information about the basic functionality of each of the components within the structure of Windows Server 2008. For more information, check out http://technet2.microsoft.com/windowsserver2008/en/library/b1baa483-b2a3-4e03-90a6-d42f64b42fc31033.mspx.


Best Practices in AD Design


In many ways, there are as many potential AD designs as there are ADs in the world. Thus, giving hard-and-fast advice as to the configuration of AD for your particular environment is challenging. That said, there are a number of best practices associated with the design of AD that make sense across all instances. These best practices can assist you with creating an AD that works well for both administrators and users. Consider the following short list as a set of guidelines to assist you with creating your domain structure. There are numerous other resources available on the Internet that can provide additional best practices focused toward specific case studies.

Forest and Domain Creation:

• Resources can be accessed across domain and forest boundaries, but with additional steps for the user. Thus, minimizing the overall number of domains in the forest assists with providing the best possible access for users to their needed resources. When organizationally possible, a single domain structure is the best technical solution.
• The same holds true for forests. Consolidating resources into a single forest when organizationally possible is similarly the best technical solution.
• Due to the way in which Domain Administrator privileges can be maliciously elevated across domain boundaries, AD's boundary of security is the forest. If there are concerns about privilege escalation between Domain Administrators in separate domains, consider the use of multiple forests.
• When multiple domains are required, consider creating domains based on geographic scopes, as these are less likely to change than organizationally based domains. Windows domains are long-lasting entities; organizationally based domains are more likely to require change given the long-term flow of business, and domain restructures tend to be expensive activities.

Site Creation:

• An AD site is intended to be bounded by a region of high network connectivity. Establish as a site every geographic area of high network connectivity based on IP subnet addresses.
• Place at minimum one Domain Controller in every site, and make at least one Domain Controller in each site a Global Catalog (GC).
• Windows and the Knowledge Consistency Checker (KCC) service have the ability to automatically determine the best site topology. Manually creating site links stops this automated process. It is a best practice to allow the KCC to manage site links automatically rather than to do so manually.

OU Creation:

• When possible, create separate OUs for user and computer objects. This assists with the deployment of Group Policy.
• Consider creating as few OUs as possible. Create additional groupings only when Group Policy targeting mandates the group creation.

Installing Domain Controllers


Architecting a good AD design is in many ways the hard part. Once an AD design is established and ready for implementation, the actual installation of Domain Controllers can be a relatively trivial task. That said, there are a few steps you need to know to properly install your first and subsequent Domain Controllers. In this section, we'll go through a lengthy step-by-step process of installing a new Domain Controller to create a new domain, along with the additional Domain Controllers needed for high availability. This section includes a few steps done manually that could otherwise have been automated, so that you gain an understanding of the entire unaided process of domain creation. Also in this section, as many organizations already have AD in place, we'll discuss the procedure to upgrade an existing AD from Windows Server 2003 R2 to Windows Server 2008.

Installing the DNS Server Role

Although the Domain Controller promotion process can automatically install and configure DNS for you, it is usually a good idea to start any domain creation with a manual installation of DNS. With previous versions of Windows Server, the DCPROMO process historically has not done a good job of fully configuring all the pieces of DNS for operation with an AD domain. That process has gotten quite a bit better with the release of Server 2008, but even with its new capabilities, it is a good idea to understand the process so that you have the skills you need for later troubleshooting.
For the purposes of this chapter and this guide, we will be creating a domain and associated forward and reverse DNS zones named realtime-windowsserver.com on the two Domain Controllers w2008a and w2008b and for the 192.168.0.0/24 subnet. These DNS zones and their associated domain will host each of the resulting servers and services that we discuss throughout this guide.

To install the DNS Server Role, use the following steps. First, from Server Manager, right-click the Roles node and choose Add Roles. From the resulting wizard, select the DNS Server Role. The DNS Server Role has no additional Role Services, so its installation via Server Manager involves no additional configuration. Restart Server Manager after the installation of the Role.

Once the DNS Server Role is installed, we'll need to prepare it for use by AD. In this step, we will configure the DNS server as well as create and configure the realtime-windowsserver.com zone. From Server Manager, click Roles | DNS Server | DNS | w2008a, and select Properties. Many of the server settings for DNS remain at their default configurations; however, we'll want to pay special attention to a few:

• Forwarders tab: If this server finds that it cannot resolve a particular request, we can instruct the server to ask another server for an answer. This process is called forwarding. Note that this is not necessarily needed for Internet resolution except where this server is incapable of contacting other Internet-based DNS servers. Click the Edit button here to enter any servers to be used for forwarding.
• Advanced tab: DNS in Windows 2008 is often configured to support dynamic updates, which allow clients to update their DNS records when their IP addresses change. One result is that some DNS records can grow stale over time if they are not properly updated, and stale records that are not cleaned out of the DNS database cause problems with proper resolution. Selecting the check box to enable automatic scavenging of stale records will let the DNS server remove records that have aged past a certain number of days.

Once we've completed configuring the DNS server itself, we'll need to create a forward lookup zone, which resolves fully qualified domain names (FQDNs) to IP addresses. To create the forward lookup zone, right-click Forward Lookup Zones, and select New Zone. When prompted, choose to create a new Primary Zone with the name realtime-windowsserver.com. Use the default zone file name and configure the zone to allow both nonsecure and secure dynamic updates.

We'll also need to create a reverse lookup zone to resolve IP addresses to FQDNs, the reverse of the zone created previously. To do so, right-click Reverse Lookup Zones, and select New Zone. When prompted, choose to create a new Primary Zone of type IPv4 Reverse Lookup Zone. Use the Network ID 192.168.0, select the default zone file name, and configure the zone to allow both nonsecure and secure dynamic updates.

Next, we'll need to populate this new zone with the information about our server w2008a. To do so, we need to ensure that the full computer name for this server is set to w2008a.realtime-windowsserver.com. Do this by right-clicking Computer and choosing Properties. From the resulting screen, click the Change settings link under Computer name, domain, and workgroup settings. Click Change in the resulting screen and then the More button to bring forward the DNS Suffix and NetBIOS Computer Name screen. Enter realtime-windowsserver.com as the primary DNS suffix of this computer. Changing the primary DNS suffix will require the computer to restart. Once the computer has restarted, return to Server Manager and verify that entries are configured in the forward and reverse lookup zones similar to what you see in Figure 3.1.
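
For reference, the same zones can be created from the command line with dnscmd, which is installed with the DNS Server Role. The sketch below mirrors the GUI steps above; the AllowUpdate value of 1 corresponds to allowing both nonsecure and secure dynamic updates, and all switches should be confirmed with dnscmd /? on your own build:

rem Create the forward and reverse primary zones with standard file names.
dnscmd w2008a /ZoneAdd realtime-windowsserver.com /Primary /file realtime-windowsserver.com.dns
dnscmd w2008a /ZoneAdd 0.168.192.in-addr.arpa /Primary /file 0.168.192.in-addr.arpa.dns

rem Permit both nonsecure and secure dynamic updates on each zone.
dnscmd w2008a /Config realtime-windowsserver.com /AllowUpdate 1
dnscmd w2008a /Config 0.168.192.in-addr.arpa /AllowUpdate 1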


Figure 3.1: Two views of Server Manager, showing the correct forward and reverse lookup zones needed to start creating our Windows domain.

Promoting a Member Server with DCPROMO

Once we've completed this process, we can begin elevating our member server to a full Domain Controller. Doing so involves the use of the DCPROMO command. Though AD Domain Services is considered a Role in Server 2008, its installation can only be done through DCPROMO rather than through Server Manager as with other Roles. To elevate w2008a, we'll use the following process:

• From a command prompt, run the command dcpromo. The ADDS binaries will be installed, which will take a period of time. Once complete, the ADDS Installation Wizard will be presented.
• It is always a good idea to mark the check box Use advanced mode installation so that we are presented with all the possible options for creating a new domain. Do this now and click Next.
• When asked to Choose a Deployment Configuration, select Create a new domain in a new forest. Then, enter realtime-windowsserver.com as the FQDN of the forest root domain. In our example, we are creating a new forest that contains only a single domain. If we were creating a down-level domain in an existing forest, we would instead need to include the root-level domain here.
• For Domain NetBIOS name, use the name REALTIME. This is the NetBIOS name for the domain and is what is shown at the logon screen when any member attempts to log on. For an example of how the logon screen will look for Server 2008 clients once they join the domain, see Figure 1.1 in Chapter 1.

• In the screen titled Set Forest Functional Level, we will set the level to Windows Server 2008. As this is a brand-new domain with only one member, we do not need to worry about down-level Domain Controllers.
• The screen titled Create DNS Delegation will ask whether we want to create a DNS delegation. This screen is used when we are attempting to create a domain and DNS zone that is a child of an existing DNS zone. In our case, our domain and DNS zone are equivalent to the zone we just created, so this screen is effectively unnecessary. For now, click Yes and then enter administrator credentials into the resulting box to force the promotion process to attempt the delegation anyway. You will see an error during the domain creation process related to this selection that you can safely ignore.
• In very large domains and forests, the AD database, log files, and SYSVOL can grow very large and can require a very fast disk subsystem to ensure best performance. For such cases, the next screen provides the ability to relocate these files to another location or disk drive. Our domain will be very small, so we can leave these settings untouched.
• The Directory Services Restore Mode Administrator Password in the next screen is used when restoring all or portions of the AD database after an accidental deletion or a disaster. Enter a password in the boxes here and ensure that this password is kept in a safe location.
• The final Summary screen lists all the settings configured within the DCPROMO wizard. You'll also see here a button titled Export settings. Clicking this button exports a text file that contains the settings configured in the previous screens. This file can be especially handy when creating Domain Controllers on Server Core instances, which we'll discuss in Chapter 5; a sketch of such a file appears below. For now, click Next to begin the creation of the domain and the installation of ADDS.

At this point, the ADDS Installation Wizard will begin the process of installing ADDS and creating the domain. A check box titled Reboot on completion can be marked to instruct the process to reboot the computer once complete. In any case, a reboot is required to complete the domain creation and installation of ADDS.
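
As a hedged sketch of what the exported settings can look like, the following resembles a DCPROMO answer file for the domain created in this example. The key names follow the [DCInstall] convention used by Server 2008 unattended promotions, but you should take the authoritative contents from the file the Export settings button actually produces; the password shown is a placeholder.

; dcunattend.txt: sketch of an answer file for this example's promotion.
; Verify key names against the file produced by Export settings.
[DCInstall]
ReplicaOrNewDomain=Domain
NewDomain=Forest
NewDomainDNSName=realtime-windowsserver.com
DomainNetBiosName=REALTIME
ForestLevel=3
DomainLevel=3
InstallDNS=Yes
SafeModeAdminPassword=PlaceholderP@ss1
RebootOnCompletion=Yes

Such a file is consumed with dcpromo /unattend:C:\dcunattend.txt, which runs the promotion without presenting the wizard.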


Figure 3.2: The ADDS Installation Wizard going through its process of installing ADDS and creating our domain.

Once the reboot is complete, we will have successfully created the realtime-windowsserver.com domain. We can double-check this in a number of ways. First, after the reboot, log on to the domain as REALTIME\Administrator using the correct password. Once there, check the event log for an Event ID 29223 from source LsaSrv, logged just before the reboot occurred. The text of this event should read This server is now a Domain Controller, similar to the image in Figure 3.3. There should be few, if any, errors in the event log.


Figure 3.3: The event log will provide an event showing the successful promotion of the member server to a Domain Controller.

Another test to verify the success of the domain creation is to look in DNS for the presence of the domain's SRV records. These records are necessary for clients to find domain-related services. You'll see five new sub-zones under the realtime-windowsserver.com forward zone, similar to those shown in Figure 3.4, and under each of these five sub-zones are numerous additional zones. Without getting into too much detail about each of the zones, a good test is to walk through the individual zones present and look for anything out of the ordinary. As our new domain contains only a single Domain Controller (and, really, a single server) named w2008a, any entries should point to this server.
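
A quick command-line spot check of the same records is possible with nslookup. The following is a sketch using this chapter's example domain; at this stage, the reply should list only w2008a:

nslookup -type=SRV _ldap._tcp.dc._msdcs.realtime-windowsserver.com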


Figure 3.4: Upon the creation of the domain, a number of new sub-zones will be found under our initially created zone. These are for use by clients in identifying domain-related resources.

Promoting Additional Domain Controllers

Because AD is the backbone upon which all our data and applications rely, it is always a good idea to include no fewer than two Domain Controllers for every domain. This ensures that should one Domain Controller go offline or incur problems, there is always another that can service clients and manage the operations of the domain.
Many very small networks have chosen not to build and manage two Domain Controllers due to cost but have later on experienced significant outages due to the loss of their only Domain Controller. It is very important to always plan for a minimum of two Domain Controllers per domain.

Once we've installed our first Domain Controller, we will install our second for redundancy. The process to install the second Domain Controller is much easier than the first, as the second Domain Controller gathers much of its configuration from the first. To do this, first install the Server 2008 OS onto another computer to create a member server. For this example, configure the second server, w2008b, to point to w2008a as its DNS server and add it to the realtime-windowsserver.com domain.

In our example, both of our Domain Controllers will also act as DNS servers to provide redundancy for both services. To begin, log in using an account with Domain Administrator privileges and install the DNS Server Role using the same method outlined earlier. Once complete, we'll set up this server to host secondary DNS zones:

• For server-specific settings, configure the Forwarders and Advanced tabs in the same way as on the first DNS server.
• We will need to create secondary forward and reverse zones on w2008b that point to w2008a. To do so for the forward zone, right-click Forward Lookup Zones and select New Zone. When prompted, select to create a Secondary Zone and enter realtime-windowsserver.com as the zone name. In the Master DNS Servers screen, enter the IP address for w2008a. If correct, the column titled Validated will display OK and a green checkmark will appear to the left of the IP address.
• For the reverse zone, right-click Reverse Lookup Zones and select New Zone. When prompted, select to create a Secondary Zone and an IPv4 Reverse Lookup Zone, and enter 192.168.0 as the Network ID. In the Master DNS Servers screen, enter the IP address for w2008a. If correct, the column titled Validated should display OK and a green checkmark will appear to the left of the IP address.
• Upon completing the previous two tasks, you'll immediately see that the zone cannot be loaded and displays an error. This occurs because of a security feature of DNS: zone transfers must usually be explicitly allowed, a setting we have not yet configured on w2008a. To allow w2008b to transfer the zone, return to Server Manager on w2008a and, on both the previously created forward and reverse zones, select the Name Servers tab. On each, click the Add button, enter the FQDN for w2008b and its IP address, and click OK. For both the forward and reverse zones, the resulting tab should resemble Figure 3.5. Note that you may see an error message when attempting to do this; it can be safely ignored.
• After a few minutes, navigate back to Server Manager on w2008b and press F5 to refresh the zone. If everything is correct, after a short delay, the zone should appear on w2008b as an exact match of what is seen on w2008a.


Figure 3.5: The Name Servers tab allows other servers to perform zone transfers.

Our next step is to complete the promotion process using DCPROMO. To start, from a command prompt, enter DCPROMO to bring forward the wizard, and complete the following steps:

• As before, ADDS will start by installing its needed binaries. When control is returned, mark the check box Use advanced mode installation, and click Next.
• Select to create this Domain Controller in an Existing forest and to Add a domain controller to an existing domain.
• Because we have already added this server to the realtime-windowsserver.com domain, the domain will already be populated in the next screen. We can also choose to use our existing credentials because we have logged in as a Domain Administrator.
• In the following screens, choose the realtime-windowsserver.com domain and the site named Default-First-Site-Name.
• At the Additional Domain Controller Options screen, ensure that the box is checked to make this server a Global Catalog. Do not select the box to make this a Read-only Domain Controller.

• The next screen, titled Install from Media, allows you to choose how to replicate the domain data to this new Domain Controller. In the case of a very large domain with a large database and a very slow network connection, this option allows you to replicate data using offline media such as a DVD, which is very handy when the replication of domain data could saturate the network connection. In our case, the domain is small and well connected, so choose to Replicate data over the network from an existing domain controller.
• The following screen, titled Source Domain Controller, allows us to select the Domain Controller from which to replicate data. In dispersed networks with many Domain Controllers, it is possible for this promotion to choose a Domain Controller in a far-removed site, which can increase the time to complete the process and/or saturate the network link between this site and the remote site. In that case, selecting a Domain Controller in close network proximity reduces the effect of the replication. Since our Domain Controllers are close to each other on the network, we can safely choose to Let the wizard choose an appropriate domain controller.
• The next two screens give us the option to relocate the database, log, and SYSVOL files and to set the Directory Services Restore Mode password. We'll use the same settings and password as before.
• At the Summary screen, click Next to start the ADDS installation process and promote our second member server to a Domain Controller. Once the installation and subsequent reboot are complete, we will have successfully installed ADDS onto a second server.

Once the Domain Controller promotion is complete and the reboot has occurred, log back into the server as a Domain Administrator and check the event log as before to verify that this server has successfully promoted. We next need to make a few modifications to DNS to move its database into AD, lock down dynamic updates, and enable record scavenging on our individual zones. This ensures that DNS is operating in conjunction with AD in the best and most secure way possible:

• Navigate to the DNS Server node of Server Manager on w2008a and bring forward the Properties screen for our forward lookup zone. On the General tab, click the Change button. In the resulting screen, mark the box for Store the zone in Active Directory (available only if DNS server is a domain controller). By doing this, we are moving the storage of DNS records out of files on the server and into the AD database itself, which allows DNS information to be replicated throughout the domain through standard Domain Controller replication. Click Yes when asked Do you want this zone to become Active Directory integrated? and click the Apply button to convert the zone.

• In the drop-down box next to Dynamic updates, change the selection to Secure only. This forces clients to authenticate to the DNS server prior to updating records, which prevents a rogue client from maliciously manipulating client DNS data.

• Click the button titled Aging. In the resulting screen, mark the box for Scavenge stale resource records. The aging and scavenging process we discussed earlier requires both a server-based and a zone-based configuration; selecting this check box fully configures the server to start the aging and scavenging process for this zone.


Complete the previous three steps on the reverse zone to finish its configuration. Now, navigate to the DNS Server node of Server Manager on w2008b. Here, for both our forward and reverse zones, click the Change button to change the zone type from Secondary to Primary and click OK. Click the Change button again and mark the box for Store the zone in Active Directory (available only if DNS server is a domain controller). As before, on both forward and reverse zones, set Dynamic updates to Secure only, then click the Aging button and mark the box for Scavenge stale resource records to enable aging and scavenging for these zones. Lastly, change the network settings for both servers so that each points to itself as its primary DNS server with the other as its secondary DNS server.
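These zone conversions can also be handled with dnscmd. A sketch of the equivalent commands for the forward zone on w2008a follows; the value 2 for /AllowUpdate corresponds to the Secure only setting in the GUI:

dnscmd w2008a /ZoneResetType realtime-windowsserver.com /DsPrimary
dnscmd w2008a /Config realtime-windowsserver.com /AllowUpdate 2
dnscmd w2008a /Config realtime-windowsserver.com /Aging 1

Repeat the same three commands against the reverse zone (0.168.192.in-addr.arpa) and on w2008b to mirror the GUI steps above.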

This completes the process of building our sample domain. Be aware that technically we've done this the hard way. Because the DCPROMO process can do some of the DNS server configuration for us, this process could have been a little easier. However, DCPROMO doesn't configure all the little settings we discussed earlier. More important, seeing the extended process gives you an understanding of the relationship between DNS and AD as well as how AD relies on DNS for the resolution of its necessary services. Also, by creating our secondary DNS server and later elevating it to an AD-integrated server, we get to see how both types of DNS configurations affect AD.


Figure 3.6: Switching the DNS zone to AD-integrated allows DNS information to replicate in the same way AD information replicates throughout the domain.


Upgrading Domain Controllers


The truth of the matter with Windows domains is that in many cases you'll find yourself upgrading an existing domain rather than installing a brand-new one from scratch. At the very least, this will be the case in your production environments. The best way to upgrade Domain Controllers from Server 2003 to Server 2008 is really not an upgrade at all but a rolling reinstallation of the OS on each of your Domain Controllers in such a way that the AD database remains up and operational during the migration.
Why is this wholesale reinstall process arguably the best method? Consider the average age of service of your existing Domain Controllers. If they've been around for a number of years, it's likely that they've accumulated a little extra girth around the waist. You may have installed unnecessary applications on them or uninstalled others that left pieces behind. Internet browsing may have left undesired resident code on them that you will want to clean up as part of the upgrade process. All of these bits accumulate to become a great reason to make your 2003-to-2008 upgrade an occasion for a fresh installation rather than carrying forward the extras.

In this section, we'll talk about the upgrade process to get a domain from running on Server 2003 to Server 2008. We'll assume that the desired goal for the project is to ensure that all upgraded Domain Controllers are running freshly installed copies of Server 2008. To illustrate the process, we will discuss a relatively simple example. This example is important because it likely resembles the majority of Windows domains. Here, we will assume that our forest is comprised of only a single domain named realtime-windowsserver.com with two Domain Controllers named DC1 and DC2. Those Domain Controllers run Server 2003 R2, are fully patched, and serve as DNS servers. The FSMO roles are all housed on DC1, and both servers are GC servers. The process we will use to upgrade to Server 2008 includes the following general steps:

• We'll first update the AD schema to support Server 2008.

• We will then add a third Server 2008 member server named DC3 into our domain and promote that server to become the third Domain Controller for the domain. If this is your production environment, consider using a virtual machine for this third Domain Controller, as it will be used only temporarily during the upgrade.

• Once that server is added to the domain, we will transfer the FSMO roles to DC3.

• We can then rebuild each of our Server 2003 Domain Controllers to Server 2008, one at a time. To do so, we will first demote each server back to a member server to ensure its Domain Controller information is removed from AD.

• Once DC1 and DC2 have been rebuilt as Server 2008, we will transfer the FSMO roles back to DC1 and decommission DC3.

• Finally, we will upgrade the Forest Functional Level to Server 2008.


Updating the Schema

The schema update process can in many ways be one of the most difficult parts of the entire process. Schema updates involve large-scale changes to the structure of the AD database. Thus, the update could break needed functionality in environments that have made customizations to the AD database in support of custom applications or complex arrangements. That being said, the schema update process is fairly transparent. If you look on the Server 2008 media in the \sources\adprep folder, you'll see a series of files with an .LDF extension that are readable in any text editor. Opening any of these files will show you text similar to what is seen in Listing 3.1. In this listing, we see one example of a schema change.
dn: CN=ms-DS-isGC,CN=Schema,CN=Configuration,DC=X
changetype: ntdsSchemaAdd
objectClass: attributeSchema
ldapDisplayName: msDS-isGC
adminDisplayName: ms-DS-isGC
adminDescription: For a Directory instance (DSA), Identifies the state of the Global Catalog on the DSA
attributeId: 1.2.840.113556.1.4.1959
attributeSyntax: 2.5.5.8
omSyntax: 1
isSingleValued: TRUE
systemOnly: FALSE
searchFlags: 0
schemaIdGuid:: M8/1HeUPnkmQ4elLQnGKRg==
showInAdvancedViewOnly: TRUE
systemFlags: 20
Listing 3.1: One example of a schema change as done by the Server 2008 upgrade process.

There are hundreds of additions and changes just like what is shown in Listing 3.1 associated with the Server 2008 schema upgrade. So, for environments that have either a heavy reliance on direct schema access or have incorporated custom changes, a thorough review of the files in this folder is in order. Doing so will assist with ensuring that the update will not cause problems with the production environment.
It's worth mentioning that a schema update has the potential to cause major problems. Because of this, a full backup of the domain and AD database should be completed prior to starting this process. An even better idea prior to starting the schema update is to power down one Domain Controller in your environment. AD backups are notoriously difficult to use for restorations. Conversely, if a Domain Controller is powered off during the upgrade process, it will not receive the updates and can later be used as the backup database should the upgrade fail.


The actual schema upgrade process has three steps, one of which is optional. Each step is done using the adprep.exe tool, also found on the Server 2008 media in the \sources\adprep folder. This is a command-line tool that is run on specific Domain Controllers in the environment:

• Adprep /forestprep: The first step in the schema update is to update the forest schema using this command. This command must be run before any Server 2008 Domain Controllers can be introduced into the forest, and it must be run on the forest's Schema Operations Master, which in our example is DC1. This step needs to be performed only once for the entire forest.

• Adprep /domainprep: This second step must be done once for each domain within the forest after the forest schema update has completed. As our forest has only one domain, we need to accomplish this only once.

• Adprep /rodcprep: This optional step can be run after the earlier two steps in domains where Read-Only Domain Controllers will be used. Our domain will later make use of RODCs, so we'll also run this step.

After running each of these commands, ensure that a full replication has completed prior to moving to the next step. Both Domain Controllers in our example are in the same site, so replication effectively occurs immediately.
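If you want to confirm that replication has completed rather than simply waiting, the repadmin utility can report on it. A minimal sketch:

repadmin /replsummary
repadmin /syncall /AdeP

The first command summarizes recent replication results and failures for each Domain Controller; the second pushes an immediate synchronization of all naming contexts across the enterprise.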

Promoting a Member Server with DCPROMO

Once the schema has been updated, we're ready to install our first Server 2008 Domain Controller into our domain. This process is basically the same as the one we performed earlier in the section Promoting Additional Domain Controllers, so we won't go over it again in detail. Be sure when building this third Domain Controller that it is configured to be a GC. Be aware that the addition of a Server 2008-based Domain Controller does not change the Forest Functional Level or Domain Functional Level. The Server 2008 Domain Controller will operate at the current functional levels until an administrator-initiated change is made. We'll perform that activity last in this process.

Relocating FSMO Roles

Once we've incorporated a third, temporary Domain Controller running Server 2008 into the environment, we can begin the process of rebuilding our production Domain Controllers. Before doing so, remember that there are five FSMO roles within any Windows domain that must remain up and operational for the full functionality of the domain. We need to transfer ownership of those roles from DC1 to DC3 before starting any upgrades. Though this transfer can be done through the GUI, it is actually easier to accomplish with a single line at the command prompt.


To transfer the FSMO roles to the new Domain Controller, log in as an account with Enterprise Administrator privileges and enter the following, all on one command line:
Ntdsutil roles connections "connect to server dc3" quit "transfer domain naming master" "transfer infrastructure master" "transfer pdc" "transfer rid master" "transfer schema master" quit quit

This command line actually runs a series of steps within NTDSUTIL to connect to DC3 and individually transfer each of the roles to the new server. If you'd like to verify the success of this command, you can do so by entering this, all on one command line:
Ntdsutil roles connections "connect to server dc3" quit "select operation target" "list roles for connected server"

If the role movement was successful, you should see a result that looks similar to Listing 3.2.
select operation target: list roles for connected server
Server "dc3" knows about 5 roles
Schema - CN=NTDS Settings,CN=DC3,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=realtime-windowsserver,DC=com
Domain - CN=NTDS Settings,CN=DC3,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=realtime-windowsserver,DC=com
PDC - CN=NTDS Settings,CN=DC3,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=realtime-windowsserver,DC=com
RID - CN=NTDS Settings,CN=DC3,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=realtime-windowsserver,DC=com
Infrastructure - CN=NTDS Settings,CN=DC3,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=realtime-windowsserver,DC=com
Select operation target:
Listing 3.2: This result from the list roles for connected server command shows that all FSMO roles have been moved to the server DC3.
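As an alternative verification that does not require stepping into NTDSUTIL, the netdom utility can list all five role holders in one shot:

netdom query fsmo

The output names the Domain Controller currently holding each of the five roles, which after the transfer should be DC3 in every case.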

Demoting and Rebuilding Domain Controllers

We can now safely demote the existing Server 2003 instances to remove their Domain Controller-related information from the AD database. Once we've completed that step, we will rebuild each as a Server 2008 instance. To complete the demotion, from a command prompt, run DCPROMO. Upon starting the Active Directory Installation Wizard, an error message will appear noting that this domain controller is a Global Catalog server. This message warns us to ensure that at least one remaining Domain Controller is also a GC, which is the case because we have ensured in this example that all Domain Controllers are also GC servers. Click OK to clear the error. Next, ensure that the box is not marked for This server is the last domain controller in the domain. At the following screen, enter the Administrator password to begin the demotion process. The demotion process will remove all references to the server in AD. After completing this process, you can safely rebuild the server as a Server 2008 instance. Upon completing the installation of Server 2008, re-run the DCPROMO process and promote the member server back to a Domain Controller.


Relocating FSMO Roles

Once DC1 and DC2 have been demoted, rebuilt to Server 2008, and subsequently promoted, we are ready to relocate our FSMO roles back to their permanent home. As before, we'll use a command line to move them all at once from their temporary home on DC3 to their permanent location on DC1. Do this by entering the following, all on one command line:
Ntdsutil roles connections "connect to server dc1" quit "transfer domain naming master" "transfer infrastructure master" "transfer pdc" "transfer rid master" "transfer schema master" quit quit

The same verification as earlier can be used to ensure that the transfer occurred successfully.

Functional Levels

We now have three Domain Controllers happily operating on Windows Server 2008, though our Windows domain and forest are still running under the old functional levels. To upgrade the functional level for the forest and domain and receive the benefits associated with the new functional levels, navigate to Start | Administrative Tools | Active Directory Domains and Trusts. Right-click the top-level node of the resulting console and choose Raise Forest Functional Level. Under Select an available forest functional level, select Windows Server 2008 in the drop-down menu and click Raise. In our example, because our forest has only a single domain, raising the Forest Functional Level automatically raises the Domain Functional Level as well. In forests with multiple domains, this will need to be a separate process for each domain. In that circumstance, right-click the domain name in AD Domains and Trusts and select Raise Domain Functional Level. Under Select an available domain functional level, select Windows Server 2008 and click Raise. Your Domain Controller upgrade process is now complete.
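To confirm the new level from the command line, you can query the msDS-Behavior-Version attribute on the Partitions container; a value of 3 corresponds to the Windows Server 2008 forest functional level. A sketch, assuming our example forest's distinguished name:

dsquery * "CN=Partitions,CN=Configuration,DC=realtime-windowsserver,DC=com" -scope base -attr msDS-Behavior-Version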

Read-Only Domain Controllers


Server 2008 introduces a special new kind of Domain Controller called a Read-Only Domain Controller (RODC). This Domain Controller is in some ways similar to the NT-style BDC in that it contains a read-only copy of the AD database that clients can authenticate against. Different from BDCs, however, administrators can select which AD objects are replicated down to an individual RODC. Objects that are replicated down to the RODC can be used for authentication. If an object is not present on the local RODC, an upstream Domain Controller can be used for authentication. RODCs are designed specifically to be used in remote site or branch office situations in which physical security for Domain Controllers cannot be assured. With full read/write Domain Controllers, it is possible for a would-be attacker with console access to a Domain Controller to access the entire contents of AD. By stealing a single Domain Controller from an unsecured branch office, the attacker would have access to all data stored within the AD. This means that the theft of a single Domain Controller could require the repermissioning of all objects within AD, an expensive and time-consuming activity.


Because objects are replicated down to an RODC only when they are identified by an administrator, only a subset of the total AD database is present at the remote site. Should an attacker break into or steal that RODC, they will have access only to that limited subset of objects. Thus, repermissioning will be required only on the objects that were replicated to the lost RODC.
As stated earlier in our section on the adprep.exe tool, the /rodcprep schema update must be performed on each domain prior to creating any RODCs.

Creating an RODC is nearly identical to creating a regular Domain Controller. To create an RODC, follow the same steps as shown earlier using DCPROMO in Advanced Mode. The major difference is in the wizard screen titled Additional Domain Controller Options. Here, check the box for Read-only domain controller (RODC) as shown in Figure 3.7.

Figure 3.7: Checking the RODC box in the DCPROMO step will create the Domain Controller as an RODC rather than a regular Domain Controller.


After checking the box for RODC, the next step in the DCPROMO wizard will be to Specify the Password Replication Policy. This is where the individual users or AD groups whose members' information should be replicated down to the RODC are identified. Users and groups can be configured to either allow or deny the replication of their information to the RODC. By providing this ability in both directions, highly secure accounts such as Domain Administrators can be specifically prevented from RODC replication.

The final additional step is the Delegation of RODC Installation and Administration. Because an RODC is a tightly controlled version of a regular Domain Controller at a remote site, it is feasible for a separate individual or group to be granted administrative access to the server. As Domain Administrator privileges are typically required to manage a standard Domain Controller, this separation allows a local individual or group to handle the management and maintenance of the RODC without needing to be added to the Domain Admins group.

Once the DCPROMO process is complete, it is possible to further access the Password Replication Policy for the RODC by navigating to Start | Administrative Tools | Active Directory Users and Computers and locating the RODC's computer object. Right-click the computer object and choose Properties, followed by the Password Replication Policy tab, to see a window similar to Figure 3.8.

Figure 3.8: Password Replication Policy can be manipulated after the installation through the RODC's computer object in Active Directory Users and Computers.
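Server 2008's repadmin utility also gained a /prp switch for scripting Password Replication Policy changes. A minimal sketch, assuming an RODC named RODC1 and a group named Branch Users that you have created for this purpose (both names are illustrative):

repadmin /prp view RODC1 allow
repadmin /prp add RODC1 allow "CN=Branch Users,CN=Users,DC=realtime-windowsserver,DC=com"

The first command lists the accounts currently allowed to replicate to RODC1; the second adds a group to that allow list.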


AD Backup and Restore


Our last topic in this chapter deals with the dual concepts of AD backup and restore. Taking consistently good backups of the AD database ensures the highest chance of a successful restoration should a problem occur. The Server 2008 native tool for this task is Windows Server Backup, much of which we've already discussed back in Chapter 2.

Backing Up the AD Database

Backing up the AD database using Windows Server Backup is trivial. With Windows Server Backup, select the drive letter of the volume to be backed up within its wizard. If that volume includes an instance of AD, it will be backed up as well. There is no ability to separately back up just the system state as with previous versions. This tends to increase the size of backups but also tends to increase the certainty that a restoration will complete successfully.
If you have chosen to move portions of AD's database, logs, or SYSVOL to other volumes, ensure that those volumes are backed up as well.
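If you prefer the command line, Windows Server Backup's wbadmin utility can run the same kind of backup from a script. A minimal sketch, assuming the AD components live on C and a dedicated backup volume exists at E::

wbadmin start backup -backupTarget:E: -include:C: -quiet

The -quiet switch suppresses the confirmation prompt, which makes the command suitable for a scheduled task.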

In this example, we'll assume that all AD components are configured to reside on the C volume. To create a backup, right-click Windows Server Backup and select Backup Once. In the resulting window, choose Different options to view the complete list of options. Then, select a Full server backup. This backup will capture all data as well as the necessary system state information that includes the AD database and other needed components. The next screen allows you to configure the storage of the backup. Local drives are available as well as remote shared folders; select an option with enough storage space to store the backup. For the advanced option, it is possible to choose whether the Volume Shadow Copy Service performs a full or a copy backup. The choice here depends on whether another backup product is being used to back up applications. At the final screen, click the Confirm button to start the backup process.

Restoring Individual AD Objects

Restoring individual objects in Server 2008 is relatively unchanged from previous versions. The process still involves switching the server to run in Directory Services Restore Mode and marking objects from previous backups as authoritative. That cumbersome process doesn't get better with the release of Server 2008.
You can find more information on the process to restore individual AD objects on the Microsoft Web site at http://technet2.microsoft.com/windowsserver/en/library/690730c7-83ce-4475-b9b446f76c9c7c901033.mspx?mfr=true.

One part of this process that does help with the restoration of individual objects is the use of Server 2008's DSAMAIN tool to mount a backup's snapshot of the AD. Though use of DSAMAIN is out of scope for this chapter, this tool can be used to view parallel instances of AD to find which backup captured the deleted object. DSAMAIN itself cannot restore the deleted object; it is only capable of showing a read-only view into the copy of the database.
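As an illustration, here is a minimal sketch of mounting a snapshot's database copy with DSAMAIN; the path shown is a placeholder for wherever you have exposed the backup's copy of ntds.dit:

dsamain -dbpath E:\adsnapshot\ntds.dit -ldapport 50000

Once mounted, you can point an LDAP tool such as LDP.exe (or Active Directory Users and Computers) at port 50000 of the local server to browse that backup's view of the directory.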


Restoring Full Domain Controllers

While restoring individual objects isn't all that much easier with Server 2008, the restoration of full Domain Controllers, and in the same vein full servers, is much improved with Windows Server Backup. As we talked about in Chapter 2, Windows Server Backup and its tie into Volume Shadow Copies allows for the creation of backups that support bare metal restoration of a server. By backing up a Domain Controller using Windows Server Backup, it is possible, using the procedures we discussed in Chapter 2, to perform a bare metal restore of that server to similar hardware after a failure. Once the server is rebooted and brought back online after restoration, ADDS will recognize that it has recovered from a backup and will begin an integrity check and re-index of the AD database. Any objects that have changed since the time of the backup will be updated through normal AD replication. Because the bare metal restoration process is so easy using Complete PC Restore, the process of returning a Domain Controller back online after a server failure can be completed relatively quickly.

AD Is a Central Part of Your Windows Infrastructure


In environments that use it, all of the data, applications, and people that make up a Windows environment rely heavily on AD. AD's authentication, authorization, and management functions make it a critical component of any Windows network. Thus, a good AD design is important to ensuring that users have the best possible access to their needed resources with a minimum of downtime. In this chapter, we've talked about some of the design aspects of AD as well as the down-and-dirty steps necessary to get it freshly installed as well as upgraded from previous versions. We've discussed how AD relies on DNS as well as reviewed some examples of how that reliance can be configured. Next up, we'll narrow our focus away from AD's all-encompassing reach to talk specifically about one role that is present in nearly all Windows environments: the venerable file server. Though the process of serving files hasn't changed much from OS version to version, the tools we have to manage it have. With Server 2008, we get a new role dedicated to the serving of files and a suite of new tools that improve our ability to manage the data that is critical to our business.


Chapter 4: File Servers & Storage Management


This guide is designed to give you an overview of the topics and technologies you need to know to properly deploy a Windows Server 2008 infrastructure. Along those lines, the order of topics I've chosen to present here should align with how you'll likely be bringing these services into your existing Windows environment. We have spent time talking about the prerequisites and installation processes associated with getting servers onto hardware. We then focused on the centralized management tool Server Manager, where you're likely to initially be spending a lot of time. In Chapter 3, we discussed Active Directory (AD) in depth and its installation to candidate Domain Controllers. Now that we have a domain in place running atop Server 2008, the next likely place where Server 2008 will make its way into your environment is within your file servers. Why here? As with Domain Controllers, owing to their composition and requirements, file servers make excellent candidates for early Server 2008 adoption:

• They are typically not installed with large numbers of third-party applications other than the somewhat-common antivirus and backup software.

• The process of serving file shares is an inherent part of the Windows OS and does not require a large number of add-on components.

• File servers, though highly critical to business operations, are not highly complex.

• The files stored on file servers are often fully segregated from the OS. Thus, a wholesale OS replacement is easy because it has virtually no impact on data files.

Because of each of these, the risk associated with migrating file servers to Server 2008 is low. Cutting your teeth with these servers as a first penetration of Server 2008 into your environment will give you the skills and experience you need for future upgrades. In this chapter, we'll be talking about the role of the file server in IT organizations and how Server 2008 enhances file sharing through a greater formalization of its role. We'll look at each of the Role Services that comes with the new File Services Role and how each augments traditional file serving. We'll talk about the new wizards that enhance the process of provisioning new shares and volumes, and conclude with a look at the new tools available to administer this "hard drive over the network."


The Role of the File Server


Even in today's world of SharePoint portals and content management systems, the venerable file share remains a popular tool in many IT organizations for supplying access to file-based data. The reasons for this are as much historical as they are interface-driven. Since even before Windows NT, file shares have in one manner or another been a long-held mechanism for users to access their data. File shares are easy to locate, easy for inexperienced users to navigate, and, when secured properly, good tools for presenting file-based data to users. Because of their historical advantage and easy setup as a native part of the Windows OS, file shares today still enjoy widespread representation.

Figure 4.1: File shares have been around longer than virtually any other form of file sharing tool. Thus, they are well recognized within the user community.

What is unique about file sharing in Server 2008 is the codification of this historically subjective service. With Server 2003 and earlier versions, the process to create a file server involved little more than creating a file share on that server and declaring to users that the server now operated as one. With virtually all Windows servers using some form of file shares, in many cases for IT uses alone, this subjective identification of servers as file servers was the cause of some confusion within business organizations. In Server 2008, the role of file serving has been changed to make the process a bit more formal. The File Services Role is now a formalized Server 2008 Role that exists whenever a share is used for the purpose of housing files. This formalization provides administrators with a better line in the sand to identify which servers are responsible for this functionality.


Also changed with Server 2008 is the aggregation of many of the other technologies commonly associated with file serving in previous editions but not directly tied to its management. With Server 2008, components such as the Distributed File System, the File Server Resource Manager, and Services for Network File System have been combined into a single interface within Server Manager for easier administration.
As an example, if you've ever had to search to find and install Services for UNIX in previous OSs, you'll be excited to know that this as well as other components are now easily installed and managed through the Server Manager interface.

But before we get into our discussion of these new features, let's talk a bit about how Server 2008 makes a few changes to the process of sharing folders.

Basic and Advanced Folder Sharing


Even before you get to installing the File Services Role, you'll notice some changes have been made to the process of sharing folders in Server 2008. By right-clicking a folder and selecting Share, the File Sharing wizard appears. This wizard looks similar to the image in Figure 4.2. What you'll immediately notice is that this wizard is somewhat less complex and includes much less functionality than its equivalent in Server 2003 and earlier.

Figure 4.2: Server 2008's simple File Sharing wizard.


With Server 2008, Microsoft has made an effort to separate what we used to think of as traditional share configuration into two different management interfaces. This is similar to how file sharing was first separated at the client level with Windows XP. The first tool, seen in Figure 4.2, is the simple sharing wizard. Here, Microsoft has simplified the process of sharing files. Whereas our standard options for setting permissions on shares were formerly relatively complex, here only three options are available for sharing: Reader, Contributor, and Co-owner:

• Reader: This permission restricts the user or group to viewing files in the folder.

• Contributor: This permission enables the user or group to view and add files to the folder. It also allows them to modify or delete the files that they previously created.

• Co-owner: This permission allows the user or group to view, modify, and delete any files in the shared folder.

Click Share after selecting the users and groups and their appropriate level of sharing. The resulting screen will provide a link to the newly created share that can be either copied to the clipboard or emailed to users as a way of notifying them that the share has been created. In many cases, this simple level of permissioning will be sufficient. However, file shares and their use have advanced to the point in many organizations where greater granularity is needed. Administrators in these environments require more options for locking down and otherwise restricting the use of shares. When more advanced needs arise, it is possible to bring back the Advanced Sharing wizard. To do so, right-click the folder and choose Permissions. There, select the Sharing tab and then click Advanced Sharing. This brings forward the screen shown in Figure 4.3. You'll see that this wizard provides quite a bit more configuration granularity than the simple sharing wizard. It is possible through this wizard to set the maximum number of simultaneous users, provide comments associated with the share, adjust how the folder is handled when users synchronize offline folders, and set share permissions using the traditional permissions wizard.


Figure 4.3: The Advanced Sharing wizard provides more granular setting of share permissions and settings.
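Shares can also be created without either wizard. Server 2008's net share command supports share-level permissions directly; the following is a minimal sketch, assuming a folder at C:\MyShare:

net share MyShare=C:\MyShare /GRANT:Everyone,READ /REMARK:"Team documents"

This creates the share with read-only share permissions for Everyone, roughly equivalent to the Reader level in the simple sharing wizard; NTFS permissions on the folder still apply on top of it.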

If you prefer the Advanced Sharing wizard over what you get with simple sharing, you can change the default behavior. To do so, you'll have to dig quite a bit within the interface; so much so, in fact, that one presumes Microsoft anticipates administrators will prefer using the basic wizard. To change the default, click Tools | Folder Options and navigate to the View tab. There, scroll the list of Advanced Settings and clear the check box named Use Sharing Wizard (Recommended). Click OK to complete the setting.
Why would Microsoft simplify share permissions? Likely because of the propensity of inexperienced administrators to apply too much permissioning. Remember that share permissions work in combination with NTFS permissions on files and folders; thus, simplicity is best. When share and NTFS permissions are overused, troubleshooting later grows excessively complex. By simplifying the share wizard, one presumes Microsoft is attempting to help reduce this complexity.


Installing the File Services Role


With all the disparate shares across a Windows server, it was sometimes difficult in previous editions to locate and manage each individual share and its settings. To assist with this problem, Microsoft incorporated a number of new features into the File Services Role. Among other things, those features consolidate shares and share configurations into a single manageable location. To enable this, you must first install the File Services Role. As with all Roles in Server 2008, installing the File Services Role is done via Server Manager. To begin the process, right-click the Roles node and click Add Roles. When prompted, select the File Services Role in the interface and click Next. The File Services Role is equipped with 10 additional Role Services that augment its functionality. Those services as seen in Server Manager are displayed in Figure 4.4 and will be discussed in detail in the next section.

Figure 4.4: A listing of the Role Services that augment the File Services Role.


At this point, choosing only the default File Server Role Service will enable the basic functionality you're used to seeing with shares in previous OSs. What's interesting about the installation of this role is that it is one of the few roles that does not require this formalized process to be completed in order to gain its functionality. Sharing the first folder on a Server 2008 installation will also automatically install the role. Thus, this installation is much different from others done through Server Manager.

Share & Storage Management

Once the role has been installed completely, a new node in Server Manager called File Services with a sub-node titled Share and Storage Management will be enabled. This new console is shown in Figure 4.5. The Share and Storage Management console provides a single location where all shares and volumes on the server can be managed. Right-clicking any of the shares available in the interface allows you to quickly Stop Sharing as well as modify advanced Properties.

Figure 4.5: The Share and Storage Management console.


Of particular interest in this console are four new screens that handle the management of shares, volumes, sessions, and open files:

• Provision a Shared Folder Wizard: Unlike the basic and advanced tools discussed in the previous section for sharing folders, the Provision a Shared Folder Wizard aggregates all of the potential configurations for sharing folders into a single interface. With this wizard, it is possible to create a new share; provision storage for that share; set share and NTFS permissions for the share; enable the SMB and NFS protocols; set user limits, offline caching, and access-based enumeration; and publish the share to a Distributed File System namespace. All of these configurations are done as part of the wizard, which helps eliminate the steps that were often overlooked in previous OS versions.

• Provision Storage Wizard: The Provision Storage Wizard can only be used when disks are available on the system that are online and have unallocated space. When no disks exist that meet these criteria, an error message appears in place of the wizard. Clicking the Provision Storage Wizard enables the creation of new disk space and the configuration of its size, drive letter or mount point, format options, and allocation size. As with the Provision a Shared Folder Wizard, this tool unifies the previously separated tools found in Disk Management in Server 2003.

• Manage Sessions: From time to time there is a need to close idle users' open sessions with their connected file server. This may be to free a file or to perform some work on the server that requires users to be disconnected. In Server 2003, the tool to locate and close active sessions on a file share was buried in the Computer Management interface under System Tools | Shared Folders | Sessions. In Server 2008, it is prominently displayed in the Actions pane. Figure 4.6 shows an example of this wizard alongside the Manage Open Files tool.

• Manage Open Files: Similar to the Manage Sessions wizard but focused on files rather than users, the Manage Open Files tool provides a single location where files across all file shares can be closed. This is often done when a user accidentally leaves a file in an open state and another user wants to make use of the file. When that occurs, closing the open file enables the second user to begin working with it. This capability was also found in Computer Management within Server 2003 under System Tools | Shared Folders | Open Files but is now available natively within Server 2008 in the Actions pane.


Figure 4.6: The Manage Sessions and Manage Open Files wizards showing the result of the Administrator user opening a connection to a shared folder.

Access-Based Enumeration

Much of the use of the wizards within Share and Storage Management is relatively self-explanatory, with the single exception of Access-based Enumeration (ABE). First incorporated into Windows with the release of Windows Server 2003 R2, ABE enables administrators to hide files and folders from users who do not have permissions to view them. ABE is not new to file shares, having been first used in Novell-based systems many years ago; however, it is relatively new to users of Windows-based file shares. When ABE is enabled for a particular share, the files and folders on that share are reconfigured such that they are not visible to users who do not have Read permissions. Although ABE is configured on a per-share basis, the results of ABE are seen by users on a per-file and per-folder basis. Files and folders that users have Read rights to are visible within the share, while those they don't have rights to are not. The incorporation of ABE can be either a help or a hindrance for users if they are not properly prepared for its introduction into file shares. Although enabling ABE outwardly helps to improve security on file shares by eliminating visibility into files and folders that users don't have access to, it can also be a challenge when users are attempting to find files and folders to which they need access but don't yet have it.
Be careful with the use of ABE. Although it can seem like a good idea to prevent nosy users from snooping around in locations to which they don't have access, ABE makes it difficult for legitimate users to find files and folders for which they may need to request access.


File Services Role Services


In addition to the regular functioning and management of file shares and volumes, the File Services Role includes a set of nine optional Role Services that augment its functionality. These additional Role Services are used to ease the process of provisioning shares to users, monitoring their use and misuse, and enabling support for shares to and from other OSs. In this section, we'll take a look at each of these Role Services in turn.

Distributed File System Namespaces

Distributed File System Namespaces (DFS-N) is a tool used with Server 2008 to aggregate shared folders across multiple servers into a single location. This location is called a namespace and presents the appearance to the user of a single location from which all shares can be accessed. Using DFS-N, users in multiple locations can connect to servers in multiple locations, all through what appears to be a single file share.

Why is this useful? Think for a minute about the shares in a typical Windows network. There may be many, if not dozens or hundreds, of file servers within the network. Those file servers each may contain multiple file shares. They may be located in different places on the network, and each location may have different network connectivity to the client. DFS-N provides a mechanism whereby the file shares across all these servers can be aggregated into a single share that includes pointers to the individual locations.

Traditionally, when users needed to access file shares, IT made those shares available through the use of drive letter mappings. But the use of drive mappings can grow unwieldy when the number of file shares grows large. Adding to the complexity is the dynamic nature of the file shares themselves. With a direct drive mapping at the client, when changes to shares are necessary, a similar change to drive mappings is required at all clients. Depending on the process by which drive mappings are made to clients, this can be a complicated process. With DFS-N, the process to change a share's representation happens in one place for all users. DFS-N effectively adds a layer of abstraction between the client's drive mapping and the file shares they intend to use.

DFS-N installation is done through Server Manager by adding the Distributed File System and DFS Namespaces Role Services to the File Services Role. Upon installing the Role Service, you will be asked whether you want to create a namespace immediately or later using the DFS-N management snap-in. Once the Role Service is installed, a new node called DFS Management will appear in Server Manager under File Services, with two sub-nodes: one for Namespaces and another for Replication. In this section, we'll be using the Namespaces sub-node.

Two types of namespace exist. Standalone namespaces are designed for smaller uses of fewer than 5,000 DFS folders or for environments that do not have AD in place. Standalone namespaces also support clustering using Windows Server Failover Clustering. The second type, domain-based namespaces, is used when AD is in place and administrators want the ability to publish the namespace to AD. Publishing the namespace to AD further abstracts this layer from users and their files and folders.


Publishing the namespace to AD allows users to access the namespace by knowing only their AD domain name and the namespace name. So, for example, with a domain name of realtime-windowsserver.com and a namespace name of MyNamespace, users would access the namespace by using \\realtime-windowsserver.com\mynamespace.
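From a client, the namespace behaves like any other UNC path. As a quick sketch, a user (or logon script) could map a drive to it; the drive letter here is arbitrary:

net use N: \\realtime-windowsserver.com\MyNamespace

Because the mapping targets the domain-based namespace rather than a specific server, the underlying folder targets can later be moved between servers without touching the client's drive mapping.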

As an example, let's create a new domain-based namespace in the realtime-windowsserver.com domain. To do so, right-click the Namespaces node and select New Namespace. In the resulting screen, supply the name of the server that will host the namespace; in this case, that server will be w2008a. In the following screen, provide a name for the namespace. In this example, we will use the name MyNamespace. A button at the bottom of this screen allows for the configuration of the local namespace path as well as shared folder permissions. The default permission is to provide all users with read-only access. The next screen resembles Figure 4.7 and provides the option to select the type of namespace. The domain being used is at the Windows Server 2008 domain functional level, so Windows Server 2008 mode is available as an option. The Windows Server 2008 domain functional level enables support for ABE within the namespace as well as increases the total number of folders the namespace can support. You'll also see that creating a domain-based namespace enables users to connect using the domain name rather than a specific server name. Clicking Create at the final screen creates the namespace.

Figure 4.7: Creating a domain-based namespace with the New Namespace Wizard.


Once the namespace is created, you can add folder targets by right-clicking the namespace and selecting New Folder. In the resulting screen, provide a friendly name and a path to the folder target to add the target into the namespace. Additional advanced configurations can be made by right-clicking the namespace and selecting Properties.
There are some server edition requirements for DFS-N. Server 2003 and Server 2008 Standard Edition as well as Server 2003 Web Edition can host only a single namespace. Server 2003 and Server 2008 Enterprise and Datacenter Editions can host multiple namespaces.

Distributed File System Replication

Whereas DFS-N is used for aggregating file shares into a single interface, Distributed File System Replication (DFS-R) encompasses an entirely different capability. DFS-R enables Server 2008 to multi-directionally replicate files and folders among multiple servers. DFS-R can be considered the upgrade from Server 2003's File Replication Service (FRS). DFS-R is substantially improved over FRS, making possible its use for more than AD's relatively light SYSVOL replication requirements. DFS-R can be used to support replication of file-based data between multiple servers and across multiple network sites. It can be set up as a solution for replicating this data between two or more servers in a multi-master configuration. Alternatively, it can be used as a tool for aggregating data from remote site servers to a local file share for centralized backup. In this secondary configuration, bi-directional replication is established between two servers, though most of the replication will occur from the remote site to the local site. Once replication is established, backup software at the local site can then be used to back up the remote site's replicated data locally.
As a side note, one feature gained when an AD is upgraded to the Windows Server 2008 domain functional level is the use of DFS-R as the service to replicate SYSVOL.

Installing DFS-R requires no initial configuration as part of the installation. To install the Role Service, right-click the File Services node and select Add Role Services. Click through the resulting status screens to complete the installation. Once installed, the process to set up replication between two hosts involves a number of steps.
The DFS-R Role Service must be installed individually on each replication member.

First, right-click the Replication sub-node under DFS Management and select New Replication Group. In the resulting screen, you'll be given the option to select a Multipurpose replication group or a Replication group for data collection. Here, choose the first option to set up multi-master replication. Provide the replication group a name, an optional description, and a domain in the next screen, and in the screen following, enter the group members. The next screen, shown in Figure 4.8, allows for the establishment of the group's topology. As you can see in the figure, three topologies are available: hub and spoke, full mesh, and no topology. The hub and spoke topology is primarily used when replication data is mostly unidirectional, with the hub being the source of most changes. The full mesh is used when equal levels of change are expected among members. The no topology option allows you to determine the topology later, after the group is created.


Figure 4.8: Selecting a DFS-R topology.

The next screen configures the bandwidth or schedule for replication. For continuous replication, you can select a bandwidth throttle in Kbps or Mbps. For scheduled replication, a schedule editor is provided; bandwidth can similarly be throttled in the schedule editor if desired. Next is the selection of the Primary Member. This selection determines which member's data is considered authoritative at the time of first replication and is a critical consideration when data to be replicated already exists on multiple machines. Last is the selection of folders to be replicated. Click Add to select folders to be replicated. Here also is the ability to select how the folders are displayed and any repermissioning to be done. Before establishing the replication, the wizard presents a screen called Local Path of {Folder Name} on Other Members. Here you can select whether you want the replicated folder to be available on remote servers. This is required if you want users to make use of the replicated folder on those remote systems. Once created, the servers will begin replicating after they have been informed of the update through standard AD replication. Additional members or replicated folders can be added by right-clicking the replication group and selecting New Member or New Replicated Folders. A new topology can additionally be designed as necessary by selecting New Topology.
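For scripted deployments, the dfsradmin command-line tool that installs with DFS-R can build a comparable replication group. The following is a rough sketch under illustrative names (MyRepGroup and MyFolder are placeholders, and w2008a and w2008b stand in for your member servers):

dfsradmin rg new /rgname:MyRepGroup
dfsradmin member new /rgname:MyRepGroup /memname:w2008a
dfsradmin member new /rgname:MyRepGroup /memname:w2008b
dfsradmin rf new /rgname:MyRepGroup /rfname:MyFolder

Further dfsradmin subcommands set the membership paths, primary member, and connections; run dfsradmin /? to explore them.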


DFS-R works well as a replication tool when the same data is not being changed at the same time in multiple places. Problems arise when one user attempts to modify a file in one location while, at the same time, another user attempts to modify the same document in another location. When this behavior occurs within the environment, DFS-R can be problematic due to replication conflicts between the changed documents. In this case, another tool such as Microsoft Office SharePoint Server, with its ability to check in and check out documents, may be a better solution.

One element to pay close attention to is called Create Diagnostic Report. When replication begins to experience problems, this report creation utility can assist with quickly finding the source of the problem. Three types of reports are available:

• Health reports detail the health and efficiency of the replication connection.

• Propagation tests verify replication by creating a test file in a replicated folder.

• Propagation reports detail the replication of a test file through a replication test.

File Server Resource Manager

A management tool that first appeared with Server 2003 R2 and is natively available in Server 2008 is the File Server Resource Manager (FSRM). This tool enables administrators to monitor the use of files and folders within their environment. That monitoring can provide periodic storage reports to identify usage trends as well as create and manage template-based user quotas across multiple file shares. It can also enforce file type screening that prevents users from storing inappropriate (and wasteful) files on file servers.
If you're tired of user MP3 files hogging your storage space, you can use FSRM to screen these files from being saved to high-dollar enterprise storage.

Installing the FSRM Role Service to a server involves the same process as has been used for each of the other Role Services we've discussed to this point. When installing FSRM, you will be asked to optionally identify the disks to be monitored for Storage Use Monitoring. These disks can be selected during the installation or afterward. Once installed, FSRM is administered under the Share and Storage Management node of the File Services Role. FSRM's capabilities are broken down into three areas: quota management, file screening management, and storage reports management. We'll discuss each of these in the following sections.


Quota Management

Quotas are designed to be template-based policies that govern users' capacity to store files and folders within a particular file share or server. These templates allow users to store a certain number of megabytes or gigabytes on a file share or server while providing reports and/or alerts to users when they go beyond their allowable limit. Figure 4.9 shows an example of a template that gives users a 200MB hard limit with user notifications at 85%, 95%, and 100% of that limit. Setting a hard limit means that users will be prevented from exceeding that limit. Soft limits are used for monitoring users' use of space and will not prevent users from going beyond their stated limit.

Figure 4.9: An FSRM quota template limiting users to 200MB of space.


By clicking any of the notification thresholds and selecting Edit, you will be shown the type of warning (email, event log, command, or report) and the specifics for configuring each. Email and report notifications require an SMTP server to be configured to accept the email traffic.
The concept behind these notifications is to provide a visual indication to users when they are approaching and when they reach their maximum level of space on file servers. The idea is to encourage users to limit the amount of data they store on file servers and clean up their work when necessary so that you don't have to.

Once a template is created that includes the size limits, actions, and notifications of value, the next step is to create a quota based on that template. This can be done by right-clicking the template and choosing Create Quota from Template. In the resulting screen, you'll be asked to identify the root path to which the quota template will be assigned. The template can be extended to support all subfolders of that path to ensure full compliance with the template.

File Screening Management

File screens can also help in preventing or deterring users from storing inappropriate file types on file servers. The difference between file screens and quotas has to do with the types of files prevented from storage as opposed to the quantity. File screening management is broken into three separate elements that work together in creating a policy:

• File groups: File groups are categories of files and the associated extensions that relate to each category. For example, the file group Audio and Video Files includes 37 file extensions, such as .MP3 and .AVI, for restricting audio/video files. New groups can be created, and extensions can be added to and removed from existing groups, tailored to the needs of the environment.

• File screen templates: Once a file group is assigned, it can be incorporated into a file screen template to define the actions shown to users when they attempt to store a configured file type. Similar to quotas, file screen templates can make use of email, event log, command, and report notifications to alert users when they have attempted to store a configured file type. File screen templates can be set to actively block the file type from storage or merely record its storage for monitoring purposes.

• File screens: This element brings together the settings from the other two and applies them to a location on a server. Figure 4.10 shows an example of how a Block Audio and Video Files template, which leverages the Audio and Video Files group, is being set for the C:\MyShare folder.


Figure 4.10: A file screen that prevents the storage of Audio and Video files.
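FSRM also installs command-line counterparts, dirquota.exe and filescrn.exe, that can apply templates in bulk. A minimal sketch, reusing the C:\MyShare path from Figure 4.10 and two of the templates that ship with FSRM:

dirquota quota add /path:C:\MyShare /sourcetemplate:"200 MB Limit Reports to User"
filescrn screen add /path:C:\MyShare /sourcetemplate:"Block Audio and Video Files"

Because both commands accept a path and a template name, they are easy to loop across many shares in a batch file.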

Storage Reports Management

Storage reports management schedules and views reports based on the use of storage in a particular volume or folder. Eight reports are available, and each can be customized to a minimal extent. Depending on the needs of the administrator, reports can be run on demand or scheduled to occur at other times. Also, depending on the size of the folder or volume being reported against, creating a report can take an extended period of time. Thus, scheduling reports to occur at regular intervals will prevent waiting for a report to complete. Preparing scheduled reports also helps with identifying trends in storage use. Reports can be optionally delivered to an email account when run or viewed within the interface. Figure 4.11 shows an example Storage Reports Task and the storage reports to be generated as part of the task.


Figure 4.11: A Storage Report Task that analyzes usage on C.

Services for Network File System

In heterogeneous environments, it is often necessary for UNIX- and Linux-based machines to connect with and exchange files with Windows-based machines. Because UNIX/Linux uses a different file sharing protocol than does Windows, the conversion required in making this connection necessitates that software exist on one end or the other that supports the transfer. Server 2008 includes as part of the File Services Role a Role Service called Services for Network File System (NFS). This Role Service enables Server 2008 either to connect to an NFS mount on a UNIX/Linux host or to host a Windows share as an NFS mount for serving to those same hosts. The process of installing the Services for NFS Role Service is similar to doing so for the other services we've discussed to this point.


Once installed, Services for NFS creates new options within the Share and Storage Management node and adds the Services for NFS option in Administrative Tools. From there, click the link in the Actions pane titled Edit NFS Configuration. Because UNIX/Linux and Windows often have segregated identity management tools, it is necessary to configure identity mapping between UNIX/Linux identities and Windows identities if you want to use permissions other than anonymous. Clicking Identity Mapping Wizard in the resulting screen brings forward the wizard, which can configure AD as the mechanism for identity mapping. Once mapping is set up properly, new NFS-enabled shares can be created through the Provision a Shared Folder Wizard. In the screen titled Share Protocols, enable the NFS protocol as shown in Figure 4.12 and provide a share name. The share path will be populated by the wizard and is the path that NFS clients will use to connect. Enabling the NFS protocol adds a new screen to the wizard titled NFS Permissions, where client groups and host permissions can be set for the share.

Figure 4.12: Enabling NFS on a share in the Provision a Shared Folder Wizard.


NFS handles permissions through a much different model than NTFS, allowing per-host restrictions but limiting permissions to Read-Only and Read-Write. Anonymous permissions can similarly be set on the folder by checking the box for Allow anonymous access and setting the NTFS permissions on the folder to grant the correct level of access to the Everyone group, though this is not recommended due to security concerns. Also possible with Services for NFS is the ability to connect a Windows server to a UNIX/Linux NFS mount using Client for NFS and the command-line mount command. This command connects an NFS mount to a local drive letter and requires special configuration on the UNIX/Linux host for functionality.
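As a minimal sketch, assuming a hypothetical UNIX host named unixhost1 that exports /export/data with anonymous access allowed, the following commands map that export to a local drive letter and later disconnect it:

rem Map the NFS export to drive Z: using anonymous credentials
mount -o anon \\unixhost1\export\data Z:

rem Disconnect the mapped NFS mount
umount Z: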
Cross-OS file sharing and identity mapping are complex topics that are outside the scope of this chapter. More information about Services for NFS, including its client and server components, can be found on the Microsoft Web site at http://technet2.microsoft.com/windowsserver2008/en/library/187ea492-4d0e-41d9-a11c-05f5fea922061033.mspx?mfr=true.

Windows Search Service

For environments with small file servers and few users, Microsoft provides the native Windows Search Service. This tool, which replaces the Indexing Service found in Server 2003, speeds the process of searching for files and folders on file shares. Clients must be specifically configured to support the Windows Search Service; Windows Vista includes native support, while Windows XP and Server 2003 require a separate installation of the Windows Desktop Search client. This client is a free download from the Microsoft Web site. The process of installing the Windows Search Service Role Service is similar to that of the other services we've discussed to this point. As part of the installation, you will be asked to select the local volumes you want to index.
The indexing process can consume a noticeable amount of server processor resources.

Once installed, the configuration for the Windows Search Service is found under Control Panel | Indexing Options. There you can configure which folders should be indexed and which should be excluded from indexing. Figure 4.13 shows the result of selecting Advanced. There, it is possible to configure the indexing of encrypted files as well as change the location of the index, restore defaults, or completely rebuild the index. As you'll see, much of the indexing service, other than configuring which folders should be part of the index, operates behind the scenes.


Figure 4.13: The advanced options within the Indexing Options control panel settings.

Windows Server 2003 File Services

Additional Role Services are available for legacy support of Server 2003 file services. These Role Services install the predecessor to DFS-R, called the File Replication Service (FRS), as well as the predecessor to the Windows Search Service, called the Indexing Service. These two features are used in environments in which support for previous versions is needed to maintain functionality with legacy systems. In both cases, DFS-R and the Windows Search Service provide performance, stability, and management benefits over their predecessors. Thus, these legacy versions should be installed only in situations in which previous-version support is required.


Properly Managing Storage Eliminates Critical Downtime


Server 2008 comes equipped with a host of new and improved features that improve the experience of hosting and managing storage on server systems. The proper management of that storage ensures the highest levels of uptime. Ensuring that storage is used for the right business purposes, is easy to locate, can be replicated to locations both local and remote for redundancy and performance reasons, and can be quickly searched for critical data ensures that users gain access to their critical file-based data with the highest levels of efficiency. As you've seen throughout this chapter, Server 2008 provides a host of options for fulfilling each of these critical storage needs. In our next chapter, we'll move away from what we're used to considering the full version of Server 2008 to what some think of as "Server 2008 lite." Server Core is a command-line version of Server 2008 that is specially designed for certain applications. Its lower hardware requirements mean it can be installed on older equipment, which extends the lifetime of existing hardware. And its reduced attack surface makes it an excellent addition for quasi-secured environments and branch offices. The only hard part is learning how to administer it completely from the command prompt. In Chapter 5, we'll talk about Server Core with a special focus on how to do just that.


Chapter 5: Server Core


At times throughout the career of every IT administrator comes the thought, "There's got to be a different way to do this." Administering servers in the data center, out in the field, and within branch offices across the network involves different sets of skills and experience. The processes that work well for daily administration on a Windows server physically residing down the hall are quite different from those for servers that sit down the street or on the other side of the world. In addition to handling the common tasks associated with the daily care and feeding of Windows servers, administrators are required to possess another suite of skills to keep those servers secure. Windows has long held a stigma, most often directed from the UNIX world, that it is an easily hackable operating system (OS). This stigma in many ways originates from Windows' design goals. Windows Server has always tried to be everything to everyone. This monolithic structure makes the Windows OS highly flexible: the same Windows OS that runs a Web server can also be used as a file server or an email server, and applications are relatively guaranteed to function across all instances of the same OS version. But at the same time, this flexibility introduces numerous additional touch points for a would-be attacker, each of which is a vector for potential compromise. With Windows Server 2008, Microsoft takes a hard look at that critical thought. In developing Windows Server 2008, Microsoft realized that in a few special circumstances there is another way to do Windows administration. There are situations where administrators want to sacrifice flexibility in exchange for greater security, a lower profile, and easier remote administration. Server hardware that would otherwise be seeing its end of life can be repurposed for special needs. Enlightened administrators who are willing to change their processes just a little for these special circumstances stand to gain through a streamlined OS that sports minimal interfaces and limited configuration options. That new and different way is Windows Server Core.


What Exactly Is Server Core?


To talk first about what Server Core is, it is in many ways easier to start with a discussion of what it is not. Understanding what Server Core isn't designed to do for your network environment goes far in understanding what it can do. Once you understand its focused potential, you'll find that there are likely places within your network environment where it can be well positioned. In short, Server Core is not:

• A completely different OS or OS version. That being said, Server Core is limited in the types of workloads that it can process. Most of the typical applications that you would install to a full Server 2008 instance won't work in Server Core.

• A different SKU or Windows Server edition. Microsoft does not consider Server Core to be a separate edition of Server 2008. Thus, buying a copy of Server 2008 nets you the privilege of installing it either as a full instance or a Server Core instance. Server 2008 in Standard, Enterprise, or Datacenter editions can be installed with the Server Core option, and each installation enjoys the benefits associated with its related edition.

• Linux. Considering how it looks at first blush, Server Core could have been titled Windows Server 2008: DOS Edition. The skills necessary to install, manage, and use Server Core are not all that different from those needed to manage a Windows installation from the command line. Server Core is fundamentally command-line driven, with the command prompt as its primary interface at the console. Traditional Windows command-line tools that do not have a graphical interface are your tools for configuring and managing your Server Core instances at the console. Because of this, if you are familiar with the command-line variants of common GUI tasks, you won't find yourself learning a new OS in order to use Server Core.

• A solution for every IT need. By default, Server Core can support only a limited number of the Roles and Role Services otherwise enjoyed by a full Server 2008 instance. Thus, you won't be able to run some workloads on Server Core. Period. In this chapter, we'll talk about the Roles available for Server Core and when and where you're likely to use them.

In considering these bullets explaining what Server Core is not, we can now further define what a Server Core instance is intended to be. First and foremost, Server Core is designed to be a slimmed-down version of the full Server 2008 OS. If you remember back to Chapter 1, we discussed how one of the major tenets of Server 2008 involved the componentization of the Windows OS. This breaking apart of the traditional processes associated with running the OS has enabled Microsoft to nail up hard dividing lines between functionalities and gain a true understanding of their interrelations. As we discussed there, most of the fruits of Microsoft's componentization efforts are behind the scenes and something you'll never see. But others, such as Roles, Role Services, and Features, are very obvious new additions. Once Microsoft took the time to formalize the interconnections between individual components of the Windows OS, it became a much easier task to figure out which pieces could be pulled out to create this minimal new OS installation option. By trimming out large sections of the Windows OS (the GUI, numerous services, management endpoints, the managed code runtime, and so on), Microsoft was able to create Server Core as an OS solution that runs on slower and older hardware while sporting fewer places for a would-be attacker to penetrate.

What's particularly exciting about Microsoft's implementation of Server Core in Server 2008 is that most of the traditional GUI-based remote management interfaces remain available. Though your console interaction with Server Core will be highly command-line driven, as seen by the sparse desktop in Figure 5.1, most of your administration will be through remote interfaces such as the Remote Server Administration Tools (RSAT).

Figure 5.1: Server Core's desktop at the console displays little more than a command prompt window.

RSAT is the replacement for the Administration Tools (adminpak.msi) previously found on the Windows Server 2003 media. This toolset, which is typically installed to management workstations such as Windows Vista, includes many of the tools originally found in adminpak.msi as well as a number of new tools, and can be downloaded from the Microsoft Web site at http://support.microsoft.com/kb/941314.


Positioning Server Core in Your Environment


This understanding of what Server Core does and doesn't attempt to be should assist you with understanding what it truly is. At the same time, it should help you gain an understanding of where it can be best positioned in your environment. Server Core provides three specific categories of benefit to the IT environment:

• Reduced hardware needs. At minimum, Server Core can be installed to a server with a 1GHz x86 processor or a 1.4GHz x64 processor, 512MB of RAM, and 10GB of disk space. Although these are the same minimum requirements as for the full version of Windows Server 2008, Server Core's minimal footprint brings its effective needs closer to those minimums than is seen with full instances. This is particularly useful when environments want to retain aging server hardware that would otherwise be nearing the end of its operational life cycle. It is also handy when Server Core is used to host virtualization environments such as Hyper-V. With Server Core as the primary partition in a Hyper-V installation, the reduced resource requirements free processing power for residing virtual machines.

• Reduced attack surface. Microsoft patching touches all components of the Windows OS. When fewer components of the OS are present on a particular server, fewer patches are required to keep the instance up to date. With fewer interruptive patches required for a Server Core installation and fewer touch points for a potential attacker, Server Core's reliability and security are greatly enhanced.

• Reduced management requirements. With the reduction in capability to host server functionalities comes an equal reduction in the level of management required for a Server Core instance. Thus, managing Server Core will likely involve less time than managing a full instance. Going further with this concept is the nature of command-line management itself. Because the individual commands used to manage a Server Core instance can be easily wrapped into batch files or scripts, you'll quickly develop a pool of configuration scripts useful for rapidly completing most major tasks.

If Server Core's initial learning curve frustrates you, consider this last point thoroughly. Once you've finished educating yourself on the new commands necessary to get a Server Core instance operational, keep those commands close at hand. You'll be able to consolidate them into a small number of batch files and/or scripts for later configuration of additional systems. We'll talk about many of the critical ones you'll need to get started later in this chapter.


Considering these categories of benefit along with its architecture, there are a few areas within your IT environment where positioning Server Core can best fulfill your needs:

• Branch office environments. In many cases, branch office IT environments don't include the same level of security and control as those at a business's primary location. When these small environments determine the need for server resources to be hosted on-site at the branch office, those server resources often find themselves stored in quasi-secured locations such as closets and under the desks of semi-trusted individuals. This reduction in overall security in many ways makes branch office servers some of the highest-risk assets in an IT infrastructure. The graphical nature of the full Windows OS can also make the administration of these servers challenging across latent WAN connections. By leveraging Server Core in these environments, it is possible to use command-line management and the added intrinsic security of Server Core to better handle the needs of branch offices. Also, the resource needs of branch offices are typically lower, so Server Core's reduced hardware footprint enables the use of lower-performing or otherwise end-of-life hardware.

• Infrastructure servers. Infrastructure servers such as DNS, DHCP, domain controllers, and file servers typically have a lower overall usage profile than other servers. These servers often fulfill a small function or even a single function that does not require substantial hardware resources. In these cases, Server Core's reduced hardware footprint and streamlined OS make it an excellent fit for infrastructure servers. The streamlined OS reduces the chance for application or configuration conflict, while the modest hardware needs allow otherwise end-of-life machines to remain in service for low-use needs.

• Virtualization. In many ways the opposite of the previous two examples, Server Core works well in the high-use environments typically seen on virtualization hosts. For virtualization platforms such as Hyper-V, a primary partition is needed to handle the functionality of the host machine. That primary partition is not a virtual machine, but its presence and functionality are necessary for the host to support residing virtual machines. Server Core requires substantially fewer resources than a full instance of Server 2008. This reduced footprint enables more resources to be targeted for use by residing virtual machines, which increases both the capacity for virtual machine workload and the number of simultaneously hosted virtual machines on the host.


Installing Server Core


The installation of Server Core involves the same processes as installing a full instance of Server 2008. Server Core can only be installed from the Server 2008 media, as it is an installation option rather than an installable Role or Role Service. Considering this, there are some important notes concerning the installation of Server Core:

• It is not possible to upgrade from a previous version of Windows Server to a Server Core installation. A clean installation is the only supported option.

• It is not possible to upgrade from a full installation of Server 2008 to Server Core.

• It is not possible to upgrade from Server Core to a full installation of Server 2008.

Considering these gotchas, the first step in installing Server Core is to simply insert the Server 2008 media into the drive of your computer and start the machine. Then, use the following steps to complete the installation:
1. When the Install Windows screen appears immediately after startup, click Next.

2. At the resulting screen, click Install Now.

3. A screen will appear that looks similar to Figure 5.2. You'll see that Server Core installation options appear for each edition of the OS. Select the Server Core version of the edition you have purchased, and click Next.

4. Select the I accept the license terms check box, and click Next.

5. Click Custom (advanced) to perform a clean installation. This should be the only option available.

6. In the resulting screen, you will be provided options for the drive on which to install Windows. Select the appropriate drive, and click Next. If additional tasks are required, such as the loading of disk drivers or the formatting of existing drives, those can be done through the interface at this point.


Figure 5.2: The Server 2008 installation routine is the only place where Server Core can be selected for installation.

After these very simple steps, the Server Core instance will begin its installation. Note the speed at which the installation completes. A Server Core installation uses substantially fewer files, which decreases the total time necessary to complete its installation.


Configuring Server Core


As with the full installation of Server 2008, Server Core's install routine does not ask most of the questions you're used to seeing in previous OS versions. Thus, those configurations must be done after the installation. In the full version, the Initial Configuration Tasks wizard handles many of these needs. But with Server Core, there is no GUI, so there is no Initial Configuration Tasks. Thus, to get your newly installed Server Core instance on the network, added to the domain, and generally ready for management through remote utilities, there are several actions that need to be completed from the command line. If you are unfamiliar with the command-line adjuncts to the GUI configurations you're used to seeing in the full version, this process can be somewhat daunting. But know that only a short setup is required before the machine is ready for remote administration. In this section, I'll provide you the short list to get you started. Once we get our new machine's network connection correctly configured, we'll then discuss customization elements as well as how to begin the process of installing Roles, Role Services, and Features.

Initial Configuration

The first step after the installation process completes and the machine reboots is the initial login. After the installation, a Server Core instance can be logged in to using the administrator username and a blank password. Upon logging in, the interface will require an initial password change. Complete this process to bring forward the Server Core user interface. The resulting desktop looks like the screen we've already seen in Figure 5.1. Once there, use the following set of tasks to complete an initial configuration:
1. First, reset the password for the administrator account if necessary. If the initial administrator password you just set requires changing, either now or at a later point, you can do so using the command

net user administrator {password}

2. As with a full version installation, the initial computer name will be a randomized string of characters. That computer name can be changed to something more appropriate to your organization's naming standards using the command

netdom renamecomputer %computername% /newname:{newComputerName}

3. Restart the computer to accept the name change. To immediately restart the computer, forcing closed all running processes and applications, use the command

shutdown /r /f /t 0

4. Once the reboot is complete, identify the network interfaces available on the computer. Do so with the command

netsh interface ipv4 show interfaces

For a server with a single interface, the result from this command may resemble what Listing 5.1 shows.
C:\Users\administrator>netsh interface ipv4 show interfaces

Idx  Met  MTU         State      Name
---  ---  ----------  ---------  ---------------------------
  2   10  1500        connected  Local Area Connection
  1   50  4294967295  connected  Loopback Pseudo-Interface 1

Listing 5.1: The result from running the netsh interface ipv4 show interfaces command.

5. You will use the interface name information from step 4 to change the IP address, subnet mask, and default gateway of this server. Do so all in one step using the command

netsh interface ipv4 set address {interfaceName} static {ipAddress} {subnetMask} {defaultGateway}

Replace {interfaceName} with the name received from step 4. In the example shown in Listing 5.1, that name is "Local Area Connection" (with the quotes).

6. This server will require name resolution in order to connect to a domain. Point it to a DNS server using the same interface name as in the previous step and the command

netsh interface ipv4 add dnsserver name={interfaceName} address={dnsServerIpAddress} index=1

7. It might be necessary to install and activate the license key for this server. That license key can be installed via the command line using the slmgr tool. This tool is used in Vista and Server 2008 for managing all components of licensing from the command line. Install and activate a license key using the twin commands

slmgr -ipk {licenseKey}

and

slmgr -ato

8. Next, you'll want to join this computer to the domain. Once the Server Core instance is joined to the domain, Group Policies that relate to the instance will begin applying, which will assist with some of the further configuration tasks. You can join this computer to the domain using the command

netdom join {computerName} /domain:{domainName} /userD:{domainUsername} /passwordD:{domainPassword}

9. Reboot the computer to complete joining the computer to the domain.


Though at first blush this can seem like a complicated process for what would normally be a few mouse clicks through a GUI, remember that command-line actions are easily scriptable. It is trivial to group all these commands into a few batch files that complete all the necessary pieces for you at once. Consider two batch files:

Batch File 1

netdom renamecomputer %computername% /newname:w2008c
shutdown /r /f /t 0

Batch File 2

netsh interface ipv4 set address "Local Area Connection" static 192.168.0.20 255.255.255.0 192.168.0.1
netsh interface ipv4 add dnsserver name="Local Area Connection" address=192.168.0.10 index=1
slmgr -ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
slmgr -ato
netdom join w2008c /domain:realtime-windowsserver.com /userD:administrator /passwordD:P@ssw0rd!
shutdown /r /f /t 0

Or, if you're not a fan of batch files but like scripted installations using Windows System Image Manager (WSIM), it is also possible to create an autounattended.xml file that installs Server Core with many of these configurations already pre-generated. By using WSIM as we discussed in Chapter 1 to create unattended installation files, this entire process can be rolled into the core installation itself.

Customizing Server Core

Even upon the conclusion of the previously listed tasks, there may still be several customizations you want to make to your Server Core console to ease your later management activities. Once these tasks are complete, your Server Core instance will still look a lot like what we first saw back in Figure 5.1. Also, unless Group Policies are created that specifically set these next few values, they remain at their default configurations. In this section, we'll talk about some of the additional commands you might want to run to further configure your Server Core instance. In many cases, the process to customize the interface involves directly manipulating the registry. One of the few Server Core GUI tools available at the console is Registry Editor, launched with the
regedit

command (see Figure 5.3). But in keeping with the command-centric nature of Server Core, the following registry manipulations will use the command-line Registry Editor tool reg.


Figure 5.3: The same Registry Editor seen with other versions of Windows is available in Server Core, launched using the regedit command.

This tool is available in virtually every edition of Windows and is useful for adding and removing keys and values as well as manipulating existing ones. Learning the effective use of the reg tool is useful for scripting many of these operations, and experience with it dovetails well into the same processes you can use to manage full Server 2008 instances:

• Removing unused IPv6 interfaces. Most IT environments haven't yet made the jump to IPv6, but with Server 2008, Microsoft has made IPv6 an important part of the networking stack. In fact, if IPv6 functionality is not removed from your Server Core (or Server 2008) instance, you might occasionally get IPv6-related responses when running common networking commands. To prevent this, it is possible to remove IPv6 functionality from your Server Core instance through the registry. Do so with the command
reg add HKLM\SYSTEM\CurrentControlSet\Services\tcpip6\Parameters /v DisabledComponents /t REG_DWORD /d 0xff
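To confirm the value was written, you can read it back with reg query; note that the change does not take effect until the server is rebooted:

reg query HKLM\SYSTEM\CurrentControlSet\Services\tcpip6\Parameters /v DisabledComponents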

• Enable and configure the Windows Firewall. The Windows Firewall is significantly improved in Server 2008, with a large effort toward making the firewall easily manageable through Group Policy. It can, however, also be activated and configured individually through the command line or through the Windows Firewall with Advanced Security console in Administrative Tools. The command-line tool netsh is typically used to manage Windows Firewall connections, though its syntax can be somewhat challenging and lacks much of the easy-to-use configuration interface available in the GUI console. If you do not plan to use Group Policy to manage your Windows Firewall but instead want to manage it for the local machine only, it is possible to enable remote management of the firewall from another computer. Do so with the command
netsh advfirewall set currentprofile settings remotemanagement enable

Once entered, create a new MMC on a full Server 2008 or Vista machine using the mmc command. Click File | Add/Remove Snap-in and add the Windows Firewall with Advanced Security snap-in to the console. In the resulting screen, you'll be asked to identify the computer whose firewall you want to manage. Enter the name of your Server Core computer, and you can then remotely manage its firewall through the GUI.

• Manage other configurations and connections. The settings for six additional configuration areas of your Server Core instance are consolidated into the scregedit.wsf tool. This script is designed to assist with a series of somewhat unrelated configurations, which are outlined in Table 5.1 for clarity. To use the scregedit.wsf tool, first navigate to the C:\Windows\system32 folder and use the command format
cscript scregedit.wsf {configuration} {value}

For each configuration, to view the currently set value, use the command

cscript scregedit.wsf {configuration} /v
cscript scregedit.wsf /AU 4
    Automatic Updates: Enable Automatic Updates and automatically download and install updates.

cscript scregedit.wsf /AU 1
    Automatic Updates: Disable Automatic Updates.

cscript scregedit.wsf /AR 0
    Terminal Services: Allow Remote Administration connections.

cscript scregedit.wsf /AR 1
    Terminal Services: Disable Remote Administration connections.

cscript scregedit.wsf /CS 1
    Terminal Services: Require Network Level Authentication for Terminal Services connections. This setting requires the Remote Desktop Client to be at version 6.0 or greater.

cscript scregedit.wsf /CS 0
    Terminal Services: Do not require Network Level Authentication for Terminal Services connections. This setting allows Remote Desktop Clients of any version to connect to the server.

cscript scregedit.wsf /IM 1
    IPSec Monitor: Allow remote management of IPSec using the IPSec Monitor.

cscript scregedit.wsf /IM 0
    IPSec Monitor: Do not allow remote management of IPSec using the IPSec Monitor.

cscript scregedit.wsf /DP {value}
    DNS SRV: Changes the priority for Domain Controller DNS SRV records for this server to {value}. This number can be a value from 0 to 65535. The recommended value is 200.

cscript scregedit.wsf /DW {value}
    DNS SRV: Changes the weighting for Domain Controller DNS SRV records for this server to {value}. This number can be a value from 0 to 65535. The recommended value is 50.

cscript scregedit.wsf /CLI
    Command Line Reference: Provides a list of common commands used at the command line to manage a Server Core instance.

Table 5.1: The scregedit.wsf command can be used to set a number of Server Core configurations.

• Viewing and manipulating running processes. Though its instantiation is not obvious, it is possible to launch the standard Task Manager to view processes, running applications, and the performance of the server. Do so using the Ctrl+Alt+Del sequence to bring forward the Windows security screen, and then click Start Task Manager.

• Creating additional command prompt windows. If you like to multi-task with more than just one command prompt window, it is possible to create additional ones through the Task Manager. First, bring forward the Task Manager as explained in the previous bullet. Then, click File | New Task (Run). In the resulting window, enter
cmd

and click OK. Any number of additional command prompt windows can be created using this method.
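As a shortcut, a new command prompt window can also be spawned directly from an existing prompt without a trip through the Task Manager:

start cmd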


When you bring forward the New Task (Run) window, you'll see the Browse button. But if you click the button, you'll notice that nothing happens. Here and in a number of places within the Server Core OS, there are elements and buttons that don't work. In this case, the components that run Windows Explorer are not present in a Server Core instance. So, if you find a button that doesn't work, don't fret. It is possible that its functionality has been removed.

• Modifying time, date, regional, and language options. Though most GUI controls have been removed, two Control Panel applets still function within Server Core: the Date and Time control panel and the Regional and Language Options control panel. To configure these settings, use the commands
control intl.cpl

and
control timedate.cpl

• Changing the screen resolution. Though Server Core is predominantly focused on command-line use, it still resides within a graphical quasi-desktop; hence the blue-green screen upon which the command prompt window appears. To change the resolution of the screen at the console, you can do so at the command line by modifying the registry. First, you must determine the hexadecimal values for the width (the x axis) and height (the y axis) of your desired resolution. This can be done using the Windows calculator. As an example, an 800 x 600 resolution converts to the hexadecimal values 320 and 258. Each Server Core instance will also have a number of GUID values in the registry location HKLM\SYSTEM\CurrentControlSet\Control\Video. Trial and error will assist you with locating the proper GUID to change for your console display. Then, plug those values into the two commands
reg add HKLM\SYSTEM\CurrentControlSet\Control\Video\{GUID}\0000 /v DefaultSettings.XResolution /t REG_DWORD /d {xAxisValueInHex}

and
reg add HKLM\SYSTEM\CurrentControlSet\Control\Video\{GUID}\0000 /v DefaultSettings.YResolution /t REG_DWORD /d {yAxisValueInHex}
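As a worked example, 1024 converts to hexadecimal 400 and 768 converts to hexadecimal 300, so setting a 1024 x 768 console resolution, with {GUID} standing in for the GUID you located through trial and error, would look like the following; the new resolution won't appear until the server is rebooted:

reg add HKLM\SYSTEM\CurrentControlSet\Control\Video\{GUID}\0000 /v DefaultSettings.XResolution /t REG_DWORD /d 0x400

reg add HKLM\SYSTEM\CurrentControlSet\Control\Video\{GUID}\0000 /v DefaultSettings.YResolution /t REG_DWORD /d 0x300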

• Changing the number of colors. With the correct GUID information for your console display, it is also possible to change the number of colors displayed on the desktop. The number of colors is determined by a value known as bit depth. For bit depth, 8 bits equals 256 colors, 16 bits equals High Color, and 24 bits equals True Color. Use this information as well as your GUID in the command
reg add HKLM\SYSTEM\CurrentControlSet\Control\Video\{GUID}\0000 /v DefaultSettings.BitsPerPel /t REG_DWORD /d {new size in hex}


Installing Roles, Role Services, and Features

After you have the interface configured and customized to your liking, the next step is to install the necessary Roles, Role Services, and Features that enable the desired functionality of your server. As we've already discussed, Server Core does not include all the functionality associated with every component normally found in the full edition of Server 2008. Server Core is limited to only a small subset of the complete set of Roles, Role Services, and Features. That subset effectively includes the following Roles and their associated Role Services.
Note that for each Role, not all Role Services are necessarily available. As an example, for the Web Server (IIS) Role, only a small subset of the more than 40 possible components is available.

The following list highlights the Roles available in Server Core as well as their titles as labeled in the interface:

• Active Directory Domain Services (DirectoryServices-DomainController-ServerFoundation)
• Active Directory Lightweight Directory Services (DirectoryServices-ADAM-ServerCore)
• DHCP Server (DHCPServerCore)
• DNS Server (DNS-Server-Core-Role)
• File Services (FRS-Infrastructure)
• Web Server (IIS) (IIS-WebServerRole)
• Print Services (Printing-ServerCore-Role)
• Streaming Media Services (MediaServer)
• Hyper-V (Microsoft-Hyper-V)

Be aware of the restrictions not only on Roles but also on Role Services when considering what workload you intend your Server Core instance to host.

Server Core can also host a small subset of Features. These Features typically relate to server-based functions and, for obvious reasons, do not include most of the typical client-side management utilities:

• BitLocker (BitLocker)
• Client for NFS (ClientForNFS-Base)
• Failover Clustering (FailoverCluster-Core)
• Web Server Management Tools (IIS-WebServerManagementTools)
• Removable Storage Management (Microsoft-Windows-RemovableStorageManagementCore)
• Multipath I/O (MultipathIo)
• Network Load Balancing (NetworkLoadBalancingHeadlessServer)
• Quality Windows Audio Video Experience (QWAVE)
• Server for NFS (ServerForNFS-Base)
• SNMP (SNMP-SC)
• Subsystem for UNIX-based Applications (SUACore)
• Telnet Client (TelnetClient)
• Windows Server Backup (WindowsServerBackup)
• WINS (WINS-SC)

Be aware that for some components, an additional download and installation may be required, as in the case of the Hyper-V role.

Although the full version of Server 2008 includes a native command-line tool for installing and uninstalling these components, servermanagercmd.exe, that tool does not function on the Server Core OS. Because of Server Core's inability to run managed code, other tools are required to manage package installation and uninstallation. Two commands are used in Server Core for this activity. The first, oclist.exe, is used in much the same way as servermanagercmd.exe with its -query switch. Running oclist.exe on a Server Core instance shows the components that can be installed as well as those that are already installed. Listing 5.2 shows an example snippet of the results from this command. To use the command, simply run
oclist

from the command prompt.


C:\Users\administrator>oclist

Use the listed update names with Ocsetup.exe to install/uninstall a server role or optional feature.

Adding or removing the Active Directory role with OCSetup.exe is not supported. It can leave your server in an unstable state. Always use DCPromo to install or uninstall Active Directory.

=======================================================================

Microsoft-Windows-ServerCore-Package
Not Installed:BitLocker
Not Installed:BitLocker-RemoteAdminTool
Not Installed:ClientForNFS-Base
Not Installed:DFSN-Server
Not Installed:DFSR-Infrastructure-ServerEdition
Not Installed:DHCPServerCore
Not Installed:DirectoryServices-ADAM-ServerCore
Not Installed:DirectoryServices-DomainController-ServerFoundation
Not Installed:DNS-Server-Core-Role
Not Installed:FRS-Infrastructure
Not Installed:IIS-WebServerRole
|
|--- Not Installed:IIS-FTPPublishingService
|    |
|    |--- Not Installed:IIS-FTPServer
|
|--- Not Installed:IIS-WebServer
     |
     |--- Not Installed:IIS-ApplicationDevelopment
          |
          |--- Not Installed:IIS-ASP
Listing 5.2: The result from running the oclist command.

You'll see in Listing 5.2 that components are tiered to show dependencies, making the relationships between components easy to read within the command prompt interface. Installed and not-installed components are marked in the result, showing you which components are already functioning on the server.
Installed components operate much the same here as in the full version of Server 2008, in that once installed, there are usually more configurations required to bring them to the needed level of functionality.

What is good about how Microsoft has broken apart the components of Server Core is that the remote management interfaces for installed components remain available. So, once a component is installed at the console, it is possible to manage that component through your typical suite of management tools.

Installing any of these components is done using the ocsetup.exe command. This command includes the necessary functionality for setting up logging of the installation, pointing to unattended installation files, and supplying parameters to an underlying installer. One quirk of this command is that it does not provide a response back to the console showing the success or failure of the component installation. It also, by default, returns control immediately to the console while it completes the installation in the background. Thus, some additional commands are necessary to ensure that control is not returned until the installation is complete. For example, to install the DNS Server onto your Server Core instance and delay returning control to the console until after the installation completes, use the command
start /w ocsetup.exe DNS-Server-Core-Role
Beware! Although the Server Manager and its servermanagercmd.exe command-line adjunct will locate and automatically install the proper dependent components, ocsetup will not. If a component you want has dependent components, you'll need to install them manually.

Though ocsetup.exe doesn't directly provide information about the success or failure of an installation, it is possible to get this information from either of two locations. First, by rerunning the oclist command, you can get an immediate response regarding the success of the installation. Notice in Listing 5.3 how, after the DNS Server and File Services components have been installed, the value Not Installed has changed to Installed for these two components.
Microsoft-Windows-ServerCore-Package Not Installed:BitLocker Not Installed:BitLocker-RemoteAdminTool Not Installed:ClientForNFS-Base Not Installed:DFSN-Server Not Installed:DFSR-Infrastructure-ServerEdition Not Installed:DHCPServerCore Not Installed:DirectoryServices-ADAM-ServerCore Not Installed:DirectoryServices-DomainController-ServerFoundation Installed:DNS-Server-Core-Role Installed:FRS-Infrastructure Not Installed:IIS-WebServerRole
Listing 5.3: The result from running the oclist command after a component installation.

In cases where additional information is needed to confirm the installation or determine why an installation didn't complete successfully, the Component-Based Servicing log includes detailed information about the progress of the installation. This debug-level log, located at %WINDIR%\logs\cbs\cbs.log, includes substantial information about the progress of the installation and its success. Listing 5.4 shows a snippet from this log associated with the FRS-Infrastructure installation. For space, not all lines from the log are copied; only those that relate to the initiation and completion of the installation.
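Notepad is one of the few graphical utilities that remains present in Server Core, so the log can be reviewed right at the console:

notepad %WINDIR%\logs\cbs\cbs.log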


2008-04-10 11:21:54, Info CBS Exec: Installing Package: Microsoft-Windows-FileReplication-Package~31bf3856ad364e35~x86~~6.0.6001.18000, Update: FRS-Infrastructure, InstallDeployment: x86_microsoft-windows-f..licationdeployment_31bf3856ad364e35_6.0.6001.18000_none_ff9be9468cd78e7e

[snip]

2008-04-10 11:21:56, Info CBS Pkgmgr: Completed installing selectable updates for: Windows Foundation, hr: 0x0
Listing 5.4: The CBS log includes detailed information about the success or failure of a component installation. Two lines from the log are included that show the beginning and end of the FRS-Infrastructure component installation.

It is also possible to use the ocsetup command to install other types of packages, such as Microsoft Installer (MSI) files or EXE files; however, these are advanced installations. See the Microsoft document OCSetup Command-Line Options for more information at http://technet2.microsoft.com/WindowsVista/en/library/9a9fe5ed-5cfb-47f0-99e3-af4ef1442ee71033.mspx?mfr=true.

Installing Active Directory Domain Services

Unfortunately, with every group there's always got to be a one-off, and Server Core is no different. Although it appears possible to install the Active Directory Domain Services (AD DS) components using ocsetup, installation of AD DS through this mechanism is not supported. The reason is likely the unattended installation files required for properly setting up AD DS: unlike other components installed through ocsetup, AD DS requires certain configurations to be made at the point of installation. As a result, installing AD DS is done using the DCPROMO command. DCPROMO in Server Core is different, however. As we've discussed already, Server Core does not support the types of GUIs required by the standard DCPROMO installation routine. Thus, DCPROMO in Server Core has been limited to function only when used with a text-based unattended installation file. The good news is that the creation of these files has gotten much easier with Server 2008. With previous versions, the only way to create an unattended text file was by manually building the file in a text editor. With Server 2008, this process becomes significantly easier through a single button added to the last screen of the DCPROMO utility in the full version of Server 2008. Figure 5.4 shows this button, titled Export Settings.


Figure 5.4: The Export Settings button provides an easy way to create unattended installation files for a Server Core domain controller installation.

There's a trick to using this new button. To get an unattended installation file that contains the configurations you want for your Server Core instance, first launch DCPROMO on an existing domain member server. Then answer the configuration questions in the interface as if you were answering them for the AD DS installation you plan for your Server Core instance. For example, if your Server Core instance will be a secondary domain controller, select that in the interface. If it will host Global Catalog (GC) functionality, select that as well. At the final screen (which looks similar to Figure 5.4), click Export settings and save the resulting text file to a location on the network. Do not click Next at this screen; instead, click Cancel to cancel the DCPROMO process. What you have at the conclusion of this process is an unattended installation file that can be used by another computer, in this case your Server Core instance, to configure DCPROMO. Listing 5.5 shows an example of this file for the case in which a secondary domain controller is to be created for the realtime-windowsserver.com domain. Copy this file to a location that is accessible by your Server Core instance. Then run DCPROMO on the Server Core instance with the following syntax to begin the AD DS installation:
dcpromo.exe /unattend:{unattendedTextFile}


; DCPROMO unattend file (automatically generated by dcpromo)
; Usage:
;   dcpromo.exe /unattend:C:\dcpromo.txt
;
; You may need to fill in password fields prior to using the unattend file.
; If you leave the values for "Password" and/or "DNSDelegationPassword"
; as "*", then you will be asked for credentials at runtime.
;
[DCInstall]
; Replica DC promotion
ReplicaOrNewDomain=Replica
ReplicaDomainDNSName=realtime-windowsserver.com
SiteName=Default-First-Site-Name
InstallDNS=Yes
ConfirmGc=Yes
CreateDNSDelegation=No
UserDomain=realtime-windowsserver.com
UserName=*
Password=*
DatabasePath="C:\Windows\NTDS"
LogPath="C:\Windows\NTDS"
SYSVOLPath="C:\Windows\SYSVOL"
; Set SafeModeAdminPassword to the correct value prior to using the unattend file
SafeModeAdminPassword=
; Run-time flags (optional)
; CriticalReplicationOnly=Yes
; RebootOnCompletion=Yes
Listing 5.5: A sample unattended installation file to be used to create a secondary domain controller.

Server Core + BitLocker + RODC = A Secure Branch Office

It's worth stopping at this point to talk about how an effective combination of Server Core and the RODC option, secured through BitLocker, can substantially improve the security of servers at branch office locations. We've already talked in Chapter 3 about the benefits of RODCs in quasi-secured locations. Different from full domain controllers, this new category of domain controller prevents the loss of a single domain controller from requiring the re-permissioning of virtually the entire AD forest. By restricting which accounts are synchronized down to the individual domain controller, the loss of an RODC substantially limits the exposure of accounts. Server Core's reduced attack surface adds to this security by reducing the number of potential interfaces a would-be attacker can use to dig into the server. Additionally, its command-based management means that most management activities can be done over the WAN with little need for passing full desktops across the wire.

But when you take an RODC installed to a Server Core instance and combine it with Microsoft's BitLocker technology, the combination elevates the security of a branch office server to an entirely different beast. As we'll discuss in Chapter 9, where we focus on system security and BitLocker in more detail, BitLocker is a full-drive encryption tool. When a protected drive is removed from its native network location, where decryption of the drive contents can be done, its useful data becomes unreadable gibberish. The encryption protocols used by BitLocker are powerful enough to prevent even the most determined hacker from discovering the true contents of the drive.
We'll discuss BitLocker in more detail in Chapter 9, but for now, consider this three-fold combination as a highly secure possibility for your quasi-secured branch office locations.

Other Powerful Tools for Managing Server Core


In addition to the command line, Group Policy, and the RSAT tools used in managing the configuration of a Server Core instance, there are a few tools that can come in handy in certain situations: Terminal Services, PowerShell, and Windows Remote Shell.

Terminal Services provides a powerful mechanism for remoting the desktop of any server, and in Server Core, Terminal Services can be used to the very same ends. Utilizing Terminal Services gains you access to the desktop of your Server Core instance for performing the same types of management activities you would normally do at the console. As Table 5.1 illustrated, the command to enable Remote Administration connections to a Server Core instance is
cscript scregedit.wsf /AR 0
One caution with Terminal Services is to ensure that you end your session by entering the logoff command rather than simply closing the command prompt or the Remote Desktop Client window. Otherwise, it is possible to leave sessions active that will later need to be cleared out with the reset session command.
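If orphaned sessions do accumulate, they can be listed and cleared from another machine using the query and reset commands; a minimal sketch, assuming the w2008c instance used elsewhere in this chapter and a leftover session with ID 2:

query session /server:w2008c
reset session 2 /server:w2008c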

Windows PowerShell is a powerful scripting language that has a somewhat odd relationship with Server Core. Although PowerShell cannot run directly on a Server Core instance, it can be used on a management workstation to manage elements of a Server Core instance. PowerShell works with elements such as Windows Management Instrumentation (WMI) and other management databases and methods that are exposed for remote access.

Windows Remote Shell is a third powerful tool, one that brings the command line of a Server Core instance directly to the command prompt of a management workstation. Think of Windows Remote Shell as a variant of the highly popular Sysinternals tool psexec.exe. Windows Remote Shell is a component of Windows Remote Management and can be used on a management workstation, such as a Vista box or another Server 2008 instance, to locally launch the Server Core instance's command prompt.

To do so, you'll first need to enable Windows Remote Management on your Server Core instance by running
winrm quickconfig

from the console of the Server Core instance. This completes the initial configuration of Windows Remote Management: it starts the Windows Remote Management service, creates a listener, and opens the correct firewall ports. Once this is complete, from your management workstation, run the command
winrs -r:{computerName} cmd

As Figure 5.5 shows, running this command changes the local command prompt on the management workstation to operate as if it were local to the remote Server Core instance. Notice how the result from the hostname command changes after launching the remote shell. This is the case because the shell being operated against is actually on the Server Core instance named w2008c.

Figure 5.5: Windows Remote Shell lets you remotely launch the command prompt from your local management workstation.
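winrs can also run a single command without opening an interactive shell, which makes it easy to embed Server Core administration in batch files; a quick sketch against the same w2008c instance:

winrs -r:w2008c ipconfig /all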


Server Core Command-Line Crib Sheet


Lastly, to give you a summary of the commands available in Windows Server Core for console management, take a look at Table 5.2, which includes a short crib sheet of available options. This table gives you a single-glance cheat sheet to assist with configuring and administering your Server Core instance. Each of these commands can also be viewed within a Server Core instance by navigating to the C:\Windows\system32 folder and entering the command
cscript scregedit.wsf /CLI

Some duplicate commands we've already discussed in this chapter.
cscript slmgr.vbs -ato
    Activate a license.

cscript slmgr.vbs -ipk {volume license key}
    Configure KMS volume licensing.

cscript slmgr.vbs -ato
    Activate KMS licensing.

cscript slmgr.vbs -skms {KMS FQDN}
    Set the KMS host used for activation.

set c, ipconfig /all, systeminfo.exe, & hostname.exe
    Determine the computer name.

netdom renamecomputer %computername% /NewName:new-name /UserD:domain-username /PasswordD:*
    Rename the computer while already domain-attached.

netdom renamecomputer %computername% /NewName:new-name
    Rename the computer while not domain-attached.

wmic computersystem where name="%computername%" call joindomainorworkgroup name="{new workgroup name}"
    Change workgroups.

start /w ocsetup [packagename]
    Install a Role, Role Service, or Feature.

oclist
    View available and installed Roles, Role Services, and Features.

Control-Shift-ESC
    Start Task Manager.

logoff
    Log off a Terminal Services session.

wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
    Disable system pagefile management.

wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=500,MaximumSize=1000
    Configure the pagefile.

control timedate.cpl
    Control the time zone, date, or time.

control intl.cpl
    Control regional and language options.

msiexec.exe /i {msiPackage}
    Manually install an .MSI application.

wmic product get name /value
    List installed MSI applications.

wmic product where name="{name}" call uninstall
    Uninstall an MSI application.

sc query type= driver
    List installed drivers.

pnputil -i -a {path}\{driver}.inf
    Install a driver. Driver files must first be copied to the Server Core instance.

netsh interface set interface name="Local Area Connection" newname="PrivateNetwork"
    Rename a network adapter.

netsh interface set interface name="Local Area Connection 2" admin=DISABLED
    Disable a network adapter.

wmic datafile where name="c:\\windows\\system32\\ntdll.dll" get version
    Determine a file's version.

wmic qfe list
    List installed patches.

wusa.exe {patchname}.msu /quiet
    Install a patch.

netsh winhttp set proxy {proxy_name}:{port}
    Configure a proxy.

reg.exe add /?, reg.exe delete /?, reg.exe query /?
    Add, delete, or query a registry value.

Table 5.2: Server Core command-line crib sheet.

A Compelling New and Different Way for Windows Server 2008


So we've seen in this chapter that there really is a new and different way to manage Windows Server. With Server 2008 and Server Core, we can devolve our long-held GUI-based management, in certain circumstances, to the elegance and easy automation found in other command-driven OSs. Though Server Core does indeed have a learning curve if you're not already familiar with the command-line adjuncts to common GUI configuration tools, the process of learning them will directly augment your ability to manage full Server 2008 instances as well. Server Core is a compelling installation option for Server 2008, and specific organizations should find excellent positioning for certain uses within their environments. Though it was only glossed over in this chapter, Server Core instances also follow all the rules (that they're capable of) associated with Group Policy. So, if you have Group Policies that relate to a Server Core instance, it will configure itself per those settings as well. In our next chapter, we'll talk about Windows Group Policy and how you can use it to manage, control, lock down, and ultimately secure your computing environment. We'll talk about how the Group Policy Central Store improves replication, how Group Policy Preferences bring item-level control to your management quiver, and a host of other topics that I think you'll find useful in managing your Windows Server 2008 infrastructure.

Chapter 6: Managing & Customizing Group Policy


The advent of client/server computing brought about many changes to the tasks commonly associated with IT. In the mainframe days, actual computers were few in number, with terminals being the mechanism for connecting users to their applications. This centralization was a boon to systems management, as relatively few touch points were in need of control and all were centralized onto just a few computers. But times change and so do computer architectures. The mainframe computing model eventually gave way to the client/server approach, where processing was distributed between the server and the clients connecting to that server. This new way of computing reduced the reliance on massive computers in the data center but, at the same time, significantly increased the total count of computers under management.

With more individual computers to manage, IT found itself with a new problem: how to control the configuration of the machines across the network. Early on, Microsoft recognized this growing management problem. Upon the initial release of Active Directory (AD) for Windows 2000 Server, Microsoft attempted to solve the problem with a centralized control mechanism called Group Policy. This mechanism for centralized control of individual desktops was made possible through integration with AD. Since every computer was a member of AD, each could be forced to follow the rules as laid down through Group Policy configurations. Using Group Policy, it became possible to create a single policy that mandated the configuration of multiple systems.

This mechanism for centralized control has been hugely successful within Windows environments and has been continually augmented and improved with each successive OS upgrade. The release of Windows Server 2008 is no different. With Windows Server 2008, Group Policy gains new policy settings, deployment abilities, and troubleshooting toolsets that add to its already rich set of capabilities as a powerful tool for centrally managing desktop configurations in any AD environment.

The Benefits of Centralized Management with Group Policy


When an IT organization makes the decision to manage its Windows Server 2008 infrastructure using Group Policy, it immediately gains a set of operational benefits. Those benefits include:

Centralized control: A single policy with one or more configuration settings can be deployed to multiple sets of users and/or computers to configure them identically.

Capability for mass change: When changes are made to a deployed policy, all objects that are assigned that policy will make the change. This provides a method for cohesive configuration control of managed objects.

Enhanced machine resiliency: A large portion of break/fix problems within an IT infrastructure relate to inappropriate configurations. If those configurations are controlled through an enforcement mechanism such as Group Policy, there is a reduced chance of machine failure. This translates to greater uptime for the IT environment.

Improved security: Group Policy provides a mechanism for controlling the security posture of systems as well. By controlling the necessary security settings through a centralized and controllable mechanism, the overall security of the environment is enhanced.

With the release of Windows Server 2008, there are now more than 2,500 individual settings that can be set and controlled through Group Policy. These settings relate to virtually every part of a Windows computer, from desktop settings to security and firewall configurations; from pluggable devices to power management. In fact, there are so many possible settings that in many environments, the most difficult part about embracing Group Policy is merely deciding what to control. This chapter will attempt to unravel some of the complexities of implementing Group Policy in your Windows Server 2008 infrastructure. That being said, the body of knowledge surrounding Group Policy is huge, and a comprehensive conversation on it can require hundreds of pages. So this chapter will focus specifically on simple examples of how you can immediately use Group Policy to benefit your Windows Server 2008 infrastructure. We'll focus on the Group Policy Management Console (GPMC) that is used to interact with policies within an AD domain, and then drill down into the creation and application of Group Policies.

Navigating the GPMC


Virtually all interaction with domain-based Group Policy is done using the GPMC. This tool, the initial screen of which is shown in Figure 6.1, can be installed onto a Server 2008 instance by adding the Group Policy Management feature through Server Manager. For Vista workstations, the GPMC is available as part of the Remote Server Administration Tools (RSAT), a separate download that can be obtained from the Microsoft Web site. The GPMC can also be used on Windows XP through a separate download, also from Microsoft's Web site.
For Windows Vista, the RSAT can be downloaded from http://support.microsoft.com/kb/941314. For Windows XP, the GPMC can be downloaded from http://www.microsoft.com/windowsserver2003/gpmc/default.mspx.

Figure 6.1: The GPMC is available natively in Server 2008 and as a separate download for Windows Vista.

Once installed and launched, the GPMC looks much like what is shown in Figure 6.1. On the left is a tree view that shows available domains and sites as well as the Group Policy Modeling and Group Policy Results tools. By navigating down through the tree view to the domain of choice, the configured Organizational Units (OUs) for that domain are displayed. Any Group Policy Objects (GPOs) that are linked to a particular OU are shown in the tree view below that OU. For example, in Figure 6.1, you can see that the Default Domain Controllers GPO is currently attached to the Domain Controllers OU. When considering the operational use of Group Policy, you can think of each policy as having a number of elements. The combination of these elements enables you to make a configuration change on targeted users or computers:

GPOs: The GPO itself is the object that contains the settings to be changed. It can also contain filtering information used for specific targeting.

Group Policy settings: Each GPO will have one or more settings. These settings represent the configuration changes the policy will enact on targeted objects.

Group Policy links: Once a GPO is ready to be used to configure settings for targeted users or computers, it is then linked to an OU. This linkage instructs the targeted objects to begin processing the GPO's configurations at the next processing interval.

Creating a Simple GPO

Considering all this, let's talk now about the process by which a set of computers can be configured through Group Policy. To do so, we'll use an example. Let's assume that you are interested in configuring Event Log settings for a set of targeted computers so that the Maximum system log size is set to 40,000KB. This is a useful setting within many environments to support storing a larger-than-default quantity of troubleshooting information about client computers. For any GPO to configure a user or computer, that GPO must be linked to either the domain itself or an OU within the domain. In Figure 6.1, an OU for testing has already been created named Test OU. For the purposes of this example, this OU contains a set of computers to be used in testing the creation and application of this new GPO. To create a new GPO, right-click Group Policy Objects, and choose New. In the resulting window, name the GPO Configure System Event Log and click OK.
This process creates an empty GPO but does not yet link that GPO to an OU or the domain. Although it is possible to create a new GPO that is already linked to an OU or the domain, this is often not a best practice. Any users or computers in the linked OU will automatically begin making configuration changes based on the settings within the GPO. Although a newly created GPO is always empty and lacking any settings, once a setting is enabled, it will begin applying to users and computers. For this reason, it is usually a good idea to first create GPOs unlinked and link them only once they are fully configured and ready for deployment.

Once you have created your GPO, you then need to configure the settings of interest within it. To do so, right-click the GPO, and choose Edit. The resulting screen will look similar to Figure 6.2. This screen is the Group Policy Management Editor (GPME) and is used to enable settings and set their configuration within each GPO.

Figure 6.2: The GPME is used to manage individual settings within a GPO.

As the figure shows, each GPO is first broken into two halves. The first half, called Computer Configuration, is used to manage settings that generally relate to the entire targeted computer. The second half, called User Configuration, is used to manage settings that generally relate to an individual user. Settings that are configured in the Computer Configuration section of a GPO are only relevant when that GPO is applied to a computer object. Conversely, the settings that are configured in the User Configuration section are only relevant when the GPO is applied to a user object. OUs within your domain typically contain one or the other of these object types, and sometimes both. We'll talk about the process of linking a GPO in a minute, but for now, be aware that by default only half of any particular GPO will typically apply to a particular object type.
Actually, that last statement isn't entirely true. It is possible for a computer object to process settings from the User Configuration half when a Group Policy setting called Loopback Mode is enabled. This is done when you want the settings within the User Configuration half of the GPO to apply based on the OU location of the computer object. For more information on Loopback Mode, see http://support.microsoft.com/kb/231287.

For our example, you want to manage the configuration of the Event Log on targeted computers. To do so, navigate to Computer Configuration | Policies | Windows Settings | Security Settings | Event Log. The resulting view is displayed in Figure 6.2, where you should see the configuration for Maximum system log size in the right pane. By double-clicking the setting, you will be presented with a window similar to Figure 6.3. There, select the Define this policy setting check box and set the value to 40000. Each policy setting must be specifically enabled for it to apply to targeted users or computers. Click OK to complete the configuration.

Figure 6.3: An individual setting within a GPO that configures the Event Log.

You'll note that there is no location to save the configuration. There is no Save button and no Save link under the File menu. This is the case because policy settings are saved immediately upon clicking OK. This can become an issue if you are not careful in how settings are configured. This is also a good reason why GPOs should not be linked to an OU until they are fully created, tested, and ready for production.

Applying That Simple GPO

Once the Group Policy is configured, you can close the GPME and return to the GPMC. To begin applying this change to the computers in the Test OU, right-click the Test OU and select Link an Existing GPO. Select the Configure System Event Log GPO from the list and click OK. It is important to note at this point that clients will not immediately begin processing the GPO. Group Policy on each individual client is by default configured to check for new GPOs and GPO changes each time the machine is powered on as well as every 90 minutes thereafter (plus a randomized zero- to 30-minute offset). Thus, once the GPO is linked to the Test OU, client computers in that OU that have successfully authenticated to the domain will begin processing this new GPO and its settings at their next refresh, up to 90 to 120 minutes later. This period of time is called the Refresh Interval. It is possible to speed this process if you don't want to wait out the refresh interval. To speed the process on any individual client, use the command
gpupdate /force

This command will instruct the client to ignore the refresh interval and check for new policies immediately. It is possible to verify that the GPO is applying correctly by using the gpresult command. For Windows XP, from a command line, enter
gpresult

to show a report detailing which policies have and have not been applied. For Windows Vista, use the command gpresult /r.

Applying Multiple GPOs

This example shows how a single simple policy can be applied to an OU. But what if you want to apply multiple GPOs to the same OU? How do the client computers know which policy to process first? Figure 6.4 shows an example of the Linked Group Policy Objects tab shown when focusing the view on an OU in the GPMC. There, you can see three GPOs that have been linked to the Test OU. From their names, each GPO appears to configure similar settings in the System Event Log. Clients will process multiple Group Policies linked to the same OU based on their Link Order, starting with the highest number first and working backwards. The Link Order is shown in the left column of the right pane of Figure 6.4. In this case, the Configure System Event Log One More Time GPO will actually apply first. Layering over the top, and potentially overwriting any settings that are in conflict, will be the Configure System Event Log Again policy. Last to apply, again with any conflicting settings being overwritten by its own, is the original policy, Configure System Event Log. It is possible to reorder the Link Order by selecting a policy and clicking the arrows to the left of the right pane.

Figure 6.4: Link Order determines the order in which multiple GPOs are applied.

Administrative Templates and the Group Policy Central Store


One component of GPOs that we've glossed over to this point is found within the GPME. Under the Policies node in either Computer or User Configuration is the Administrative Templates node. These templates are used by Group Policy in the configuration of numerous settings within client systems. They make up the largest part of the 2,500 settings that can be enabled and configured with Group Policy.
If you think of using the GPME to enable and configure GPO settings as similar to filling out a form, then you can easily think of the Group Policy templates as the forms themselves. The templates are files that store the possible settings you may be interested in enabling for any particular GPO.

Prior to the release of Windows Vista, five templates were natively available for configuring Windows (system.adm), Internet Explorer (inetres.adm), NetMeeting (conf.adm), Windows Media Player (wmplayer.adm), and Automatic Updates (wuau.adm). These five templates, primarily system.adm where the vast majority of settings were stored, were used for the storage of potential settings under the Administrative Templates node for every newly created GPO. However, over time, a number of problems emerged with the implementation of these templates. Most important of these was how they were stored. Part of every GPO is stored in a domain's SYSVOL. You can see this part in your own domain by navigating to \\{domainName}\SYSVOL\{domainName}\Policies. In that location, you'll see a number of GUIDs, each of which relates to a configured GPO. Drilling further into any particular GUID, you will find a series of files whose contents instruct clients to process configured GPO settings. In the adm subfolder, you should also find the five templates discussed earlier. AD's SYSVOL is replicated between all domain controllers in the domain and has historically had problems with that replication when the size of the SYSVOL grows large. The combination of these five templates adds a little more than 4MB of space to every newly created GPO. Thus, when the number of GPOs grows large, the size of the SYSVOL grows large as well.
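
As a quick sketch, you can list these per-GPO folders from any command prompt; example.com is a placeholder for your own domain name:

dir \\example.com\SYSVOL\example.com\Policies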

One new feature that actually arrived with the release of Windows Vista is the Group Policy Central Store. The Central Store is a new architecture for storing Group Policy template files that alleviates some of the problems of the old storage mechanism. Specifically, rather than replicating template files into each GPO's SYSVOL folder, a single folder is created to store them all. This store houses all the administrative template files for the entire domain within a single folder in the SYSVOL, reducing the overall size of the SYSVOL and the stress on replication.
Another change that arrived with the release of Windows Vista has to do with the file format of the templates themselves. Formerly, Microsoft used a proprietary scripting language to create the template files, but with the release of Windows Vista, that language has changed to XML. One of the benefits of this change is new support for multiple languages. Template files are now broken into two halves, with one half (the ADMX file) containing the data about the setting to be configured. The other half (the ADML file) contains the text within the GPME that explains to the administrator what the template will do. By breaking the files apart in this way, one ADMX file can support multiple ADML files, and so one template can present its explanatory text in multiple languages.

The Group Policy Central Store is not created by default. Instead, you must manually create it within your domain's SYSVOL. To create the central store, as a Domain Administrator complete the following steps (a command-line equivalent is sketched after this list):

1. From the Run prompt, navigate to \\{domainName}\SYSVOL\{domainName}\Policies.
2. In the Policies folder, create a subfolder named PolicyDefinitions.
3. From a Windows Server 2008 computer, navigate to C:\Windows\PolicyDefinitions. Copy the contents of this folder to the PolicyDefinitions folder you just created.
4. Navigate into the PolicyDefinitions folder, and create a subfolder there named after your particular language code. For the English language, this subfolder will be named en-US.
5. Finally, from the same Windows Server 2008 computer, navigate to C:\Windows\PolicyDefinitions\{language}. Copy the contents of this folder to the language folder you just created in the SYSVOL. For the English language, this will be C:\Windows\PolicyDefinitions\en-US.
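
For those who prefer to script the copy, the following is a minimal sketch of the same steps, assuming a domain named example.com; substitute your own domain name:

REM Run as a Domain Admin from a Windows Server 2008 computer
REM /e copies subfolders (including the language folders such as en-US); /i treats the target as a folder
xcopy /e /i C:\Windows\PolicyDefinitions "\\example.com\SYSVOL\example.com\Policies\PolicyDefinitions"

Because xcopy /e copies the language subfolders along with the ADMX files, this single command accomplishes steps 2 through 5 in one pass.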

Once this has been completed, the GPMC on Windows Vista and Windows Server 2008 machines will immediately begin using the templates within the central store instead of the original five within each individual GPO. If you're used to using Group Policy with the old templates, you'll immediately see another added benefit of the change. Separated into 146 different files, these new templates provide much more granular control over Windows clients. As you can see in Figure 6.5, where the Administrative Templates node is expanded, there are a vast number of settings and categories of settings that can be configured.

Figure 6.5: The vast majority of possible settings are contained within the Administrative Templates.

For the most part, once the setup process is complete, configuring Administrative Template settings happens in much the same way as was explained previously for Windows Settings. Simply create a GPO, then open the GPME to edit that GPO. Under the Administrative Templates node, enable the settings of interest and set any necessary configurations. When configuring settings, be aware of the product for which the setting is supported. When viewing the properties of any setting, look at the bottom of the window for the words Supported on. If you attempt to apply a setting to a client that is below the Supported on level, that setting will not work on that client.

Network Location Awareness

As you can see, Group Policy is a great tool for locking down the configuration of clients within your domain. But the processing of GPO settings can be a network-intensive task, especially when clients are connected to the domain over slow links. Because of these situations, Group Policy has always had the ability to determine the link speed of the connection between client and domain controller. If the connection speed was not at a level that Group Policy deemed fast enough, non-critical portions would simply not apply.
More information about Group Policy processing, how link speed is determined, and what parts of Group Policy are and are not processed over slow links can be found at http://technet2.microsoft.com/windowsserver/en/library/89d7ec5f-a909-4f61-adedc5b69f5f730b1033.mspx?mfr=true.

Prior to the release of Windows Vista and Windows Server 2008, that determination was done using ICMP (ping). The timing of responses to sent ping packets was used to determine the speed of the connection. However, one problem with this method is that ping is often disabled in highly secured networks such as those used for remote access. Because of this and other issues, in Windows Vista and Windows Server 2008, Group Policy's reliance on ping was replaced with a new mechanism called Network Location Awareness (NLA). Without diving too deep into its technical detail, NLA is used by Group Policy to uniquely identify each network the computer connects to as well as determine the effective bandwidth and status of each network. As the status of a connected network changes, the NLA service is responsible for notifying the client's Group Policy engine of the change. The result is an elimination of the reliance on ping and an enhanced ability on the part of the client to process Group Policy during periods of changing network connectivity. These periods include establishing VPN sessions, recovering from hibernation or standby, successfully exiting quarantine, and docking a laptop, all of which were previously challenging situations for Group Policy application.
NLA is not solely used by Group Policy. Any network-aware application can leverage its network status and notification features. The most obvious way to see how NLA is functioning is to view network properties. The Network and Sharing Center shows a graphical representation of the currently connected network and its status. This information comes from the NLA service (see http://msdn.microsoft.com/en-us/library/ms739931.aspx).

Starter GPOs

As discussed above, one of the hardest parts about embracing Group Policy can be navigating through the sheer number of policies to find the ones of interest. With over 2,500 policies available for Windows Vista and Windows Server 2008, determining which are useful and which can be problematic for your environment can be a challenging activity. To assist with this problem, another interesting new feature that arrives with the release of Windows Vista and Windows Server 2008 is Starter GPOs. Starter GPOs are actually quite simple in implementation. They are little more than a mechanism to collect, export, and distribute GPO settings to others, specifically those within Administrative Templates. Consider the situation where you have completed a large project to lock down the configuration of workstations and servers within your domain. That activity likely looked through all the possible GPO settings to find just the ones of interest to you and your environment. Perhaps the activity also aligned the configured settings with those required by established security practices or compliance regulations.
Microsoft provides a set of four sample Starter GPOs that relate to recommended security configurations at http://www.microsoft.com/Downloads/details.aspx?familyid=AE3DDBA7-AF7A4274-9D34-1AD96576E823&displaylang=en.

In this case, after implementing them, you may want to share the fruits of your work with others in your organization or within the IT community. Starter GPOs are a way to export your configured GPO settings into a CAB file that can be distributed to others. Other organizations can import the file into their GPMC and use your GPO settings as a starting point for the creation of their own lockdown configuration. Hence the name: Starter GPOs. To begin making use of Starter GPOs, you must first create their containing folder. Within the GPMC, click the Starter GPOs node. In the right pane of the resulting screen, you will find the text The Starter GPOs folder does not currently exist in this domain. Click on the button below to create this folder. Click the button to create the folder. Once created, Starter GPOs can be used as the starting point for the creation of new GPOs in the domain. For example, let's assume that a Starter GPO named My New Starter GPO has already been imported into the GPMC. That Starter GPO includes Administrative Template settings to be used in the creation of a standard GPO. When creating a new standard GPO, change the value for Source Starter GPO to My New Starter GPO to start the new GPO with the already-configured settings from the Starter GPO. The New GPO window where this is selected is shown in Figure 6.6.

Figure 6.6: Starter GPOs provide a way to pre-populate new GPOs with settings.

Once imported into a domain or newly created within a domain, Starter GPOs are configured in the same way as standard GPOs. Figure 6.7 shows an example of the GPMC with the My New Starter GPO available. Right-clicking My New Starter GPO and selecting Edit brings forward the Group Policy Starter GPO Editor, where the contents and configuration of the Starter GPO can be edited. Clicking the Load Cabinet and Save as Cabinet buttons provides for the import and export of Starter GPOs from external sources.
There are a couple of fairly significant limitations associated with Starter GPOs. First, Starter GPOs can only configure the settings found in the Administrative Templates. The other elements of Group Policy outside the Administrative Templates simply aren't available to Starter GPOs. Also, there currently is no direct way to turn a standard GPO into a Starter GPO. The only way to create a Starter GPO is by manually re-creating the settings.

Figure 6.7: A view of the Starter GPOs node with one Starter GPO imported into the interface.

GPO and GPO Settings Comments

Although you may not have made use of the feature in your own environment, virtually every object within Active Directory contains the ability to attach a comment. This commenting feature allows you to make notes about the object that are contained directly within that object. This is handy for sharing information in environments where multiple administrators work within the same directory. However, prior to the release of Windows Vista and Windows Server 2008, there was no commenting capability for GPOs and their settings. This omission has now been remedied. With the GPMC on Windows Vista or Windows Server 2008, comments can now be attached either to the GPO itself or to individual settings within a GPO. This new feature is a boon to environments where information about configuration settings needs to be kept close at hand. Adding a comment to a GPO enables administrators to share information about the creation, use, and reason for the GPO's presence. It also allows administrators to easily show ownership of the GPO.

For settings within a GPO, adding comments to individual settings allows administrators to document the reason, creation or modification date/time, owner, and other necessary information about individual settings. If you've ever created a GPO and wondered years later how and why its settings got changed, you'll be excited to make use of this feature. To add a comment to a GPO, first edit the GPO in the GPME. Then, right-click the top-level node for the GPO and select Properties. Comments can be added under the Comment tab. To add a comment to an individual setting within a GPO, double-click the setting and navigate to the Comment tab. Figure 6.8 shows the resulting window with a sample comment entered.

Figure 6.8: A sample comment added to a GPO setting.

GPO Filters

Another major omission in previous versions of Group Policy was the ability to search and filter through available policies to locate those of interest. Without a searching and filtering mechanism in place, the only way previously to identify which policies were enabled or configured was to navigate through the entire folder structure within the GPME. With Windows Vista and Windows Server 2008, this inability would be particularly pronounced due to the large number of folders that now make up the structure. Thankfully, a filtering and searching feature is now available with the GPMC on Windows Vista and Windows Server 2008. This feature provides a way to narrow down the large field of possible GPO settings to just those of interest. It further allows an administrator to easily create a listing of just those GPO settings that have been configured for a particular GPO without needing to step through the entire folder structure as was the case in previous versions. Figure 6.9 shows an example of the Filter options available for narrowing down the field of settings. As you can see there, settings can be filtered through any of five possible selections:

Managed: Group Policy Administrative Templates can be those provided by Microsoft or others that you create yourself. Those provided by Microsoft have the benefit that they don't permanently change the registry, allowing settings to revert to their original values when the policy is removed. These policies are termed Managed. Other customized policies are termed Unmanaged.

Configured: When a policy has been set to anything other than Not Configured, it is considered Configured by the filter.

Commented: When a policy has a comment attached to it, it is considered Commented by the filter.

Keyword filters: A keyword can be any word that exists in the setting title, explain text, or within an attached comment.

Requirements filters: The Supported on label as seen within each Group Policy setting determines which OS or application level is required for the setting to apply. Requirements filters can limit the available settings to just those that support a particular OS or application level.

Figure 6.9: An example of a GPO filter that restricts the view to only those settings that have been configured and relate to Windows Vista or Windows Server 2008.

You'll also notice that after clicking the Administrative Templates node in the GPME, a filter icon appears in the toolbar. Once a filter has been created, implementing it is done either by clicking the icon or by right-clicking Administrative Templates and selecting Filter On. Once the filter has been applied, the Administrative Templates node will show only the results of the filter.
This includes the folder structure as well. Once a filter is applied, the tree structure only shows those parts of the tree that contain filtered settings.

Another useful tool within the Administrative Templates node, which can be used either with or without filters, is the All Settings node. With no filter in place, this node effectively eliminates the entire tree structure associated with the Administrative Templates. It instead lists all the potential settings in a flat format. This can be exceptionally useful if you desire a more user-friendly way to browse through available settings without having to navigate the tree structure. With a filter in place, this node will show, again without the tree structure, only those settings that relate to the filter. Thus, after applying a filter, you can use All Settings to find the complete list of settings that match the filter's characteristics.
The combination of commenting and filtering should significantly reduce the headache associated with locating configured settings as well as finding new settings of interest.

Scripting the GPMC

As you've already seen in this chapter, the GPMC arrives full of features that can be accessed through the GUI. However, sometimes the workflow within your Windows Server 2008 infrastructure requires the use of command-line tools and scripts for scheduling or custom purposes. In those cases, Microsoft has made available a set of GPMC scripts that augment the capabilities of the GUI. These scripts provide scripting and command-line support for a set of needed Group Policy functionality. The GPMC scripts for Windows Vista and Windows Server 2008 are not natively installed with the GPMC but are instead a separate download. Be aware that this is a different behavior than with Windows XP, where the scripts were available with the GPMC installation.
You can download the GPMC scripts from the Microsoft Web site at http://www.microsoft.com/downloads/details.aspx?familyid=38c1a89b-a6d2-4f2a-a9449236999aee65&displaylang=en.

Thirty-three in number, the GPMC scripts are a set of mostly VBScript-based tools that integrate with Group Policy to enable its manipulation via the command line. Once installed, you can access the scripts from the location C:\Program Files\Microsoft Group Policy\GPMC Sample Scripts. These tools provide the ability to list, locate, copy, delete, and get reports on GPOs as well as their status. They also enable a very easy way to back up and restore GPOs and their settings right from the command line. As an example, one script in particular can be extremely handy for emergency situations. The BackupAllGPOs.wsf script provides a way to create a file-based backup of all GPOs within a domain along with their settings. This file-based backup is significantly easier to restore in the case of an accidental deletion than the Active Directory-based restoration process otherwise required. By combining this script with the Windows Task Scheduler, it is possible to create a daily backup of all GPOs to be used should a GPO be accidentally deleted.

To create a scheduled backup, navigate to the Windows Task Scheduler and create a new task that runs a command similar to the following on a regular basis:
cscript.exe "C:\Program Files\Microsoft Group Policy\GPMC Sample Scripts\BackupAllGPOs.wsf" {backupLocation}

The above command assumes that you've installed the GPMC scripts to their default location and that the {backupLocation} folder has already been created. Note that the script path contains spaces and must be quoted. Upon running this script, all GPOs within the domain as well as their settings will be backed up to {backupLocation}. To restore any of these previously backed-up GPOs, use the RestoreGPO.wsf script with the following syntax:
cscript.exe "C:\Program Files\Microsoft Group Policy\GPMC Sample Scripts\RestoreGPO.wsf" {backupLocation} {backupID}

In the command above, the value for {backupLocation} should be the location where the GPOs were previously backed up. The value for {backupID} will be the name or GUID of the GPO to restore.
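
As a hypothetical example of the scheduling half, the following schtasks command registers a daily 2:00 AM run of the backup script; the task name, backup folder, and start time are placeholders, and the backup folder must already exist:

schtasks /create /tn "Daily GPO Backup" /sc daily /st 02:00 /tr "cscript.exe \"C:\Program Files\Microsoft Group Policy\GPMC Sample Scripts\BackupAllGPOs.wsf\" D:\GPOBackups"

Note the escaped inner quotes (\") around the script path, which schtasks requires when the command inside /tr itself contains spaces.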
In addition to this simple example, there are a number of additional functions that can be run via command line or scripted using the GPMC scripts. For more information on the scripts and the functionality they can provide, navigate to the article GPMC Scripting: Automate GP Management Tasks at http://technet2.microsoft.com/windowsserver/en/library/885ed84e-80da-4025-bd760ea4d05127f11033.mspx?mfr=true.

Group Policy Preferences


Even with the more than 2,500 individual settings that can now be configured with Group Policy, the nature of Group Policy itself may not fulfill all the needs of your Windows Server 2008 infrastructure. Due to their highly customizable nature, IT infrastructures have traditionally made use of login scripts to handle the customized needs of their individual environments. But there have always been a few issues with using login scripts for these sorts of customizations. First, they are only processed at the time of login. If you desire a custom change to occur, you must first change the login script and then wait for each client to log in again in order for the change to process. Additionally, the coding of complex customizations can often be challenging using shell scripting or VBScript languages. In order to properly use login scripts, you need to learn these scripting languages and the best practices associated with their use. With the release of Windows Server 2008 comes a much-desired enhancement to Group Policy called Group Policy Preferences (GPPs). GPPs bring together much of the customization power of login scripts with the rich targeting and regular update capabilities of Group Policy.

Take a look through the settings found in the traditional Group Policy Administrative Templates. There, you'll find a significant level of ability to control the configuration of workstations and servers attached to your domain. But that configuration control is limited to just the areas that Microsoft has made available through the Administrative Templates. If you want to make your own customized changes that aren't already part of a Group Policy setting, you're forced to code your own template using XML, a rather difficult process that can make customization cumbersome. GPPs overcome this limitation by making available a set of tools that allow for GUI-based customization of areas commonly handled through login scripts. Take another look at any particular Group Policy within the GPME. Within the left pane of the tree view, as shown in Figure 6.10, you will see that both the Computer Configuration and User Configuration nodes are further broken into two halves apiece. Each contains two top-level nodes titled Policies and Preferences. The Policies node is where traditional Group Policy settings are configured. The Preferences node is where preferences are enabled.

Figure 6.10: Group Policy Preferences enable rich customization of the Windows environment, including elements like mapping drive letters to common shares.

As you'll also see in Figure 6.10, the potential for customizable control available through GPPs is remarkable. Within either half, one can easily control elements like drive mappings, environment variables, files and folders, data sources, local users and groups, power options, printers, and much more.

To give you an example of one use of GPPs that has traditionally been accomplished through login scripts, consider the need to set drive mappings for users' home drives. With login scripts, the process to accomplish this task typically involves creating the script, storing that script in the domain's SYSVOL, and configuring each user to process the script through their user object within Active Directory Users and Computers. Using GPPs, this process gets quite a bit simpler. In this example, let's assume that home drives are typically mapped to the H: drive and are stored within the \\w2008a\homefolders share. To use a GPP to set this for all computers in the domain, use the following process (the login-script equivalent it replaces is shown after these steps for comparison):

1. Create a new GPO and launch the GPME.
2. Navigate to User Configuration | Preferences | Windows Settings | Drive Maps.
3. In the right pane of the resulting screen, select New | Mapped Drive. The window will look similar to Figure 6.11. Within that window, change the selections to match what is shown in Figure 6.11. Click OK when complete.
4. Close the GPME and link the GPO to the domain.
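
For comparison, the login-script approach this replaces typically amounts to a single command like the following, a sketch that assumes per-user folders beneath the \\w2008a\homefolders share from the example:

net use H: \\w2008a\homefolders\%USERNAME% /persistent:no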

Figure 6.11: Much of the ease of GPPs stems from their graphical interface for common administrative tasks. This window configures drive mappings.

By completing these four steps, you have accomplished the same drive mapping that previously required scripting knowledge as well as the time-consuming management of user-specific settings within Active Directory Users and Computers, yet in a much shorter amount of time and with much easier management and troubleshooting in the future. Through their reliance on Group Policy for distribution, GPPs enable common customizations to be managed through the same tools used to manage Group Policy.
The client-side code required to process GPPs is natively available within Windows Server 2008. However, Client-Side Extensions (CSEs) must be downloaded and installed to all other operating systems for them to recognize and process GPPs. You can find links for CSEs at: http://support.microsoft.com/kb/943729.

What we haven't discussed yet about GPPs is one of their greatest strengths. Unlike most of traditional Group Policy, GPPs have the unique capability of being configurable as mere suggestions rather than the enforced policies we're used to seeing. Consider the situation where you want to suggest an initial environment variable setting for users but allow them the ability to later change that setting if they desire. Using traditional Group Policy, this is not possible because traditional Group Policy is intended to be an enforcement mechanism. Each time the Group Policy Refresh Interval passes, the Group Policy client will change any modified settings back to their initial configuration.

Figure 6.12: A major portion of GPPs' power arrives from what can be set within the Common tab.

Figure 6.12 shows the Common tab found within all GPP settings. There, look for the configuration titled Apply once and do not reapply. By checking this box, the GPP will make the configuration change, but it will not reset that change if a user later decides that they want to change their setting away from what you suggest.
This ability to make GPP settings optional lets you set up a standard operating environment while still allowing your users to customize that environment to their needs.

Yet another of GPPs' powers arrives from the ability to further target GPP application based on a set of characteristics. Also seen under the Common tab is an option titled Item-level targeting. Checking this box and then clicking the Targeting button brings forward a screen similar to Figure 6.13. Item-level targeting provides you the ability to link a GPP to an Organizational Unit but instruct the policy to process only when the client object meets a preconfigured set of criteria.

Figure 6.13: GPP item-level targeting enables an enhanced mechanism for granularly targeting GPPs to specific objects.

In Figure 6.13, item-level targeting has been set to apply the GPP only when the object's CPU speed is greater than 1000 MHz, free disk space is greater than or equal to 80 GB, the operating system is Windows Vista, the machine is a portable computer that is docked, undocked, or unknown, and the RAM is greater than or equal to 512 MB. Considering the options available, the level of targeting can be as granular as your needs require. To set item-level targeting, simply click the New Item button and select an item. Upon selecting an item, configure its options in the bottom pane. Click OK to complete the process. Once complete, objects will only apply the policy when they meet the targeting guidelines.

Group Policy's Centralized Control Enhances Your Ability to Manage Your Infrastructure
In this short chapter, we have only scratched the surface of what you can enable and control using Group Policy and Group Policy Preferences within your Windows Server 2008 infrastructure. There are additional topics associated with targeting, best practices, design and implementation elements, and supportability that go far beyond what we can cover here.
If you want to learn more about Group Policy in Windows Vista and Windows Server 2008, start your learning with this Web site: http://technet2.microsoft.com/WindowsVista/en/library/5ae8da2a-878e48db-a3c1-4be6ac7cf7631033.mspx?mfr=true.

In Chapter 7, we'll start a two-chapter series on the topic of Terminal Services. Terminal Services gets a substantial facelift in Windows Server 2008, finally gaining some of the benefits formerly only available through other application platforms like Citrix XenApp (Presentation Server). This two-part series on Terminal Services will start with an introductory exploration of the new features and conclude with a discussion of advanced topics that you're sure to enjoy.

Chapter 7: Introduction to Terminal Services


At the beginning of the previous chapter on Group Policy, we talked about how the world of computing has evolved from its early focus on relatively dumb consoles connecting to mainframe computers back in the data center. These days, a large percentage of business computing is done using the client/server model, where clients and servers both handle some component of workload processing. Servers sit back in the data center and typically handle the processing of a single function or service, while powerful desktops are used in concert to locally accomplish much of the workload processing.

There is great power in this division of processing between client and server. The actual work involved in processing users' data needs is distributed among dozens to hundreds of processors within many machines, rather than consuming the resources of a smaller number of total processors within a much smaller collection of computing platforms. Users are given greater freedom in the types of workloads they can accomplish. And adding new applications or services imposes a comparatively smaller impact on the greater user environment.

But with this greater distribution of workload processing comes a related explosion of touch points for administration and security. With the client/server computing model, each server and client runs an OS all its own, with all the associated management requirements. Administering such an IT environment today means controlling the configuration of computers in both the data center and at the desktop, which is a much bigger scope to wrap your arms around. Also problematic are applications that require chatty network conversations between their client and server halves; this chattiness increases the level of network connectivity required between client and server. As clients that use such applications move farther away from their respective servers, overall performance declines, particularly as the number of WAN links involved increases.

One solution that brings together the power of client/server processing with the improved security, administration, and networking architectures of the mainframe days is Terminal Services. With Windows Server 2008, Terminal Services arrives as an installable Role that enables users to share the use of a server-class system in much the same way as in the old mainframe and terminal days. By installing applications and pointing users to the Terminal Server, users gain remote access to applications while administrators gain substantial management benefits.

[Figure 7.1 diagram: a Client operates over a slow connection to a Terminal Server; the Terminal Server requires fast connections to an Application Server and a Database Server, all within the data center boundary.]

Figure 7.1: Terminal Services enables an interface for clients on slow connections to access applications as if they were within the high-speed data center boundary. Moving client applications closer to their respective servers results in improved application performance.

But easier management isn't the only reason to move applications to Terminal Services. Making this solution even more useful to IT is the nature of the network protocol used by Terminal Services itself. The Remote Desktop Protocol (RDP) used to connect clients to Terminal Servers is designed to consume an extremely small amount of bandwidth. Thus, the same Terminal Server connected to by clients on the local network over high-speed links can also be connected to by clients in remote locations over slow connections. RDP is extremely tolerant of latent and low-bandwidth network conditions, making it an excellent solution for enabling far-reaching access to applications.

What Exactly Is Terminal Services?


To put it bluntly, Terminal Services effectively turns a Windows server into a giant Windows workstation, the use of which is shared by each connected user. Unlike virtually all other servers in the data center, the installation of Terminal Services enables non-administrative users to access a server session in the same way they would their own desktop. On that Windows server are installed the applications and other resources required by its users. Users are given a client, the Remote Desktop Client (RDC), with which to enable the connection to the desktop of that Terminal Server.

Figure 7.2: An example of a Terminal Services-hosted desktop connected using the RDC.

The Windows OS has the native capability to operate multiple sessions on the same server. Each session hosts the desktop and operating environment for a single user. The first of these, the console session, is used when an administrator is actively working at the console of the server. Additional sessions are created when users connect via the RDC. Sessions are administratively separate from each other, containing their own processes and process threads. Applications installed on the server are shared by all sessions, with the processes associated with each application being invoked individually into each session.
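
As a quick hypothetical illustration, you can list the sessions currently running on a Terminal Server from a command prompt; the server name w2008b is a placeholder from this chapter's examples:

REM Lists the console session plus any user sessions and their states
query session /server:w2008b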
Applications hosted on Terminal Services in certain circumstances have the ability to share available resources between session processes. However, this process depends on how the application was coded and the rate at which DLLs are modified during use. External third-party tools are often necessary to ensure that resource sharing is being accomplished at optimum levels.

The end result for the client is a connection either to a Terminal Server desktop, as shown in Figure 7.2, or to one of its individual applications. The user interacts with applications on their local desktop as if they were running locally, while those applications are actually being processed by the Terminal Server. The RDC passes little more than screen updates and mouse and keyboard commands between the server and the client. Due to compression on the part of the protocol, screen updates require very little bandwidth to maintain an acceptable user experience and are relatively tolerant of network latency.
With Terminal Services, you'll often hear about the concept of user experience. This is a key point of Terminal Server administration. With large numbers of users simultaneously sharing resources on the same server, one of the roles of the Terminal Server administrator is to play systems babysitter, ensuring that users are not consuming more than their fair share of resources. The goal of this administrative activity is to ensure that the user's experience with the Terminal Server-hosted application is equivalent to or better than what they would have experienced with a local installation of the application. You'll find that managing that experience is one of the biggest jobs of the Terminal Server administrator, especially as the count of Terminal Servers increases. Tools native to Terminal Server as well as from third parties are available to assist with the management of this experience.

With this understanding of what Terminal Services is, it is important to also understand the benefits it can provide. By leveraging the Terminal Services Role atop a Windows Server 2008 computer, the IT environment gains a number of benefits to administration and workload processing:

Centralization of Management: In Chapter 6, we talked about how Active Directory's Group Policy enables administrators to control large numbers of computers through a single configuration policy. Group Policy is effective for accomplishing this task, but its use still incurs an administrative cost for managing the configuration of each desktop's settings. Moving applications from the desktop to shared Terminal Servers reduces the total count of points of configuration. Fewer points of configuration mean fewer places where mistakes and omissions can be made, and an overall reduction in the cost to administer the environment.

Centralization of Applications: Distributing applications to individual desktops incurs a management cost associated with their administration. That cost relates to the time required to install the application, the updates that are later required, as well as the cost to physically travel to the local machine when troubleshooting support is required. By moving applications from the desktop to shared Terminal Servers, large numbers of users effectively share the use of a smaller count of application instances. Although this does not necessarily reduce the licensing cost for those applications, it does reduce the number of touch points for managing those applications. Application updates are similarly affected because fewer application instances ultimately require fewer updates.

Centralization of Security: Ensuring the security of a distributed environment can also be a management headache due simply to the large number of areas that require securing. Applications in addition to OSs require controls in place to maintain the security of data and processing. By moving applications from the desktop to shared Terminal Servers, the sheer number of points requiring security is reduced.

Proximity of Desktop Applications to Servers: Some desktop applications require a large amount of communication to occur between the client application and the server. When clients running these applications are far removed from their servers, the end result is a reduction in performance for the application. Moving these applications from the desktop to shared Terminal Servers that are close in network proximity to their application servers results in an increase in application performance, as pictured in Figure 7.1. This increase occurs because the chatty client application is now within the high-speed data center boundary. The lightweight RDP is used to pass screen updates and keyboard and mouse commands between client and Terminal Server, while the chatty communication between Terminal Server and application server stays within the high-speed data center boundary.

Internet-based Application Hosting: Because the network is no longer the bottleneck between applications and their back-end servers, it is similarly possible to enable the hosting of otherwise internal applications over the Internet. With the right authentication and encryption technology in place, which is natively available within Terminal Services, this allows the IT organization to securely extend the reach of critical corporate applications to virtually anywhere with an Internet connection.

Introducing Windows Server 2008's Terminal Services Role


With the release of Windows Server 2008, all the functionality commonly associated with Terminal Services has been encapsulated into an installable Role. Like the other Roles we've discussed in this guide to this point, the Terminal Services Role contains a set of Role Services that support its functionality. The benefit of this movement is that Role Services in support of Terminal Services can be installed without needing to install the core Terminal Services components. The Terminal Services Role in Server 2008 by itself enables effectively no functionality; installing the Role requires the installation of at least one of the five Role Services enumerated in the subheads that follow. For smaller installations, you might find multiple Role Services installed onto a single server. For larger environments, splitting the processing of these Role Services onto separate servers enables administrative separation as well as better security for each component.

Terminal Server

The Terminal Server Role Service is the component most commonly associated with Terminal Services. Installing this Role Service enables the server to operate as a Terminal Server, accepting inbound RDP clients and handling the creation of multiple sessions. Because of how multiple sessioning impacts applications installed on the server, it is a best practice to install the Terminal Server Role Service first, prior to the installation of any applications. As we'll discuss later, applications installed to Terminal Services must be installed using a special server mode for them to function properly.

TS Licensing

Clients that make use of Terminal Services require a special kind of license called a Terminal Server Client Access License (TS CAL). This special and additional license can be distributed on a per-user or per-device basis. Management of those licenses is handled by the TS Licensing Role Service. At a minimum, one instance of TS Licensing must be present within the Active Directory Forest before clients will be allowed to connect to the Terminal Server.

TS Web Access

As we'll discuss later on in this chapter, once Terminal Services is installed to a server, there are multiple mechanisms that can be provided to connect users to hosted applications. One mechanism involves pointing users to a Web site where links to hosted applications are made available. The TS Web Access Role Service is the built-in mechanism for hosting this Web site. TS Web Access interfaces with configured applications on a Terminal Server to provide a friendly Web-based interface for users.

TS Gateway

By default, RDP traffic crosses the network protected only by RDP's native encryption. For internal-only or low-risk environments, this is often an acceptable solution, but some environments require higher levels of security for this data. The TS Gateway Role Service encapsulates RDP traffic within HTTPS, providing SSL-based transport-level authentication and encryption. As a network gateway, the TS Gateway Role Service also serves as a type of proxy server, proxying traffic between external clients and Terminal Services. This function serves to obfuscate internal services and is an excellent solution for hosting applications over the Internet.

TS Session Broker

Organizations that invest in Terminal Services for hosting applications often require high-availability support to ensure that the loss of a single server will not mean significant downtime of the application. TS Session Broker is a built-in load-balancing and high-availability tool that allows clients to connect to multiple back-end Terminal Servers as if they were a single entity. This capability provides a single point of contact for multiple servers.
In this chapter, we'll talk about the client side of Terminal Services as well as the first two of these Role Services. In Chapter 8, we'll continue the discussion with a detailed explanation of the use and utility of the other three Role Services as well as some best practices associated with the use of Terminal Services.


The Remote Desktop Client


In addition to the server-side components discussed earlier, individual clients require the installation and use of the RDC to connect to Terminal Servers. The RDC is a small tool that enables a client to connect to the Terminal Server. It handles receiving screen updates from the server while sending keyboard and mouse commands back to that server. The RDC can either be run interactively (the console is shown in Figure 7.3) or in the background. Depending on how you plan to distribute links to hosted applications, your users may actively use its interface to connect to servers and applications, or they may click links on Web pages or links installed to their desktops that launch the client in the background.

Figure 7.3: The RDC tool can be run either interactively or in the background. When run interactively, a number of configuration settings can be made by users as they initiate their connection to a Terminal Server.


Running the client interactively enables a user to connect directly to the desktop of a Terminal Server. As you can see in Figure 7.3, to connect to the desktop of the server w2008b, the user needs only to launch the client and enter that server's name in the box next to Computer. Credentials can be associated with a server connection if desired. By clicking through the tabs at the top of the console, the user can configure elements of the user experience such as display size, color depth, local resources that are connected into the remote session, experience elements that add or remove graphical features from the session, and advanced functionality such as server authentication and TS Gateway settings.

Although using the RDC in interactive mode is useful for connecting directly to a Terminal Server's desktop, it is not possible through this interface to access an individually hosted application. These hosted applications, called TS RemoteApps, allow the user to interact with a single application rather than an entire desktop. We'll talk in the next chapter about the functionality and benefits of TS RemoteApps, but know that connecting directly to a TS RemoteApp requires the use of a link either on the user's computer, hosted via a file share, or through a TS Web Access site.
For the purposes of the functionality enabled with Windows Server 2008 and for our discussion in this and the next chapter, the minimum RDC version should be RDC v6.1. At the time of this writing, RDC v6.1 is available through the installation of Windows XP Service Pack 3 or Windows Vista Service Pack 1, and is available as a separate download for Windows XP Service Pack 2. RDC v6.1 is natively available on Windows Server 2008 RTM.

More information about the features available in the RDC v6.1 can be found at http://support.microsoft.com/kb/951616.
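If you are unsure which RDC version a client is running, one quick check (a sketch only; the path assumes a default Windows installation) is to query the file version of mstsc.exe from a command prompt:

wmic datafile where name="C:\\Windows\\System32\\mstsc.exe" get Version

A file version beginning with 6.0.6001 generally corresponds to RDC v6.1, a quirk of the version numbering worth knowing when auditing clients.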


Installing the Terminal Server Role Service


Switching back to the server side, before any client can connect to a Terminal Server, we must first install the Terminal Server Role Service. Do this within Server Manager by installing the Terminal Services Role and then the Terminal Server Role Service. Server Manager will prompt you to make three configuration determinations as part of the installation (a command-line alternative is sketched after this list):

Authentication method. Network Level Authentication (NLA) is an enhanced security mechanism that requires clients to authenticate to the Terminal Server before they are given access to a session. The use of NLA requires support at both the client and the server and is supported only on RDC versions v6.0 or later. If your clients are at this level, it is a good idea to set this to Require Network Level Authentication.

Licensing Mode. Terminal Services licensing can be done either Per Device or Per User. The determination of which licensing type to use depends on your unique environment. As an example, if you have a small number of users who roam among a larger number of devices, Per User can be a good selection. If the reverse is true, consider using Per Device. Should you have a general one-to-one mapping of users to devices, which is often the case in typical office environments, consider choosing the Per User mode. Be aware that the type of TS CALs purchased from Microsoft must match this selection.

User Groups. Lastly, add the users or groups that will have access to connect to this server. These can be individual users but are more often selected by Domain Global Group.
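For repeatable builds, the same Role Service can also be installed from a command prompt with ServerManagerCmd. The identifier below is an assumption; verify the exact role service name on your own system with the query switch before relying on it:

rem List installed and available roles and role services to confirm identifiers
servermanagercmd -query

rem Install the Terminal Server Role Service (identifier assumed to be TS-Terminal-Server)
servermanagercmd -install TS-Terminal-Server -restart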

As stated earlier, it is critical that this Role Service be installed before any applications are installed to the server. Once the installation and reboot have completed, clients will immediately be able to connect to the server's desktop using the interactive mode explained previously. Figure 7.2 shows an example of a hosted desktop that has been connected to using this procedure.


Installing the TS Licensing Role Service


Once the Terminal Server Role Service is installed and its initial configuration has completed, clients will immediately be able to connect to sessions on the server. However, TS CALs are still required, and a TS Licensing server must be available. Upon the installation of Terminal Services, each server automatically enjoys a 120-day grace period before this requirement is enforced by the server and unlicensed clients are prohibited from connecting. This grace period is put into place because the process of obtaining and installing TS CALs involves a separate purchase from Microsoft and a subsequent installation of licenses to a TS Licensing server.

Assuming that the necessary TS CALs of the correct type (Per User vs. Per Device) have been purchased either directly from Microsoft or through a Microsoft partner, the first step in making permanent licenses available is to install the TS Licensing Role Service. Do this from Server Manager by right-clicking the Terminal Services node under Roles and choosing Add Role Services. Add the TS Licensing Role Service. Server Manager will prompt you to make one configuration decision as part of the installation:

TS Licensing Configuration. Any TS Licensing server can be configured to serve licenses to its residing Workgroup, Domain, or Forest. Setting the discovery scope for the license server to one of these three values determines the boundary for serving licenses. Also asked here is the location for the TS Licensing database, which is a relatively small database installed local to the server.
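As with the Terminal Server Role Service, TS Licensing can also be added from the command line. The identifier here is again an assumption, so confirm it with servermanagercmd -query first:

servermanagercmd -install TS-Licensing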

Be aware of some idiosyncrasies associated with how the chosen licensing scope affects an individual Terminal Server's ability to discover an available license server:

The Workgroup licensing scope is only available when the computer is a member of a Windows Workgroup and not a member of a Domain. When the Workgroup licensing scope is enabled, Terminal Servers will automatically locate the TS Licensing server in their workgroup without additional configuration.

When the Domain licensing scope is enabled, Terminal Servers will only be able to automatically locate the TS Licensing server when it is installed onto a Domain Controller. If TS Licensing is not installed onto a Domain Controller, each Terminal Server must be specifically configured to point to a license server. This can be done from within Server Manager | Terminal Services | Terminal Services Configuration | Edit Settings | Licensing tab. There, choose Use the specified license servers and enter potential license server names separated by commas into the text box.

When the Forest licensing scope is enabled, Terminal Servers will be able to automatically locate the TS Licensing server without any additional configuration. This is because TS Licensing servers that use the Forest licensing scope automatically publish information about their location into Active Directory. The installing administrator must have Enterprise Administrator privileges to accomplish this task.

If TS Licensing is installed on the same server as Terminal Services, that server will be able to automatically locate the license server no matter which scope is selected.


Once TS Licensing is installed, the license server must be activated and licenses installed. To access the TS Licensing Manager console, navigate to Administrative Tools | Terminal Services | TS Licensing Manager. The resulting screen looks similar to Figure 7.4. Two steps are required to properly add licenses. First, right-click the server name and select Activate Server to launch the Activate Server Wizard. This wizard registers TS Licensing for this server with the Microsoft Clearinghouse, a process that can be done over the Internet, via phone, or using a Web browser.

Figure 7.4: An example of the TS Licensing Manager immediately after installation and prior to activation and installation of licenses.

If this server has Internet access, choose Automatic connection (recommended) for the Connection method. The server will ensure that a connection can be made with the Microsoft Clearinghouse and request Company Information such as name, company, and region as well as optional physical and email contact information. Once this information is entered, the server will register with the Microsoft Clearinghouse and activate the license server.

Step two is to actually install the TS CALs needed by your environment. Once licenses have been purchased, either directly through Microsoft or through a partner, they will be made available through the Microsoft Clearinghouse. For retail-purchased license packs, a license key will be required. For other types of agreements, the agreement number will be required. Once registered, the licenses will be installed to the server.

Licensing for Terminal Services has historically been a confusing process for many administrators due to the idiosyncrasies of the licensing system. Because of these historical problems, two new features with Windows Server 2008 are the Review Configuration wizard and the Licensing Diagnosis node in Server Manager. The Review Configuration wizard can be launched from within the TS Licensing Manager console by right-clicking the server name and choosing Review Configuration. This tool performs a series of tests against the configuration of the license server itself and notifies the administrator when known issues are seen. Figure 7.5 shows an example of the warning message that appears when the license server is installed with the Domain licensing scope. You'll see there that this is also the location where the licensing scope can later be changed if desired.


Figure 7.5: The Review Configuration wizard alerts the administrator when the TS Licensing configuration may experience known problems.

Also available is the Licensing Diagnosis node in Server Manager, where a more comprehensive view of the licensing configuration is shown. For this server (see Figure 7.6), this screen displays information about the number of TS CALs available for distribution to clients and any warnings regarding configurations that may impact the ability to serve licenses to clients. Particularly handy is the bottom pane, where all discovered license servers are displayed. This window is useful for identifying which servers are currently serving TS CALs to clients and can be located by this Terminal Server.


Figure 7.6: The Licensing Diagnosis node displays a comprehensive view of the licensing configuration, including available TS CALs and any discovered license servers.

Managing Terminal Services


Once Terminal Services has been installed and correctly licensed, there are a number of steps to accomplish to make the server available for use by users. Within Server Manager are a number of configurations that determine how users interact with the server. There are also best practices associated with installing applications and managing user profiles. In this section, let's discuss each of these management steps in turn.

Server Manager

Upon the installation of the Terminal Server Role Service, Server Manager is extended to include three new nodes: TS RemoteApp Manager, Terminal Services Configuration, and Terminal Services Manager. Figure 7.7 shows an example of this with the Terminal Services Configuration node highlighted.


Figure 7.7: Initial configuration of the Terminal Server is done via Server Manager. Displayed here is the Terminal Services Configuration node, where RDP protocol and server-specific settings are configured.

Skipping over the TS RemoteApp Manager node until the next chapter, there are a number of configurations of value within Terminal Services Configuration. This console is used to manage the configuration of the RDP protocol itself as well as a few server settings. If you double-click the RDP-Tcp connection within the console, a properties window appears. This window includes eight tabs for configuring the properties of the protocol:

General. This tab provides for configuring the security and encryption level for the protocol as well as identifying the certificate to be used. As we'll discuss in the next chapter, certain services require the use of a server certificate for authentication and/or encryption. By default, a self-signed certificate is available for use; however, a trusted certificate is necessary for production use of these features.

Log on Settings. By default, clients are configured to provide their own logon information. This is set so that each individual client is authenticated based on their own user permissions. Alternatively, in a low-security environment, it is possible to configure the server to automatically log on each client with a preconfigured user account. Doing so eliminates the ability to map individual people to sessions but enables a type of anonymous logon.

Sessions. By default, settings related to session disconnection, session reset, and idle limits are configured within each individual user's Active Directory object. This tab provides a place to override the user object configuration for all connections to this server. In environments where a cohesive policy is desired for these settings, it is a best practice to enable this override here so that these settings do not need to be configured for each individual user.

It is often a best practice at this screen to end disconnected sessions after a short number of minutes (such as 5 minutes) and to set an idle limit to a large value (such as 240 minutes). This allows accidentally disconnected sessions to automatically reset after a few minutes while also resetting sessions that have been idle for a long period of time. When session limits are reached or connections are broken, it is often a best practice to end the session rather than disconnect it. This frees system resources that would otherwise never be released until the session is correctly logged off.

Environment. Three settings are possible in this tab. By default, the configured setting is Run initial program specified by user profile and Remote Desktop Connection or client. This setting enables both desktops and TS RemoteApps to be used on this server. The alternate options are to disallow an initial program from being launched, which has the effect of restricting users to full desktops only, or to hard-code a specific application to be launched at connection in the case where the Terminal Server hosts only a single application.

Remote Control. One of the administrative benefits of Terminal Services is the ability for users and administrators to look over the shoulder of another session for troubleshooting or cooperative work. This tab enables the configuration of those Remote Control settings either based on the AD user object or via an override. As with session settings, it is often a best practice to override the user settings at this screen for easier administration.

Client Settings. This tab includes the master toggle switches for disabling certain features for all connecting clients, including color depth, connected drives, audio, clipboard sharing, and other features.

Network Adapter. It is possible in this tab to identify the specific network adapter to use for this instance of the protocol as well as the number of concurrent connections to support on that adapter. The settings here are usually left alone.

Security. Within this tab it is possible to granularly identify which users have what kinds of access to the Terminal Server.

One use of this tab is in granting individual non-administrative users the ability to Remote Control other sessions. This is done by granting the Remote Control privilege to the Remote Desktop Users group under the Advanced button.


Double-clicking any of the entries in the Edit settings section of Terminal Services Configuration brings forward another properties box. We've already talked about the contents of the Licensing tab, and we'll talk about the TS Session Broker tab in Chapter 8. But the General tab of this box is useful during initial server configuration to control several settings that can affect the performance of the Terminal Server (and, correspondingly, the experience of its users). Four options are available:

Delete temporary folders on exit. Through the course of daily operations, user sessions tend to accumulate files within their temporary folders. This folder by default is mapped to C:\Users\{username}\AppData\Local\Temp\1. The collection of these files over time can fill up the available drive space, which is a particular problem with the default configuration because temporary folders are stored on the system drive. Checking this box instructs the server to empty each user's temporary folder location at logoff. (A read-only way to inspect this setting and the next in the registry is sketched after this list.)

Use temporary folders per session. The temporary folder path noted in the previous bullet is enabled when this checkbox is selected. This checkbox identifies a separate temporary folder per user, and its selection is a best practice for ensuring that users' temporary folder usage does not step on each other's.

Restrict each user to a single session. When this selection is disabled, it is possible for users to open multiple sessions to the same Terminal Server, which can consume an unnecessary excess of resources. There are some situations, however, where multiple sessions may be useful. By default, this option is selected, but each environment should weigh its need for resource conservation against the restriction on multiple sessions.

User logon mode. This configuration is used during maintenance operations to prevent new logons from occurring on the server. When maintenance is planned and the activity is of low priority, it is often best not to simply kick users off the server. This setting, new to Windows Server 2008, enables the administrator to prevent any new logons to the server while draining existing connections as users naturally complete their work and log off.
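For reference, the first two of these checkboxes are commonly reported to map to registry values under the Terminal Server control key. The value names below are our assumption, and the Server Manager UI or Group Policy remains the preferred way to change them; a read-only inspection looks like:

reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v DeleteTempDirsOnExit
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v PerSessionTempDir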
Be aware that the Server Manager versions of Terminal Services Configuration (discussed earlier) and Terminal Services Manager (discussed shortly) can work only with the local server. By navigating to Administrative Tools | Terminal Services | Terminal Services Configuration or Terminal Services Manager, it is possible to manage multiple servers from the same interface. This is done by right-clicking the top-level node and choosing Connect to Computer.


The Terminal Services Manager node is another node available within Server Manager; its primary use is in identifying and interacting with the sessions and users currently logged into Terminal Servers within the domain. Three tabs are available for use by administrators:

Users. Individual users that are currently logged into connected servers are shown in this tab. From this tab, user sessions can be disconnected, reset, or logged off. Administrators can send network messages to specific users from this screen or initiate a Remote Control session.

Sessions. This screen includes all the currently open sessions on the server and is shown in Figure 7.8. Slightly different from the Users screen, all open sessions, including listener sessions and console sessions, are shown here. Similar actions can be done to sessions within this screen as can be done on the Users screen.

Processes. Each session on each server comprises a number of individual processes. Those processes are what drive the activities being completed by each user. But sometimes those processes use too many resources or need to be killed due to problems. Within this screen, all processes are listed by their owning user and session. Right-clicking any process allows an administrator to End Process.

Figure 7.8: A look at the Terminal Services Manager's Sessions screen showing some of the actions that can be done to individual sessions.


Installing Applications

Once the initial configuration of the Terminal Server settings is complete, you are ready to begin installing the applications you wish to host. First and foremost, be aware that some applications do not behave properly when run within a multi-user environment like Terminal Services. Often, these applications inappropriately rely on centralized locations such as the HKEY_LOCAL_MACHINE registry hive to store user-specific information. Some applications like these may require specific tweaking, registry manipulation, or other hacking for them to function properly within the Terminal Server environment. Testing all applications, and most specifically simultaneous access by multiple users, prior to distribution is critical to ensuring their proper functionality.

That being said, Terminal Services does come equipped with an installation mode that assists with some of these types of incompatibilities. Prior to installing any application to a Terminal Server, you must ensure the following (the full sequence is sketched at the command line after this list):

Ensure that no users are logged into the server. Prior to installing any application, all RDP sessions to the Terminal Server should be closed and all users logged out other than your user account.

Install applications from the console. It is a good practice to install all applications from the console itself rather than through an RDP session. This is due to how some application installations are coded to work with the console session. With Windows Server 2008, the console session is structured differently than with previous OSs, no longer using what is called session 0. Although this change to the structure of the console session reduces the requirement for applications to be installed only via the console, in production it remains a good practice.

Enter the server into install mode. Prior to launching the setup file for your application, you must first enter the server into install mode. There are two ways to accomplish this. Entering change user /install at a command prompt will complete the switch. Alternatively, the Control Panel includes a link called Install Application on Terminal Server. This Control Panel option will switch the server into install mode and prompt you for the setup file within a wizard format.

Install the application. Complete the installation as necessary.

Reboot the computer or return the server to execute mode. Once the installation is complete, the server will need to be returned to execute mode. This process instructs the Terminal Server to stop watching for incoming installations and to process whatever installation it has just logged. Do this by either completing the Install Application on Terminal Server wizard or by entering change user /execute at the command prompt. Any reboot of the server automatically brings that server back online in execute mode.
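Pulled together at a command prompt, the install-mode sequence looks like the following minimal sketch (the installer path is purely hypothetical):

rem Verify the current mode
change user /query

rem Switch to install mode before running setup
change user /install

rem Run the application's installer (hypothetical path)
\\fs01\installs\someapp\setup.exe

rem Return to execute mode once the installation completes
change user /execute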


This last bullet is an important point because some installations require a mid-installation reboot. The reboot occurs at some point during the installation, and once the server is logged back in, the installation continues. The problem with installations that have a mid-install reboot is that the server returns from the reboot in execute mode and not in the proper install mode. Should your application have an installation of this type, it is critical that you identify how the installation restarts itself after the reboot. Often, this is done by adding a link to the RunOnce key in the registry, found at HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce.

This install mode is necessary because many applications do not have install routines that are considered Terminal Server-aware. By default, applications typically install system-wide settings to the HKEY_LOCAL_MACHINE registry hive and user-specific settings into the HKEY_CURRENT_USER hive. For typical servers and desktops that serve only a single user at a time, this behavior is normal and desired. But with Terminal Servers, installing user-specific information to the installing user's HKEY_CURRENT_USER hive means that other users will not necessarily have the registry information they need to properly run the application.

The switch to install mode instructs the Terminal Server to watch for any registry updates to the HKEY_CURRENT_USER\Software key. Any updates to that location, typically made during an installation, are then logged to a special Terminal Server key located at HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Terminal Server\Install. Within this key, often called the shadow key, is stored the registry information needed by users when they later log on. When the server is returned to execute mode and logons are re-enabled, the Terminal Server checks each user's HKEY_CURRENT_USER hive at logon to see whether it contains the proper information. If not, that information is copied from the shadow key location.
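After an installation, you can peek at the shadow key with a read-only query to confirm what the install-mode watcher captured. The /s switch recurses through subkeys, so expect a fair amount of output:

reg query "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Terminal Server\Install\Software" /s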
As stated in the beginning of this section, some applications simply don't function very well when run atop Terminal Server. The process explained here ensures that most applications function properly, but some remain problematic even with this process and may require further tweaking. Testing of all applications, and specifically testing of multiple, simultaneous user access, is critical.


Managing User Profiles

User profiles and their management are another important part of managing the user's experience with Terminal Services. The reason is that user profiles by default are system specific and local to the computer where the user logs in. When a user logs into a Windows Server 2008 computer, that computer creates a profile for them as a subfolder of the location C:\Users. This works well in the traditional one-user-at-a-time situation seen on non-Terminal Servers, but it can cause issues in the Terminal Services environment.

First, local profiles can have a tendency to grow very large. This is particularly the case when users leave large files on their desktop or in profile-housed folders. The relatively small number of profiles on typical workstations and non-Terminal Servers usually doesn't cause a problem with disk space usage. But with large numbers of users potentially logging into a Terminal Server, each of which creates a profile, the potential is there for large amounts of disk space to be consumed by user profiles.
This has been a known problem with Terminal Servers since inception, and to combat the problem a number of solutions are available. Some solutions utilize third-party integrations to reduce profile size and/or enforce mandatory profiles that never change. Others leverage special coding to merge user specific settings with default or mandatory profiles. All extend the native functionality of Windows to support more efficient use of user profiles.

One solution for assisting with the problems of profiles is to use what are called Terminal Services User Profiles. These special roaming profiles are used only when a user logs into a Terminal Server and are especially critical when multiple Terminal Servers are available in an environment. Because users expect to see the same environment when they log in, Terminal Servers that are load balanced with each other require some form of roaming profile. This ensures that no matter which server the user logs into, they see that same environment.

Terminal Services User Profiles are found within each user's object in Active Directory Users and Computers (ADUC). Open any user's object within the ADUC console and navigate to the Terminal Services Profile tab, and you'll see a window similar to Figure 7.9. For each user that will be logging into a Terminal Server, it is possible to identify a file server and file share location that will be the storage location for their Terminal Server-specific roaming profile. As you can see in the figure, it is a best practice to identify a subfolder of the roaming folder share named after the user's username. Also available in this location is a mapping to the user's home drive.
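As a concrete illustration (the server, share, and user names here are entirely hypothetical), the Profile Path box on the Terminal Services Profile tab might contain:

\\fs01\TSProfiles$\gshields

Using a hidden share (the trailing $) is a common convention for keeping profile shares out of casual network browsing.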
Making use of Terminal Services User Profiles is a critical part of Terminal Services administration, especially in multi-server environments. However, their use will lengthen the login process as the server copies the roaming profile from the file share. You will need to work with your users to ensure that their profile sizes do not grow excessively large, or their login and logoff times can increase substantially. The careful use of Group Policy and/or third-party tools for Terminal Server management can also assist with this process.


Figure 7.9: Terminal Services User Profiles allow users to log into different Terminal Servers and get the same environment.

Using Terminal Services User Profiles enhances the experience for users when they connect to multiple Terminal Servers, but alone this step does not eliminate the problem of profile storage on the Terminal Servers themselves. What is needed along with it is a second step that instructs the Terminal Server to delete any locally-copied roaming profiles once the user logs off of the system.


This can be done via Group Policy, found in the Group Policy Management Editor in the location Computer Configuration | Policies | Administrative Templates | System | User Profiles. There, enable the policy titled Delete cached copies of roaming profiles and attach the Group Policy Object to the Organizational Unit that contains your Terminal Servers. Enabling this policy instructs the targeted machines to delete any locally-copied roaming profile information from C:\Users once the user logs off of the system.
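For reference, this policy is commonly documented as setting the DeleteRoamingCache value in the policies hive. Group Policy remains the preferred way to configure it, but a sketch of the equivalent direct registry change would be:

reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\System" /v DeleteRoamingCache /t REG_DWORD /d 1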
Be aware that this only happens when users log off. If their session is reset instead of going through the logoff process, the server does not complete the deletion. For environments where resets are a regular occurrence, this can impact the processing of profiles.

Printing with Terminal Services


Printing has long been a pain point within Terminal Services environments. Because of the proliferation of printer drivers available on the market, nearly none of which work across multiple types or manufacturers of printers, one historically painful task has been the installation of printer drivers onto the local Terminal Server for use by its users.

The reason for this problem is due to how printers are used by local clients. With clients connecting to Terminal Servers from all across the network, and potentially the Internet, the use of a print server that is local to the Terminal Server doesn't often connect the user to the printer they want. As an example, if a user in Denver has connected to a Terminal Server in Los Angeles and wants to print a document, they likely want to print it to their local printer and not one that is served off of the print server in Los Angeles. To enable this to occur, Microsoft long ago enabled the RDC to print jobs to local printers. But those printers could be of multiple manufacturers and/or multiple models, each requiring its own individual driver. Thus, one job of the Terminal Server administrator was to continuously watch the Event Log for error messages that alert for missing printer drivers and subsequently locate, download, and install those drivers.

With previous OS versions, this process needed to occur on each individual Terminal Server. The administrator was required to unpack the device driver and install that print driver to the server from the Add Printer wizard. Doing this for dozens or hundreds of printers across even a small number of Terminal Servers quickly grew into a management nightmare.
In fact, it is such a management nightmare that a number of third-party companies have developed solutions to ease the pain of deploying the right printer drivers to Terminal Servers. Those solutions automate much of this manual process and are critical in larger environments where printing is necessary.


Another solution for printing problems that arrives with Windows Server 2008 is a new feature called Easy Print. This feature eliminates the need to install printer drivers on the Terminal Server at all. Easy Print leverages the XPS print path that is natively available with the RDC v6.1 in Windows Vista Service Pack 1 and is installed to Windows XP with the installation of Service Pack 3. The .NET Framework v3.5 is also required for Windows XP clients. This print path enables print jobs to be spooled down to the local client and then processed by the local client's print driver instead of a print driver installed on the Terminal Server. Another benefit of Easy Print is that when an RDC client views printer settings, they will see the same printer configuration wizard they are used to seeing on their local machine. This will help eliminate confusion about printer configuration and settings while within an RDP session.

Two Group Policies are available that work with Easy Print. These policies instruct the server how to process printer driver requests from incoming clients. Both are found at the location Computer Configuration | Administrative Templates | Windows Components | Terminal Services | Terminal Server | Printer Redirection. The first, titled Use Terminal Services Easy Print printer driver first, when enabled instructs the server to attempt to use the Easy Print functionality before attempting to use any locally-installed printer drivers. This is handy for the situation where clients do not have the proper prerequisites for Easy Print, such as the right Service Pack or .NET Framework components. The second, titled Redirect only the default client printer, when enabled instructs the Terminal Server to redirect only the client's default printer. Enabling this second policy is a good practice when possible because of performance issues that can occur during the login process as the server attempts to connect to each client printer.

Summary
With the right administrative configurations in place, the addition of Terminal Services to your Windows Server 2008 infrastructure can significantly reduce the workload of the administrator while extending the reach of corporate applications. As we've seen in this chapter, Terminal Services provides a mechanism for consolidating applications back into the data center, not unlike the mainframe days. It reduces the total number of configuration and security touch points required to be managed by IT administrators while making those applications readily available to users on even slow or latent network connections.

But this explanation of Terminal Services has only begun. Windows Server 2008 adds a number of new and expanded features to Terminal Services that reduce confusion for users, enable seamless connections from client desktops, and provide security and load-balancing support for high-risk and high-availability environments. In Chapter 8, we'll continue our discussion of Terminal Services and focus on those new and expanded features: TS RemoteApps, TS Web Access, TS Gateway, and TS Session Broker. You'll find that these new additions to the venerable Terminal Server make its administration more flexible and easier while providing an enhanced experience to your users.


Chapter 8: Advanced Topics in Terminal Services


Terminal Services and Terminal Server aren't new technologies. Originally available with the release of Windows NT 4.0 as Windows NT Terminal Server Edition, the bits that make up Terminal Server have been around since 1998, making this most recent operating system (OS) release a 10-year anniversary of Windows remote application support.

But Terminal Server has always had a complex history in relation to its related product Citrix XenApp (previously called Citrix Presentation Server and Citrix MetaFrame). Due to long-standing agreements between Microsoft and Citrix, the two products have been tied to each other throughout their history. Citrix's product and its features have traditionally been targeted at environments with higher-end requirements, with the Citrix product including extra management features like Published Applications, transport-level security, a customizable web front-end, multiple mechanisms for deploying applications, and a rich load-balancing engine for distributing incoming client session requests. These and other feature sets have historically been available only in the higher-end and higher-cost Citrix product.

Contrast this with Terminal Services, which over its history has usually been relegated to smaller environments, those that cannot afford the added features of Citrix or have no need for them. A major factor in this decision is that Terminal Services is significantly less expensive than the Citrix solution, owing to the fact that no extra license costs are required other than the TS CALs discussed in our last chapter. In contrast, using Citrix in the environment requires additional per-concurrent-user licenses along with their accompanying maintenance costs, over and above Terminal Services' TS CALs, Windows licenses, and maintenance. This makes Citrix's features a natural up-sell for those who find themselves needing its extra functionality.

Figure 8.1: A Terminal Server has a lower cost of entry and recurring cost of ownership than does a comparable Citrix server.


Advanced New Functionality for Terminal Services in Windows Server 2008


The relationship between these two products changes substantially with the release of Windows Server 2008. Specifically, the gap between what is available in the Citrix solution and what Terminal Services includes gets smaller than ever before:

With Windows Server 2008, Terminal Services adds application publishing through TS RemoteApps. Adding to this are multiple new ways in which applications can be deployed to clients: through the sharing of RDP files, direct installation, or hosting via a web site.

Certificate-based transport-level security as well as the ability to proxy session traffic arrives with the inclusion of TS Gateway.

Microsoft gains its own pre-built web front-end for hosting Terminal Server connections through the addition of TS Web Access.

Load balancing of multiple Terminal Servers gets greatly improved with the new TS Session Broker.

Though not all the features that differentiate Microsoft and Citrix are now aligned, Terminal Server administrators who have been looking longingly from the sidelines at the exciting features previously available only with Citrix can now add some of them to their Terminal Services environment for no added cost. In this chapter, we'll talk in detail about how to set up and use each of these new advanced features and how each has the ability to significantly improve your users' experience.
If you are considering an upgrade to Citrix XenApp, consider carefully the features you need. Those features may now be available with Terminal Services alone.


Deploying Applications with Terminal Services


In Chapter 7 we talked about how to use the Remote Desktop Client to connect to a Terminal Server desktop session. This process can be done by any user with the correct permissions to connect to a full server desktop along with its installed applications. But sometimes you, the administrator, don't want to provide access to that entire server desktop. Instead, you may want to provide access to only a specified few applications on the server. There are a number of reasons why enabling access to specified applications can be a superior solution to deploying full desktops:

Enabling access to specified applications can be easier for users to understand. Users know they need access to their applications. Giving them a secondary desktop with a full Start bar and all the other accouterments can be confusing.

Enabling access to specified applications can consume fewer resources on the server. Because the full desktop and explorer.exe shell, along with the other processes it relies upon, does not need to be rendered for each user, fewer resources are consumed per user than with full desktops.

Enabling access to specified applications can consume more predictable levels of resources. When users are given access to a full desktop, they typically have the ability to use any of the applications on that server. This makes it very difficult to profile resource consumption because users' actions are less controllable. This inability to profile makes it more difficult to understand and plan for resource use and thereby ensure the best possible user experience.

Enabling access to specified applications can be easier to secure. When full desktops are provided for users, administrators must undergo a securing activity to restrict what users can do while logged in. When applications are provided, this activity need only be done per application instead of per desktop, a process that is much easier to accomplish.

These benefits aren't the only factors that should drive your decision about how to make applications available to your users. There are a few gotchas associated with distributing applications rather than full desktops. For example, enabling access to specified applications can be more challenging when applications need to work with each other. This is the process whereby one application spawns a second application to process some form of data. Imagine the situation where Microsoft Outlook needs to launch Microsoft Word in order for Word to display an attached document. For this down-level application spawning to function correctly, each potential application that could be spawned must be collocated on the same server. A TS RemoteApp will automatically launch the second application when necessary, but only if it resides on the same Terminal Server.
Make sure that all applications potentially required by applications you plan to host are also located on your Terminal Servers. You may not necessarily need to create them as TS RemoteApps, but they must be installed.


TS RemoteApps

TS RemoteApps are configured from the TS RemoteApp Manager, and any application that will be configured as a TS RemoteApp must first be installed to the server. Once it is installed, right-click TS RemoteApp Manager in Server Manager and choose Add RemoteApp Programs. This will launch the RemoteApp Wizard. Clicking Next will present a screen that lists the applications currently installed to the server, similar to what is seen in Figure 8.2. If the application you wish to distribute is available, select it from the list and click Next.

Figure 8.2: The RemoteApp Wizard interrogates the server to populate a list of available applications that can be distributed via Terminal Services.

Occasionally, the specific application you wish to distribute is not available in the list. When this occurs, click the Browse button and select the primary executable for launching the application. For this example, we'll choose the Calculator application, click Next, and then click Finish to create the TS RemoteApp.
Sometimes you may wish to host an application with special parameters or command-line arguments. These arguments launch the application with special configurations or automatically load a specific document. As an example, you can launch Microsoft Excel with a specific spreadsheet automatically loaded, as sketched below. Do this by clicking the Properties button. In the resulting screen, change the selection for Command-line arguments. In this same screen it is also possible to change the program's name, icon, location, or alias.
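For instance, to launch Excel with a budget workbook preloaded (all paths here are hypothetical and depend on your Office version and file locations), the program path and command-line argument might be:

Program:   "C:\Program Files\Microsoft Office\Office12\EXCEL.EXE"
Arguments: "\\fs01\reports\Q3-budget.xls"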


TS RemoteApp Distribution Options

Once created, there are three major mechanisms through which TS RemoteApps can be deployed to your users. Depending on how you want your users to interact with their applications, you may make one or more of these options available; multiple distribution options can be used simultaneously if desired. The three distribution options are distribution of RDP files, RDP installation to local desktops, and hosting via TS Web Access. We'll talk about each of these in the sections below.

RDP File Distribution

The easiest mechanism for distribution is one that's been used with previous OS versions for distributing published desktops. Providing access to RDP files through storage on a file share or distribution through email or another medium is one way to connect users to Terminal Services-hosted applications. Once a TS RemoteApp has been created, right-click the application in the list of RemoteApp Programs and choose Create .rdp file. A wizard will appear that looks similar to Figure 8.3.

Figure 8.3: A wizard is available for configuring RDP files at the time of creation.


Within this wizard, it is possible to configure where the resulting RDP file will be stored as well as settings for the Terminal Server itself, TS Gateway, and any certificates used. By clicking the Change button under Terminal server settings, it is possible to modify the server or RDP port used by the RDP file in connecting to the server. For our example, we'll also remove the check next to Require server authentication.

Although we'll talk in a later section about the integration of RemoteApps with TS Gateway, be aware that with Windows Server 2008 it is possible, and suggested, to digitally sign RDP files using certificates. Signing an RDP file enables the client to authenticate your identity as its publisher. It allows clients to verify the organization that authored the RDP file and enables them to make informed decisions about whether to attempt a connection using the file.

Click Next and Finish to complete the wizard. Once complete, any RDP client that can resolve the Terminal Server will be able to double-click the resulting file to launch the Calculator RemoteApp. Distributing this file to prospective users or storing it on a file share will make the connection available for users who need to access the program.
If you plan to enable access to RemoteApps from computers in other DNS zones, such as from the Internet, ensure that the server name within the RDP file is fully qualified and that the client can resolve the Fully Qualified Domain Name (FQDN) of the Terminal Server from those network locations. This is particularly important if you plan to enable Internet-based access to applications.
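An RDP file is just a text file of name:type:value settings, so you can open one in Notepad to confirm what the wizard produced. A RemoteApp file typically contains entries resembling the following (the server and gateway names are hypothetical):

remoteapplicationmode:i:1
remoteapplicationprogram:s:||Calc
remoteapplicationname:s:Calculator
full address:s:ts01.company.com
gatewayhostname:s:gateway.company.com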

Local Desktop Installation

In terms of usefulness, RDP file distribution is arguably the least interesting of the distribution options available. Another, more exciting, option available once a RemoteApp has been created is to wrap the RDP file into an MSI installation file. This process enables the administrator to install the RDP file onto the desktops of users that may need to access the RemoteApp. To make this option even more useful, multiple options are available for the administrator to include with the MSI installation:

Start Menu shortcut. This option adds a link to the RemoteApp into the desktop's Start Menu, allowing the user to launch the RemoteApp just like they would a typical, locally-installed program. Like all Start Menu shortcuts, it is possible to specify a Start Menu folder where the link will be stored.

Desktop shortcut. In addition to the Start Menu link, it is also possible to include a shortcut on the user's desktop.

Take over client extensions. Every Windows desktop and server includes client extension associations. These associations allow a user to double-click a file to automatically launch the application that is configured to use that file type. As an example, when a file with a .DOC extension is double-clicked, the computer will automatically launch Microsoft Word (if it is installed) with the file loaded. Selecting the option Associate client extensions for this program with the RemoteApp program instructs the MSI installation to re-associate client extensions to launch the remote application rather than a local instance when a file is double-clicked.


To create an MSI installation file, click the Create Windows Installer Package link in the TS RemoteApp Manager. While the first screen of the resulting wizard looks similar to what we've already seen in Figure 8.3, click Next to see the additional configuration options shown in Figure 8.4. On this screen of the wizard you can choose the installation parameters noted above. Click Next and Finish to create the MSI installation file.

Figure 8.4: The Configure Distribution Package screen enables choices for how the RDP file will be installed.

Once complete, the resulting MSI installation file will need to be installed to all clients who will use the remote application. This installation can be done manually, using a software deployment solution like Microsoft System Center Configuration Manager, or using Active Directory's native software distribution capabilities.
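For a manual or scripted rollout, the package installs with standard msiexec switches; the share and file name below are hypothetical:

msiexec /i \\fs01\packages\CalculatorRemoteApp.msi /qn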


Hosting via TS Web Access

The last option for distribution is to host the application on a TS Web Access web site. We'll talk more about the installation and configuration of TS Web Access in the next section. But for now, know that the process to make a TS RemoteApp available through TS Web Access is to right-click the application in RemoteApp Programs and select Show in TS Web Access. Doing this immediately makes the application available in the TS Web Access instance currently configured for this Terminal Server. If you wish to make a full desktop connection available in TS Web Access, right-click TS RemoteApp Manager in Server Manager and choose Terminal Server Settings. In the resulting screen, check the box for Show a remote desktop connection to this terminal server in TS Web Access. As with RemoteApps, this will immediately make an icon for the full desktop connection available in TS Web Access.

One of the benefits of using TS Web Access for user access to RemoteApps is the ease of adding and removing applications as they evolve over time. For example, if you have a Terminal Server-hosted application that regularly changes versions, using TS Web Access as the location for hosting its access is handy because the old version can simply be turned off when it is no longer relevant. At the same time, the new version can be turned on when it is ready for use. This is different from the MSI deployment option, where changes to a hosted application can force a change to the installed MSI. The same holds true for applications that you want to disable for maintenance periods. It is very easy to remove a RemoteApp's link in TS Web Access during its maintenance period through a single click.
Before making applications available to your users, consider carefully the ways in which you plan to distribute RemoteApp access, as each mechanism has its own benefits and drawbacks.

TS Web Access
Before applications can be hosted via a TS Web Access site, the Role Service must be installed to a Windows Server 2008 computer somewhere within your domain. This computer need not be the same computer that acts as a Terminal Server, and for larger or more complex installations it is often a best practice to use a separate server. Installing TS Web Access to an existing web server is another option for environments that already have a dedicated web server in place. Installing the TS Web Access Role Service is done in the same way that other Role Services are installed through Server Manager. If not already present, adding the Role Service will also install the necessary IIS components required to host the web site.
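Like the other Role Services, TS Web Access can be added from the command line as well. As before, treat the identifier as an assumption and confirm it with servermanagercmd -query:

servermanagercmd -install TS-Web-Access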


Installing and Using TS Web Access

If you add TS Web Access to an existing Terminal Server, installing the Role Service is the only step that must be completed for users to begin using the web site. Once it is installed, clients can access the TS Web Access site by navigating to the URL http://{serverName}/ts. An example of this web site with our Calculator application already available can be seen in Figure 8.5.

Figure 8.5: The TS Web Access web site connects users to their applications and desktops.

Three buttons are made available at the top of the screen: RemoteApp Programs, Remote Desktops, and Configuration. The tab marked RemoteApp Programs displays any RemoteApps that have been configured to show in TS Web Access. After logging in, users will be able to double-click any listed application to automatically launch it. Clicking the Remote Desktop link brings forward an entry box that allows a user to connect to the desktop of a selected server. Options there are provided for customizing the connection in terms of screen size, devices and resources pulled into the session, as well as additional options like sounds, keyboard shortcuts, and performance.


For installations where the TS Web Access Role Service is not installed to a Terminal Server, the third button, named Configuration, is important. Once the Role Service is installed, an administrator needs to first navigate to this button and input the name of the Terminal Server or Terminal Server farm to be used as the TS Web Access site's source for listed applications. By default, TS Web Access is limited to pulling its list of available applications from only a single Terminal Server or Terminal Server farm that is hosted via TS Session Broker.
As we'll discover later on when we talk about TS Session Broker, for a set of Terminal Servers to operate as a load-balanced farm, they must have the exact same configuration with the exact same TS RemoteApps on each instance. If you need to connect your TS Web Access instance to multiple Terminal Servers with different configurations, it is possible to programmatically configure multiple web parts to display at the same time. However, this is a complex task out of scope for this chapter. For more information on how to accomplish this using a Windows SharePoint Services integration, see the web site: http://technet2.microsoft.com/windowsserver2008/en/library/7929a12e-552c-4409-91005a774a4cfa171033.mspx?mfr=true.

Lastly, for installations where the TS Web Access Role Service is not installed to a Terminal Server, one final task must be completed prior to making the TS Web Access site available for use. On the remote Terminal Server, navigate to Administrative Tools | Computer Management | Local Users and Groups | Groups and look for the group TS Web Access Servers. Add the computer account of the newly-created TS Web Access server to this group and reboot the computer. Accomplishing this task enables the correct permissions for the Terminal Server and the TS Web Access server to work together.
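Membership in this group can also be granted from a command prompt on the Terminal Server. The domain and server names below are hypothetical, and note the trailing $ that denotes a computer account:

net localgroup "TS Web Access Servers" COMPANY\WEB01$ /add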
Remember that the RDC v6.1 is required for clients to work with TS Web Access. Although no additional work is required for Windows Vista, for Windows XP the ActiveX component of the RDC client must be specifically enabled. Do this within Internet Explorer by navigating to Tools | Manage Add-ons | Enable or Disable Add-ons. Enable the Add-on named Microsoft Terminal Services Client Control (redist). If multiple Add-ons of the same name are present, enable each instance.

Configuring TS Web Access

TS Web Access is a relatively light application, with few configurations available to the administrator, and none of them are easily accessible. The available configurations can be found within IIS Manager by navigating to the TS website and double-clicking the IIS control panel item ASP.NET | Application Settings. There you should see a screen similar to Figure 8.6, which displays the limited configurations available for TS Web Access. These settings relate to the connected TS Gateway server and credentials source, as well as a few master toggle switches for how sessions are displayed to connected users. Double-click any setting to change its value.


Figure 8.6: TS Web Access's limited configuration settings are set within IIS Manager.

The style elements that make up the TS Web Access page can be customized if desired; however, this process involves a bit of coding. All files that make up the web page are stored in the folder C:\Windows\web\ts. By modifying the default images in the \images subfolder, it is possible to change the individual graphical elements that make up the web page. It is also possible to change the text at the top of the screen that by default reads "Windows Server 2008, TS Web Access". For the English language, do this by modifying the file C:\Windows\web\ts\en-US\Default.aspx. Look for two lines that resemble the following:
string L_WindowsServer2008_Text = "Windows Server<sup style=\"font-size:8px;\">&reg;</sup> 2008";
string L_TSWebAccess_Text = "TS Web Access";

By changing the quoted string values above to your desired alternate text, you can customize the default text at the top of the TS Web Access screen to something that is relevant to your environment or organization.
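For example, a hypothetical organization named Contoso might edit those two lines as follows; the replacement strings are purely illustrative:

// Replacement branding strings (illustrative values only)
string L_WindowsServer2008_Text = "Contoso Corporation";
string L_TSWebAccess_Text = "Remote Applications";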


TS Gateway
Throughout the history of Terminal Services, making applications available over the Internet or through high-security environments has traditionally been a problem because good encryption for the RDP protocol hasn't been available. Additionally, without some form of proxy between the Terminal Server and its clients, the only way to enable access was through a direct connection, a solution that many security administrators will not support. With the release of Windows Server 2008, both of these problems get a resolution in the form of TS Gateway. The TS Gateway Role Service is designed to provide certificate-based, transport-level security for network traffic between clients and servers, while also serving as a proxy between clients on other networks and the Terminal Server within your protected intranet. Figure 8.7 shows an example of how TS Gateway can be implemented within a DMZ environment for the purposes of proxying traffic between the Internet and Terminal Servers within the protected intranet. As you can see in the figure, the TS Gateway receives traffic from clients on the unprotected Internet (or other untrusted networks) and passes this traffic on to Terminal Servers in the protected intranet. This gateway functionality obscures the internal workings of the Terminal Servers while at the same time protecting them from external attack.

Figure 8.7: The TS Gateway Server proxies traffic between clients in the unprotected Internet and Terminal Servers in the protected intranet.


The typical series of events that occurs when a client attempts to connect to a Terminal Services environment that includes TS Gateway functionality resembles the following:
1. The client starts the connection by invoking a preconfigured RDP file or via an installed remote program. In either case, information necessary to locate and connect to the TS Gateway server is included within the RDP file.
2. The client then creates an SSL tunnel between itself and the TS Gateway server. This tunnel is established through the use of a preinstalled digital certificate, which is used to authenticate the TS Gateway server as well as encrypt the connection.
3. Once authentication has successfully completed, the user is prompted for credentials to authorize the connection into the environment. Using a TS Connection Authorization Policy (TS CAP), that authorization is checked against an available authentication store (such as Active Directory) to verify access.
4. When server authentication and user authorization have completed successfully, the client then requests access to an internal Terminal Server resource. On the TS Gateway are one or more TS Resource Authorization Policies (TS RAPs) that instruct the TS Gateway which resources are available.
5. If the TS Gateway locates the user's resource within a TS RAP, it then establishes a connection on behalf of the client with the Terminal Services resource. From this point on, all communication between the client and the Terminal Services resource is proxied through the TS Gateway server. Inbound communication from the client occurs over TCP/443, while communication outbound from the TS Gateway occurs over the standard Terminal Services port TCP/3389.
6. Once the proxied connection is established, the client then attempts a Windows logon and authentication to the resource. If this process completes successfully, the client is granted access to use the resource.

In short, for the TS Gateway to function, a TS CAP is required to identify and authorize the user to access the TS Gateway itself. Once this has completed successfully, the TS Gateway uses a TS RAP to identify which Terminal Server resources the user has access to use. Both of these are in addition to the standard permissions required at the Terminal Server to permit the user access to its resources.

Installing TS Gateway
The TS Gateway Role Service does not need to be installed to a Terminal Server, although the server must be a member of the Windows domain in which it will be used to authenticate. The TS Gateway server can be located within the DMZ, as is shown in Figure 8.7 and explained in this chapter's example, or it can be located inside the protected intranet. There are security implications to both scenarios, and your architecture will depend on your corporate security policies as well as the level of security you wish to provide for connections to the TS Gateway server.


If you wish to locate the TS Gateway server within your protected intranet but still require the added bridging and security gained through a DMZ-based device, consider using an ISA server located in the DMZ as an SSL bridging device. More information on how to accomplish this can be found at http://technet2.microsoft.com/windowsserver2008/en/library/9f293f18-b0fd-48f8-b103-957fad92d70b1033.mspx?mfr=true.

Installing the TS Gateway Role Service is done in the same way as with all the other Role Services we've discussed to date. Installing the TS Gateway Role Service additionally installs IIS as well as the Network Policy and Access Services used for authenticating users. The installation of the Role Service also involves a number of initial questions that need to be answered:
• Server authentication certificate. A certificate must be used for TS Gateway to successfully authenticate and encrypt network traffic as it passes. This certificate must be signed either by an external certification authority or by a certification authority that is trusted by incoming clients. An option to use a self-signed certificate is available if other certificates are unavailable; however, a self-signed certificate must be manually installed to clients and is suggested only for very small or testing environments.
• TS Gateway User Groups. This selection identifies user groups that are allowed to access internal resources through the TS Gateway server.
• TS CAP. The TS CAP identifies how users will authenticate to the TS Gateway server. Options are exposed that include password and smart card authentication.
• TS RAP. Once users have been authenticated through a TS CAP, the TS RAP is used to identify which internal Terminal Server resources they are allowed to connect to.
• Network Policy and Access Services and Web Server (IIS) Services. For both of these, the individual Role Services required by TS Gateway are already identified. For most installations, accepting the defaults here will successfully install the necessary prerequisite components.
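For unattended builds, the Role Service can also be installed from the command line with servermanagercmd, which skips the wizard questions and leaves the certificate, TS CAP, and TS RAP configuration for later. A minimal sketch, assuming the TS-Gateway Role Service identifier:

rem Install the TS Gateway Role Service along with its required dependencies
servermanagercmd -install TS-Gateway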

Digital certificates and the authentication and encryption that they provide are a critical component of a TS Gateway installation. More information about the certificate requirements for TS Gateway can be found at http://technet2.microsoft.com/WindowsServer2008/en/library/5fdeb161-31c7-41b2-aaa3-7a4d5f5e3cda1033.mspx#BKMK_ObtainCertTSGateway.

Configuring TS Gateway
Once TS Gateway is installed, the TS Gateway Manager node will appear under Terminal Services in Server Manager. This console looks similar to Figure 8.8. As you can see, the console provides little in the way of configuration, enabling access to view active connections, modify TS CAPs and TS RAPs, change the assigned certificate for the TS Gateway, and manage the creation of TS Gateway server farms.


Figure 8.8: The TS Gateway Manager node in Server Manager includes minimal configurations.

Let's take a look at the TS Gateway's server-specific configurations. All can be modified by right-clicking the TS Gateway node in Server Manager and choosing Properties:
• General tab. Under the General tab is the option to select the maximum number of simultaneous connections that can be run through the TS Gateway server, which throttles the level of incoming connections for performance reasons. Be aware that the Standard Edition of Windows Server 2008 supports a maximum of 250 simultaneous connections through TS Gateway, while the Enterprise Edition has no such restriction. Also available here is the option to Disable new connections. As with the Terminal Server drain mode discussed in the previous chapter, this allows the administrator to prevent new connections from initiating without forcing existing connections closed.
• SSL Certificate tab. This tab identifies the certificate currently assigned to the TS Gateway and allows an administrator to change the assigned certificate.
• TS CAP Store tab. Under this tab is selected whether the local server will serve as a Network Policy and Access Services (NPS) server or whether a central NPS server will be used. If this TS Gateway is used as part of a Network Access Protection infrastructure, it is possible here to select whether this server will request clients to submit statements of health upon connection.


• Server Farm tab. It is possible to connect multiple, similarly configured TS Gateway servers together to create a load-balanced farm. This farm provides greater performance during periods of high load and ensures that the loss of a single TS Gateway does not impact clients' ability to connect to Terminal Server resources. A separate load-balancing solution must be implemented prior to creating a TS Gateway server farm, and each member must have identically configured TS CAPs and TS RAPs.
• Auditing tab. TS Gateway events are logged to the Event Log located at Application and Services Logs | Microsoft | Windows | Terminal Services-Gateway. This tab enables or disables the types of events that are logged to this location.
• SSL Bridging tab. This tab configures HTTPS-to-HTTP bridging on the TS Gateway server for situations where the TS Gateway is positioned inside the protected intranet and an ISA server is located in the DMZ to perform SSL bridging.

Once the server has been configured as appropriate for your environment, the next step is to ensure that TS CAPs and TS RAPs are properly configured. If you created these during the TS Gateway installation, they should already be equipped for connecting clients. Verify this within Server Manager by navigating to TS Gateway Manager | {serverName} | Policies | Connection Authorization Policies. In this location should be the policy created at installation. Double-click this policy to see three tabs:
• General. Here the name of the TS CAP can be changed and the policy can be enabled or disabled as necessary. TS Gateway has the ability to create multiple policies for different classes of users, computers, and authentication methods. Policies are processed in the order presented in the console.
• Requirements. This tab, shown in Figure 8.9, shows the types of authentication mechanisms available and configured as well as the user groups and computer groups that have been granted access to the TS Gateway. Granting access by user group is required. Locking down users to particular computers by configuring computer groups adds an additional layer of authentication to incoming connections.


Figure 8.9: The TS CAP identifies the user and computer groups that have access to connect to Terminal Server resources through the TS Gateway.

• Device redirection. This tab allows or prevents specified types of device redirection for connections made through the TS Gateway. The device redirection policy identified here supersedes any policies set at the individual Terminal Server. Setting device redirection here is useful for preventing users from downloading documents, printing, and using their local clipboard when they are outside the protected intranet.


In addition to verifying the TS CAP, we also need to ensure that the TS RAP is properly configured for external connections. Navigate to TS Gateway Manager | {serverName} | Policies | Resource Authorization Policies and double-click the policy (if present) to view four more tabs:
• General. As with the TS CAP, the General tab provides a location for changing the policy name as well as enabling or disabling the policy.
• User Groups. This tab identifies the user groups whose members are granted access to Terminal Services resources through the TS Gateway.
• Computer Group. Shown in Figure 8.10, once users have been authorized to connect to Terminal Server resources, they are limited to connecting only to those computers identified in this tab. Computers can be selected via an Active Directory group, a local group that is managed by the TS Gateway itself, or all computers on the protected intranet.

Figure 8.10: The TS RAP identifies which resources on the protected intranet can be accessed by authorized users.

• Allowed ports. By default, all traffic inbound to a Terminal Server is received on port TCP/3389. However, for high-security environments or in other situations, it may be desirable to change this to an alternate port. This tab configures the TS Gateway to attempt connecting to these computers over specified alternate ports.


Don't forget that TS Gateway must have the appropriate network connectivity to an Active Directory Domain Controller in order to authenticate users. If you locate your TS Gateway server within your DMZ, the necessary ports must be opened on the internal firewall to enable this authentication to occur.

Configuring Terminal Services for TS Gateway
Once the TS Gateway is set up and ready for use, each client must be configured to use it. Any RDP files that were created prior to the TS Gateway configuration must be replaced or updated to include the correct TS Gateway connection information. This information can be supplied to the clients in one of two ways:
• Manually supplying TS Gateway settings. From the TS RemoteApp Manager node, click the Change link next to TS Gateway Settings. In the resulting window, enter the connection information for the TS Gateway server and set the assigned logon method. Once credentials have been added, new RDP files must be created for clients to recognize that they will connect through the TS Gateway server.
• Automatically detecting TS Gateway server settings. Alternatively, it is possible to use Group Policy as the mechanism for populating TS Gateway server settings. The Group Policy settings for doing this can be found within the GPME by navigating to User Configuration | Policies | Administrative Templates | Windows Components | Terminal Services | TS Gateway. Three policies are provided: Set TS Gateway authentication method, Enable connection through TS Gateway, and Set TS Gateway server address. The combination of these three policies, when applied to target machines, accomplishes the same result as the manual steps above. Because the default configuration for supplying TS Gateway settings is to automatically detect settings, using this method may not require all RDP files to be recreated, as they may have been initially created with this default setting already enabled.
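For reference, the TS Gateway connection information ultimately lands in the client's RDP file as a handful of plain-text settings. The sketch below shows the relevant lines from a hypothetical RDP file; the server names are placeholders, gatewayusagemethod:i:1 instructs the client to always use the gateway, and gatewayprofileusagemethod:i:1 tells it to use these explicit settings rather than the defaults:

full address:s:ts01.contoso.com
gatewayhostname:s:gateway.contoso.com
gatewayusagemethod:i:1
gatewayprofileusagemethod:i:1
gatewaycredentialssource:i:0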

The last step in this process is to ensure that the TS Gateway's root certificate assigned at the beginning of this process has been added to the Trusted Root Certification Authorities certificate store on each of the clients that will connect through this TS Gateway server. This can be done either manually or using Group Policy. Once complete, you can begin creating new RDP files that contain the necessary TS Gateway connection information and subsequently launch those files from external clients. It is possible to monitor Terminal Server connections going through the TS Gateway from within TS Gateway Manager by navigating to the {serverName} | Monitoring node. If you've done everything right, the external client will connect to the Terminal Server resource supplied in the RDP file, and the Monitoring node will include information about the connection similar to what is shown in Figure 8.11.
On the TS Gateway tab of the RemoteApp Deployment Settings wizard is a checkbox named Bypass TS Gateway server for local addresses. Ensure that this checkbox is not selected if you want all traffic to pass through the TS Gateway regardless of its origin.


Figure 8.11: If everything is set up correctly, the TS Gateway Manager's Monitoring node will show a successful connection.

TS Session Broker
Our last subject in this chapter helps solve the problem of Terminal Server scalability. TS Session Broker is an update to the service called TS Session Directory in previous OS versions. TS Session Broker adds the functionality previously found in TS Session Directory to the load-balancing capabilities previously found only within Network Load Balancing clustering. Whereas the two individual services were required to operate together in previous OS versions, with TS Session Broker all the necessary components are built in. TS Session Broker is a rudimentary load-balancing solution that enables multiple Terminal Servers to operate as a farm. Each Terminal Server must be identically configured with the exact same applications and configurations. TS Session Broker leverages either the native round-robin DNS found in Windows Server 2008 or a third-party load balancer to handle balancing incoming client session requests. For farms whose servers are not homogeneous in terms of hardware, TS Session Broker has the ability to weight servers in the farm, sending fewer clients to the less powerful servers. The end result is that servers in a TS Session Broker farm operate as a single unit, and clients are able to point to a single FQDN that takes them to one of many possible servers.


Installing and Configuring TS Session Broker
TS Session Broker is installed to only one of the Terminal Servers that will participate in the farm. Installing the TS Session Broker Role Service is completed in the same way we've installed each of the other Role Services thus far. Its installation through Server Manager asks no initial questions. Once the installation is complete, there are a few steps that must be completed in order to aggregate a series of Terminal Servers into a farm:
1. Build and configure Terminal Servers. Prior to installing TS Session Broker to one of the farm members, build and configure the full set of Terminal Servers to be used in the farm. These servers must have the same set of applications as well as an identical configuration.
2. Install TS Session Broker. Install the Role Service to only one of the servers that will be a member of the farm. This server will handle monitoring inbound session requests and tracking session information across all servers in the farm.
3. Add servers to the correct group. After installation, a new local group will be found on the TS Session Broker server named Session Directory Computers. Add the computer accounts for the computers that will participate in the farm to this local group. Once complete, the computers may require a reboot to recognize that they have been added to the group.
4. Join Terminal Servers to the farm. In Server Manager, navigate to Terminal Services Configuration and double-click the link titled Member of farm in TS Session Broker. A screen will appear similar to Figure 8.12. Check the box next to Join a farm in TS Session Broker. Provide the name of the server on which the TS Session Broker Role Service is installed and provide a name for the farm to be created. Check the box next to Participate in Session Broker Load Balancing and provide a relative weight for this server. Lastly, determine whether IP address redirection will be used and provide the IP addresses to be used for redirection; most networking equipment has the ability to support IP address redirection. These steps will need to be done for each server that will participate in the farm.
The absolute value of the number entered for relative weight is unimportant. What is important is the relative value of this number in comparison with the other numbers set in the farm. Thus, a server with a value of 50 will be sent half as many clients as a server with a value of 100.


Figure 8.12: TS Session Broker enables multiple Terminal Servers to operate as a single unit.

5. Enable DNS round robin. If you will not be using a third-party load-balancing solution, you will need to create a round-robin entry in Windows Server 2008 DNS. To enable this, create A or AAAA records named after the farm name, one for the IP address of each server that will participate in the farm. Once complete, reconfigure any RDP files to point to the farm name rather than an individual Terminal Server. TS Session Broker will ensure that clients are sent to servers as appropriate based on the relative weighting configured for each server. Additional specifics about the configuration of TS Session Broker can be found at http://technet2.microsoft.com/windowsserver2008/en/library/8aa35e7d-bcff-4998-8ac2-6a8c5702c4161033.mspx?mfr=true.
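If your zones are hosted on Windows Server 2008 DNS, these round-robin records can also be created with dnscmd. A minimal sketch, assuming a hypothetical farm named tsfarm in the contoso.com zone with two member servers:

rem Create one A record per farm member, all sharing the farm name
dnscmd dns01.contoso.com /recordadd contoso.com tsfarm A 192.168.1.11
dnscmd dns01.contoso.com /recordadd contoso.com tsfarm A 192.168.1.12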


Terminal Services in Windows Server 2008 Narrows the Gap


The gap between native Terminal Services and the functionality previously available only with third-party products like those from Citrix narrows with the release of Windows Server 2008. With Terminal Services in this new operating system (OS), a number of long-desired features and capabilities have been folded into the native Windows OS for no extra charge. While there remains a compelling justification to move to the Citrix product lineup for environments that require its added management functionality and high-end capabilities for users, Terminal Services gains a lot in the transition. The topics discussed in this chapter bear out that statement. Your mileage may vary. After two chapters on this topic, we shift gears in Chapter 9 to talk about securing our new servers along with the domain. There, we'll discuss the new and improved security features that you'll find in Windows Server 2008, as well as some controversial ones like User Account Control and the Windows Firewall with Advanced Security. Windows Server 2008 is reported to be the most secure OS Microsoft has released to date. In our next chapter, we'll discover exactly how and why.


Chapter 9: Securing Servers & the Domain


Throughout this guide, I've attempted to show you the features and functionality now available with Windows Server 2008 that are designed to help build and manage your Windows infrastructure. Some of these capabilities are new to this OS version, while others have minor upgrades or remain relatively unchanged. Yet all these great technologies found in Windows Server 2008 amount to exactly nothing if you can't properly secure them against external attack. This idea is central to much of what's different about Windows Server 2008. Beneath the covers and the systems administrator's radar are a host of changes to the core OS itself that improve its security, enable better resistance against external attack, and ultimately improve its reliability. But those kernel-level improvements are only one part of the story. Layering atop the core enhancements are a set of features that make the management of security easier and more reliable. In this chapter, we'll talk about the features that enhance your data center's security posture. By making the upgrade to Windows Server 2008, the workloads you run in your organization stand to gain a higher level of uptime.

Windows Server 2008 Incorporates New and Improved Security Features


Although this chapter cannot touch on all the new security-related capabilities found in Windows Server 2008, we'll focus our attention on a few that can make the most impact in your infrastructure. Some of these features have essentially no user interface, making them low-level improvements for across-the-board use, while others must be specifically enabled and managed for use in your environment.
Be aware that due to the code base shared between Windows Server 2008 and Windows Vista, much of what you see here also relates to Windows Vista. Thus, the improvements enjoyed by Windows Server 2008 are also realized at the desktop upon an upgrade to Windows Vista.


Componentization
We've discussed throughout this guide how the concept of componentization is a major shift in the development of the Windows OS. With previous versions of Microsoft Windows, Microsoft elected to heap virtually all the files associated with OS functionality onto every installed instance, even when some of that functionality went unused. This decision made it easy to add new functionality to an existing system but, at the same time, unnecessarily left potentially exploitable code on a system. The componentization activities that went into the development of Windows Server 2008 changed all that. By breaking apart the Windows OS into its discrete components and logically defining the linkages between them, functionality that is not used on a system instance is simply not present on that instance. This componentization, much of which occurs below the administrator's level of visibility, accomplishes three very important things:
• Microsoft now has a much-improved view of system components as well as how each relates to and relies on others for functionality. By breaking apart and mapping system functions in relation to each other, Microsoft can better support and enhance the OS now and in the future.
• With a known functional map of components and their interrelations, unnecessary and extraneous files and folders need not be present on a system instance. This means that exploit code cannot leverage system files that, because they go unused, might otherwise sit unpatched on the system.
• Knowing how each function relies on others means that any desired component can be assuredly installed with its necessary prerequisites. This helps eliminate the situation in which administrators attempt to install new functionality without knowing what prerequisite components are necessary.

Security Configuration Wizard
Although the componentization activity itself goes a long way toward locking down system functionality to just the resources a server needs, sometimes additional hardening is necessary. For these cases, the Security Configuration Wizard (SCW) makes a return in Windows Server 2008 with added functionality that supports Roles, Role Services, and Features. The SCW further tightens server configurations for environments that require a very high level of service lockdown.
Though we won't discuss the use of the SCW in this chapter, you can find out more at http://technet.microsoft.com/en-us/library/cc771492.aspx.


Windows Service Hardening
Windows services and their always-on nature have traditionally been a source of security concern with the Windows OS. Windows Server 2008 reduces the threat from these sources first by reducing the number of services that need to run by default. A few other capabilities are also introduced:
• Services gain the new ability to run with a per-service security identifier (SID). This isolates the running of the service to a particular SID while allowing explicit access control lists (ACLs) to be assigned to the resources the service requires.
• Many services that used to run under the LocalSystem context have been moved to lesser-privileged accounts such as LocalService or NetworkService.
• Services are now linked to firewall policies, which limit the network exposure of a service to its intended functionality.
• Write-restricted access tokens can now be assigned to service processes, restricting a service's ability to update data on the system outside its intended functionality.

For the most part, these hardening tactics will be used by software developers in further securing their applications. Systems administrators will likely not make much use of these new capabilities except when instructed by application vendors or security checklists.

Fine-Grained Password Policies
In all previous versions of the Windows OS, password policies could only be applied at the domain level. This implementation meant that classes of accounts that required different policies (such as different password length, expiry, or complexity requirements) could only be accommodated through the creation of a completely new domain. With Windows Server 2008 and Fine-Grained Password Policies (FGPP), it is possible to create separate policies for individual groups within the same domain.
FGPPs aren't necessarily an added security measure for your Windows infrastructure, but their capability for creating more than one password policy per domain makes them handy in a few special cases. For example, you might want to create a separate password policy for service accounts that does not require their passwords to be changed. This gives you the ability to change those passwords on your own schedule rather than waiting for a forced password change event that could impact the functionality of the service. For detailed instructions on creating an FGPP, check out http://technet.microsoft.com/en-us/library/cc770842.aspx.
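To illustrate the service-account scenario, a Password Settings Object (PSO) can be created by importing an LDIF file with ldifde. The sketch below is hedged: the DN, precedence, and target group are hypothetical, the time-based attributes use the I8 format (negative 100-nanosecond intervals), and the large negative constant on msDS-MaximumPasswordAge means the password never expires:

dn: CN=ServiceAccountPSO,CN=Password Settings Container,CN=System,DC=contoso,DC=com
changetype: add
objectClass: msDS-PasswordSettings
msDS-PasswordSettingsPrecedence: 10
msDS-PasswordReversibleEncryptionEnabled: FALSE
msDS-PasswordHistoryLength: 24
msDS-PasswordComplexityEnabled: TRUE
msDS-MinimumPasswordLength: 15
msDS-MinimumPasswordAge: -864000000000
msDS-MaximumPasswordAge: -9223372036854775808
msDS-LockoutThreshold: 0
msDS-LockoutObservationWindow: -18000000000
msDS-LockoutDuration: -18000000000
msDS-PSOAppliesTo: CN=Service Accounts,CN=Users,DC=contoso,DC=com

Import the file from a command prompt on a domain controller with ldifde -i -f servicepso.ldf.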


User Account Control
User Account Control (UAC), originally released with Windows Vista, is a mechanism for enforcing the principle of least privilege. UAC splits administrator authentication tokens into two halves, one with and one without administrative privileges, and manages the use of the correct half as needed by the administrative user.

Windows Firewall with Advanced Security
The Windows firewall gains new management flexibility with Windows Server 2008, making this version of the firewall the easiest to use to date. Integrating the Windows firewall with Group Policy and enhancing its management GUI allows administrators to easily create policy across the domain for cohesive firewall management.

BitLocker Drive Encryption
Lastly, with Windows Server 2008 comes the same full-drive encryption capabilities seen with Windows Vista. BitLocker Drive Encryption provides a mechanism for encrypting an entire Windows drive, effectively eliminating the risk of data exposure as a result of a stolen system.

Although Microsoft's implementation of new security controls under the covers is a boon to your environment's security, there is little that you can do to manage those low-level features. So for the rest of this chapter, we'll focus the discussion on these last three topics, providing the specific steps necessary to implement them in your Windows Server 2008 infrastructure.

Successfully Managing UAC


Depending on whom you ask, UAC can be considered Microsoft's greatest addition to this latest round of OS releases or its greatest blunder. UAC has definitely gotten its share of press, both good and bad, in relation to how often it makes itself known to the individual user. Although this chapter won't get into the political discussion of UAC's efficacy, it will discuss the nature of UAC and some ways in which you can manage it in your organization. UAC is probably best described by looking at the tool it is intended to replace. The principle of least privilege suggests that users should operate computer systems and launch processes and applications with as few privileges as possible. For any action that needs to be run on a system, that action should be run with the lowest level of privileges necessary to accomplish its task. For regular users without administrative privileges, this has been accomplished by granting Domain User privileges along with whatever group memberships are necessary to access the data required for their jobs. Administrators are a different breed entirely. With administrators, privileges tend toward the all-or-nothing variety, with membership in a computer's Administrators group granting what is effectively full control over all objects on a system.


In the days prior to Windows Vista and Windows Server 2008, Microsoft recommended that administrators maintain two separate accounts. Administrators would log in using a non-administrative account with minimal privileges to accomplish the majority of their tasks. Only when it was necessary to accomplish tasks that required elevated privileges would they log in using their privileged account. This was typically done with tools such as runas, which could launch individual processes under the context of the administrative-privileged user.
More information about the runas command can be found at http://support.microsoft.com/kb/294676.
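For example, an administrator logged on with a standard account might launch Active Directory Users and Computers under a separate privileged account like so; the domain and account names are hypothetical:

rem Launch the Active Directory Users and Computers console under a privileged account
runas /user:CONTOSO\jsmith-admin "mmc dsa.msc"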

The problem with tools like runas lies in the administrative burden of managing two separate accounts and multiple logins. Administrators burdened with the overhead of double logins often found themselves not following proper procedures out of frustration. Standard users given administrative privileges on their local desktops were rarely capable of dealing with the complexities of double logins. Thus, with the release of Windows Vista and Windows Server 2008, Microsoft implemented UAC as a mechanism for automating this process. Using UAC, when a user logs into a computer with administrative privileges, the local system creates two separate sessions. In the first session, the user does not have access to their administrator privileges. In the second, the administrator privileges are available for use. The Windows OS detects when administrator privileges are required and automatically switches to the session with elevated privileges when necessary for the activity at hand. UAC notifies the user and requests consent before any session switching occurs. The dialog box that does this looks similar to Figure 9.1.

Figure 9.1: The UAC elevation request prompt.

The elevation prompt shown in Figure 9.1 appears each and every time an administrative user attempts to complete a task that requires elevation. With numerous activities on the system requiring administrative privileges to function correctly (especially for IT professionals who perform administrative functions constantly during their daily workflow), this elevation prompt has the tendency to appear a lot.


Moreover, this elevation behavior does not occur for just members of the Administrators group. A user that is a member of any of the following groups will also experience the double-sessioning effect:
• Administrators
• Enterprise Administrators
• Policy Administrators
• Backup Operators
• Cryptographic Operators
• Power Users (deprecated)
• Network Configuration Operators
• RAS Servers
• Read-Only Domain Controllers in W2K8
• Domain Administrators
• Schema Administrators
• Certificate Administrators
• Account Operators
• System Operators
• Print Operators
• Domain Controllers
• Enterprise Read-Only Domain Controllers
• Pre-Windows 2000 Compatible Access

In addition to these groups, a user who has been assigned any of the following nine privileges will experience the same behavior:
• Create Token object (SeCreateTokenPrivilege)
• Act as part of the OS (SeTcbPrivilege)
• Take Ownership (SeTakeOwnershipPrivilege)
• Backup files and directories (SeBackupPrivilege)
• Restore files and directories (SeRestorePrivilege)
• Debug programs (SeDebugPrivilege)
• Impersonate client after authentication (SeImpersonatePrivilege)
• Modify object label (SeRelabelPrivilege)
• Load and unload device drivers (SeLoadDriverPrivilege)
By default, UAC is disabled for the built-in Administrator account. Initial logins to a freshly installed Windows Server 2008 instance use this account.

The prompting doesn't stop there. Users who are not members of any of the above groups and have not been assigned any of these nine privileges will also be prompted if they attempt to accomplish an action that requires administrative privileges. Unlike in previous versions of the OS, rather than seeing an Access Denied error, users will be prompted with what is called an over-the-shoulder (OTS) elevation prompt. This prompt, shown in Figure 9.2, provides a mechanism for an administrator to enter credentials allowing the user to elevate in order to accomplish the task.


Figure 9.2: An OTS elevation prompt seen by a non-administrative user who attempts to complete an action that requires administrative privileges.

Although ostensibly this prompt appears useful for helping out a user when you can't be present to assist with their problems, be careful with its use. When you hand over an administrative password to a user, that password can then be used for subsequent elevations by the user. If you want to use OTS elevations as a troubleshooting tool of last resort, remember to change any passwords after the user completes the necessary action.

Group Policy and UAC
Within a Windows domain, UAC is probably best controlled through the use of Group Policy. Ten settings are available for controlling UAC's behavior on configured desktops and servers. All are found at the location Computer Configuration | Policies | Windows Settings | Security Settings | Security Options, which is displayed in Figure 9.3.


Figure 9.3: UAC settings for computers in an Active Directory domain are best configured through Group Policy.

The following bulleted list discusses each setting, its use, and the impacts its configuration can have for your infrastructure. The section that follows includes a discussion of a few common ways in which UAC's behavior can be adjusted to reduce the impact of its prompts on the environment:
• Admin Approval Mode for the Built-in Administrator Account. By default, the built-in Administrator account does not have UAC applied to it. Enabling this setting forces UAC to apply to this account.
• Allow UIAccess applications to prompt for elevation without using the secure desktop. When an application attempts to bring forward the prompt for elevation, by default it also switches to a special desktop called the secure desktop. This special desktop grays out the standard desktop and displays only the elevation prompt itself. It is strictly limited in what it can process; its limitations generally allow the user to click the elevation prompt and little else. The secure desktop is in place to prevent malware from attempting to spoof the user into clicking something inappropriate and undesired. However, there may be cases in which the secure desktop actually breaks some applications. This Group Policy setting allows the elevation prompt to appear without switching to this special desktop.


• Behavior of the elevation prompt for administrators in Admin Approval Mode. This setting configures how UAC behaves during a request for elevation. Three settings are possible. The first, Prompt for consent, is the default behavior and configures UAC to behave as has been discussed to this point. Prompt for credentials takes the elevation prompt another step and requires a user to re-enter their credentials to complete the elevation; the idea with this selection is that elevations will be more closely scrutinized when the added step of re-entering credentials is required. The final selection, Elevate without prompting, instructs UAC to elevate the user automatically and without prompting.
• Behavior of the elevation prompt for standard users. Standard users can be forced to Prompt for credentials, which is the default behavior explained earlier. Alternatively, Automatically deny elevation requests can be selected. This has the effect of reverting UAC behavior for standard users back to what was experienced in Windows XP: users without administrative privileges will again see the equivalent of an Access Denied error when they attempt an administrative activity.
• Detect application installations and prompt for elevation. This setting configures the behavior seen when attempting to install applications. UAC includes logic that looks for certain types of executables that appear to be software installations. When this setting is enabled and UAC sees an attempt to launch what it believes to be an installation, it automatically prompts for elevation. This alleviates the administrator from having to right-click an installation file and select Run as Administrator to elevate the installation. This setting is often disabled in situations in which desktop management applications such as Group Policy Software Installation or System Center Configuration Manager handle application installations.

UAC looks at the file name to identify whether the executable is a software installation. If the filename includes the characters install, setup, or uninst, then UAC considers the executable a valid installation and will automatically elevate on a double-click. Other files that require elevation for their processing would need to be right-clicked and launched through the Run as Administrator context menu.

• Only elevate executables that are signed and validated. This setting forces all executables to be digitally signed if they are to be elevated. Any executables that are not digitally signed will not be elevated, which has the effect of preventing their full functionality (or, indeed, their functionality at all). This can be an effective protection against malware but requires each and every executable in the environment to be correctly digitally signed, which can add a major administrative burden.
• Only elevate UIAccess applications that are installed in secure locations. Windows considers only the following locations secure for the installation of software: \Program Files (and subfolders), \Program Files (x86) (and subfolders), and \Windows\system32. By enabling this setting, if an application is not installed into one of these locations and requests elevation, its request will be denied. Effectively, this setting prevents the elevated launching of applications that are installed to insecure locations. As most legitimate applications install to secure locations, prevention of malware is a major reason for this setting.


• Run all administrators in Admin Approval Mode. This setting can be considered the master toggle switch for UAC. Setting this to Disabled effectively disables UAC and its related functionality.
• Switch to the secure desktop when prompting for elevation. Similar to the earlier setting related to the secure desktop, this setting can be considered the master toggle switch for whether the secure desktop is used for all elevations or none.
• Virtualize file and registry write failures to per-user locations. As part of the security model for UAC, applications installed to the secure locations noted earlier are not allowed to store user-specific information in those locations. This protects the core installation of these applications from down-the-road user customizations. File and registry virtualization is a process used by UAC to redirect any writes to these locations toward alternate locations in the user profile and within HKEY_CURRENT_USER. This setting enables or disables file and registry virtualization for these legacy applications.

Common UAC Implementations
In comparison with the relative simplicity of runas and double logins, UAC's behavior can be somewhat complicated to understand. When and why it requires administrative elevation can be a challenge for the uninitiated. But the requirement for UAC to prompt for consent was designed to illustrate which processes require additional privileges. Even more important, when unwanted software such as malware attempts to elevate to accomplish its nefarious mission, the prompt provides a way for an alert administrator to stop its activity. One common UAC implementation is simply living with its default behavior, with no manipulation of its policy settings at all. By using Windows Server 2008 and Windows Vista with the default configuration, UAC operates exactly as its designers intended; in short, it alerts when processes require administrative privileges. Although the purported benefits of UAC are obvious, some in IT disagree with the level of responsibility it places on the individual person. Answering its oft-seen elevation prompts can grow irritating for the IT administrator who constantly requires administrative privileges to do his or her job. Even worse, users who have been given administrative privileges but are untrained on what they are seeing may simply always choose Continue rather than truly analyzing the process that is attempting to elevate. Thus, while UAC's intentions are excellent, its implementation may not be effective for some IT environments. In some environments, it might be desirable to turn off UAC until an alternative solution is found. To completely disable UAC, configure the following policy settings:
• Detect application installations and prompt for elevation. Set to Disabled.
• Behavior of the elevation prompt for standard users. Set to Automatically deny elevation requests.
• Run all administrators in Admin Approval Mode. Set to Disabled.


There is a downside to completely eliminating UAC. One consequence of disabling UAC is that it eliminates a worthwhile OS protection against external attack. Also, shutting down UAC eliminates the protections gained from Internet Explorer Protected Mode (IEPM), a special security mode that is disabled along with UAC. Because of these downsides, completely disabling UAC is not considered a best practice. An alternative solution that retains many of UAC's protections while eliminating its prompts is to configure UAC to operate in quiet mode. When UAC is configured to operate in quiet mode, it continues processing elevations as necessary. Users with administrative privileges still use two login sessions and switch between them as necessary to gain administrative privileges. However, this switching occurs in the background without prompting the user. Also, with quiet mode, the protections of IEPM remain in place. To configure UAC for quiet mode, configure the following Group Policy setting:
• Behavior of the elevation prompt for administrators in Admin Approval Mode. Set to Elevate without prompting.
The downside to quiet mode is that users are unaware of when elevations occur, because elevations happen automatically without expressed consent. Yet for some environments, this may be a better solution than charging users with the responsibility for consent and living with its prompts.
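Under the covers, these Group Policy settings map to registry values beneath HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System. For a quick test of quiet mode on a single machine, the same change can be approximated from an elevated command prompt; a data value of 0 corresponds to the Elevate without prompting selection:

rem ConsentPromptBehaviorAdmin = 0 elevates administrators without prompting
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v ConsentPromptBehaviorAdmin /t REG_DWORD /d 0 /f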

Introducing the Windows Firewall with Advanced Security


The Windows firewall has historically been a feature met with equal parts disdain and appreciation from Windows administrators. With Windows Server 2003 and Windows XP, the firewall was available but notoriously difficult to use. Administering the firewall through Group Policy was difficult due to Group Policy's text-based mechanisms for configuring the firewall and its necessary exclusions. Consequently, the built-in Windows firewall went relatively unused in many IT environments. With the release of Windows Server 2008, the firewall gains some much-needed maturity, both in the functions it can support as well as in your ability to manage it. Built into the Windows Firewall with Advanced Security (WFAS) are new profiles, a new ability for outbound filtering, better mechanisms for creating IPSec-based connection security between clients, and much-improved management control through Group Policy. If you're familiar with the firewall's old and painful management through Administrative Templates in previous editions, you'll be happy to know that its configuration through Group Policy now closely mirrors its local GUI toolset.


Three Profiles
With Windows Server 2008 as well as Windows Vista, WFAS includes three separate profiles keyed to the type of currently attached network. Although these profiles are arguably more used by Windows Vista clients roaming between networks, they remain available for Windows Server 2008 instances as well:
• The Public Profile. This profile is roughly equivalent to what was called the Standard Profile in Windows XP. It is intended for use in untrusted network situations such as coffee shops and airports. The Public Profile is chosen by the logged-on user as the computer connects to a new network and corresponds to the Public network location, seen as an icon of a park bench.
• The Private Profile. This profile, new to Windows Server 2008, is designed for use in partially trusted situations such as partner companies, home offices, or other areas where some expectation of security can be assured. Similar to the previous profile, the Private Profile is chosen by the logged-on user as the computer connects to a new network. However, both the Work network and the Home network, seen as icons of an office building and a home, respectively, take the user to the Private Profile.
• The Domain Profile. This profile is automatically chosen when the server connects to a network where it can locate a domain controller for its attached domain. There is no user interface for choosing this network, as it is always automatically selected when the server can contact a domain controller.

Because they are generally always attached to the domain, there is a high likelihood that your servers will always utilize the Domain Profile. The two additional profiles are available for situations in which servers move between networks.

Inbound & Outbound Rules
At the time of a Windows Server 2008 installation, around 90 inbound rules and 40 outbound rules are created by default. As seen in Figure 9.4, these rules relate to core networking, file and printer sharing, and numerous other core requirements for a new system. Because the server shown in Figure 9.4 is a domain controller, additional rules are also present to support Active Directory Domain Services and DNS functionality.


Figure 9.4: Inbound firewall rules displayed on an example server operating as a domain controller.

This shows another benefit of Microsoft's componentization activity. Any time a new Role, Role Service, or Feature is installed, part of its installation is to enable the correct inbound and outbound firewall ports that ensure proper functionality. This process alleviates much of the need to manually identify and open necessary ports, as was sometimes required with previous versions. Should you find the need to create additional exclusions, creating a new rule is done through the New Inbound Rule Wizard or the New Outbound Rule Wizard, depending on the direction of the rule you need to create. As you can see in the first screen of the wizard, shown in Figure 9.5, it is possible to create rules based on a stated port, all ports used by an installed application, or even predefined Windows experiences that relate to some functionality being used on the server. In the example seen in Figure 9.5, the Windows experience being shown is the iSCSI Service. Configuring a rule for any Windows experience automatically opens the necessary exclusions in the firewall to support the needs of that experience.
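The same rules can also be created from the command line with netsh, which is handy for scripting or for Server Core installations. A minimal sketch that opens inbound TCP/8080 for a hypothetical line-of-business application:

rem Allow inbound TCP traffic on port 8080 (hypothetical application)
netsh advfirewall firewall add rule name="LOB App Inbound" dir=in action=allow protocol=TCP localport=8080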


Figure 9.5: Configuring a new rule uses a wizard-like interface that enables rules based on actual ports, ports used by an installed application, or preconfigured Windows experiences.

Connection Security Rules
The process for creating exclusions was challenging in previous OS versions yet not excessively complicated. Creating connection security between two or more clients, however, was. Connection security rules are mechanisms for two or more computers to authenticate themselves to each other prior to communicating. These rules leverage IPSec authentication to validate each host at a server level before either is willing to send or receive traffic from the other. Connection security rules in Windows Server 2008, as well as the wizard used to create them, are a much-improved way of setting up server-level authentication and even network traffic encryption. Rules can be created for setting up isolation groups, server-to-server authentication, and even tunnel authentication between gateway computers. Figure 9.6 shows the first screen in the New Connection Security Rule Wizard, which presents these options.


Figure 9.6: Connection security rules of various types are created through the New Connection Security Rule Wizard.

Centralized Management & Group Policy
Making these handy new graphical wizards even more effective is their mirroring into the Group Policy Management Editor (GPME). Unlike previous OS versions, which used the text-based Administrative Templates interface, Windows Server 2008 leverages a more user-friendly graphical interface for enabling the firewall, creating new rules, and configuring connection security. Effectively, what you see in a local firewall configuration is very similar to what you see within a configured Group Policy.


Two Common Uses of the Windows Firewall with Advanced Security


Although understanding the benefits of WFAS is helpful, learning a few specific ways in which it is often used by IT organizations will help in securing your Windows infrastructure. Though there are as many ways of configuring the firewall as there are firewall settings, we'll focus here on two common uses. These uses are selected because they are easy to set up and can offer some of the biggest bang for the buck in environments of any size.
The following two examples assume an environment that is fully comprised of Windows Vista and Windows Server 2008 computers. For environments that contain down-level OS versions, the steps used in these examples will be somewhat different.

Example 1: Securing Laptops While off the Domain
Back when Microsoft released Windows XP Service Pack 2 (SP2), its automatic enabling of the firewall for all connections immediately broke networking in many environments. Consequently, many decided to disable the firewall completely. This first example leaves WFAS disabled for all computers while they remain attached to their home domain. Any computer that detaches from its home domain and roams to an alternative network location, such as the aforementioned coffee shops or airport lounges, will automatically enable the firewall for all inbound connections. This has the effect of automatically enforcing a high level of protection for laptop computers that leave the LAN and roam elsewhere. To do so, first create a new Group Policy Object (GPO) and edit that GPO in the GPME. Navigate to Computer Configuration | Policies | Windows Settings | Security Settings | Windows Firewall with Advanced Security | Windows Firewall with Advanced Security. Once there, click the Windows Firewall Properties link to see a screen similar to Figure 9.7.


Figure 9.7: The master properties page for configuring WFAS through Group Policy.

Within this properties page, we first need to configure the Domain Profile to disable the firewall. This means that computers attached to the domain will not filter traffic through the firewall's rules. For the Public and Private Profiles, we want to block any inbound connections while allowing all outbound connections. While doing this, we also want to ensure that local users cannot change the firewall settings, enforcing the policy for all computers. To do so, use the following steps:
• On the Domain Profile tab, set the Firewall state to Off. In the Settings box, click Customize. In the Rule merging box of the resulting screen, set both Apply local firewall rules and Apply local connection security rules to No. These two settings prevent local users from changing firewall settings at their local computer.
• On both the Private Profile and Public Profile tabs, set the Firewall state to On (recommended). Set Inbound connections to Block all connections. Set Outbound connections to Allow (default). In the Settings box, click Customize. In the Rule merging box of the resulting screen, set both Apply local firewall rules and Apply local connection security rules to No. In the Firewall settings box of the same screen, set Display a notification to No. This prevents the user from seeing notifications about blocked inbound programs.


Once these settings have been configured, close the GPO and apply it to the domain or an OU that contains the computers you want to manage with this policy. After computers have processed the policy, any time they leave the LAN, their firewalls will automatically enable and reject all inbound traffic. Upon returning to the LAN, the firewall will automatically disable and pass all traffic.
In this example, we configure both the Private and Public Profiles because the Vista user has the option of choosing a profile any time they connect to a new network. If you want to offer some firewall exclusions to users who connect to partially trusted networks, consider configuring those exclusions into the Private Profile. Users will then be able to choose either the Home or Work network type as they connect to lessen the policy-assigned restrictions.

Example 2: Simple Domain Isolation

One issue with most Windows networks is that authentication traditionally occurs after two computers make the decision to communicate with each other. With authentication occurring after communication begins, this leaves open the possibility for compromise should a rogue computer plug into the network. If your environment allows computers that are not members of the domain to attach to the network, you are putting yourself at risk should an attached computer contain some form of replicating malware. More importantly, any computer that connects to the network can also begin searching around that network for information it may contain. If any file share anywhere on your network has been configured to grant Read access to the Everyone group, that information could be disclosed to the rogue computer. Either of these two possibilities is a nightmare for open networks.

But with the combination of Windows Vista and Windows Server 2008 comes an easy way to prevent non-domain computers from ever communicating with computers on your domain. Called domain isolation, this way of configuring WFAS requires computers to authenticate to each other before they ever begin communication. Computers that aren't members of the domain don't have the necessary IPSec authentication and are therefore denied access to communicate with computers on the domain.
Because domain isolation is designed to require authentication, it can impact your network operations. If you have non-domain computers like UNIX/Linux machines or Macintosh desktops that don't support connection security, implementing domain isolation can prevent these computers from communicating with others on the domain. Before implementing the information shown here, test it in a separate environment first.


To set up domain isolation, first create a new GPO and edit it in the GPME. Configuring domain isolation is done from the same Windows Firewall with Advanced Security location discussed in the earlier example. Configure the GPO using the following steps (a command-line sketch of the resulting rules follows this list):

1. Within the Windows Firewall with Advanced Security node, navigate to the Domain Profile. Set the Firewall state to On (recommended). Set Inbound connections to Block. Set Outbound connections to Allow (default). In the Settings box, click Customize. In the Rule merging box of the resulting screen, set both Apply local firewall rules and Apply local connection security rules to No. On the IPSec Settings tab, in the IPSec exemptions box, set Exempt ICMP from IPSec to Yes. Click OK to close the dialog. This process enables the firewall but instructs it to block any connection not specifically excluded. In the next step, we'll configure an exclusion that allows all traffic. This double negative is necessary to ensure that all traffic is passed through the firewall's filters.

2. Next, create the rule that allows traffic. Right-click Inbound Rules and choose New Rule. In the resulting screen, choose to create a Custom rule. For each of the following screens in the wizard, do the following:
o Program. Choose All Programs and click Next.
o Protocol and Ports. Leave the default configuration and click Next.
o Scope. Leave the default configuration and click Next.
o Action. Select Allow the connection and click Next.
o Users and Computers. Leave the default configuration and click Next.
o Profile. Select to apply the rule to only the Domain profile.
o Name. Provide a Name and Description for the rule.

3. Now that all traffic is passing through the firewall's filters, the next step is to require authentication. Once the inbound rule has been created, right-click Connection Security Rules and choose New Rule. In the resulting screen, choose to create an Isolation rule. For each of the following screens in the wizard, do the following:
o Requirements. Choose Require authentication for inbound and request authentication for outbound connections.
o Authentication Method. Leave the default configuration and click Next.
o Profile. Select to apply the rule to only the Domain profile.
o Name. Provide a Name and Description for the rule.
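For illustration, the two rules above can also be expressed with netsh advfirewall. This is a rough sketch only, assuming the rules are applied locally on a Vista or Server 2008 machine; the rule names are hypothetical, and the GPO remains the appropriate deployment vehicle.

    rem Sketch: rough netsh equivalents of the two rules above; names are examples.
    rem Allow-all inbound rule on the Domain profile (traffic still passes the filters):
    netsh advfirewall firewall add rule name="Domain Allow All" dir=in action=allow profile=domain
    rem Isolation rule: require authentication inbound, request it outbound,
    rem using the default (computer Kerberos) authentication method:
    netsh advfirewall consec add rule name="Domain Isolation" endpoint1=any endpoint2=any action=requireinrequestout profile=domain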


Due to issues with the propagation of Group Policy, it is a good idea to exclude client-to-domain controller communication from the authentication requirement. In this case, we configure this traffic to request authentication from domain controllers rather than require it. By doing so, clients will always be guaranteed the ability to communicate with domain controllers. To do this, right-click Connection Security Rules, and choose New Rule. In the resulting screen, choose to create an Isolation rule. For each of the following screens in the wizard, do the following:
o Requirements. Choose Request authentication for inbound and outbound connections.
o Authentication Method. Leave the default configuration and click Next.
o Profile. Select to apply the rule to only the Domain profile.
o Name. Provide a Name and Description for the rule.

To complete the previous step and target this exemption at domain controllers only, under Connection Security Rules, double-click the rule just created. On the Computers tab of the resulting window, in the Endpoint 2 box, add the IP addresses of all domain controllers.
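The same exemption can be sketched with netsh advfirewall as shown below. The rule name and the domain controller addresses are hypothetical placeholders; substitute the addresses of your own domain controllers.

    rem Sketch: request (not require) authentication for traffic to the DCs.
    rem The addresses 192.168.0.5 and 192.168.0.6 are hypothetical examples.
    netsh advfirewall consec add rule name="DC Exemption" endpoint1=any endpoint2=192.168.0.5,192.168.0.6 action=requestinrequestout profile=domain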

Once complete, close the GPME and apply the GPO to the domain. As computers begin to receive and apply this Group Policy, they will begin requiring authentication from any computer that attempts to communicate with them, with the single exception of the domain controllers configured in the last steps. Any rogue computer that connects to the LAN will not be able to communicate with any domain-attached computer.
Be aware that if your environment includes non-domain computers, computers of other OSs, network appliances, or any other network-attached equipment that is not a Windows Vista or Windows Server 2008 computer, you will need to create authentication exemptions for these devices. Additionally, you might want to exempt domain services such as DNS, WINS, or DHCP from authentication.

Installing and Managing BitLocker


Prior to the release of Windows Vista and Windows Server 2008, Microsoft's primary solution for encrypting files on a Windows machine was the Encrypting File System (EFS). Using EFS, users and administrators could choose to encrypt individual files and folders on a Windows system, protecting them from inappropriate data disclosure. But there has always been a problem with EFS's file- and folder-based architecture. Encrypting specific files and folders of interest to a user indeed protects those locations from exposure. However, the processing of those files and folders by applications does not always keep every copy of the object in a protected location.

This is perhaps best explained through an example. Using EFS, let's assume that a user wants to encrypt the contents of the C:\MyData folder. Using Windows XP, they view the properties of that folder and, by clicking Advanced, they see the wizard page shown in Figure 9.8. There, they are able to select the Encrypt contents to secure data check box. This process encrypts the contents of the C:\MyData folder.


Figure 9.8: The Windows XP wizard page used to encrypt individual files and folders with EFS.

At some point in the future, the user needs to work with a document in that folder. The user double-clicks the document to spawn its linked application. To process the document, this application requires a temporary copy of the document to be created, which it stores in the location C:\Temp. This temporary copy is now stored in a folder that is outside the protections of the encrypted folder, leaving the document in clear text. Making this situation even more problematic, even if the application deletes the clear-text temporary copy when it has finished processing, the deleted version can likely be undeleted later from the temporary location.

Due to these limitations with file- and folder-based encryption, Microsoft recognized that the only way to ensure documents remain encrypted no matter how or where they are used is to encrypt the entire disk drive. The BitLocker Drive Encryption feature found in both new OSs accomplishes that goal. BitLocker is a volume encryption service that arrives in Windows Server 2008 as an installable feature. When BitLocker is installed to a computer, it works with on-board hardware to identify at every boot whether the hardware of the computer has been modified in any way. If the system drive appears to have been modified (moved to another computer, for example), BitLocker will not decrypt the encrypted volume until a recovery password is entered. That recovery password, which can take the form of a 48-digit password or a 256-bit key stored in a file, instructs BitLocker to decrypt the volume and allow the system to boot.

Although BitLocker may not be necessary for all systems in an IT environment (its encrypting and decrypting activities add a small performance hit to a server), it can be an excellent addition to servers in quasi-secured locations such as branch offices or servers that contain highly sensitive material.
If an attacker were to remove a drive from a server protected by BitLocker, they would find little more than random characters on the drive. Because BitLocker uses AES encryption in various strengths, it is virtually impossible using today's technology to decrypt the drive without the required recovery key. For extremely sensitive uses, it is also possible to require a startup password to be entered every time the server is booted.


Prerequisites & Installation

There are a number of special prerequisites required for a system to support BitLocker. First, a system must be configured with a Trusted Computing Group (TCG)-compliant BIOS. It must also include a special chip built into its hardware called a Trusted Platform Module (TPM), with a minimum version of 1.2. BitLocker leverages these special hardware components for its encrypting and decrypting activities as well as its hardware verification. Contact your server manufacturer to determine whether your hardware includes the components necessary for BitLocker functionality.

Another way to check for the presence of a TPM is by installing the BitLocker Drive Encryption feature to a Windows Server 2008 instance. Once installed, navigate to Control Panel | BitLocker Drive Encryption, where you will see a screen similar to Figure 9.9. You'll notice there that two prerequisites are not present, one of which explains that the necessary TPM module is not present on the hardware.
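Both checks can also be done from an elevated command prompt, as sketched below. The WMI query assumes the TPM's WMI provider is exposed on your hardware; ServerManagerCmd is the Server 2008 command-line counterpart to the Add Features wizard.

    rem Sketch: command-line checks for BitLocker prerequisites.
    rem Query the TPM through WMI (requires a TPM 1.2 chip exposed by the BIOS):
    wmic /namespace:\\root\cimv2\security\microsofttpm path win32_tpm get IsEnabled_InitialValue,IsActivated_InitialValue
    rem Install the BitLocker Drive Encryption feature (a restart is required):
    servermanagercmd -install BitLocker -restart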

Figure 9.9: The BitLocker Drive Encryption Control Panel will alert when necessary prerequisites are not present on the system.

Also necessary and shown in Figure 9.9 is a separate and unencrypted partition of at least 1.5 gigabytes that will act as the system volume. This volume will contain the bootstrap files necessary to boot the computer. Once the system has sufficiently booted, BitLocker can then decrypt any encrypted drives and enable the OS to continue the boot process.
If you've already built your server and have partitions in place, you will need to reconfigure them to create this necessary system volume. Microsoft provides a BitLocker Drive Preparation Tool that assists with the process of repartitioning the system. Once installed, the tool will create a new volume of the proper 1.5 gigabyte size and mark it as the active partition for booting the system. The tool, along with information on its use, can be found at http://support.microsoft.com/kb/933246.


Once your system drives have been properly partitioned and a TPM correctly identified on the system, turning on basic drive encryption with BitLocker is done through the same Control Panel discussed earlier. Simply click the link titled Turn on BitLocker. If the TPM in the system has not been initialized at this point, the Initialize TPM Security Hardware wizard will appear. On the Save the recovery password page of the wizard, choose to save the BitLocker recovery password to a USB drive or a folder, or to print it out. This password must be stored in a secure location because it is the only mechanism for recovering the volume. On the Encrypt the volume page, select Run BitLocker system check to have BitLocker run a set of tests that ensure the service can read the recovery and encryption keys prior to encrypting the drive. At this point, the system will reboot and test key retrieval. If the key retrieval test functions correctly, the system will begin encrypting the drive.
BitLocker can also be configured at the command line using the script manage-bde.wsf. More information about using this script for encrypting data drives, as well as detailed instructions on enabling, disabling, and recovering BitLocker, can be found at http://technet.microsoft.com/en-us/library/cc732725.aspx.
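As a brief sketch of that script in action, the commands below check volume status and then encrypt the C: volume, generating a numerical recovery password and saving a recovery key file to a USB drive mounted as F:. The drive letters are examples only; run the commands from an elevated prompt.

    rem Sketch: basic manage-bde.wsf usage; drive letters are examples.
    cscript manage-bde.wsf -status
    cscript manage-bde.wsf -on C: -RecoveryPassword -RecoveryKey F:\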

Installing BitLocker Without a TPM

Although strongly suggested, the TPM chip is not a true prerequisite. It is possible to run BitLocker using a separate startup key stored on a USB flash drive. The startup key must be inserted into the computer during the boot process to authenticate the startup of the system. Installing without a TPM requires enabling BitLocker's advanced functions through either Group Policy or a Local Policy. To enable the advanced functions, navigate to Computer Configuration | Administrative Templates | Windows Components | BitLocker Drive Encryption. There, enable the setting Control Panel Setup: Enable advanced startup options. When enabling the policy, select the Allow BitLocker without a compatible TPM check box.

Once this policy has been applied to the computer, the remaining steps are equivalent to those discussed earlier with one exception: without a TPM, a USB flash drive must be plugged into the system at every startup. Figure 9.10 shows the extra screen in the installation wizard that is displayed when attempting to install BitLocker without a TPM.


Figure 9.10: The extra screen in the installation wizard that is displayed when attempting to install BitLocker without a TPM.


BitLocker and Group Policy

Although BitLocker's configuration can be controlled using Local Policy at the individual machine, using Group Policy to cohesively control its configuration across all instances is considered a best practice. Using Group Policy ensures that recovery keys are always stored in Active Directory (AD), the right level of encryption is set across all systems, and the correct password settings are used. Seven Group Policy settings are available:

• Turn on BitLocker backup to Active Directory Domain Services. This setting enables the automatic backup of recovery information to AD. By backing up this information to AD, the likelihood of the loss of a recovery key is reduced.

• Control Panel Setup: Configure recovery folder. This setting identifies the local folder where recovery information is stored on each computer protected with BitLocker. Enabling the local storage of recovery information in addition to a secondary AD backup further ensures that recovery information is readily available in the case that a system protected with BitLocker needs to be recovered.

• Control Panel Setup: Configure recovery options. This setting identifies which options can be used to recover a system. Options include the 256-bit recovery key as well as the 48-digit recovery password.

• Control Panel Setup: Enable advanced startup options. Discussed previously, this setting is used to instruct BitLocker to operate on systems that do not have a TPM installed. It can additionally be used to control whether startup keys and PINs are required, disallowed, or optional.

• Configure encryption method. This setting controls which of the four AES encryption types and strengths BitLocker will use.

• Prevent memory overwrite on restart. By default, BitLocker instructs the computer to overwrite memory at every restart to ensure that BitLocker secrets do not remain resident. This memory overwrite process also increases the time needed to complete a restart. This setting selects whether memory overwrites are enforced or prevented at each restart.

• Configure TPM platform validation profile. For systems with a TPM, this setting allows the selection of which components will be validated by the TPM before allowing the system to gain access to the encrypted volume. These validations ensure that the drive has not been relocated to new hardware. When the TPM has identified that a change has occurred, the system will enter recovery mode.

As stated earlier, BitLocker is likely not a solution for every server in your environment. But for those servers that carry the highest risk of compromise, along with the greatest impact should one occur, it can be a lifesaver.


Windows Server 2008 Is Microsoft's Most Secure OS to Date


With features both exposed to the administrator and working under the covers, Windows Server 2008 provides a stable platform upon which to host the workloads required by today's businesses. As discussed in this chapter, controllable elements such as UAC, WFAS, and BitLocker, as well as core changes to its operations, all aggregate to increase Windows Server 2008's overall security. Along with those security improvements comes a related increase in overall system uptime. This chapter has attempted to show where Windows Server 2008's built-in capabilities can enhance your data center's security profile while ensuring your systems remain up and operational.

In the next and final chapter, we'll delve into one more specific feature of Windows Server 2008 that continues this discussion on uptime and reliability: Windows Server Failover Clustering. With the right hardware in place, Microsoft's improvements to failover clustering can significantly improve the resiliency of services for businesses with high-reliability needs.


Chapter 10: Windows Failover Clustering


The dream of every IT administrator is an environment of servers and services that never go down. With servers that never go down, the pager never goes off, sleep is never interrupted, and vacations are never put on hold due to data center emergencies. Although the never in that dream is likely to remain just a dream for a long time to come, there are technologies available today that can bring it a little closer to realization. One of those technologies is Windows Server Failover Clustering (WSFC), available with Windows Server 2008. Although WSFC isn't new to Windows, the updates it sees with the upgrade to Microsoft's newest server operating system (OS) make it a technology that is now eminently usable by a wide range of IT organizations.
A Humorous Personal Perspective on Windows Clustering Over the Years

As a humorous anecdote, Windows clustering was in fact so painfully difficult in previous versions that this author nearly lost a job over its implementation. Pitching high availability as a solution for file storage back in the days of Windows 2000, an early cluster using that OS version was implemented atop an existing file server. The problem was that clustering in Windows 2000 wasn't all that great and required substantial skill and patience to get it right. In the end, after numerous whole-cluster failures, a lot of painful meetings, and ultimately a de-clustering project, this author's mantra became, "As the corporate expert in Windows clustering, I recommend you don't use Windows clustering."

Thankfully, a lot has changed over the years. With each new OS version, the underlying stability of Windows clustering has improved significantly, as has its management. As you'll find in this chapter, clustering in Windows Server 2008 is a much-improved experience compared with 8 years ago.

The central problem with Windows clustering in previous versions was its significant complexity. The Windows clusters of yesterday were complex to set up and arrived with little assistance to the installing administrator. If you weren't a specialist in Windows clustering, you were likely to have a problematic experience getting your first few set up. Another issue with some of its earlier versions was clustering's reliance on expensive fibre channel SCSI for its shared data storage. Although fibre channel is an excellent medium for high-speed access to remote LUNs, it can be difficult to work with and requires a set of skills all its own. Although iSCSI support first arrived in Windows Server 2003 SP1, Windows Server 2008 includes expanded support for iSCSI as the medium for shared storage. This version also improves the underlying low-level mechanisms that cluster nodes use to communicate with their shared storage, greatly improving the reliability of the storage subsystem itself.


Understanding Windows Failover Clustering


Clustering, however, is not a solution for all needs. Clustering brings high availability to certain services in certain situations, some of which are not well understood by those who intend to implement clustering. Thus, before we can begin a discussion on implementing WSFC, it is important to understand at a high level exactly what Microsoft's implementation of clustering truly is. With that understanding comes a much-needed discussion on the pros and cons of using clustering in your IT environment.

Clustering in Windows Server 2008 is at its core a mechanism for bringing some forms of high availability to specific Windows services and applications. The last part of that sentence is critical. Most forms of traditional clustering using WSFC are not solutions for disaster recovery, and clustering does not work with all services and applications.
Any service or application that runs on a Windows cluster must be cluster-aware. This means that the service or application has been specifically encoded to recognize that it is running atop a cluster and to function appropriately. Services such as Dynamic Host Configuration Protocol (DHCP) and File Services, as well as applications such as Microsoft Exchange and SQL Server, are cluster-aware, and their documentation specifically spells this out. This may not necessarily be the case with other applications. Verifying a service's or application's ability to run atop a cluster is critical before attempting to install it.

The high-availability features that arrive with Windows clustering stem from a cluster's ability to share the potential for connections to data across multiple nodes. In a Windows cluster, between 2 and 16 nodes are configured to share access to one or more storage locations. This idea of sharing is actually a misnomer: only one node at a time can actually interact with a particular storage location. When the node that owns the connection to a particular storage location experiences an outage, the cluster re-hosts ownership of the resource to a different node.
Many IT professionals ask about the differences between active-active and active-passive clustering. At its core, because of this single-node ownership behavior, essentially all clusters and their resources have an active-passive configuration. When services or applications leverage an active-active configuration, they leverage two separate instances on two separate nodes to achieve this end.

Figure 10.1 shows the simplest example of how the pieces can interconnect. In Figure 10.1, two individual servers are configured as cluster nodes. Each has a connection to the same shared storage and one or more connections to the Local Area Network (LAN). Typically, a single LUN per resource is created at the shared storage and made available to both cluster nodes. As we'll explore later in this chapter, this LUN sharing has some implications, most especially during the initial cluster installation and configuration. Cluster nodes also minimally have one or more connections to the LAN. Although a single network connection can be used for all cluster communication, it is considered a best practice for clusters to use two separate connections at a minimum: one for communication between cluster nodes and a second for communication with the rest of the network.


Figure 10.1: A high-level example of the resources used to build a simple two-node cluster.

Two-node clusters work well for hosting small numbers of cluster-aware resources. They provide a redundant location for the service to re-host when problems occur on one of the cluster nodes. However, some environments need more than one alternative hosting location, or they might have multiple resources that they want to load balance across a larger number of hosts. In this case, with the x64 version of Windows Server 2008 Enterprise Edition, it is possible to create clusters with as many as sixteen separate nodes. Figure 10.2 shows an example of how this works with a four-node cluster.
WSFC is available only in the Enterprise and Datacenter Editions of Windows Server 2008.

Figure 10.2: It is easy to see how clustering grows complex as more hosts are added.


In this example, four nodes are configured to operate within the cluster. There are four LUNs created, each hosting the storage needs of a particular resource. Each of the four cluster nodes has the ability to host none, some, or all of the configured resources. It is easy to see that as the number of nodes in a cluster increases, so does its complexity. Although building a small two-node cluster can be an easy solution for even small environments, the administration complexities of large clusters can quickly grow unwieldy if not planned appropriately.

Reasons to Use WSFC

Because of the ways in which clusters enable high availability, they are a compelling add-on to existing or new services. There are a few classic instances where clustering specifically adds value to the IT environment:

• Hardware outages. Clustering's greatest benefit to the IT environment is arguably its process of seamlessly re-hosting a cluster resource upon the outage of a cluster node. When a cluster node can no longer run its configured service because of a condition on the node (the node is down, it has crashed, it is hung, and so on), the remaining cluster nodes will identify the state and relocate any resources based on preexisting parameters. The benefit to IT is that the process happens rapidly and automatically when a node becomes unresponsive, eliminating the need for an IT technician to immediately troubleshoot and fix the problem to bring the service back to functionality.

• Software problems. Similar to hardware problems, software issues sometimes cause software to become unresponsive. Though this is a less-often recognized reason to move to clustering, software problems that can be resolved through an automatic restart can be made easier by hosting atop a cluster. When the software goes unresponsive, the cluster can automatically re-host its instance elsewhere, a process that involves the needed service restart.

• Patching-related outages. Another big plus for high-priority services is the way clusters help manage the outages involved with patching a service's host server. The patching process for Microsoft products typically occurs at least once a month and sometimes more often. Re-hosting a cluster resource to a different node prior to patching helps reduce the overall downtime associated with the patching process. It also protects the critical hosted resource from the instance where the installation of a patch causes the server to crash. Having already re-hosted the resource elsewhere gives IT time to fix the problem without seeing a loss of service.


Reasons Not to Use WSFC

For all the benefits noted previously, it is likely that you will only host your most critical services and applications atop Windows clusters. Due to the added complexity clustering brings to the table, there is the potential that a poorly planned migration of a service to a cluster could actually decrease its availability. Thus, decisions about the use of Windows clustering must involve careful planning and should be limited to services whose continuous operations are critical.

It is also important to note that most cluster architectures are not intended to be used as a form of disaster recovery. In all but one of the potential cluster configurations, the nodes must be physically proximate to one another as well as to their data storage due to the limitations of fibre channel or Ethernet cabling. Thus, with one exception, individual cluster nodes are generally too close in physical proximity for the loss of one node due to a disaster not to affect the other nodes. Additionally, when centralized shared storage is used, the loss of the storage constitutes a loss of cluster functionality. Thus, highly available centralized storage must be used if the storage subsystem is not to become a single point of failure all its own. With the release of Windows Server 2008, Microsoft has made some very important changes to the way in which clusters can be configured and the way communication takes place between nodes. As we'll discuss later in this chapter, this change brings about the possibility for multi-site, geographically distributed clusters that can be used for disaster recovery.

Components and Prerequisites

Minimally, in order to build a simple two-node cluster, you will require two Windows Server 2008 Enterprise Edition servers to be used as candidate cluster nodes. Each of those computers will need a minimum of two network cards: one for cluster communication and a second for communication with the rest of the LAN. It is possible to aggregate these two network connections (and we will in our example later in this chapter), but it is not recommended for production deployments. A shared storage location must also be available. As stated earlier, that shared storage location can be attached via an iSCSI connection or fibre channel. In the case of iSCSI, one or more additional network cards are necessary to carry the needed iSCSI traffic from the shared storage to the host. At minimum, on the shared storage, a single volume must be created and assigned a LUN. That LUN should be enabled for connection by all iSCSI network cards on both servers. The individual processes used for creating and exposing that LUN will differ based on the type of shared storage used and its management utility.
If you don't have physical iSCSI or fibre channel storage, you can use a software-based iSCSI target as the location for shared storage. That software-based iSCSI target must support the use of SCSI-3 commands and persistent reservations. At the time of this writing, very few software-based iSCSI targets support both of these requirements. One of the few that will work is the StarWind iSCSI Target, available at http://www.rocketdivision.com.


Cluster Validation

Available first in Windows Server 2003 and significantly improved with Windows Server 2008, the Cluster Validation Wizard is a tool that runs an extended series of tests on candidate cluster nodes as well as their storage and networking components. This tool is useful because any configuration must pass all tests prior to being considered a candidate for clustering. It eliminates many of the manual guess-and-check iterations often required with previous versions and ensures that before any cluster installation begins, all prerequisites are in place. Figure 10.3 shows an example of some of the tests seen when running the Validate a Configuration Wizard. When creating a new cluster, once you have finished connecting the physical components, first run this wizard to ensure that you've completed every step correctly. Starting the wizard requires correct network connectivity to each cluster node, including DNS name resolution. It also requires that the Failover Clustering feature is installed on each candidate node prior to running the wizard.

Figure 10.3: The Validate a Configuration Wizard runs an extended series of tests to ensure a successful cluster creation.


Cluster Quorum Models


Another critical part of planning your cluster implementation before any components are installed is determining the type of cluster quorum model to be used. Notwithstanding what kinds of resources you plan to host atop your cluster, one of the following quorum models is required.

Quorum in WSFC is analogous to democratic voting bodies like the US legislature or your local city council. Quorum in parliamentary procedure is defined as the number of people who must be present at a meeting if that meeting is to be able to hold a vote on issues. Quorum is often defined as 50% of the total members plus one, but can be a different number as defined by the rules of the group. It is put in place to ensure that a minority of the voting body cannot make voting decisions without a large enough group of people present.

With Windows clustering, quorum is used to determine whether the cluster is really a cluster. When a cluster has quorum, it effectively has enough functioning components in place that it can go about its business of being a functioning cluster. When the number of functioning components drops below the threshold for quorum, the cluster can no longer operate as a cluster, and all hosted resources go offline. Just as the rules of different voting bodies define what constitutes quorum for that body, quorum is defined differently for different cluster platforms. For WSFC in Windows Server 2008, there are four quorum models that can be used. Which of these models you'll use depends on the number of nodes in your cluster along with your anticipated uses for that cluster. The following sections offer a description of each of the four possible models.
The quorum model for a particular cluster is usually defined at its time of installation. However, in some cases, it is possible to change the model later within the Failover Cluster Management console.

Node Majority

Using the Node Majority model, the individual nodes that make up the cluster are given votes that count towards quorum. The cluster considers itself to have quorum when more than half of the nodes are up and available. Should the number of operational nodes in a cluster using this model drop below that value, the cluster will cease operations completely. Because of this focus on only the nodes, this model is typically used when the number of nodes in the cluster is odd.

Node and Disk Majority

Two-node clusters, which are the most often-implemented cluster architecture, obviously do not have an odd number of nodes. Thus, a second quorum model is available, called the Node and Disk Majority model. In this model, each cluster node gets a vote, as does the shared storage disk used as the cluster quorum drive. This model is typically used when the number of nodes in the cluster is even.


No Majority: Disk Only

Another alternative configuration that can be used but is not suggested is the No Majority: Disk Only model. This model is not recommended because only the quorum drive itself is given a vote in determining quorum. If the quorum drive is unavailable, the cluster must shut down. This model is effectively the model used in previous versions but is no longer considered a best practice due to the single point of failure that is the cluster quorum drive.

Node and File Share Majority

Lastly, in certain circumstances, it may be desirable to build a cluster whose quorum resource is not attached via shared storage. In the Node and File Share Majority model, each node gets a vote in the quorum decision, as does the quorum resource. However, the difference with this model is that the quorum resource is not shared storage; it is instead a file share that is made available somewhere on the LAN to all nodes of the cluster. This model can be implemented across subnet boundaries with Windows Server 2008 due to its new ability to use TCP-based communication for the cluster heartbeat rather than broadcasts. The over-the-network nature of this quorum model makes it useful for creating the multi-site, geographically distributed clusters discussed earlier.
For all of these models, you'll notice that a component called the quorum drive is required. This drive is a special drive on shared storage (or via a file share) that is accessible by all cluster nodes. It is usually formatted with 500MB of space and is used exclusively by the cluster to determine quorum; it rarely if ever uses much of that assigned space. In creating your cluster, you will need to carve out and expose one LUN of this size to all hosts for this use. Although previous versions required this drive to be labeled with a drive letter, Windows Server 2008 does not have this requirement.

Installing WSFC
For this example, we will build a two-node cluster that will host a cluster service atop the servers \\w2008a.realtime-windowsserver.com and \\w2008b.realtime-windowsserver.com. To simplify this example, we will use two iSCSI data stores and only two network cards. The first iSCSI target will serve as the quorum drive and will be configured with 500MB of space. The second iSCSI target will serve as the shared storage for the hosted cluster service and will be configured with 2GB of space. Although only two network cards are used in this example for simplicity (one for the iSCSI connection and another for the production network), it is strongly recommended that additional network cards be used in production to separate the traffic needed for the production network, the cluster heartbeat communication, and the connection to iSCSI. Moreover, because the network-based connection to the iSCSI disk can be a single point of failure with only one network card, redundancy in iSCSI network cards is similarly recommended.


Configuring Networking

It is strongly suggested that the connection to the iSCSI target be made over a different network than the one used for the production network. An example of the IP configuration of each cluster candidate and the iSCSI target server is shown graphically in Figure 10.4. Your actual network configuration may differ, but this image shows how the networking is segregated between iSCSI and production networks. Configure the network cards that will connect to the iSCSI target with the proper IP address and subnet mask, but leave the gateway and DNS information blank. Also, remove the Client for Microsoft Networks service as well as File and Printer Sharing for Microsoft Networks under the properties of the network card. Lastly, remove IPv6 if it is unused on this network.
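If you prefer to script this addressing, the netsh command below is a minimal sketch. The interface name and addresses are hypothetical examples matching the segregation shown in Figure 10.4, and no gateway is set on the iSCSI-facing card, per the guidance above.

    rem Sketch: static addressing for the iSCSI-facing network card; the
    rem interface name and addresses below are hypothetical examples.
    netsh interface ipv4 set address name="iSCSI Network" source=static address=192.168.25.1 mask=255.255.255.0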

Figure 10.4: The networking configuration of our two-node cluster. This segregation of networks ensures that traffic routes through the correct network cards.

Configuring the Shared Storage

In this example, the two iSCSI targets have already been configured using the iSCSI data store's management utility. A LUN has been exposed at that iSCSI target and made available to each of the hosts. To connect to that iSCSI LUN on the first host, navigate to Start | Administrative Tools | iSCSI Initiator. The first time this tool is run, you will be prompted to start the Microsoft iSCSI Service and set it to Automatic. Click Yes to do so. You will also be prompted to unblock the Microsoft iSCSI Service so that it can operate through the Windows Firewall. Click Yes again to enable this firewall exclusion.
If your iSCSI target has special software or device drivers required for its use, this software must be installed prior to moving to the next step.


The Microsoft iSCSI Initiator has six configuration tabs:

• General. This tab displays the name for the iSCSI initiator and provides a location to change that name as well as configure authentication via CHAP. In our example, we will not be configuring authentication for the sake of simplicity. However, in a production environment, this authentication protects against rogue computers connecting to exposed iSCSI LUNs over the network, and its configuration is considered a best practice. The CHAP secret will need to be entered at both the iSCSI target and initiator to connect.

• Discovery. Click Add Portal. In the resulting screen, enter the DNS name or IP address of the iSCSI target that hosts the data storage location. Click Advanced. Ensure that the Local adapter is set to Microsoft iSCSI Initiator and the Source IP is set to the IP address of the network card you want to use for iSCSI.

• Targets. If the connection was correctly made on the previous tab, selecting this tab will automatically display the available drives on the iSCSI target, as shown in Figure 10.5. Click each discovered target and then the Log on button. In the resulting screen, select the Automatically restore this connection when the computer starts check box, and click Advanced. Again, set Local adapter to Microsoft iSCSI Initiator, Source IP to the correct source IP for this server's network card, and Target portal to the iSCSI target address. If your iSCSI target uses special software that enables multiple connections to the target, you can select the Enable multi-path check box. Complete these steps for each discovered drive.


Figure 10.5: If youve configured everything correctly, your iSCSI drives should appear on the Targets tab.

• Favorite Targets. On this tab, you can view the properties of any connected drives. There is no further configuration to be done on this tab.

• Volumes and Devices. Click Autoconfigure. If everything has been set up correctly to this point, the Volume/mount point/device box should populate with information about the discovered drives.

• RADIUS. For the purposes of this example, there is nothing to do on this tab.

You will want to complete the previous steps on both candidate hosts to establish each server's connection to the iSCSI target. Note that at this point you have made a connection to a raw drive, but you have not yet initialized or formatted the drive, nor have you added a drive signature to it.
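For administrators who prefer the command line, the built-in iscsicli utility can script the same discovery and logon. The sketch below is illustrative only: the portal address is a hypothetical example, and the target IQN must be copied from the ListTargets output.

    rem Sketch: iSCSI discovery and logon with the built-in iscsicli tool.
    rem The portal address and IQN below are hypothetical examples.
    iscsicli QAddTargetPortal 192.168.25.10
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.2008-01.com.example:target0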


To do so, launch Server Manager and navigate to Storage | Disk Management on one of the two nodes. The two drives should be present on the node, but both will be displayed in black as Unallocated disks. Right-click each disk and select to bring that disk Online. Then right-click one of the disks and choose Initialize Disk. The Initialize Disk wizard will appear with both disks selected. If a disk will never grow beyond 2TB, keep it as a Master Boot Record (MBR) disk. If you believe the disk will grow beyond that size at some point in the future, convert it to a GUID Partition Table (GPT) disk. Lastly, right-click each disk and select New Simple Volume. Create a new simple volume on each disk, assign a drive letter, and format the disk as NTFS. A diskpart sketch of these disk preparation steps follows below.

Validate and Create the Cluster

By bringing the disks online and formatting them, the disks can be verified by the Cluster Validation Wizard, allowing that process to complete its testing. At this point, navigate to Administrative Tools | Failover Cluster Management. There, right-click the top-level node and choose Validate a Configuration. This will launch the Validate a Configuration Wizard, which will prompt for the names of the candidate nodes and the tests to be run. Once run, the wizard will provide an HTML report of the results similar to what is seen in Figure 10.6. If any errors appear in the running of the wizard, you will need to fix the problem and re-run the wizard until all tests pass.
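As referenced above, the disk preparation can also be scripted with diskpart. This is a minimal sketch that assumes the quorum LUN appears as disk 1; the disk number, label, and drive letter are hypothetical examples, and exact diskpart syntax can vary slightly between Windows builds.

    rem Sketch: preparing the quorum disk with diskpart; save as prepdisk.txt
    rem and run "diskpart /s prepdisk.txt". Repeat for the data disk with its
    rem own disk number, label, and letter.
    select disk 1
    attributes disk clear readonly
    online
    create partition primary
    format fs=ntfs quick label="Quorum"
    assign letter=Q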

Figure 10.6: An example of the Cluster Validation Report. All cluster components must pass all tests prior to attempting to create a cluster.


Once you've completed the wizard and fixed any issues discovered in the validation process, it is time to create your cluster. Do so back in the Failover Cluster Management console by right-clicking the top-level node and choosing Create a Cluster. In the resulting wizard, enter the names of the candidate nodes. In the next screen, provide a name as well as an IP address for the cluster itself. This name and address will be used for connecting to the cluster for management. Finally, confirm the creation of the cluster. The wizard will create the cluster and return control when that process is complete.
Ensure that you click View Report after the completion of the installation to view the results of the installation process. Some clusters can be installed with warnings that later cause problems.

Post-Installation Quorum Reconfiguration

Once the cluster has completed its installation, it can be managed via the Failover Cluster Management console. Immediately after creation, navigate to this console and verify that all network and storage resources are available and visible in the interface. The cluster installation wizard does not always install the quorum resource to the correct shared disk and sometimes does not choose the correct quorum model. If either of these conditions is the case, these settings can be changed by right-clicking the cluster name and selecting More Actions | Configure Cluster Quorum Settings. In the resulting wizard, it is possible to change the quorum model as well as the shared drive that is to be used for the quorum. Figure 10.7 shows an example of the screen where the quorum drive can be changed. Click through the wizard to complete the reconfiguration.

Figure 10.7: It is possible to adjust the quorum drive and model after the cluster completes its installation.


Managing WSFC
Once complete with the steps up to this point, you have successfully created your two-node Windows cluster. The creation of clusters with larger numbers of nodes happens in much the same way, but with more planning of IP addresses and storage as the number of nodes increases. This completed cluster, however, isn't serving any purpose yet: there are no services or applications running atop it. In this section, we'll add a service and talk a little about the process of adding an application. What you'll find is that once the cluster is created, you're only partially ready for operations. The management of cluster resources and services requires additional care to ensure that they function in appropriate and desirable ways.

Adding a Cluster Service

By default, a standard Windows Server 2008 cluster can support 13 types of services right out of the box. Each of these services can be added to the cluster by right-clicking the Services and Applications node and selecting Configure a Service or Application. The possible services that can be added directly through the interface are:

• DFS Namespace Server
• DHCP Server
• Distributed Transaction Coordinator
• File Server
• Generic Application
• Generic Script
• Generic Service
• iSNS Server
• Message Queuing
• Other Server
• Print Server
• Virtual Machine
• WINS Server

You'll immediately notice that a number of these potential services are intended for generic instances of applications, services, or servers. These generic entries are used in situations where the service you want to host atop your cluster does not have its own native cluster installation routine. Prior to attempting to run an existing application as a generic cluster service, consult the documentation for that service to determine whether it supports the configuration.
Some services require the accompanying role to be installed via Server Manager prior to enabling it for cluster hosting.


For the continuing example of this chapter, we will create a clustered file server by adding the File Server service, as shown in Figure 10.8. Click that link in the list to start the process of adding the clustered service. Creating a file server service first requires the creation of a network name and IP address that users will use to connect to the service. This network name and IP address are not unlike naming and addressing the server that would have hosted file services in the traditional sense. Neither name nor IP address can be the same as the existing cluster name or IP address.


Figure 10.8: The list of possible cluster services available in the interface.

You'll find that clusters, especially those that host multiple services, tend to consume large numbers of IP addresses. Plan accordingly.

Once a name has been given to the new service, you will need to assign it the available shared storage disk created earlier. This shared storage will be the location where files are stored by users once the service is fully configured. As you'll see, the shared storage for resources such as file services can grow exceptionally large. Thus, there is the need to ensure enough storage is available for long-term storage needs at the time the cluster is created.

Once the service has been created, clicking that service in the console brings forward status information about the service. This view is shown in Figure 10.9. There, you'll see that the file server service REALCLUSTFS has been created. Also shown is that the service is online and currently owned by the server \\w2008b. The service is using Cluster Disk 1 and has a single hidden share currently created. Creating a new clustered share is done by right-clicking the service and selecting Add a shared folder. Doing so brings forward the same Provision a Shared Folder wizard as discussed back in Chapter 4.


Figure 10.9: Viewing the properties of the newly created file server service.

Installing Cluster-Aware Applications

Applications are usually installed in a much different way than cluster services. As described earlier, it is possible to create a generic application service in much the same way as the file server service created in the previous section. However, it is worth mentioning again that when using the generic entities for creating new cluster applications, it is critical to verify first with the application's vendor that the application will indeed function in a clustered environment. Some applications, such as Microsoft Exchange Server and Microsoft SQL Server, have installations that are themselves cluster-aware. Thus, the process to install these applications to an existing cluster is not the process discussed previously. Instead, to install these applications, run their standard installation setup file. That setup will automatically recognize that the application is being installed to a cluster and present the necessary installation questions needed to complete the installation.


Managing Resources and Dependencies

The internal logic used by a cluster in determining whether a resource is healthy or needs to be relocated to another node is handled through a series of dependencies. Resources that make up a cluster service have a list of dependent and antecedent resources that map together to equal the sum total of the service. This list of dependencies can be seen by right-clicking the service and choosing Show dependency report. Figure 10.10 shows a snippet of that report for the newly created file server service, which displays how the Network Name resource depends on the IP address resource while the Physical Disk resource stands alone.

Figure 10.10: A dependency mapping of the file server service showing the three resources that make up the service.

For resources that have dependencies, those dependencies must be online for the resource to remain online; if a required dependency goes offline, the resource itself will go offline. As we'll discuss in the next section, this can trigger a failover event. In the right pane of the screen shown in Figure 10.9, click any of the resources that make up the REALCLUSTFS service. On the General tab of the resulting screen, you will see information about that resource. Selecting the Dependencies tab will display the list of dependencies for the resource. Here, it is possible to manually add dependencies if your service architecture needs them. For example, if the outage of a completely separate IP address or service will impact the functionality of the resource, you would enter that other resource as a dependency here. Figure 10.11 shows how Cluster Disk 1 could be added as a dependency to the REALCLUSTFS service using the AND operator with the IP address.


Figure 10.11: Adding dependencies to the file server service can trigger a failover if one of the dependencies fails.

Be cautious with the addition of dependencies. The outage of any dependency can trigger a failover.

Failover

All this talk of dependencies directly drives the behavior of the cluster should an outage of a resource occur. When the outage of a resource takes place, the default behavior of a cluster is to attempt to restart the resource on the current node. If that cannot be accomplished, the cluster will attempt to fail over the entire service to the alternative node. This failover behavior is configured by right-clicking the cluster service to view its properties and selecting the Policies tab. This tab is shown on the left side of Figure 10.12.
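Failovers can also be rehearsed manually. One way is with the legacy cluster.exe tool, which remains available in Windows Server 2008; the sketch below assumes the cluster.exe group syntax carried forward from Windows Server 2003, and the cluster name REALCLUST is a hypothetical example (the group name matches this chapter's REALCLUSTFS service).

    rem Sketch: rehearsing a failover with the legacy cluster.exe tool.
    rem The cluster name REALCLUST is a hypothetical example.
    rem List the configured groups and their current owner nodes:
    cluster /cluster:REALCLUST group
    rem Move the file server group to the other node:
    cluster /cluster:REALCLUST group "REALCLUSTFS" /moveto:w2008a

Deliberately moving the group before patching a node, as described earlier in this chapter, follows this same pattern.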


Figure 10.12: The Policies and Advanced Policies tabs of the file server service. It is within these two locations where failover behaviors for the resource are configured.

You can see on the left side of Figure 10.12 that the resource will first attempt to restart itself on the current node before failing over to the other node. This is handy in the case where a resource sees a temporary outage due to environmental conditions and you don't want the service to change node ownership. The right pane of Figure 10.12 shows the advanced policies associated with the resource. These advanced policies come into play much more in clusters that include more than two nodes. Here, it is possible to identify which cluster nodes are allowed to own the resource in the case of a failover. When a failover occurs on a multi-node cluster, the entries in this list are used to identify which node can, and ultimately will, become the new owner of the resource. Also available here are options for choosing the intervals used for verifying the health of individual resources.

Complicating this configuration further, right-clicking and viewing properties on the service itself brings forward another two tabs, which are shown in Figure 10.13. Although also more useful in multi-node clusters, the list of preferred owners shown on the left side of Figure 10.13 provides a place to identify which nodes are preferred to own the resource. The most preferred owner is at the top of the list.


Figure 10.13: Two tabs for setting failover behavior and node ownership relating to the service itself.

On the Failover tab, it is possible to customize what defines failure for the service itself. Here, it is possible to customize the number of times a failure can occur over what period before the cluster considers the resource completely failed. In the case where the cluster completely fails a resource, it no longer attempts to restart the resource. This protection exists to prevent a condition commonly known as bouncing, in which a cluster resource that cannot be brought online continually fails over between nodes.

Failback

Failback is another behavior that can optionally be configured when you want a failed resource to return to its preferred node when that node can resume servicing the resource. Failback is disabled by default due to the potential for the same kind of bouncing behavior discussed earlier. When a resource fails on its preferred owner, it fails over to an alternative node. If the resource successfully restarts on the alternative node, failback will either immediately or eventually return the resource to its preferred node. Although this may sound like a desirable feature, be careful with its use. In the case where the resource cannot start on its preferred node but restarts correctly on an alternative node, failback can actually cause the resource to return to the host where it regularly fails. Once failed back, the service cannot restart, which causes another failover, ultimately resulting in a bounce condition until the cluster completely fails the resource.
Avoid failback unless you absolutely need it. Obviously, this means that some manual monitoring of cluster resources and their ownership is required. But that manual monitoring is arguably a much better solution than creating the potential for a painful bounce condition.


Geoclustering
The decision to implement a geographically distributed cluster is not one taken lightly. Whereas the process to create a simple two-node cluster is relatively trivial once the networking and storage concerns are planned, implementing a geocluster, or geographically distributed cluster, requires much more work and cost. Unlike traditional clusters, geoclusters leverage separated but replicated data storage at each site where a cluster node is hosted. An example of this is shown graphically in Figure 10.14.


Figure 10.14: A geocluster across multiple sites is possible with Windows Server 2008. However, Microsoft does not provide the needed replication tools to ensure the separate-but-equal data stores remain consistent.

First and foremost, Microsoft does not provide the replication tools necessary to replicate the storage subsystem between the two separated nodes. This replication is necessary to ensure that both nodes see the same set of data. Third-party replication tools that guarantee very low latency are necessary to ensure high levels of data consistency between nodes. Microsoft has, however, augmented WSFC in Windows Server 2008 with a change that loosens the restrictions on network latency for the cluster heartbeat, as well as a conversion of heartbeat communication to routable TCP.
For more information about the third-party tools that enable replication between sites as well as detailed information on developing multi-site failover clusters, check out http://www.microsoft.com/windowsserver2008/en/us/clustering-multisite.aspx.


Clustering Brings High Availability


So clustering in Windows Server 2008 finally brings low-cost high availability to specific Windows services and applications. With the right iSCSI or fibre channel data storage in place, the only additional needs are the right network connections and a plan for failover of resources. Unlike essentially every previous version of Windows clustering, the version you'll find in Windows Server 2008 is a service you'll actually want to use for your highest-value services.

And thus ends our guide. This guide, in all its ten chapters, has attempted to assist you with the process of building your Windows Server 2008 infrastructure. Starting with the concepts necessary to build the domain, working through various services such as file serving and Terminal Services, and including a few management components such as Group Policy along the way, this guide has hopefully given you the resources you need to build your infrastructure along the lines of best practice. The rest is up to you.

Download Additional eBooks from Realtime Nexus!


Realtime Nexus, the Digital Library, provides world-class expert resources that IT professionals depend on to learn about the newest technologies. If you found this eBook to be informative, we encourage you to download more of our industry-leading technology eBooks and video guides at Realtime Nexus. Please visit http://nexus.realtimepublishers.com.
