The Definitive Guide To
Active Directory
Troubleshooting
and Auditing
Don Jones
Introduction to Realtimepublishers
by Sean Daily, Series Editor
The book you are about to enjoy represents an entirely new modality of publishing and a major
first in the industry. The founding concept behind Realtimepublishers.com is the idea of
providing readers with high-quality books about today’s most critical technology topics—at no
cost to the reader. Although this feat may sound difficult to achieve, it is made possible through
the vision and generosity of a corporate sponsor who agrees to bear the book’s production
expenses and host the book on its Web site for the benefit of its Web site visitors.
It should be pointed out that the free nature of these publications does not in any way diminish
their quality. Without reservation, I can tell you that the book that you’re now reading is the
equivalent of any similar printed book you might find at your local bookstore—with the notable
exception that it won’t cost you $30 to $80. The Realtimepublishers publishing model also
provides other significant benefits. For example, the electronic nature of this book makes
activities such as chapter updates and additions or the release of a new edition possible in a far
shorter timeframe than is the case with conventional printed books. Because we publish our titles
in “real-time”—that is, as chapters are written or revised by the author—you benefit from
receiving the information immediately rather than having to wait months or years to receive a
complete product.
Finally, I’d like to note that our books are by no means paid advertisements for the sponsor.
Realtimepublishers is an independent publishing company and maintains, by written agreement
with the sponsor, 100 percent editorial control over the content of our titles. It is my opinion that
this system of content delivery not only is of immeasurable value to readers but also will hold a
significant place in the future of publishing.
As the founder of Realtimepublishers, my raison d’être is to create “dream team” projects—that
is, to locate and work only with the industry’s leading authors and sponsors, and publish books
that help readers do their everyday jobs. To that end, I encourage and welcome your feedback on
this or any other book in the Realtimepublishers.com series. If you would like to submit a
comment, question, or suggestion, please send an email to feedback@realtimepublishers.com,
leave feedback on our Web site at http://www.realtimepublishers.com, or call us at 800-509-0532.
Thanks for reading, and enjoy!
Sean Daily
Founder & CTO
Realtimepublishers.com, Inc.
Table of Contents
Introduction to Realtimepublishers
Chapter 1: Introducing Active Directory
    The Importance of Directories and Directory Management
        Many Eggs, One Basket
        New Tools for New Times
    Meet AD
    The AD Database
    Logical Architecture of AD
        Objects and Attributes
        The Schema
        LDAP
        Domains, Trees, and Forests
        Organizational Units
        The Global Catalog
    Physical Structure of AD
        Domain Controllers
        Directory Replication
        The Operations Masters
        Sites
    AD’s Backbone: DNS
    Introduction to AD and Windows Monitoring
        AD, Win2K, and WS2K3 Monitoring Considerations
        Change Monitoring and Auditing
        Problem Resolution, Automation, and Alerting
        Other Considerations
    Summary
Chapter 2: Designing an Effective Active Directory
    AD’s Logical and Physical Structures
        Logical Structures
            Namespace
            Naming Context
        Physical Structures
    Designing AD
Copyright Statement
© 2005 Realtimepublishers.com, Inc. All rights reserved. This site contains materials that
have been created, developed, or commissioned by, and published with the permission
of, Realtimepublishers.com, Inc. (the “Materials”) and this site and any such Materials are
protected by international copyright and trademark laws.
THE MATERIALS ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND,
EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE,
TITLE AND NON-INFRINGEMENT. The Materials are subject to change without notice
and do not represent a commitment on the part of Realtimepublishers.com, Inc or its web
site sponsors. In no event shall Realtimepublishers.com, Inc. or its web site sponsors be
held liable for technical or editorial errors or omissions contained in the Materials,
including without limitation, for any direct, indirect, incidental, special, exemplary or
consequential damages whatsoever resulting from the use of any information contained
in the Materials.
The Materials (including but not limited to the text, images, audio, and/or video) may not
be copied, reproduced, republished, uploaded, posted, transmitted, or distributed in any
way, in whole or in part, except that one copy may be downloaded for your personal, non-
commercial use on a single computer. In connection with such use, you may not modify
or obscure any copyright or other proprietary notice.
The Materials may contain trademarks, service marks and logos that are the property of
third parties. You are not permitted to use these trademarks, service marks or logos
without prior written consent of such third parties.
Realtimepublishers.com and the Realtimepublishers logo are registered in the US Patent
& Trademark Office. All other product or service names are the property of their
respective owners.
If you have any questions about these terms, or if you would like information about
licensing materials from Realtimepublishers.com, please contact us via e-mail at
info@realtimepublishers.com.
Chapter 1: Introducing Active Directory
[Editor’s Note: This eBook was downloaded from Content Central. To download other eBooks
on this topic, please visit http://www.realtimepublishers.com/contentcentral/.]
In Windows 2000 (Win2K), NT’s successor OS, Microsoft set out to deliver a directory capable
of addressing each of these limitations. Win2K’s new directory service, dubbed Active Directory
(AD), provides an industrial-strength directory service that can serve the needs of both small and
very large organizations, and everyone in between. Because it stores its data outside the system
registry, AD has virtually unlimited storage capacity (AD databases can contain hundreds of
millions of entries, as compared with the tens of thousands NT is capable of storing). AD allows
administrators to define physical attributes of their network, such as individual sites and their
connecting WAN links, as well as the logical layout of network resources such as computers and
users. Using this information, AD is able to self-optimize its bandwidth usage in multi-site WAN
environments. AD also introduces an administration model that is far more granular and less
monolithic than that of NT 4.0. Finally, AD provides a central point
of access control for network users, which means that users can log on once and gain access to
all network resources.
Although other directories such as Banyan’s StreetTalk and Novell’s NDS have existed for some
time, many NT-centric organizations have opted to wait and use Microsoft’s entry in the
enterprise directory arena as the foundation for their organization-wide directory environment.
Consequently, AD represents the first foray into the larger world of directories and directory
management for many organizations and network administrators.
Windows Server 2003 (WS2K3) introduces a new version of AD that is essentially a more
mature, refined version of the AD introduced in Win2K. Several minor enhancements have been
made to improve performance, improve the experience of interacting with the directory, and to
enhance the directory’s manageability. AD as implemented in WS2K3 is completely backward-
compatible with Win2K’s version of AD; a series of “functional levels” disables functionality that
isn’t backward-compatible until the entire organization is running the latest version.
WS2K3 AD is definitely an evolutionary product, meaning it represents small but important changes
over prior versions. Win2K’s AD, however, could reasonably be called a revolutionary product, as it
represented a complete break from the prior “directory” offered in Windows.
With widespread adoption of AD finally a reality, the directory is taking on new and unforeseen
roles within most organizations. The concept of the directory as a single repository for user
accounts has been vastly expanded. Today’s directories are expected to serve as centralized
identity management applications. Everything related to the identity of security principals—user
names, digital certificates, application information, and more—is being stored in the directory. In
its relatively short life, AD has become one of the most mission-critical applications in
organizations that have deployed it, serving as the lynchpin for a variety of enterprise
applications and providing centralized identity management across the organization.
Meet AD
Of all of the elements that comprise a Win2K or WS2K3 network, the most important by far is
AD, Windows’ centralized directory service. However, before we delve into the specifics of AD,
let’s first define some of the fundamental terms and concepts related to directory-enabled
networks. A directory (which is sometimes also referred to as a data store) maintains data about
objects that exist within a network, in a hierarchical structure, making the information easier to
understand and access. These objects include traditional network resources such as user and
machine accounts, shared network resources (such as shared directories and printers), and
resources such as network applications and services, security policies, and virtually any other
type of object an administrator or application wants to store within the directory data store.
As mentioned earlier, the term directory service encompasses both the directory data
store and the services that make the information within the directory available to users and
applications. Directory services come in a variety of types and from a number of
sources. OS directories, such as Microsoft’s AD and Novell’s NDS, are general-purpose
directories included with the NOS and are designed to be accessible by a wide array of users,
applications, and devices. There are also some applications, such as ERP systems, HR systems,
and email systems (for example, Microsoft Exchange) that provide their own directories for
storing data specific to the functionality of those applications.
Microsoft Exchange Server 200x is a notable exception to this and is completely integrated with AD.
Exchange Server’s installation process extends AD’s structure to accommodate Exchange-specific
data and subsequently uses AD to store its own directory information.
AD is Microsoft’s directory service implementation in the Win2K and WS2K3 server OSs. AD
is hosted by one or more domain controllers, and is replicated in a multi-master fashion between
those domain controllers to ensure greater availability of the directory and network as a whole.
Any Windows server running AD is considered to be a domain controller; domain controllers,
then, are the only servers that implement the various services and features that comprise AD. In
addition to providing a centralized repository for network objects and a set of services for
accessing those objects, AD provides security in the form of access control lists (ACLs) on
directory objects that protect those objects from being accessed by unauthorized parties.
There are many features of other applications—such as Microsoft Exchange Server—that take
advantage of AD, although those applications themselves do not need to be running on a domain
controller.
The term multi-master indicates that multiple read/write copies of the database exist simultaneously,
one on each domain controller. Thus, each domain controller is effectively an equal peer of the other
controllers, and any controller can write directory updates and propagate those updates to other
controllers. This functionality is in notable contrast to NT 4.0’s single-master replication
topology, wherein a single domain controller, the primary domain controller (PDC), holds the only
read/write copy of the database and replicates it to read-only backup domain controllers (BDCs).
AD includes a complex and robust replication infrastructure that is designed to accommodate this
multi-master model. For example, conflicts, which occur when two controllers change the same thing
at nearly the same time, are resolved automatically.
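The automatic conflict resolution described above can be sketched in a few lines. This is an illustrative Python sketch, not AD’s actual implementation: the `Stamp` fields loosely mirror the per-attribute “stamps” (version, originating time, originating-controller identifier) that AD-style multi-master systems compare, and all names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Stamp:
    """A simplified replication stamp attached to one attribute write."""
    version: int        # incremented on each originating write
    timestamp: float    # originating time of the write
    origin_guid: str    # identifier of the controller that made the write

def winning_write(a: Stamp, b: Stamp) -> Stamp:
    """Pick the write that survives a replication conflict.

    Compare version first, then timestamp, then the originating
    controller's identifier as a final arbitrary-but-deterministic
    tie-breaker, so every controller independently picks the same winner.
    """
    return max(a, b, key=lambda s: (s.version, s.timestamp, s.origin_guid))

# Two controllers change the same attribute at nearly the same time:
dc1 = Stamp(version=3, timestamp=1000.0, origin_guid="aaaa-1111")
dc2 = Stamp(version=3, timestamp=1000.5, origin_guid="bbbb-2222")
print(winning_write(dc1, dc2))  # at equal version, the later write wins
```

The key design point is determinism: because every controller applies the same comparison, all replicas converge on the same value without any controller acting as an arbiter.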
The AD Database
At a file-system level, AD uses Microsoft’s Extensible Storage Engine (ESE) to store the
directory database. Administrators familiar with Microsoft Exchange Server may recognize this
engine as the same database technology used in that product. Like Exchange Server, AD’s
database employs transactional log files to help ensure database integrity in the case of power
outages and similar events that interfere with the successful completion of database transactions.
In addition, AD shares Exchange’s ability to perform online database maintenance and
defragmentation. At the file level, AD stores its database in a single database file named Ntds.dit,
a copy of which can be found on every domain controller.
Although the building blocks that make up AD are largely masked by the directory’s high-level
management interfaces and APIs, the physical aspects of the directory are nonetheless an
important consideration for Windows administrators. For example, it is critical that all volumes
on domain controllers hosting the AD database and its transaction logs maintain adequate levels
of free disk space at all times. For performance reasons, it is also important that the AD
databases on these machines not become too heavily fragmented.
AD is a database, which effectively turns Windows domain controllers into critical database
servers on the network. These servers should therefore be treated no differently than any other
important database server in terms of fault tolerance preparation (for example, disk redundancy,
backups, and power protection) and capacity planning.
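A free-space check of the kind recommended above is easy to script. This is a generic sketch rather than an AD-specific tool; the 10 percent threshold is an arbitrary illustration, and while the path shown is the common default location for Ntds.dit, you should verify the actual location on your own controllers.

```python
import os
import shutil

def volume_has_free_space(path: str, minimum_fraction: float = 0.10) -> bool:
    """Return True if the volume holding `path` has at least
    `minimum_fraction` of its capacity free."""
    # Walk up to the nearest existing parent so the check works even
    # when `path` names a database file rather than a directory.
    probe = path
    while probe and not os.path.exists(probe):
        parent = os.path.dirname(probe)
        if parent == probe:
            break
        probe = parent
    if not probe or not os.path.exists(probe):
        probe = "."  # fall back to the current volume
    usage = shutil.disk_usage(probe)
    return (usage.free / usage.total) >= minimum_fraction

# Common default database location on domain controllers; verify yours.
db_path = r"C:\Windows\NTDS\Ntds.dit"
if not volume_has_free_space(db_path):
    print("WARNING: volume hosting Ntds.dit is low on free space")
```

Run periodically (for example, from a scheduled task), a check like this catches the slow disk-space exhaustion that otherwise surfaces only when the directory stops accepting updates.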
Logical Architecture of AD
To gain an appreciation for and understanding of AD and AD management concepts, it’s
important to first understand AD’s logical architecture. In this section, we’ll discuss the most
important concepts associated with AD, concepts which form the foundation of all Windows
networks.
There are special types of objects in AD known as container objects that you should be familiar with.
Put simply, container objects are objects that may contain other objects. This design allows you to
organize a tree or hierarchy of objects. Examples of container objects include organizational unit (OU)
and domain objects. Container objects may hold leaf objects, other container objects, or both. For
example, an OU object can contain both regular objects such as users and computers and other OU
container objects.
Although it’s perfectly acceptable to say “create” in lieu of “instantiate” when referring to the
generation of a new object within the directory, we’ll use the latter more frequently in this book. The
reason is that “instantiate” is more appropriate when you consider the underlying event that actually
occurs—that being the “creation of an instance of” an object.
The Schema
As you might imagine, all of the object classes and attributes discussed thus far have some kind
of underlying reference that describes them—a sort of “dictionary” for AD. In Windows
parlance, this “dictionary” is referred to as the schema. The AD schema contains the definitions
of all object types that may be instantiated within the directory. The AD schema is also
extensible, meaning it can be extended to include additional classes and attributes to support
future features, other applications, and so forth.
Even the AD schema itself is stored in the directory as objects. That is, AD classes are stored as
objects of the class “classSchema” and attributes are stored as objects of class “attributeSchema.”
The schema, then, is just a number of instances of the classes “classSchema” and “attributeSchema,”
with properties that describe the relationship between all classes in the AD schema.
To understand the relationship between object classes, objects, and the schema, let’s go back to
the object-oriented model upon which the AD schema is based. As is the case with object-
oriented development environments (such as C++ and Java), a class is a kind of basic definition
of an object. When I instantiate an object of a certain class, I create an instance of that particular
object class. That object instance has a number of properties associated with the class from
which it was created. For example, suppose I create a class called “motorcycle” that has
attributes such as “color,” “year,” and “enginesize.” I can instantiate the class “motorcycle” and
create a real object called “Yamaha YZF600R6” with properties such as “red” (for the color
attribute), 2000 (for the year attribute), and 600 (for the motorcycle engine’s size in CCs).
Figure 1.1: Viewing the AD schema by using the Active Directory Schema MMC snap-in.
Editing the schema is a potentially dangerous activity—you need to know exactly what you’re doing
and why you’re doing it. Before you make schema changes, be sure to back up the current AD
database contents and schema (for example, by using ntbackup.exe or a third-party utility’s System
State backup option on an up-to-date domain controller).
LDAP
One of the early design decisions that Microsoft made regarding AD was the use of an efficient
directory access protocol known as the Lightweight Directory Access Protocol (LDAP). LDAP also
benefits from its compatibility with other existing directory services. This compatibility, in turn,
provides for the interoperability of AD with these other directory services.
LDAP specifies that every AD object be represented by a unique name. These names are formed
by combining information about domain components, OUs, and the name of the target object,
known as a common name. Table 1.1 describes each of these LDAP name components.
Table 1.1: LDAP distinguished name attribute types.

Domain-Component (DC): An individual element of the DNS domain name of the object’s domain (for example, com, org, edu, realtimepublishers, Microsoft).

Organizational-Unit-Name (OU): An OU container object within an AD domain.

Common-Name (CN): Any object other than domain components and OUs (such as printers, computers, and users).

Organization-Name (O): The name of a single organization, such as a company. Although part of the X.500 and LDAP standards, Organization is generally not used in directories such as AD that use domain components to organize the tree structure.

Locality-Name (L): The name of a physical locale, such as a region or a city. Although part of the X.500 and LDAP standards, Locality is generally not used in directories such as AD that use domain components to organize the tree structure.

Country-Name (C): The name of a country. Although part of the X.500 and LDAP standards, Country is generally not used in directories such as AD that use domain components to organize the tree structure.
For example, the LDAP name for the user object for a person named Don Jones in the
realtimepublishers.com domain’s Marketing OU would be as follows:
CN=Don Jones,OU=Marketing,DC=realtimepublishers,DC=com
This form of an object’s name as it appears in the directory is referred to as the object’s
distinguished name (DN). Alternatively, an object can also be referred to using its relative
distinguished name (RDN). The RDN is the portion of the DN that identifies the target object within its
container. In the previous example, the RDN of the user object would simply be CN=Don Jones.
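The relationship between a DN and its components can be shown with a few lines of string handling. This naive parser is for illustration only: it splits on every comma and equals sign, whereas real DN parsing must honor LDAP’s escaping rules (RFC 4514) for values that contain those characters.

```python
def parse_dn(dn: str) -> list[tuple[str, str]]:
    """Split a DN into (attribute-type, value) pairs.

    Naive: assumes no escaped commas or equals signs in values.
    """
    pairs = []
    for rdn in dn.split(","):
        attr, _, value = rdn.strip().partition("=")
        pairs.append((attr, value))
    return pairs

dn = "CN=Don Jones,OU=Marketing,DC=realtimepublishers,DC=com"
components = parse_dn(dn)

# The RDN is the leftmost component: the object's name within its container.
print(components[0])                  # ('CN', 'Don Jones')

# The remaining components name the container (OU) and the domain (DCs).
print([attr for attr, _ in components])   # ['CN', 'OU', 'DC', 'DC']
```

Reading the components left to right walks from the object outward to the domain root, which is exactly how the directory’s hierarchy is encoded in the name.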
Domains also have several other important characteristics. First, they act as a boundary for
network security: each domain has its own separate and unique security policy that defines items
such as password expiration and similar security options. Domains also act as an administrative
boundary, because administrative privileges granted to security principals within a domain do
not automatically transfer to other domains within AD. Finally, domains act as a unit of
replication within AD—as all servers acting as domain controllers in an AD domain replicate
directory changes to one another, they contain a complete set of the directory information related
to their domain.
AD domain names don’t need to be Internet-registered domain names ending in Internet-legal top-
level domains (such as .com, .org, and .net). For example, it is possible to name domains with
endings such as .pri, .msft, or some other ending of your choosing. This of course assumes that the
domain’s DNS servers aren’t participating in the Internet DNS namespace hierarchy (which is by far
the most common scenario, due to security considerations with exposing internal DNS servers to the
Internet). If you do elect to use standard Internet top-level domains in your AD domain names, you
should register these names on the Internet even if they don’t participate in the Internet DNS
namespace. The reason is that most organizations are connected to the Internet, and using
unregistered internal domain names that may potentially be registered on the Internet could cause
name conflicts.
AD’s design also integrates the concepts of forests and trees. A tree is a hierarchical arrangement
of AD domains within AD that forms a contiguous namespace. For example, assume a domain
named xcedia.com exists in your AD structure. The two subdivisions of xcedia.com are europe
and us, which are each represented by separate domains. Within AD, the names of these domains
would be us.xcedia.com and europe.xcedia.com. These domains would form a domain tree
because they share a contiguous namespace. This arrangement demonstrates the hierarchical
structure of AD and its namespace—all of these domains are part of one contiguous related
namespace in the directory; that is to say, they form a single domain tree. The name of the tree is
the root level of the tree, in this case, xcedia.com. Figure 1.2 shows the single-domain tree
described in this example.
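Contiguity of a tree’s namespace is simply a DNS-suffix relationship, which makes it easy to check programmatically. The sketch below uses the domain names from the example above; the function itself is illustrative, not an AD API.

```python
def in_tree(domain: str, tree_root: str) -> bool:
    """True if `domain` falls within the contiguous namespace
    rooted at `tree_root` (the tree root itself counts)."""
    domain, tree_root = domain.lower(), tree_root.lower()
    return domain == tree_root or domain.endswith("." + tree_root)

# The example tree rooted at xcedia.com:
for d in ("xcedia.com", "us.xcedia.com", "europe.xcedia.com",
          "realtimepublishers.com"):
    print(d, in_tree(d, "xcedia.com"))
```

Note that a domain such as realtimepublishers.com fails the check: its name does not descend from xcedia.com, so it cannot belong to that tree, even though it may share a forest with it.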
A forest is a collection of one or more trees. A forest can be as simple as a single AD domain, or
more complex, such as a collection of multi-tiered domain trees.
Let’s take this single-tree example scenario a step further. Assume that within this AD
environment, the parent organization, Xcedia, also has a subsidiary company with a domain
name of Realtimepublishers.com. Although the parent company wants to have both
organizations defined within the same AD forest, it wants their domain and DNS names to be
unique. To facilitate this configuration, you would define the domains used by the two
organizations within separate trees in the same AD forest. Figure 1.3 illustrates this scenario. All
domains within a forest (even those in different trees) share a schema, configuration, and global catalog (GC)
(we’ll discuss the GC in a later section). In addition, all domains within a forest automatically
trust one another due to the transitive, hierarchical Kerberos trusts that are automatically
established between all domains in an AD forest.
The Kerberos version 5 authentication protocol is a distributed security protocol based on Internet
standards and is the default security mechanism used for domain authentication within or across AD
domains. Kerberos replaces NT LAN Manager (NTLM) authentication used in NT Server 4.0 as the
primary security protocol for access to resources within or across AD domains. AD domain controllers
still support NTLM to provide backward compatibility with NT 4.0 machines.
In the case of a forest with multiple trees, the name of the forest is the name of the first domain
created within the forest (the root domain of the first tree created in the forest).
Win2K provides the ability to create one-way, nontransitive trusts. For example, the xcedia.com
domain might be configured to trust the realtimepublishers.com domain. As the trusted domain,
realtimepublishers.com users could be given permissions to resources in the trusting xcedia.com
domain. However, the reverse would not be true because the trust is one-way. What’s more,
xcedia.com’s child domains wouldn’t participate in the trust because this manual, inter-domain
trust is nontransitive.
WS2K3 domains running at the highest forest functional level can also establish one-way,
nontransitive trusts with other entire forests. Microsoft provides this capability to address a
problem Win2K had with the Enterprise Admins group. Because this group has overriding control
over every domain in a forest, many organizations were forced to create multiple forests to
maintain their desired security boundaries. However, without trusts between forests, providing
users in other forests with access to resources was difficult, often requiring users to maintain
accounts in each of an organization’s forests, partially defeating one of the directory’s
primary purposes: one user account per person. WS2K3 forest trusts allow forests to serve as
an ultimate security boundary while still providing cross-forest access when
needed.
There are several resources you might find helpful when planning your organization’s AD structure
and namespace, such as the Microsoft white papers that contain valuable information about AD
design and architectural concepts, including “Active Directory Architecture” and “Domain Upgrades
and Active Directory.” These and other technical documents related to AD can be found on
Microsoft’s Web site at http://www.microsoft.com/windows2000/server.
Organizational Units
An OU is a special container object that is used to organize other objects—such as computers,
users, and printers—within a domain. OUs can contain all these object types, and even other
OUs (this type of configuration is referred to as nested OUs). OUs are a particularly important
element of AD for several reasons. First, they provide the ability to define a logical hierarchy
within the directory without creating additional domains. OUs allow domain administrators to
subdivide their domains into discrete sections and delegate administrative duties to others. More
importantly, this delegation can be accomplished without necessarily giving the delegated
individuals administrative rights to the rest of the domain. As such, OUs facilitate the
organization of resources within a domain. Figure 1.4 shows an example of OUs within a
domain.
There are several models used for the design of OU hierarchies within domains, but the two most
common are those dividing the domain organizationally (for example, by business unit) or
geographically.
Broadly speaking, your OU structures should reflect the way you plan to delegate control over
your domain’s objects. If every object will be administered by one small group of administrators,
one OU might be all you need. If each office in your organization will be managed at least
somewhat independently (perhaps giving a local office administrator the ability to reset
passwords, for example), having one OU per office will facilitate your administrative model.
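The “one OU per office” model above translates mechanically into a set of OU distinguished names. A hypothetical sketch follows; the office names and the domain are invented for illustration, and real deployments must escape any special characters in DN values per LDAP rules.

```python
def ou_dn(office: str, domain: str) -> str:
    """Build the DN for a per-office OU under the given DNS domain."""
    dc_part = ",".join(f"DC={label}" for label in domain.split("."))
    return f"OU={office},{dc_part}"

offices = ["New York", "London", "Tokyo"]   # hypothetical office list
for office in offices:
    print(ou_dn(office, "xcedia.com"))
# OU=New York,DC=xcedia,DC=com
# OU=London,DC=xcedia,DC=com
# OU=Tokyo,DC=xcedia,DC=com
```

Generating the hierarchy from a list like this keeps the OU structure consistent as offices are added, which matters because each OU typically anchors its own delegation of control.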
The Global Catalog
Only Windows servers acting as domain controllers can be configured as GC servers. By default,
the first domain controller in a Windows forest is automatically configured to be a GC server
(this designation can be moved later to a different domain controller if desired; however, every
forest must contain at least one GC). Like AD, the GC uses replication in order to ensure updates
between the various GC servers within a domain or forest. In addition to being a repository of
commonly queried AD object attributes, the GC plays two primary roles on a Windows network:
• Network logon authentication—In native-mode domains (networks in which all domain
controllers have been upgraded to Win2K or later, and the domain’s functional level has
been manually set to the appropriate level), the GC facilitates network logons for AD-
enabled clients. It does so by providing universal group membership information to the
account sending the logon request to a domain controller. This applies not only to regular
users but also to every type of object that must authenticate to AD (including computers).
In multi-domain networks, at least one domain controller acting as a GC must be
available in order for users to log on. Another situation that requires a GC server occurs
when a user attempts to log on with a user principal name (UPN) other than the default. If
a GC server is not available in these circumstances, users will only be able to log on to the
local computer (the one exception is members of the Domain Admins group, who do not require
a GC server in order to log on to the network).
• Directory searches and queries—With AD, read requests, such as directory searches and
queries, far outweigh write-oriented requests, such as directory updates (for example, by
an administrator or during replication). The majority of AD-related network traffic
consists of requests from users, administrators, and applications about objects in the
directory. As a result, the GC is essential to the network infrastructure because it
allows clients to quickly perform searches across all domains within a forest.
Although mixed-mode Win2K domains do not require the GC for the network logon authentication
process, GCs are still important in facilitating directory queries and searches on these networks and
should therefore be made available at each site within the network.
Physical Structure of AD
Thus far, our discussion of AD has focused on the logical components of the directory’s
architecture; that is, the components used to structure and organize network resources within the
directory. However, an AD-based network also incorporates a physical structure, which is used
to configure and manage network traffic.
Domain Controllers
The concept of a domain controller has been around since the introduction of NT. As is the case
with NT, a Win2K or WS2K3 domain controller is a server that houses a replica of the directory
(in the case of Win2K or WS2K3, the directory being AD rather than the NT SAM database).
Domain controllers are also responsible for replicating changes to the directory to other domain
controllers in the same domain. Additionally, domain controllers are responsible for user logons
and other directory authentication as well as directory searches.
Fortunately, Win2K and WS2K3 do away with NT’s restriction that converting a domain controller to a
member server or vice-versa requires reinstallation of the server OS. Servers may be promoted or
demoted to domain controller status dynamically (and without reinstallation of Windows itself) by
using the Dcpromo.exe domain controller promotion wizard.
At least one domain controller must be present in a domain, and for fault tolerance reasons it’s a
good idea to have more than one domain controller at any larger site (for example, a main office
or large branch office).
Directory Replication
As we’ve discussed, domain controllers are responsible for propagating directory updates they
receive (for example, a new user object or password change) to other domain controllers. This
process is known as directory replication, and can be responsible for a significant amount of
WAN traffic on many networks.
AD is replicated in a multi-master fashion between all domain controllers within a domain to
ensure greater availability of the directory and network as a whole. The term multi-master
indicates that multiple read/write copies of the database exist simultaneously, one on each
domain controller. Thus, each domain controller is effectively a peer of the other controllers,
and any domain controller can write directory updates and propagate those updates to other
domain controllers. This is in notable contrast to NT 4.0’s single-master PDC/BDC replication
topology wherein a single domain controller, the PDC, houses the only read/write copy of the
database; other domain controllers—BDCs—contain a read-only copy replicated from the PDC.
AD’s replication design means that different domain controllers within the domain may hold different
data at any given time—but usually only for short periods. As a result, an individual domain
controller may be temporarily out of date and unable to correctly service a logon request. AD’s
replication process eventually brings all domain controllers up to date with one another; this
characteristic is called convergence.
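To make convergence concrete, the following Python sketch models a set of domain controllers exchanging updates until their replicas match. It is a deliberately simplified, hypothetical model—a per-attribute version counter stands in for AD's update sequence numbers and conflict-resolution machinery—not real replication code:

```python
# Toy model of multi-master replication converging (illustrative only; real AD
# uses update sequence numbers, high-watermark vectors, and conflict resolution).

class ToyDC:
    def __init__(self, name):
        self.name = name
        self.replica = {}          # attribute -> (version, value)

    def write(self, attr, value):
        # Any DC can originate a change (multi-master).
        ver, _ = self.replica.get(attr, (0, None))
        self.replica[attr] = (ver + 1, value)

    def pull_from(self, other):
        # Accept any update with a higher version (last-writer-wins sketch).
        for attr, (ver, val) in other.replica.items():
            if ver > self.replica.get(attr, (0, None))[0]:
                self.replica[attr] = (ver, val)

dc1, dc2, dc3 = ToyDC("DC1"), ToyDC("DC2"), ToyDC("DC3")
dc1.write("jsmith.password", "hash-v2")   # change originates on DC1

# Until replication runs, DC2 and DC3 are temporarily out of date...
dc2.pull_from(dc1)
dc3.pull_from(dc2)
# ...and once it has run, every replica agrees: convergence.
assert dc1.replica == dc2.replica == dc3.replica
```

The same pairwise pulls, repeated over the replication topology, are what eventually bring every domain controller to the same state.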
This is not to say that schema changes require physical access to the domain controller holding the
schema master role; AD’s administrative tools are smart enough to seek out the schema master and
connect to it remotely when necessary.
• Domain naming master—The domain controller elected to the domain naming master
role is responsible for making changes to the forest-wide domain namespace of AD. This
domain controller is the only one that can add or remove a domain from the directory or
add or remove references to domains in external directories.
The three domain-specific operations master roles are as follows:
• PDC emulator—If an AD domain contains non-AD-enabled clients or is a mixed-mode
domain containing NT BDCs, the PDC emulator acts as an NT PDC for these systems. In
addition to replicating the NT-compatible portion of directory updates to all BDCs, the
PDC emulator is responsible for time synchronization on the network (which is important
for Windows’ Kerberos security mechanism as well as some aspects of AD replication),
and processing account lockouts and client password changes.
Win2K and later clients synchronize their system clocks with the domain controller that authenticates
them to the domain. Domain controllers synchronize their time with the domain’s PDC emulator. The
PDC emulators in child domains synchronize their time with the PDC emulators of their parent; the
forest root domain’s PDC emulator should be configured to synchronize with some authoritative
external time source, such as the US Naval Observatory’s atomic clock.
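The time-synchronization chain described in this note can be summarized in a short sketch; the function and role names here are illustrative, not a Windows API:

```python
# Sketch of the Windows time-synchronization hierarchy described above.
# Role names and the function itself are illustrative, not a real Windows API.

def time_source(role, domain, parent_domain=None, is_forest_root=False):
    """Return where a machine in the given role gets its time."""
    if role in ("client", "member server"):
        # Clients sync with whichever DC authenticates them.
        return f"authenticating domain controller in {domain}"
    if role == "domain controller":
        # DCs sync with their own domain's PDC emulator.
        return f"PDC emulator of {domain}"
    if role == "pdc emulator":
        if is_forest_root:
            # Top of the chain: an authoritative external source.
            return "external authoritative source (e.g., an NTP server)"
        # Child-domain PDC emulators sync with the parent domain's.
        return f"PDC emulator of parent domain {parent_domain}"
    raise ValueError(f"unknown role: {role}")

print(time_source("pdc emulator", "sales.company.com",
                  parent_domain="company.com"))
```

Walking any machine up this chain always terminates at the forest root's external source, which is why that one configuration step matters.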
• RID master—The RID (relative ID) master allocates sequences of RIDs to each domain
controller in its domain. Whenever a domain controller creates an object such as a user,
group, or computer, that object must be assigned a unique security identifier (SID). A
SID consists of a domain security ID (this ID is identical for all SIDs within a domain)
and a RID. When a domain controller has exhausted its internal pool of RIDs, it requests
another pool from the RID master domain controller.
• Infrastructure master—When an object in one domain is referenced by an object in
another domain, AD represents the reference by the globally unique identifier (GUID), the
SID (for objects that reference security principals), and the distinguished name (DN) of
the object being referenced. The infrastructure master is the domain controller responsible
for updating an object’s SID and DN in a cross-domain object reference. The infrastructure
master is also responsible for updating all inter-domain references any time a referenced
object moves or is renamed (for example, whenever the members of a group are renamed or
moved, the infrastructure master updates the group-to-user references). The infrastructure
master distributes updates using multi-master replication.
Except where there is only one domain controller in a domain, never assign the infrastructure master
role to a domain controller that is also acting as a GC server. If you do, the infrastructure master will
not function properly; specifically, cross-domain object references in the domain will not be updated.
In a situation in which all domain controllers in a domain are also acting as GC servers, the
infrastructure master role is unnecessary because all domain controllers will have current data.
Because the operations masters play such critically important roles on a Windows network, it’s
essential for proper network operation that all the servers hosting these roles are continually
available.
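The RID master's pool-allocation scheme lends itself to a small illustration. The sketch below is hypothetical (real AD hands out RID pools of a default size through Windows internals and represents SIDs in a binary format), but it shows why every SID in the domain stays unique even though any domain controller can create objects:

```python
# Illustrative sketch of RID-pool allocation. Pool size, starting RID, and the
# string SID format are simplifications, not AD's actual internals.

class RidMaster:
    def __init__(self, pool_size=500):
        self.next_rid = 1000        # next unallocated RID
        self.pool_size = pool_size

    def allocate_pool(self):
        # Hand out a contiguous, never-reused block of RIDs.
        start = self.next_rid
        self.next_rid += self.pool_size
        return list(range(start, start + self.pool_size))

class ToyDomainController:
    def __init__(self, domain_sid, rid_master):
        self.domain_sid = domain_sid   # identical for all SIDs in the domain
        self.rid_master = rid_master
        self.pool = []

    def new_sid(self):
        if not self.pool:                          # pool exhausted:
            self.pool = self.rid_master.allocate_pool()  # ask the RID master
        rid = self.pool.pop(0)
        return f"{self.domain_sid}-{rid}"          # SID = domain SID + RID

master = RidMaster()
dc = ToyDomainController("S-1-5-21-111-222-333", master)
print(dc.new_sid())   # e.g. S-1-5-21-111-222-333-1000
```

Because each domain controller draws from its own non-overlapping pool, two domain controllers can create objects concurrently without ever issuing a duplicate SID.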
Sites
The final, and perhaps most important, component of AD’s physical structure is the site. Sites
allow administrators to define the physical topology of a Windows network, something that
wasn’t possible under NT. Sites can be thought of as areas of fast connectivity (for example,
individual office LANs), but are defined within AD as a collection of one or more IP subnets.
When you look at the structure of IP, this begins to make sense—different physical locations on
a network are typically going to be connected by a router, which in turn, necessitates the use of
different IP subnets on each network. It’s also possible to group multiple, non-contiguous IP
subnets together to form a single site.
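Because a site is simply a collection of IP subnets, mapping a client to its site is a subnet-membership test. The following standard-library Python sketch, with made-up site names and subnets, shows the idea, including a single site built from non-contiguous subnets:

```python
# Matching a client IP address to a site, as AD's site definitions do.
# Uses only Python's standard library; site names and subnets are made up.

import ipaddress

sites = {
    "Las-Vegas": ["10.1.0.0/16", "10.5.8.0/24"],  # non-contiguous subnets, one site
    "New-York":  ["10.2.0.0/16"],
}

def site_for(ip):
    """Return the site whose subnets contain this IP, or None."""
    addr = ipaddress.ip_address(ip)
    for site, subnets in sites.items():
        if any(addr in ipaddress.ip_network(s) for s in subnets):
            return site
    return None   # no site definition covers this address

print(site_for("10.5.8.20"))   # Las-Vegas
```

This is essentially the lookup that lets a logging-on client find a domain controller in its own site instead of crossing the WAN.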
So why are sites important? The primary reason is that the definition of sites makes it possible
for AD to gain some understanding of the underlying physical network topology, and tune
replication frequency and bandwidth usage accordingly (under NT, this could only be done via
manual adjustments to the replication service). This “intelligence” conferred by the knowledge
of the network layout has numerous other benefits. For example, it allows AD-enabled client
computers to automatically locate the closest domain controller when users log on, and to use
that controller to authenticate rather than crossing the WAN to do so.
In a similar fashion, sites give other components within a Windows network new intelligence.
For example, a client computer connecting to a server running the Distributed File System (Dfs)
feature in Windows can use sites to locate the closest Dfs replica server.
It’s important to remember that sites are part of the physical structure of AD and are in no way
related to the logical constructs we’ve already discussed, such as domains and OUs. It’s possible
for a single domain to span multiple sites, or conversely, for a single site to encompass multiple
domains. The proper definition of sites is an essential aspect of AD network design planning.
For sites that house multiple domains (for example, an organization that divides business units into
domains rather than OUs, thus hosting multiple business unit domains on a single site), it’s important
to remember to place at least one, and possibly two, domain controllers for each domain that users
will authenticate to from that site (because users can only authenticate to a domain controller from
their domain). This outlines the biggest disadvantage of the business unit domain model: the potential
for requiring many domain controllers at each and every site.
This namespace duplication may be limited to the internal DNS namespace for companies using the
Microsoft-recommended configuration of separate DNS configurations for the internal LAN and the
Internet. It is possible, however, to use a “split-brain” DNS design in which a publicly resolvable DNS
name—such as realtimepublishers.com—is used for both the external and internal namespaces,
without exposing internal DNS servers to the public Internet.
Finally, AD uses DNS as the default locator service; that is, the service used to convert items
such as AD domain, site, and service names to IP addresses. It’s important to remember that
although the DNS and AD namespaces in a Windows network are identical in regards to domain
names, the namespaces are otherwise unique and used for different purposes. DNS databases
contain domains and the record contents (host address/A records, server resource/SRV records,
mail exchanger/MX records, and so on) of the DNS zone files for those domains, whereas AD
contains a wide variety of different objects including domains, OUs, users, computers, and Group
Policy objects (GPOs).
Another notable connection between DNS and AD is that Windows DNS servers can be
configured to store their DNS domain zone files directly within AD rather than in external text
files. Although DNS doesn’t rely on AD for its functionality, the converse is not true: AD relies
on the presence of DNS for its operation.
Windows includes an implementation of Dynamic DNS (DDNS, defined by Request for
Comments—RFC—2136) that allows domain controllers to dynamically register the special
DNS resource records, called SRV records, that AD-enabled clients use to locate important
network resources such as domain controllers. The accuracy of these SRV records is therefore
critical to the proper functioning of a Windows network (not to mention the availability of the
systems and services they reference).
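As a rough illustration of what these records involve, the sketch below builds the kind of `_msdcs` SRV query names clients use to find domain controllers and parses an SRV record's data fields (priority, weight, port, target). Treat the exact name patterns as illustrative rather than an exhaustive reference:

```python
# Building the SRV query names an AD client uses to find domain controllers,
# and parsing an SRV record's data. Name patterns follow the _msdcs
# convention but are shown for illustration, not as a complete reference.

def dc_srv_name(domain, site=None):
    """SRV name for locating an LDAP-capable DC, optionally site-specific."""
    if site:
        return f"_ldap._tcp.{site}._sites.dc._msdcs.{domain}"
    return f"_ldap._tcp.dc._msdcs.{domain}"

def parse_srv(record):
    # An SRV record's data is: priority weight port target
    priority, weight, port, target = record.split()
    return {"priority": int(priority), "weight": int(weight),
            "port": int(port), "target": target}

print(dc_srv_name("company.com", site="Las-Vegas"))
# _ldap._tcp.Las-Vegas._sites.dc._msdcs.company.com
print(parse_srv("0 100 389 dc1.company.com."))
```

A client resolves the site-specific name first so that, when the records are accurate, it authenticates against a nearby domain controller; a missing or stale record sends it across the WAN or nowhere at all.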
Although it is true that newer versions of Windows provide a far greater level of reliability and
performance than their predecessors, they also involve a higher number of “moving parts” and
dependencies that need to be accounted for. For example, newer versions of Windows have an
integrated Web browser, integrated media player, integrated Web server, additional networking
services and tools, and so forth. Although legacy NT networks have their own set of
dependencies and vulnerabilities, they are far fewer in number due to NT’s simpler (and less
capable) network architecture. Let’s quickly review the primary monitoring considerations in an
NT environment:
• PDC availability and performance—Due to the single-master nature of NT domains,
there is a high dependence (and thus, a high availability requirement) on the PDC of each
NT domain. Although BDCs exist to create fault-tolerance and load-balancing for client
logon authentication, an NT domain without a PDC essentially grinds to a halt until the
PDC is brought back online or replaced via the manual promotion of a BDC to PDC
status by a network administrator. In addition, network logon traffic loads on domain
controllers should be monitored to assess domain controller performance and the ability
to respond to client network logon authentication requests within an acceptable period of
time.
• Domain trust relationships—On multi-domain NT networks, there typically exists a
complex array of trust relationships between domains in order to accommodate network
access requirements for the business. NT trust relationships (formed between domain
controllers) are notoriously fragile and prone to failure, and thus require continual
monitoring and testing in order to assure the availability of network resources to users.
• Name servers—Another aspect of NT networks requiring continual monitoring is the
availability of network name servers. For the majority of NT-based networks (including
those with Windows 95/2K/XP clients), NetBIOS is the predominant namespace and
Windows Internet Name Service (WINS) the predominant name-to-IP address resolution
service. WINS databases and replication are also notoriously fragile elements of NT
networks, and must be regularly monitored to ensure their functionality. Even for
networks using DNS as the primary name resolution service, the availability of the DNS
name servers is just as important as that of the WINS servers.
• Network browser service—NT, Windows 9x, and other members of the Windows
product family rely on a network browsing service to build lists of available network
resources (servers, shared directories, and shared printers). The architecture of this
service, which calls for each eligible network node to participate in frequent elections to
determine a browse master and backup servers for each network segment, is another
infamously unreliable aspect of Microsoft networks and requires frequent attention and
maintenance.
• Other critical services and applications—In addition to name resolution services such as
WINS and DNS, NT environments may house other mission-critical services required for
proper operation of the network or the business in question. For example, critical
applications such as backup, antivirus, mail, FTP, Web, and database servers should be
polled using intelligent service-level queries to verify that they are functioning properly
and at acceptable levels of performance.
• Basic network and system metrics—All networks, NT or otherwise, should be monitored
to protect against problems stemming from resource allocation problems on individual
servers or the network itself. For example, any good network monitoring regimen will
include the monitoring of CPU, memory, disk space resource usage, and network
connectivity and bandwidth usage on all critical servers.
The KCC is a special Windows service that automatically generates AD’s replication topology and
ensures that all domain controllers on the network participate in replication.
However, knowing what metrics to monitor is only a first step. By far, the most important and
complex aspect of monitoring network health and performance isn’t related to determining what
to monitor but rather how to digest the raw data collected from the array of metrics and make
useful determinations from that data. For example, although it would be possible to collect data
on several dozen metrics (via Performance Monitor) related to AD replication, simply having
this information at hand doesn’t tell you how to interpret the data or what you should consider
acceptable tolerance ranges for each metric. A useful monitoring system not only collects raw
data but also understands the inter-relation of that data and how to use the information to identify
problems on the network. This kind of artificial intelligence represents the true value of network
monitoring software.
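A minimal sketch of that idea: assign each metric a tolerance range, then interpret related breaches together rather than alerting on each in isolation. All metric names and thresholds below are invented for illustration:

```python
# Turning raw metrics into a diagnosis: per-metric tolerance ranges plus a
# correlation step. Every metric name and threshold here is invented.

thresholds = {
    "cpu_percent":       (0, 85),                  # acceptable range (lo, hi)
    "free_disk_mb":      (500, float("inf")),
    "replication_lag_s": (0, 900),
}

def breaches(sample):
    """Return the metrics whose values fall outside their tolerance range."""
    out = []
    for metric, value in sample.items():
        lo, hi = thresholds[metric]
        if not (lo <= value <= hi):
            out.append(metric)
    return out

def diagnose(sample):
    bad = set(breaches(sample))
    # Interpretation step: correlate metrics instead of alerting on each alone.
    if {"free_disk_mb", "replication_lag_s"} <= bad:
        return "replication stalled, possibly out of disk for AD logs"
    if bad:
        return "threshold breach: " + ", ".join(sorted(bad))
    return "healthy"

print(diagnose({"cpu_percent": 40, "free_disk_mb": 120,
                "replication_lag_s": 2400}))
```

The `diagnose` step is the part the chapter argues actually matters: the same two raw breaches, read together, yield one actionable finding instead of two disconnected alarms.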
In order to ensure the health and availability of AD as well as other critical Windows network
services, organizations will need to regularly monitor a number of different services and
components, which are listed in Table 1.2.
Domain controllers/AD:
• Low CPU or memory resources on domain controllers
• Low disk space on volumes housing the Sysvol folder, the AD database (NTDS.DIT) file, and/or the AD transactional log files
• Slow or broken connections between domain controllers
• Slow or failed client network logon authentication requests
• Slow or failed LDAP query responses
• Slow or failed Key Distribution Center (KDC) requests
• Slow or failed AD synchronization requests
• NetLogon (LSASS) service not functioning properly
• Directory Service Agent (DSA) service not functioning properly
• KCC not functioning properly
• Excessive number of SMB connections
• Insufficient RID allocation pool size on local server
• Problems with transitive or external trusts to Win2K or down-level NT domains
• Low AD cache hit rate for name resolution queries (as a result of inefficient AD design)
Replication:
• Failed replication (due to domain controller or network connectivity problems)
• Slow replication
• Replication topology invalid/incomplete (lacks transitive closure/consistency)
• Replication using excessive network bandwidth
• Too many properties being dropped during replication
• Update Sequence Number (USN) update failures
• Other miscellaneous replication-related failure events
GC:
• Slow or failed GC query responses
• GC replication failures
DNS:
• Missing or incorrect SRV records for domain controllers
• Slow or failed DNS query responses
• DNS server zone file update failures
Operation masters (FSMOs):
• Inaccessibility of one or more operation master (FSMO) servers
• Forest or domain-centric operation master roles not consistent across domain controllers within domain/forest
• Slow or failed role master responses
Miscellaneous problems:
• Low-level network connectivity problems
• TCP/IP routing problems
• DHCP IP address allocation pool shortages
• WINS server query or replication failures (for legacy NetBIOS systems and applications)
• Lost and Found items exist in a naming context
• Application or service failures or performance problems
Table 1.2: Critical Windows network services and components to monitor, with potential problems for each category.
Other Considerations
There are several considerations you should keep in mind when creating a Windows network
monitoring and troubleshooting solution. One is the overall architecture of the application(s)
being used in the solution. It’s important to understand how the product collects its data and what
impact this collection will have on your network and servers. For example:
• Does the product employ local agents to gather metrics or does it use remote queries?
• Do throttling features exist to control network bandwidth and system resource usage?
• Is there a machine/site/domain hierarchy that allows data to be passed to the central
collection database in an efficient manner?
• Does the product provide Web-based management?
All of these questions are important because the answers can have a significant impact on your
network environment and your overall satisfaction with the product.
Another differentiating feature of network monitoring software packages is whether they
provide a support knowledge base of common problems and solutions. This kind of knowledge is
invaluable from both a technical and financial standpoint because it serves to reduce the learning
curve of the supporting IT staff as well as the amount of time and money administrators must
expend researching and resolving problems. Some utilities augment this feature by allowing
administrators to add their own experiences to the knowledge base or a problem tracking and
resolution database, thereby leveraging internal IT staff expertise and creating a comprehensive
problem resolution system.
Organizations facing regulatory compliance issues may also seek software that provides
functionality tailored to their specific regulatory requirements. For example, tools providing real-time change
auditing and efficient, securable logs and databases may be more useful than tools that provide
batch notification of changes, simple text files instead of a robust database, or easily manipulated
logs that are subject to untraceable tampering. A final feature provided by some applications, and
one that may be of interest to IT shops engaged in SLAs, is the ability to generate alerts and
reports that address exceptions to, or compliance with, SLA obligations.
Summary
Although AD represents a quantum leap forward in the NT product line, it also introduces new
levels of network infrastructure complexity that must be properly managed in order to maintain
an efficient and highly available network. Real-time, proactive monitoring and management of
AD and other critical services is an essential part of managing Windows-based networks. In this
chapter, we discussed the most important features and components of Windows and AD, their
roles within the enterprise, differences between managing NT 4.0-based networks and Win2K or
WS2K3 AD-based networks, and some of the basic metrics and statistics that modern Windows
network administrators need to watch to help them ensure high availability on their networks. In
the remaining chapters of this guide, we’ll drill down and explore each of the vital areas of AD
and Windows networks in detail, providing the information, tools, and techniques you’ll need to
employ to maintain a healthy and highly available Windows network.
Chapter 2
Logical Structures
Table 2.1 provides a list of logical structures used in AD.
Namespace: AD is a namespace because it resolves an object’s name to the object itself
Naming context: Represents a contiguous subtree of AD
Organizational unit (OU): A container object that allows you to organize your objects and resources
Domain: A partition in AD that provides a place to group together users, groups, computers, printers, servers, and other resources
Tree: A grouping of domains that have a parent-child relationship with one another
Forest: A collection of one or more trees
Trust relationship: A logical connection between two domains that forms one administrative unit
Global catalog (GC): A central source for AD queries for users and other objects
Table 2.1: The logical structures of AD, which are used to design and build the object hierarchy.
Two important logical structures that you need to understand to design an AD implementation
are the namespace and the naming context. Although these two concepts seem similar, they’re
actually different. To help you understand how they differ, I’ll give you a quick overview of
each. These structures are also discussed throughout the chapter.
Namespace
Another term for a directory is namespace. A namespace refers to a logical space in which you
can uniquely resolve a given name to a specific object in the directory. AD is a namespace
because it resolves a name to the object itself and to the set of domain servers that store the
object. Domain Name System (DNS) is a namespace because it translates easy-to-remember
names (such as www.company.com) into IP addresses (for example, 124.177.212.34).
AD depends on DNS and the DNS-type namespace that names and represents the domains in the
forest. It’s important to design your domain tree in a DNS-friendly way and to provide clients with
reliable DNS services. Although AD uses DNS to create its structure, DNS and AD are totally
separate namespaces.
One way to think about a namespace in non-technical terms is to compare it with a phone book.
The phone book for Las Vegas, Nevada is only capable of resolving phone numbers for names in
the Las Vegas area. In other words, you can’t use it to look up a phone number for New York
City. Therefore, the namespace of the directory is said to be Las Vegas. If you wanted to look up
phone numbers in New York, you would need to obtain a directory for that namespace.
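The phone-book analogy translates directly into code: each namespace can resolve only the names it contains. All of the data here is fictional:

```python
# The phone-book analogy as code: a name resolves only within its namespace.
# All names and numbers below are fictional.

namespaces = {
    "las-vegas": {"Alice Adams": "702-555-0101"},
    "new-york":  {"Bob Baker":  "212-555-0199"},
}

def resolve(namespace, name):
    """Look a name up in one namespace; None means it lives elsewhere."""
    book = namespaces[namespace]
    return book.get(name)

assert resolve("las-vegas", "Alice Adams") == "702-555-0101"
assert resolve("las-vegas", "Bob Baker") is None   # wrong namespace
```

Resolving "Bob Baker" requires obtaining the New York directory first, which is exactly the limitation a single namespace imposes.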
Naming Context
The naming context represents a contiguous subtree of AD in which a given name is resolved to
an object. If you look at the internal layout of AD, you see a structure that looks similar to a tree
with branches. If you expand the tree, you see the containers, the objects that reside in them, and
the attributes associated with the objects. In AD, a single domain controller always holds at least
three naming contexts.
• Domain—Contains the object and attribute information for the domain of which the
domain controller is a member
• Configuration—Contains the rules for creating the objects that define the logical and
physical structure of the AD forest
• Schema—Contains the rules for creating new objects and attributes.
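For a hypothetical domain named company.com, the distinguished names (DNs) of these three naming contexts can be sketched as follows; the layout shown follows the conventional CN=Configuration and CN=Schema containers:

```python
# Sketch of the distinguished names (DNs) of the three naming contexts a
# domain controller holds, for a hypothetical domain company.com.

def domain_dn(dns_name):
    """Convert a DNS domain name into its DC=... distinguished name."""
    return ",".join(f"DC={label}" for label in dns_name.split("."))

def naming_contexts(dns_name):
    dn = domain_dn(dns_name)
    return {
        "domain":        dn,                              # objects/attributes
        "configuration": f"CN=Configuration,{dn}",        # forest structure
        "schema":        f"CN=Schema,CN=Configuration,{dn}",  # object rules
    }

print(naming_contexts("company.com")["schema"])
# CN=Schema,CN=Configuration,DC=company,DC=com
```

Note that only the domain naming context differs from one domain to the next; the configuration and schema contexts are replicated forest-wide, which is why they hang off the forest root's DN.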
Physical Structures
In addition to the logical structures in AD, several physical structures help you implement the
logical structures on your network. Table 2.2 describes these physical structures.
Objects and attributes: An object is defined by the set of attributes or characteristics assigned to it. Objects include users, printers, servers, groups, computers, and security policies.
Domain controller: A network server that hosts the AD service in a domain. Many computers can belong to a domain without being a domain controller, but only domain controllers actually run the software that makes AD operate. All members of a domain must contact a domain controller in order to work with the domain.
Directory server role: A server that takes on a Flexible Single Master Operation (FSMO) role. Directory server roles are single-master servers that perform special roles for AD, such as managing domains, managing schemas, and supporting down-level clients (Windows NT clients, for example).
Site: A location on the physical network that contains AD servers. A site is defined as one or more well-connected Transmission Control Protocol/Internet Protocol (TCP/IP) subnets.
Global Catalog (GC) server: Stores the GC information for AD.
Table 2.2: The physical structures of AD, which are used to implement the logical directory structures on the network.
Designing AD
Your primary objective in designing AD is to build a system that reflects the network resources
in your company. You need to arrange the forest and trees to reflect the location and placement
of your network resources. You need to design the domains and OUs to implement an
administrative and security structure for both users and administrators. When designing the
layout of AD, you also need to design the users’ groups and security policies as well as the
administrative methods that will be used.
From the list of logical and physical structures that you have to work with, four structures are
critical to the design of AD: forests and trees, domains, OUs, and sites. The process of designing
and implementing each of these four structures builds on the previous one. Implementing these
structures properly is crucial to a successful design. Design your AD structure in the following
order:
• Design the forest and trees
• Design the domains for each tree
• Design the OUs for each domain
• Design the sites for the forest and domains
In the next four sections, I’ll describe how to design each of these main structures.
Figure 2.1: Two organizations named company1.com and company2.com can form a forest in AD.
The forest serves two main purposes. First, it simplifies workstation interaction with AD because
it provides a GC through which the client can perform all searches. Second, the forest simplifies
administration and management of multiple trees and domains. A forest has the following key
characteristics and components:
• Global schema—The directory schema for the forest is a global schema, meaning that the
schema is exactly the same for each domain controller in the forest. The schema exists as
a naming context and is replicated to every domain controller. The schema defines the
object classes and the attributes of object classes. In other words, every domain within the
same forest will share the same schema, giving them all access to the same classes and
attributes. This feature is especially important if you plan to deploy schema-altering
applications such as Microsoft Exchange Server, because every domain in the forest will
be updated to have the new Exchange classes and attributes—even if only one of those
domains will actually contain Exchange servers.
• Global configuration container—The configuration container exists as a naming context
that is replicated to every domain controller in the forest. Thus, it’s exactly the same
across the domain controllers in the forest. The configuration container contains the
information that defines the structure of the forest. This information includes the
domains, trust relationships, sites, site links, and the schema. By replicating the
configuration container on every domain controller, each domain controller can reliably
determine the structure of the forest, allowing it to replicate to the other domain
controllers.
• Complete trust—AD automatically creates bi-directional transitive trust relationships
among all domains in a forest. This relationship allows the security principals, such as
users and groups of users, to authenticate from any computer in the forest. However, such
is only the case if the users’ access rights have been set up correctly. This concept is
important: Trusts do not instantly confer access permissions. For example, a user in
Domain A cannot immediately access resources in Domain B just because the two
domains are in the same forest. The forest simply makes such access possible, enabling
an administrator to select the Domain A user’s account when assigning permissions to the
resources in Domain B.
• GC—The GC contains a copy of every object from every domain in the forest. However,
it only stores a select set of attributes from the objects; that subset is referred to as the
universally interesting information for the objects. By default, the GC isn’t placed on
every domain controller in the forest; instead, you determine which domain controllers
should hold a copy. The purpose of the GC is to provide a sort of cross-domain lookup
service.
To re-use the phone book analogy, imagine that you’re holding a Las Vegas phone book
and need to look up a number for a New York resident. The phone book you have might
contain a GC section, which lists a number for New York Directory Assistance. This
reference allows you to contact a directory in that other, New York namespace, to handle
your query. More specifically, the GC as implemented in AD will list each name
available in New York, and refer you to a New York directory for the number. In other
words, the GC is aware of every object in the forest and knows where to go to find more
information about each object.
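To make the GC’s role concrete, here is a short Python sketch. It is not AD code; the domains, object names, and attributes are invented for illustration, and a real GC is queried over LDAP (TCP port 3268) rather than through an in-memory table. The model shows the two essential behaviors: the GC replicates only the partial attribute set for every object, and it refers the caller to the object’s home domain for everything else.

```python
# Hypothetical model of a Global Catalog lookup; all data is invented.

# Full objects live in their home domains.
DOMAINS = {
    "vegas.company.com": {
        "jsmith": {"displayName": "J. Smith", "mail": "jsmith@company.com",
                   "department": "Sales", "homePhone": "702-555-0100"},
    },
    "newyork.company.com": {
        "akumar": {"displayName": "A. Kumar", "mail": "akumar@company.com",
                   "department": "Legal", "homePhone": "212-555-0199"},
    },
}

# The GC replicates only a subset of attributes (the partial attribute set)
# for every object in every domain of the forest.
PARTIAL_ATTRIBUTE_SET = {"displayName", "mail"}

GLOBAL_CATALOG = {
    name: ({a: v for a, v in attrs.items() if a in PARTIAL_ATTRIBUTE_SET}, domain)
    for domain, objects in DOMAINS.items()
    for name, attrs in objects.items()
}

def gc_lookup(name):
    """Return the replicated attributes plus a referral to the home domain."""
    partial, home_domain = GLOBAL_CATALOG[name]
    return partial, home_domain

partial, referral = gc_lookup("akumar")
print(partial)   # only the replicated attributes
print(referral)  # where to go for the full object
```

The lookup answers the forest-wide question (who is this object, and which domain holds the full copy?) without replicating every attribute to every GC server.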
32
Chapter 2
The other element shared across the entire forest is the built-in Enterprise Admins group.
Members of this group have the highest possible level of administrative control over every
domain within the forest. Whoever belongs to this group, then, must be trusted by every domain
within the forest to manage the entire forest and all of its domains. There are occasions, however,
in which no one group of users can be given this trust, in which case two or more forests will be
necessary.
For example, consider an organization that has just acquired another company. If the two forests
are merged into a single forest, all the users can view the entire AD. However, the forests might
not be merged because the two autonomous administrative groups might not agree on how to
manage the forest. The winner of this dispute depends on your priority: Do your users have a
higher priority than your administrators?
If the administrators win, the users inherit two forests and no longer have a single, consistent
view of AD. Each administrative group manages its own forest independently. This scenario is
common and is the reason that the forest is often referred to as AD’s ultimate security
boundary—because you can create a strict border between the two forests but cannot create a
border within a forest because of the built-in Enterprise Admins group.
The answer to the administrator vs. user priority question also depends on which type of
organization your company is. If it isn’t important for the users to have a consistent view of AD,
it might be appropriate to have multiple forests with separate administrators. For example,
consider an application service provider (ASP) company, which hosts AD services on behalf of
other companies. The users from those companies have no reason to view the host company’s
information. In addition, each administrative group wants its independence.
WS2K3 introduces a new capability called the forest trust. As with the domain trusts present
within a forest, a forest trust makes it possible for accounts in one trusted forest to be granted
access to resources in another, trusting forest. However, unlike the automatic, two-way, transitive trusts within a domain tree or forest, inter-forest trusts must be created manually and are not automatically two-way. In other words, the administrators in Forest A could decide to grant access to Forest B user accounts, but the reverse will not be true unless the Forest B administrators implement a trust in the opposite direction. This inter-forest trust capability is only possible between forests
running at the WS2K3 functional level, which means all domain controllers within the forest
must be running WS2K3 and all domains within the forest must be running at the WS2K3
functional level as well.
The Types of Trusts
Remember that WS2K3 offers a variety of trust relationships. There are, for example, the automatic, two-
way, transitive trusts that exist between parent and child domains in a domain tree. There are also one-
way, non-transitive external trusts that you can create between, for example, a WS2K3 domain and an
NT domain. Forest trusts are similar to these external trusts, although they are WS2K3-specific. You can
also establish external trusts with UNIX Kerberos realms, a domain-like structure that exists in UNIX
systems, by using the Kerberos authentication protocol.
One situation in which you might consider managing multiple forests occurs when two organizations
merge or participate in a joint venture. This merger puts the administration of your network into the
hands of two autonomous groups. For this reason, multiple forests are typically more costly to
manage. To reduce this cost, organizations such as partnerships and conglomerates need to form a
central group that can drive the administrative process. In contrast, in short-lived organizations such
as joint ventures, it might not be realistic to expect administrators from each organization to
collaborate on forest administration.
There is another good example of a situation in which multiple forests may be required. Many
enterprise organizations elect to maintain separate, parallel AD forests for testing purposes. Other
organizations maintain multiple forests because they have a disjointed organizational structure with
no common infrastructure among business units. Although you’ll certainly want to keep your network
and AD design as simple as possible, your forest structure should follow the organizational,
administrative, and geographical structure of your organization.
Figure 2.2: Two forests can allow user access between domains by establishing explicit one-way trusts. Only
the domains connected by the trusts can allow access between the forests.
Explicit one-way trusts aren’t transitive; the one-way trusts in Win2K are the same as the one-
way trusts that exist in NT. Creating one-way trusts among multiple forests or trees can be
complicated, so it’s important to keep it simple by limiting the domains that trust one another.
In WS2K3, as I’ve mentioned, you have the option to create inter-forest trusts. This setup allows
all domains in the trusted forest to potentially access resources in all domains of the trusting
forest. This feature greatly simplifies access management across forest boundaries, although only
forests running at the WS2K3 functional level have this capability. Thus, every domain
controller in every domain must be running WS2K3, every domain must have been raised to the
WS2K3 functional level, and both forests must have been raised to the WS2K3 functional level.
If so much as one Win2K domain controller exists in a single domain in either forest, inter-forest
trusts are not an option.
Figure 2.3: The namespace of the domain tree shows the hierarchical structure of the tree.
In a single forest, in which all domains trust one another, the tree relationship is defined by the
namespace that is necessary to support the domain structure. For example, the root domain called
company.com might have two subdomains (or child domains) named seattle.company.com and
chicago.company.com. The relationship between the root domain and the two child domains is
what forms the tree, or namespace.
In the previous section, I emphasized that multiple forests in an organization are generally not
recommended if there is a way to avoid it. However, there are situations in which multiple trees
are appropriate or even recommended. For one, multiple trees allow you to have multiple
namespaces that coexist in a single directory. Multiple trees give you additional levels of
separation of the namespaces—something that domains don’t provide. Although multiple trees
work well in most situations, I recommend that you start by creating one tree until the
circumstances arise that call for more. For example, if your company has a domain named
company.com, then launches a subsidiary named corporation.com, it might make sense to have
both domain trees in the same forest. Doing so would provide you with the necessary
namespaces, while maintaining the convenience of a single forest’s central administrative
structure, common schema, and so forth.
You might be wondering if there are any extraordinary benefits to having multiple trees. For
example, do multiple trees reduce the replication or synchronization that occurs among domain
servers? The answer is no, because the schema and configuration container are replicated to all
domain controllers in the forest. In addition, the domain partitions are replicated only among the
domain controllers that are in the domain. Having multiple trees doesn’t reduce replication.
Likewise, you may be wondering whether multiple trees cause any problems. For example, does
having multiple trees require you to establish and maintain explicit one-way trust relationships?
Again, the answer is no because the transitive trust relationships are automatically set up among
all domains in the forest. This trust includes all domains that are in separate trees but in the same
forest.
Figure 2.4: The domain structure is a piece of AD. It contains the users, groups, computers, printers, servers,
and other resources.
The purpose of domains is to logically partition the overall forest. Most large implementations of
AD need to divide the forest into smaller pieces. Domains enable you to partition AD into
smaller, more manageable units that you can distribute across your network servers.
Domains are the basic building blocks of AD. As you’ve seen, they can be connected to form
trees and forests; domains are connected by trust relationships, which are automatically
established and maintained within the forest. These trusts allow the users of one domain to
access the information contained in the other domains. When multiple domains are connected by
trust relationships and share a common schema, you have a domain tree. Every AD installation
consists of at least one domain tree, even if that tree has only a root domain.
It’s your role as an administrator to decide the structure of domains and which objects, attributes,
groups, and computers are created. The design of a domain includes a determination of DNS
naming, security policies, administrative rights, and how replication will be handled. When you
design domains, follow these steps:
• Determine the number of domains
• Choose a forest root domain
• Assign a DNS name to each domain
• Partition the forest
• Place the domain controllers that will be used for fault tolerance and high network
availability
• Determine the explicit trust relationships that need to be established, if any
The first domain you create in an AD forest contains two forest-wide groups that are important to
administering the forest: the Enterprise Admins group (often spelled out as Enterprise Administrators) and the Schema Admins group (Schema Administrators). Containing these two
groups makes the root domain special. You cannot move or re-create these groups in another
domain. Likewise, you cannot move, rename (at least not easily), or reinstall the root domain. In
addition to these groups, the root domain contains the configuration container, or naming
context, which also includes the schema naming context.
After you install the root domain, I recommend that you back up the domain often and do
everything you can to protect it. For example, if all the servers holding a copy of the root domain
are lost in a catastrophic event and none of them can be restored, the root domain is permanently
lost. The reason for this loss is that the permissions in the Enterprise Administrator and Schema
Administrator groups are also lost. There is no method for reinstalling or recovering the root
domain and its groups in the forest other than completely backing up and restoring it.
Always have at least two domain controllers in every domain, and always have at least two
geographic locations, if possible, containing domain controllers for each domain. If you have two
offices, try to have domain controllers from your root domain in every office. This setup might not be
possible in every situation due to replication traffic and availability of technical personnel at each
location, but it’s a worthy goal because it helps reduce the likelihood that a single catastrophic event
will destroy the root domain. At the very least, ensure that the root domain is backed up frequently
and that copies of the backup are maintained offsite.
For more information about determining the number of domains, see “Determining the Number of
Domains” earlier in this chapter.
For a larger implementation with multiple locations around the world, however, you’ll probably
want to use a dedicated root domain. A dedicated root domain is a root domain that is kept small,
with only a few user account objects. Keeping the root domain small allows you to replicate it to
other locations at low cost (that is, with little impact on network usage and bandwidth). Figure
2.5 illustrates how you can replicate a dedicated root domain to the other locations on your
network.
Figure 2.5: A dedicated root domain is small enough to efficiently replicate copies to the other locations on
your network.
A dedicated root domain focuses on the overall operations, administration, and management of
AD. There are at least two advantages to using a dedicated root domain in a larger
implementation of AD:
• By keeping the user and printer objects out of the root domain, you enhance security by
restricting access to only a few administrators.
• By keeping the root domain small, you can replicate it to other domain controllers on the
network at distant geographic locations. This approach helps increase the availability of
the network.
Because domain administrators can access and change the contents of the Enterprise
Administrators and Schema Administrators groups, having a dedicated root domain limits
normal access. Membership in these built-in groups should only be given to the enterprise
administrators, and they should only access the domain when doing official maintenance. In
addition, membership in the Domain Administrators group of the root domain should be granted
only to the enterprise administrators. Taking these steps allows you to avoid any accidental
changes to the root domain. You should also create a regular user account for each of your
administrators so that they don’t carry administrative privileges while doing regular work.
As I mentioned earlier, always replicate the root domain to multiple servers in an effort to
provide fault tolerance for this domain. Because a dedicated root domain is small (no user or
printer objects), it can be replicated across the network more quickly and easily. In addition to
replicating the root domain across the local area network (LAN), you can replicate the root
domain across the WAN to reduce the trust-traversal traffic among trees.
You may not be responsible for your company’s Internet access. In that case, it’s important that you
coordinate your AD naming efforts with the person or group that is responsible for that access to
ensure that you’re not creating any unmanaged name resolution problems. For example, choosing
to use your company’s registered Internet domain name requires additional internal and external
configuration steps to ensure uninterrupted name resolution for your domain clients and other
network clients.
Choosing to use an Internet-registered name, however, does not mean you must
expose that name on the Internet. For example, you might register companyinternal.org as your
internal DNS name for AD’s use, and use company.com on the Internet. Doing so will help
avoid exposing your internal DNS infrastructure to the Internet. You can also use something
called a split-brain DNS. In this technique, you would use a single registered name—say,
company.com—for both your internal AD domain name and your external Internet presence. An
external DNS server, connected to the Internet, would resolve names for public hosts in your
domain, such as “www.” A separate internal DNS server would resolve internal host names and
support AD; it would also contain static records for external names such as “www.” This
technique allows the same domain name to serve both internal and external uses, while ensuring
that Internet users have no access to your internal DNS infrastructure.
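Split-brain DNS can be modeled as two independent zone databases for the same name. In this Python sketch (the host names and addresses are invented for illustration; real deployments use two DNS servers, not a script), the external server knows only the public hosts, while the internal server knows the internal hosts plus static copies of the public records:

```python
# Hypothetical split-brain DNS for company.com; all addresses are invented.
EXTERNAL_ZONE = {                       # served to the Internet
    "www.company.com": "203.0.113.10",
    "mail.company.com": "203.0.113.25",
}
INTERNAL_ZONE = {                       # served only to internal clients
    "dc1.company.com": "10.0.0.5",      # AD domain controller
    "intranet.company.com": "10.0.0.20",
    # Static copies of the public records, so internal clients can
    # reach "www" without ever querying the external server:
    "www.company.com": "203.0.113.10",
    "mail.company.com": "203.0.113.25",
}

def resolve(zone, name):
    """Look up a name in one zone; None models an NXDOMAIN response."""
    return zone.get(name)

# Internet users see only the public hosts:
print(resolve(EXTERNAL_ZONE, "dc1.company.com"))   # None
# Internal clients see everything:
print(resolve(INTERNAL_ZONE, "dc1.company.com"))   # 10.0.0.5
```

The internal infrastructure (such as the domain controller records AD depends on) simply doesn’t exist in the zone that Internet clients can query.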
Figure 2.6: The first layer of domains directly under the root domain is named after the physical locations on
the network.
One strong argument for sticking with a geographic naming scheme is that other,
organizationally based naming schemes are prone to constant change. For example, the research
division might be absorbed into a larger operations division, and it wouldn’t make sense to have
the domain named “research” anymore. However, it’s unlikely that New York, for example, will
be changing its name. The AD domain hierarchy isn’t nearly as fluid or adaptable as the business
itself. Once you create and name domains, you cannot move or rename them easily. In addition,
you cannot move or rename the root domain.
WS2K3 provides the ability to rename domains, which is a new feature. However, renaming domains
is a complex process and there is no guarantee that the rename won’t break other applications that
rely on AD. For planning purposes, work under the assumption that domains can’t be renamed.
Using locations to name child domains is more flexible because physical locations on a network
seldom change. The organization at the specific site may change but not the physical location
itself. This design allows the tree to be more flexible to everyday changes. However, if the
physical location is changed or removed, the resources are moved (including the physical
resources, such as domain controllers, printers, and other equipment supporting the site).
If your company is smaller and contained in one physical location, you could name domains after
the company or organization. These domains then hold all the objects and attributes for your
company. This design is easy and efficient. However, if your company has multiple physical
locations with network resources spread across them, you’ll want to create a second layer of
domains (under the root domain), and give the domains location names. The organizational
structures of business units, divisions, and departments will then be placed under each of these
location domains.
The caveat you knew was coming: Nobody can tell you the perfect domain naming scheme
without knowing more about your organization. It might very well be that an organizational-
based naming scheme makes absolute sense for your company. For example, some companies
have many divisions, all of whom share office space in various cities. However, these divisions
are independently managed and maintain their own network resources. In such a case, it makes
the most sense to name the domains after the divisions rather than the locations.
The term partition can be a confusing one in the world of directory services. AD defines the concept
of a partition as a segment of a particular domain or forest. In WS2K3, not every domain controller
may contain all of the domain database. For example, an AD-integrated DNS zone in WS2K3 might
be included in a partition that is only stored on domain controllers that are also DNS servers.
In the context of this discussion, I’m using partition in a somewhat different sense, mainly because
there really isn’t a better term to use. In all cases, partition refers to a portion of the directory services,
such as a portion of the forest, a portion of a domain, and so forth.
The domain physically stores the containers, objects, and attributes in that branch. Several rules
control the creation of partitions in AD and how they operate:
• The topmost partition is the root domain
• Partitions don’t overlap (one object cannot be held in two partitions)
• The partitions contain all the information for the naming context
In AD, the basic unit of partitioning is the domain. Thus, when you create a new partition beyond the root, you’re actually creating a child domain under the root domain. The domains in AD act as
partitions in the database; for example, a child domain helps split up the total user objects
(among other things) that are contained within the directory. Thus, each domain represents a
portion, or partition, in the overall AD database. Partitioning this database increases its
scalability. As you partition AD, you break it into smaller, more manageable pieces that can be
distributed across the domain controllers, or network servers. Figure 2.7 illustrates how you can
divide the AD database into smaller pieces that can be distributed to the domain controllers.
Figure 2.7: You can partition the AD database into smaller pieces, then distribute them among network
servers or domain controllers.
Breaking AD into smaller pieces and distributing them among multiple servers places a smaller
load and less overhead on any one server. This approach also allows you to control the amount
and path of traffic generated to replicate changes among servers. Once you create a partition,
replication occurs among servers that hold copies.
In AD, you can create many partitions at multiple levels in the forest. In addition, copies of the
domain can be distributed to many different servers on the network. Although AD is distributed
using partitions, any user can access the information completely transparently. Users can access
the entire AD database regardless of which server holds which data. Of course, users must have
been granted the proper permissions.
Although a single domain controller may not contain the entire AD database (that is, the entire
forest-wide database), users can still receive whatever information they request. AD queries the
GC on behalf of a user to identify the requested object, then resolves the name to a server
(domain controller) address using DNS. Again, this process is entirely transparent to the user.
I want to point out that the proper way to think about AD is in terms of the entire forest. The directory,
or the AD database, contains everything in the entire forest; domains represent a portion—or
partition—of that larger forest-wide database. Thinking about databases in terms of a single domain
or even a single domain controller makes it too easy to miss the larger scope of AD. Per-server
databases went away with NetWare 3.x; per-domain databases go away with NT. AD is a larger
database, consisting of many domains.
Similarly, too many partitions—domains—is something to be avoided. As I’ve mentioned previously,
you generally want as few domains as possible to ease both management and operational activities.
There are reasons, which I’ve covered, to have multiple domains, but simply creating more domains
to arbitrarily partition the forest isn’t one of those reasons.
Figure 2.8: Each domain has a bi-directional transitive trust relationship between itself and each of its child
domains.
One of the advantages of these new trusts is that they’re automatically established among all
domains; this benefit allows each domain to trust all the other domains in the forest. Another
advantage is that these bi-directional trusts, which are automatically established using Windows’
Kerberos security mechanism, are much easier to set up and administer than NT–style trusts.
Having bi-directional trusts also reduces the total number of trust relationships needed in a tree
or forest. For example, if you tried to accomplish the same thing in NT, you would need to create two-way trusts between each domain and every other domain. This setup increases the total number of trusts roughly with the square of the number of domains: fully meshing n domains requires n(n-1)/2 two-way trusts.
If you have experience with NT domains, you may know something of trust relationships.
However, the trusts in AD differ from NT trusts because AD trusts are transitive. To help you
understand what this means, I’ll provide an example. AD transitive trusts work much like a
transitive equation in mathematics. A basic mathematical transitive equation reads as follows:
A=B, B=C, therefore A=C
When applying this transitive concept to trust relationships, you get an understanding of how
transitive trusts work among domains. For example, if Domain A trusts Domain B, and Domain
B trusts Domain C, then Domain A trusts Domain C. Figure 2.9 illustrates this idea. Transitive
trust relationships have been set up between Domain A and Domain B and between Domain B
and Domain C. Thus, Domain A trusts Domain C implicitly.
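The same transitive rule can be computed mechanically. This Python sketch (the single-letter domain names follow the example above and are purely illustrative; for simplicity it tracks one direction, while AD’s forest trusts are bi-directional) takes the direct trusts and derives every implied one by transitive closure:

```python
# Direct trusts as established within the forest (hypothetical domains).
direct_trusts = {("A", "B"), ("B", "C")}

def transitive_closure(pairs):
    """Expand direct trusts into all trust relationships they imply."""
    trusts = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(trusts):
            for c, d in list(trusts):
                # If a trusts b and b trusts d, then a trusts d.
                if b == c and (a, d) not in trusts:
                    trusts.add((a, d))
                    changed = True
    return trusts

all_trusts = transitive_closure(direct_trusts)
print(("A", "C") in all_trusts)  # True: A trusts C implicitly
```

No administrator ever creates the A-to-C relationship; it follows automatically from the two direct trusts, which is exactly what the figure below depicts.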
Figure 2.9: A domain tree viewed in terms of its transitive trust relationships. Because transitive trust
relationships have been set up between Domain A and Domain B and between Domain B and Domain C,
Domain A trusts Domain C implicitly.
In NT, trusts were non-transitive, so they didn’t allow this implicit trust to exist. For one domain
to trust another domain, an explicit trust relationship had to be created between them.
When domains are created in an AD forest, bi-directional trust relationships are automatically
established. Because the trust is transitive and bi-directional, no additional trust relationships are
required. The result is that every domain in the forest trusts every other domain. Transitive trusts
greatly reduce your overhead and the need to manually configure the trusts. Because trusts are
automatically set up, users have access to all resources in the forest as long as they have the
proper permissions.
Transitive trusts are a feature of the Kerberos authentication protocol. The protocol is used by
AD and provides distributed authentication and authorization. The parent-child relationship
among domains is only a naming and trust relationship. Thus, the trust honors the authentication
of the trusted domain. However, having all administrative rights in a parent domain doesn’t
automatically make you an administrator of a child domain. Likewise, policies set in a parent domain don’t automatically apply to child domains simply because the trust is in place.
Figure 2.10: A one-way trust is established between a domain in Forest 1 and a domain in Forest 2. The trust
allows access to network resources in each domain.
The second use of one-way trusts is to create a relationship from an AD domain to backward-
compatible domains, such as an NT domain. Because NT domains cannot naturally participate in
AD transitive trusts, you must establish a one-way trust to them. You must manage one-way
trusts manually, so try to limit the number you use.
In both of these situations, you can create two one-way trusts among the domains. However, two
one-way trusts don’t equal a bi-directional transitive trust in AD.
A third type of one-way trust is, as I’ve mentioned, the inter-forest trust. This type works exactly
like a one-way trust between two domains, except that it establishes trust between every domain
in the respective forests. Again, this type of trust is available only when both forests are running at the WS2K3 functional level, which in turn requires that every domain controller be running WS2K3.
Walking up and down the domain tree branches lengthens the time it takes to query each domain
controller and respond to the user. To speed this process, you can establish a cross-link, or
shortcut, trust relationship between two domains. If you decide to use a cross-link trust, I
recommend that you place it between the two domains that are farthest from the root domain.
For example, suppose you have a domain tree that has domains 1, 2, 3, 4, and 5 in one branch
and domains 1, A, B, C, and D in another branch. Domains 5 and D are located farthest from the
root domain (see Figure 2.11).
Figure 2.11: The domain tree has two branches, domains 1, 2, 3, 4, and 5 are one branch, and domains 1, A,
B, C, and D are the second branch. The cross-link trust can be established between domains 5 and D.
Let’s say that a user in Domain 5 needs to access a resource in Domain D. To accomplish this
request, the authentication process must traverse up the first branch and down the second branch
while talking to each domain controller. Continuous authentications such as this create a
significant amount of network traffic. To alleviate this problem, you can establish a cross-link
between Domain 5 and Domain D.
The cross-link between Domain 5 and Domain D will serve as an authentication bridge between
the two domains. The result is better authentication performance between the domains.
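The benefit of the cross-link can be measured as a reduction in trust-path hops. This Python sketch (the domain layout mirrors the two branches described above; real trust-path referrals happen inside Kerberos, not in a script) uses a breadth-first search to count how many trust links the authentication must traverse with and without the shortcut:

```python
from collections import deque

# Parent-child trust links of the two-branch domain tree described above.
tree_links = [("1", "2"), ("2", "3"), ("3", "4"), ("4", "5"),
              ("1", "A"), ("A", "B"), ("B", "C"), ("C", "D")]

def trust_hops(links, start, goal):
    """Breadth-first search over bi-directional trust links; returns hop count."""
    neighbors = {}
    for a, b in links:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        node, hops = queue.popleft()
        if node == goal:
            return hops
        for nxt in neighbors.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return None

print(trust_hops(tree_links, "5", "D"))                 # 8 hops up and down the tree
print(trust_hops(tree_links + [("5", "D")], "5", "D"))  # 1 hop across the cross-link
```

Without the cross-link, the path from Domain 5 to Domain D climbs all the way to the root and back down the other branch; the shortcut collapses that walk to a single link.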
The need for cross-link trusts is a good reason to keep your domain hierarchy as flat as possible.
Cross-link trusts should only be used when absolutely necessary, and a good design won’t require
them.
AD comes with two built-in containers, Users and Computers, that look superficially like OUs but
aren’t. They can’t have GPOs linked to them, for example.
• OUs don’t need to be viewed by users—It isn’t necessary for you to design OUs with
user navigation in mind. Although users can view the OU structure, this structure isn’t an
efficient method for finding resources. The preferred method for users to find resources is
by querying the GC.
Now that you understand a few of the basic characteristics for OUs, consider the following
guidelines for designing an efficient and effective OU structure:
• Create OUs to delegate administration
• Create OUs to reflect your company’s organization
• Create OUs for Group Policy
• Create OUs to restrict access
Figure 2.12: You can create the engineering OU in the Chicago domain, then assign permissions to the
engineering department administrators to manage all the objects.
Another useful feature that I mentioned earlier is that OUs can be nested. This feature enables
you to build a hierarchy in each domain. For example, suppose that the testing group in the
engineering department wants full administrative control over all its resources, such as users,
printers, and computers. To accommodate this request, you simply create a new OU directly
under the Engineering OU in the Chicago domain. The hierarchical structure now looks like the
following: testing.engineering.chicago.company.com. After you’ve created the new OU and
placed the resources, you can give full privileges to the testing group’s administrator. If an OU is
nested, it inherits the properties of the parent OU by default. For example, if the Engineering OU
has certain security or GPOs set, they’re passed down to the Testing OU. The Testing OU is
considered nested under the Engineering OU.
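Default inheritance down a nested OU chain can be sketched as accumulating policies from the domain down to the target OU. This Python model follows the example above; the GPO names are invented, and it deliberately ignores the other parts of real Group Policy processing (site and local policies, enforcement, and inheritance blocking):

```python
# Hypothetical OU tree for the Chicago domain; GPO names are invented.
parent_of = {
    "engineering": "chicago.company.com",
    "testing": "engineering",            # nested under Engineering
}
linked_gpos = {
    "chicago.company.com": ["Domain Baseline"],
    "engineering": ["Engineering Security"],
    "testing": [],                       # nothing linked directly
}

def effective_gpos(ou):
    """Collect GPOs from the domain down to the OU (default inheritance)."""
    chain = []
    while ou is not None:
        chain.append(ou)
        ou = parent_of.get(ou)
    gpos = []
    for container in reversed(chain):    # apply top-down
        gpos.extend(linked_gpos.get(container, []))
    return gpos

print(effective_gpos("testing"))  # ['Domain Baseline', 'Engineering Security']
```

Even though nothing is linked to the Testing OU directly, it still receives the Engineering OU’s policy by default, which is the inheritance behavior described above.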
Be careful to limit the number of OU layers you create. Creating too many layers can increase the
administrative overhead. Limiting the number of OU layers also increases user logon performance.
When a user logs on to AD, the security policies take effect. To find all these policies, the workstation
must search all layers of the OU structure. Having fewer OU layers allows the client to complete this
search more quickly.
Figure 2.13: OUs have been created in a domain based on an organizational chart.
For an explanation of site links, see “Creating Sites and Site Links Based on Network Topology” later
in this chapter.
It’s your role as an administrator to design the site objects and site links for your tree or forest to ensure the best network performance. It’s also your job to determine what connection speed ensures this performance and reduces server downtime as a result of network outages. Establish site objects
and site links based on network and subnet speed. Although many subnets can belong to a single
site, a single subnet can’t span multiple physical sites. To help you establish a design for the sites
in your forest, you need to consider the following guidelines:
• Create sites and site links based on network topology
• Use sites to determine the placement of domain controllers
• Use sites to determine the placement of GC servers
About Sites
Sites are groups of computers (or subnets) that share high-speed bandwidth connections on one
or more TCP/IP subnets. Subnets are groups of local segments on the network that are physically
located in the same place. Multiple site objects create a site topology. Figure 2.14 portrays a site
with TCP/IP subnets that exist between the servers and workstations. A LAN—as opposed to a
WAN, MAN, or other lower-speed connection—always connects a site.
Figure 2.14: A site is one or more TCP/IP subnets or LAN networks that exist between the servers and
workstations.
One domain can span more than one site, and one site can contain multiple domains. However, for
design purposes, it’s important to remember that sites define how replication occurs among domain
controllers and which domain controller a user’s workstation contacts for initial authentication.
Normally, the workstation first tries to contact domain controllers in its site.
Figure 2.15: The site topology is created from the site objects and the site links. The site topology helps the
replication process determine the path, cost, and protocol among domain controllers.
When you create the site topology, it’s useful to have a complete set of physical LAN and WAN
maps. If your company has campus networks at one or more locations, you’ll need to have the
physical maps of those locations. These maps should include all the physical connections, media
or frame types, protocols, and speed of connections.
When defining the sites, begin by creating a site for every LAN or set of LANs that are
connected by high-speed bandwidth connections. If there are multiple physical locations, create a
site for each location that has a LAN subnet. For each site that you create, keep track of the IP
subnets and addresses that comprise the site. You’ll need this information when you add the site
information to AD.
Site names are registered in DNS by the domain locator, so they must be legal DNS names. You
must also use Internet standard characters—letters, numbers, and hyphens. (For more information,
see “Using Internet Standard Characters” earlier in this chapter.)
After you’ve created the sites, you need to connect them with site links to truly reflect the
physical connectivity of your network. To do so, you need to first assign each site link a name.
By default, site links are transitive, just like trust relationships in AD. Thus, if Site A is
connected to Site B, and Site B is connected to Site C, it’s assumed that Site A can communicate
with Site C. This transitive connectivity is called site link bridging, and by default, AD creates
bridges throughout all site links.
The practical upshot of bridging is this: Imagine that you have three sites, A, B, and C. A is
connected to B, and B is connected to C. When you create the two site links to represent these
connections, they’ll be named something like AtoB and BtoC (for example). AD will
automatically bridge these connections, creating a transitive link from A to C. Domain
controllers in site A will therefore replicate with domain controllers in site B and in site C. The
idea behind site bridging is to reduce replication latency by allowing domain controllers at
various sites to replicate with one another directly. You can disable site bridging, forcing A to
replicate only with B, and preventing any changes made at site A from reaching site C until site
C replicates them from site B over its private site link. You can also manually create site link
bridges, if desired, to “shortcut” the site topology and reduce replication latency.
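If you think of site link bridging as transitive reachability over the site-link graph, the A/B/C example above can be sketched in a few lines of Python. The site names and the helper function are illustrative, not any AD API:

```python
from collections import deque

def reachable_sites(site_links, start, bridging_enabled=True):
    """Return the set of sites the start site can replicate with.

    site_links is a list of (site_a, site_b) pairs. With bridging
    enabled, connectivity is transitive (A-B plus B-C implies A-C);
    with bridging disabled, only directly linked sites are reachable.
    """
    # Build an adjacency map from the undirected site links.
    neighbors = {}
    for a, b in site_links:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)

    if not bridging_enabled:
        return set(neighbors.get(start, set()))

    # A breadth-first search yields the transitive closure from start.
    seen = {start}
    queue = deque([start])
    while queue:
        site = queue.popleft()
        for nxt in neighbors.get(site, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    seen.discard(start)
    return seen

links = [("A", "B"), ("B", "C")]
print(sorted(reachable_sites(links, "A")))                          # ['B', 'C']
print(sorted(reachable_sites(links, "A", bridging_enabled=False)))  # ['B']
```

With bridging disabled, site A sees only its direct partner B, which matches the behavior described above.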
The process of generating this site replication topology is automatic, and it’s handled by a special
service called the Knowledge Consistency Checker (KCC). If you don’t like the topology that the
KCC generates for you, you can create the topology manually.
The purpose of creating the site topology is to ensure rapid data communications among AD
servers. The site topology is used primarily when setting up replication of AD. However, the
placement of the domain controllers and partitions governs when and how replication takes place.
The AD domain controllers query DNS to find each other during replication. A new domain controller
participates in replication by registering its locator records with DNS. Likewise, each domain controller
must be able to look up these records. Such is the case even if the domain controllers are on the
same subnet.
If you depend on an outside DNS service, you might need to adjust the number of DNS servers
and physical placement, if possible. You’ll also need to verify that the outside DNS service
supports the required SRV resource record. If it doesn’t, you may need to install and configure
your own implementation of Microsoft’s DNS to support AD.
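As a quick way to see what an outside DNS service must be able to host, the following sketch builds a representative subset of the well-known SRV record names a domain controller registers for the DC locator. The helper function and domain name are illustrative:

```python
def dc_locator_srv_names(domain, site=None):
    """Build a representative subset of the well-known SRV record
    names a domain controller registers for the DC locator."""
    names = [
        f"_ldap._tcp.{domain}",
        f"_ldap._tcp.dc._msdcs.{domain}",
        f"_kerberos._tcp.{domain}",
    ]
    if site:
        # Site-specific record used by clients to find a nearby DC.
        names.append(f"_ldap._tcp.{site}._sites.{domain}")
    return names

for name in dc_locator_srv_names("example.com", site="HQ"):
    print(name)
```

If your DNS provider can't create records shaped like these, it can't support AD, and you'll need the Microsoft DNS service discussed next.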
If you don’t want to depend on an existing DNS service or a DNS service that is offsite, you
might want to install the Microsoft DNS service that is integrated into AD. The Microsoft DNS
service stores the locator records for the domain and domain controllers in AD. You can then
have one or more domain controllers provide the DNS service. Again, I recommend that you
place at least one DNS server for each site object in your environment. Using the Microsoft DNS
service is an optional configuration, and storing the locator records in AD may have a negative
impact on replication traffic on large networks.
Summary
My first recommendation for troubleshooting AD is to make sure that its components are
designed and implemented correctly. In addition, the efficiency of AD depends on the design and
implementation of key structures—forests, trees, domains, and OUs. I also recommend that the
sites and site links be properly established to support the distribution and replication of the
system. In this chapter, we explored these topics as well as the placement of other supporting
servers, such as domain controllers, GC servers, and DNS servers. The design and
implementation of these structures is strictly your responsibility as a network administrator.
Before you can effectively troubleshoot AD, make sure you feel confident about your design.
Chapter 3
Another example of why you need to monitor domain controllers is that some domain controllers
on a WS2K3 network are unique, no matter how many the network has. For
example, some domain controllers perform special duties called Flexible Single-Master
Operation (FSMO) roles. Although the replication of AD is multimaster, the FSMO roles held
by these domain controllers are single-master (much like a Windows NT 4.0 primary domain
controller—PDC). Thus, these domain controllers don’t have additional copies or replicas to
provide fault tolerance if the domain controller hosting a particular role is down.
These FSMO domain controllers perform special roles for AD, such as managing the domain,
managing the schema, and supporting down-level clients. If any of these critical domain
controllers go down, the directory loses functionality and can no longer update or extend the
schema, or add or remove a domain from the directory.
Failing to monitor domain controllers can adversely affect a network’s performance and
availability. For example, if an entire department is unable to access the domain controller or
directory, users lose time, and the company loses money. To help you ensure that your domain
controllers are available, you can, and should, monitor and analyze Windows in the five
following areas:
• Overall system
• Memory and cache
• Processor and thread
• Disk
• Network
I’ll discuss each of these areas, and the reason for their importance, in the following sections. I’ll
discuss monitoring AD itself in Chapter 4.
Another critical monitoring area is auditing. Although auditing is usually considered a security-
related task, auditing can play a crucial role in operational issues as well. For example, if a
domain controller that has been performing acceptably suddenly slows, what is the first question
that you’ll ask yourself? What changed? Monitoring elements such as the memory, disk, and
network will tell you that something changed, but not what; only proper auditing can provide the
clue as to what changed.
Unfortunately, Windows auditing leaves a lot to be desired from a reporting and investigative
point of view. For example, each domain controller maintains its own audit logs; in a large
company with many domain controllers, poring through each of them to find out what changed
in the domain can be a time-consuming task. So much so, in fact, that few administrators ever
turn to auditing as a troubleshooting tool. Later in this chapter, we’ll explore examples of how
auditing can be an effective troubleshooting tool and how you can make auditing more efficient
and useable.
• Performance console—Allows you to view the current activity on the domain controller
and select the performance information that you want collected and logged. You can
customize WS2K3’s performance-counter features and architecture to allow applications
to add their own metrics in the form of objects and counters, which you can then monitor
using the Performance console. By default, the Performance console has two
applications, System Monitor and Performance Logs and Alerts.
System Monitor enables you to monitor nearly every aspect of a domain controller’s
performance and establish a baseline for the performance of your domain controllers.
Using System Monitor, you can see the performance counters graphically logged and set
alerts against them. The alerts will appear in Event Viewer.
The Performance Logs and Alerts application enables you to collect information for those
times when you can’t detect a problem in real time. This application allows you to collect
domain controller performance data for as long as you want—days, weeks, or even
months.
• Event Viewer—Allows you to view the event logs that gather information about a
domain controller and its subsystems. There are three types of logs: the Application Log,
the System Log, and the Security Log. Although the event logs start automatically when
you start the domain controller, you must start Event Viewer manually.
When monitoring domain controllers using the Performance console’s logging feature, make sure you
don’t actually create a problem by filling the computer’s disk with large log files. Be sure to only
include those statistics in the logging process that you absolutely need. Keep the sampling period to
the minimum required to evaluate domain controller performance and usage. To select an appropriate
interval for your computer, establish a baseline of performance and usage. Also, take into account the
amount of free disk space on your domain controller when you begin the logging process. Finally,
make sure that you have some application in place (such as the Performance console) that
continually monitors the domain controller to ensure that it has plenty of free disk space.
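A back-of-envelope sizing sketch can help you anticipate log growth before you start logging. The bytes-per-sample figure below is an assumed average record size, not a documented constant; measure your own logs for a real number:

```python
def log_growth_mb_per_day(counters, sample_interval_sec, bytes_per_sample=300):
    """Rough daily growth of a performance counter log.

    bytes_per_sample is an assumed average size per counter sample;
    substitute a figure measured from your own log files.
    """
    samples_per_day = 86400 / sample_interval_sec
    return counters * samples_per_day * bytes_per_sample / (1024 * 1024)

# 10 counters sampled every 15 seconds:
print(round(log_growth_mb_per_day(10, 15), 1))  # 16.5 (MB per day)
```

Even a modest counter set sampled frequently adds up over weeks, which is why trimming both the counter list and the sampling rate matters.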
In addition to monitoring the local domain controller, you can use the Performance console to monitor
domain controllers remotely and store the log files on a shared network drive. Doing so enables you
to monitor all the domain controllers in a directory from one console or utility.
At the heart of the Performance console and Task Manager are the performance counters that are
built-in to the Windows OS. I’ll introduce each of these monitoring utilities briefly in the
upcoming sections and demonstrate how they can help you monitor specific subsystems. Keep in
mind that this chapter isn’t intended to be an in-depth study of all the capabilities of these
utilities. Instead, the intention is to provide a general introduction to them and show you how you
can use them to assist you in monitoring your domain controllers.
Figure 3.1: Windows’ Task Manager allows you to view and manage the applications and processes running
on a domain controller and manage their performance.
In NT, the Performance console was known as Performance Monitor, and like most NT administration
utilities, it was a standalone utility rather than a Microsoft Management Console (MMC) snap-in.
The Performance console helps you accurately pinpoint many of the performance problems or
bottlenecks in your system. It monitors your WS2K3 domain controller by capturing the selected
performance counters that relate to the system hardware and software. The performance counters
are programmed by the developer of the related system. The hardware-related counters typically
monitor the number of times a device has been accessed. For example, the physical disk counters
indicate the number of physical disk reads or writes and how fast they were completed. Software
counters monitor activity related to application software running on the domain controller. To
launch the Performance console, choose Start, Programs, Administrative Tools, Performance.
The first application that starts in the Performance console is System Monitor. Using System
Monitor, you can view the current activity on the domain controller and select information to be
collected and logged for analysis. You can also measure the performance of your own domain
controller as well as that of other domain controllers on your network. Figure 3.2 shows System
Monitor.
Figure 3.2: The Performance console includes both System Monitor and Performance Logs and Alerts.
When it starts, System Monitor isn’t monitoring any counters or performance indicators for the
system. You determine which counters System Monitor tracks and displays. To add a counter,
click the plus sign (+) icon on the toolbar or right-click anywhere in the System Monitor
display area and choose Add Counters from the shortcut menu. Either approach opens the Add
Counters dialog box, which Figure 3.3 shows, in which you can choose the counters to
monitor.
Figure 3.3: In System Monitor, you can choose which counters you want to track and monitor on the display.
Once you choose the counters that you want to view, System Monitor tracks performance in real
time. When you first start using System Monitor, the number of counters that are available seems
overwhelming because there are counters for almost every aspect of the computer. However, in
the spirit of the age-old 80/20 rule, you’ll probably find that you tend to use about 20 percent of
the available counters 80 percent of the time (or more), using the other counters only when you
need specific monitoring or troubleshooting information.
If you don’t understand the meaning of a particular Performance console counter, highlight it and click
Explain. The informational dialog box that appears provides a description of the selected counter
(and, in some cases, what the various values or ranges might indicate).
Later sections of this chapter discuss how you can use System Monitor to monitor memory, view
processes, and monitor network components on a domain controller as well as monitor the disk
subsystem.
Event Viewer
As with its NT and Win2K predecessors, WS2K3 uses an event logging system to track the
activity of each computer and its subsystems. The events that are logged by the system are
predetermined and tracked by the OS. In addition, WS2K3 provides Event Viewer, which allows
you to view the events that have been logged.
Only a user with administrative privileges can view the Security Log. Regular users can only view the
Application Log and System Log.
Figure 3.5: Events can be filtered in Event Viewer to restrict the list of events that are displayed.
Exporting Events
In addition to sorting and filtering events in Event Viewer, you can export events in a variety of
formats to use with applications such as Microsoft Excel. To export events, choose Action,
Export List. When the Save As dialog box appears (see Figure 3.6), you can type a file name
with the .xls extension or choose a file type such as Text (Comma Delimited) (*.csv).
Figure 3.6: The events in Event Viewer can be exported for use with various applications.
A major shortcoming of the Windows event logs, which I’ve already mentioned, is the fact that
each server maintains its own logs. If you’re relying on system-generated events or auditing
events to help provide troubleshooting clues, you might find yourself running around to a dozen
domain controllers to determine whether any of them have anything of use in their logs.
Microsoft provides a basic solution in its Audit Collection Service (ACS), which is designed to
aggregate log data into a central Microsoft SQL Server database. However, ACS is only
designed to work with the Security Log, again focusing on the security uses of auditing and not
the operational and troubleshooting aspects.
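Short of a product such as ACS, you can approximate centralized review by merging the per-DC exports into one timeline. This sketch assumes CSV exports with illustrative column names; real Export List output depends on the columns shown in Event Viewer:

```python
import csv
import io

def merge_event_logs(exports):
    """Merge per-DC event log exports into a single timeline.

    exports maps a DC name to CSV text with Timestamp, Source, and
    Event columns (illustrative names; adjust to your actual export).
    """
    merged = []
    for dc, text in exports.items():
        for row in csv.DictReader(io.StringIO(text)):
            row["DC"] = dc  # remember which server logged the event
            merged.append(row)
    merged.sort(key=lambda r: r["Timestamp"])
    return merged

exports = {
    "DC1": "Timestamp,Source,Event\n2004-03-01 09:15,NTDS,1311\n",
    "DC2": "Timestamp,Source,Event\n2004-03-01 09:02,Security,642\n",
}
for row in merge_event_logs(exports):
    print(row["Timestamp"], row["DC"], row["Event"])
```

A merged, time-ordered view makes the "what changed, and where?" question far easier to answer than visiting each domain controller in turn.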
Because everything in the system is realized using pages of physical memory, it’s easy to see
that pages of memory become scarce rather quickly. VMM uses the hard disk to store unneeded
pages of memory in one or more files called paging files. Paging files hold pages of data
that aren’t currently being used but may be needed again at any time. By swapping
pages to and from paging files, VMM is able to make pages of memory available to applications
on demand and provide much more virtual memory than the available physical memory.
One of the first monitoring or troubleshooting tasks you’ll carry out is to verify that your domain
controller has enough physical memory. Table 3.1 shows the minimum memory requirements for
a WS2K3 domain controller.
Installation Type Memory Requirement
Minimum installation 256MB
Server running a basic set of services 512MB
Server running an expanded set of services 1GB or more
Table 3.1: Minimum memory requirements for a WS2K3 domain controller.
Your physical memory requirements for actual production servers will typically be much higher
if you expect decent performance. Because your domain controllers will at least be running AD,
I recommend that you always start with at least 512 megabytes (MB) of RAM. If you want to load
other applications that come with their own memory requirements, you’ll need to add memory to
support them.
If there isn’t enough memory on your domain controller, it will start running more slowly as it pages
information to and from its hard drive. When physical memory becomes full and an application
needs access to information not currently in memory, VMM moves some pages from physical
memory to a storage area on the hard drive called a paging file.
As the domain controller pages information to and from the paging file, the application must
wait. The wait occurs because the hard drive is significantly slower than physical RAM. This
paging also slows other system activities such as CPU and disk operations. As I mentioned
earlier, problems caused by lack of memory often appear to be problems in other parts of the
system. To maximize the performance and availability of your domain controller servers, it’s
important for you to understand and try to reduce or eliminate wherever possible the
performance overhead associated with paging operations.
Fortunately, there are a couple of utilities that you can use to track memory usage. Two of the
most common are utilities I’ve already introduced: Task Manager and the Performance console.
Figure 3.7: The Performance page of WS2K3’s Task Manager allows you to view a domain controller’s
memory usage.
The Performance page in Task Manager contains eight informational panes. The first two are
CPU Usage and CPU Usage History. These two panes and the Totals pane all deal with usage on
the CPU, or processor. The remaining panes can be used to analyze the memory usage for the
domain controller and include the following:
• PF Usage—A bar graph that shows the amount of paging your domain controller is
currently using. This pane is one of the most useful because it can indicate when VMM is
paging memory too often and thrashing. Thrashing occurs when the OS spends more time
managing virtual memory than it does executing application code. If this situation arises,
you need to increase the amount of memory on the system to improve performance.
• Page File Usage History—A line graph that tracks the size of virtual memory over time.
The history for this pane is only displayed in the line graph and not recorded anywhere.
You can use this information to help determine whether there is a problem with virtual
memory over a longer period of time.
• Physical Memory—This pane tells you the total amount of RAM in kilobytes (KB) that
has been installed on your domain controller. This pane also shows the amount of
memory that is available for processes and the amount of memory used for system cache.
The amount of available memory will never go to zero because the OS will swap data to
the hard drive as the memory fills up. The system cache is the amount of memory used
for file cache on the domain controller.
• Commit Charge—This pane shows three numbers, which all deal with virtual memory on
the domain controller: Total, Limit, and Peak. The numbers are shown in kilobytes. Total
shows the current amount of virtual memory in use. Limit is the maximum possible size
of virtual memory. (This is also referred to as the paging limit.) Peak is the highest
amount of memory that has been used since the domain controller was started.
• Kernel Memory—Shows you the total amount of paged and non-paged memory, in
kilobytes, used by the kernel of the OS. The kernel provides core OS services such as
memory management and task scheduling.
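To make the Commit Charge numbers actionable, you can express Total as a percentage of Limit. The helper below is a simple arithmetic illustration, not a Windows API:

```python
def commit_charge_pct(total_kb, limit_kb):
    """Express the Commit Charge Total as a percentage of the Limit.

    Values approaching 100 mean virtual memory (physical RAM plus
    the paging file) is nearly exhausted.
    """
    return 100.0 * total_kb / limit_kb

# A Total of 1,100,000KB against a 1,280,000KB Limit:
print(round(commit_charge_pct(1_100_000, 1_280_000), 1))  # 85.9
```

A Total that consistently runs close to the Limit is a sign the domain controller needs more RAM or a larger paging file.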
I mentioned that you can easily and quickly check the memory usage on your domain controller
by using Task Manager. Task Manager allows you to see the amount of virtual memory in use.
Figure 3.8: Using the Available Bytes memory counter to monitor or track how much memory is left for users
or applications.
The Available Bytes counter shows the amount of physical memory available to processes
running on the domain controller. This counter displays the last observed value only; it isn’t an
average. It’s calculated by summing space on three memory lists:
• Free—Memory that is ready or available for use
• Zeroed—Pages of memory filled with zeros to prevent later processes from seeing data
used by a previous process
• Standby—Memory removed from the working set of a process and en route to disk but
still available to be recalled
If Available Bytes is constantly decreasing over a period of time and no new applications are
loaded, it indicates that the amount of working memory is growing, or it could signal a memory
leak in one or more of the running applications. A memory leak is a situation in which
applications or processes consume memory but don’t release it properly. To determine the
culprit, monitor each application or process individually to see whether the amount of memory it
uses constantly increases. Whichever application or process constantly increases memory
without decreasing it is probably the culprit.
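The "constantly decreasing" test can be expressed as a small heuristic over logged Available Bytes samples. The 0.9 drop threshold is an arbitrary illustration; tune it for your own servers:

```python
def looks_like_leak(available_bytes, min_drop=0.9):
    """Flag a steady decline in the Available Bytes counter.

    Returns True when every sample is at or below its predecessor and
    the last value has fallen below min_drop of the first. The 0.9
    threshold is an illustrative choice, not a documented rule.
    """
    falling = all(b <= a for a, b in zip(available_bytes, available_bytes[1:]))
    return falling and available_bytes[-1] < min_drop * available_bytes[0]

print(looks_like_leak([500, 480, 455, 430, 410]))  # True
print(looks_like_leak([500, 480, 510, 470, 490]))  # False
```

Running the same check against per-process memory samples is one way to single out the leaking application.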
Page-Fault Counters
When a process or thread requests data on a page in memory that is no longer there, a domain
controller issues a page fault. Here, the page has typically been moved out of memory to provide
memory for other processes. If the requested page is in another part of memory, the page fault is
a soft page fault. However, if the page has to be retrieved from disk, a hard page fault has
occurred. Most domain controllers can handle large numbers of soft page faults, but hard page
faults can cause significant delays.
Page-fault counters help you determine the impact of virtual memory and page faults on a
domain controller. These counters can be important performance indicators because they
measure how VMM handles memory:
• Page Faults/sec—Indicates the number of page faults without making a distinction
between soft page faults and hard page faults
• Page Reads/sec—Indicates the number of times the disk was read to resolve hard page
faults; this counter indicates the impact of hard page faults
• Pages Input/sec—Indicates the number of pages read from disk to resolve hard page
faults; this counter also indicates the impact of hard page faults
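Because Page Reads/sec counts the disk reads issued for hard faults, you can estimate what share of total faults hit the disk. This is an approximation rather than an exact measure:

```python
def hard_fault_share(page_faults_per_sec, page_reads_per_sec):
    """Approximate the fraction of page faults resolved from disk.

    Page Reads/sec counts disk reads issued to resolve hard faults,
    so the ratio is an approximation rather than an exact count.
    """
    if page_faults_per_sec == 0:
        return 0.0
    return page_reads_per_sec / page_faults_per_sec

# 400 faults/sec, of which disk reads resolved 12:
print(round(hard_fault_share(400, 12), 3))  # 0.03
```

A small share means mostly harmless soft faults; a large and sustained share points to the hard faults that actually slow the domain controller.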
Figure 3.9 illustrates how you can use System Monitor to track page-fault counters.
Figure 3.9: The Page Faults/sec, Page Reads/sec, and Pages Input/sec counters determine the impact of
virtual memory and paging.
If the numbers recorded by these counters are low, your domain controller is responding quickly
to memory requests. However, if the numbers are high and remain consistently high, it’s time to
add more RAM to the domain controller.
If a domain controller were perfect, the OS would have enough memory for every application that
was loaded and would never page memory out. Both the % Usage counter and the % Usage Peak
counter would be at zero. At the opposite extreme, the domain controller pages memory as fast as
possible, and the usage counters are high. An example of a bad situation is one in which your
domain controller has 128MB of memory, the % Usage Peak counter is at 80 percent, and the %
Usage counter is above 70 percent. In this situation, it’s fairly certain that your domain controller
will be performing poorly.
By default, Windows automatically creates a paging file on the system drive during installation.
Windows bases the size of the paging file on the amount of physical memory present on the
domain controller (in most cases, it’s between 768MB and 1536MB). In addition to this paging
file, I recommend that you create a paging file on each logical drive in the domain controller. In
fact, I recommend that you stripe the paging file across multiple physical hard drives, if possible.
Striping the paging file improves performance of both the file and virtual memory because
disk access can occur on multiple drives simultaneously.
The recommendation for using disk striping on the paging file works best with Small Computer
System Interface (SCSI) drives rather than those based on Integrated Device Electronics (IDE)
interfaces. The reason is that SCSI handles multiple device contention more efficiently than IDE and
tends to use less CPU power in the process. Also, I don’t recommend that you spread the paging file
across multiple logical drive volumes (partitions) located on the same physical drive. Doing so won’t
generally aid paging file performance—and it may actually hinder it.
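For reference, the default sizing behavior described above (a paging file between 768MB and 1536MB, based on physical memory) can be approximated as follows. The 1.5x factor is the commonly cited Windows default, stated here as an assumption rather than a documented guarantee:

```python
def default_pagefile_mb(ram_mb):
    """Approximate the default initial paging file size.

    Assumes the commonly cited 1.5x RAM default, clamped to the
    768MB-1536MB window described in the text.
    """
    return min(max(int(1.5 * ram_mb), 768), 1536)

print(default_pagefile_mb(512))   # 768
print(default_pagefile_mb(1024))  # 1536
```

Treat the result as a starting point; observed % Usage and % Usage Peak values should drive the final size.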
To change or set the virtual memory setting on your domain controller, right-click My Computer,
then choose Properties from the shortcut menu. In the System Properties dialog box, click the
Advanced tab, then click Performance Options. Notice that the Performance Options dialog box
allows you to see the current setting for Virtual Memory. Next, click Change to display more
information and to change the paging file settings.
Changing the paging file size or location is unfortunately one of those rare setting changes in WS2K3
that requires you to restart the domain controller before the change takes effect. Thus, if you decide
to change any settings for the paging file, do so during a scheduled maintenance time when it’s safe
to take the domain controller down and doing so won’t affect your users.
System Cache
In addition to tracking the amount of memory and virtual memory in the domain controller, you
need to keep an eye on the computer’s system cache settings. The system cache is an area in
memory dedicated to files and applications that have been accessed on the domain controller.
The system cache is also used to speed both file system and network input/output (I/O). For
example, when a user program requests a page of a file or application, the domain controller first
looks to see whether it’s in memory (system cache). The reason is that a page in cache responds
more quickly to user requests. If the requested information isn’t in cache, the OS fulfills the user
request by reading the file page from disk.
If the system cache isn’t large enough, bottlenecks will occur on your domain controller. The
Cache object in System Monitor and its counters help you understand caching in WS2K3. In
addition, several counters under the Memory object help you determine the amount of file cache.
Two of the counters that best illustrate how the file cache is responding to requests are:
• Copy Read Hits %—This counter is under the Cache object and tracks the percentage of
cache-copy read requests that are satisfied by the cache. The requests don’t require a disk
read to give the application access to the page. A copy read is a file-read operation that is
satisfied by a memory copy from a page in the cache to the application’s buffer.
• Cache Faults/sec—This counter is under the Memory object and tracks the number of
faults that occur when a page sought in the system cache isn’t found. The page must be
retrieved from elsewhere in memory (a soft fault) or from the hard drive (a hard fault).
When you’re considering using the Copy Read Hits % counter to assess file-cache performance, you
might also consider tracking the Copy Reads/sec counter, which measures the total number of Copy
Read operations per second. By assessing these numbers together, you’ll have a better sense of the
significance of the data provided by the Copy Read Hits % counter. For example, if this counter were
to spike momentarily without a corresponding jump (or perhaps even a decrease) in the number for
overall Copy Reads/sec, the data might not mean much. Ideally, you can identify a cache bottleneck
when there is a steady decrease in the Copy Read Hits % counter with a relatively flat Copy
Reads/sec figure. A steady increase in both counters, or an increase in Copy Read Hits % and a
relatively flat Copy Reads/sec, indicates good file cache performance.
Thus, the Copy Read Hits % counter records the percentage of successful file-system cache hits,
and the Cache Faults/sec counter tracks the number of file-system cache misses. Figure 3.10
shows these counters in System Monitor. Remember that one of the counters is a percentage and
the other is a raw number, so they won’t exactly mirror each other.
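The rule of thumb in the preceding note (a falling Copy Read Hits % against a flat Copy Reads/sec) can be sketched as a simple classifier. The trend test and thresholds are illustrative choices:

```python
def cache_trend(hit_pct, reads_per_sec):
    """Classify file-cache behavior from paired samples of the
    Copy Read Hits % and Copy Reads/sec counters (a rough heuristic).
    """
    hits_falling = hit_pct[-1] < hit_pct[0]
    # "Flat" here means within 10 percent of the first sample.
    reads_flat = (abs(reads_per_sec[-1] - reads_per_sec[0])
                  < 0.1 * max(reads_per_sec[0], 1))
    if hits_falling and reads_flat:
        return "possible cache bottleneck"
    return "cache looks healthy"

print(cache_trend([92, 88, 81, 74], [200, 205, 198, 202]))  # possible cache bottleneck
print(cache_trend([85, 87, 90, 91], [200, 210, 240, 260]))  # cache looks healthy
```

In production you would feed in longer sample windows from a Performance log rather than four points.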
Figure 3.10: The Copy Read Hits % and the Cache Faults/sec counters show how the domain controller’s
cache is responding.
Generally speaking, I recommend that a domain controller have at least an 80 percent cache hit
rate over time. If these two counters show that your domain controller has a low percentage of
cache hits and a high number of cache faults (misses), you may want to increase the total amount
of RAM. Increasing the RAM allows the domain controller to allocate more memory for system
cache and should increase the cache hit rate.
Figure 3.11: The Process Viewer utility allows you to view the processes and threads running on your
domain controller.
Using this utility, you can view the name of each process, the amount of time each process has
been running, the memory allocated to each process, and the priority of each process. You can
also view each thread that makes up a selected process. For each thread, you can see how long it
has been running, its priority, context switches, and starting memory address.
In addition to the information you see on the main screen, you can display the memory details for
a process. Figure 3.12 illustrates the Memory Details dialog box that is shown when you select a
process and then click Memory Detail.
Figure 3.12: Memory details for each process are displayed by clicking Memory Detail in Process Viewer’s
main window.
When using Process Viewer, you can stop or kill a process that is running on a domain controller by
selecting it and clicking Kill Process. However, be sure you understand the function and impact of
killing a process before doing so—the process might be vital to your domain controller’s functionality.
Worse yet, by killing a process, you can irrecoverably lose or corrupt data.
Figure 3.13: Windows’ Task Manager allows you to view and manage processes that are currently running on
the system.
This view provides a list of the processes that are running—their names, their process identifiers
(PIDs), the percentage of CPU processing they’re consuming, the amount of CPU time they’re
using, and the amount of memory they’re using. Notice the System Idle Process, which always
seems to be toward the top of the process list. This process is a special process that runs when the
domain controller isn’t doing anything else. You can use the System Idle Process to determine
how heavily the CPU is loaded because its value is the complement of the CPU Usage value on the
Performance tab. For example, if the CPU Usage value is 5, the System Idle Process value will
be 95. A high value for the System Idle Process means that the domain controller isn’t heavily
loaded, at least at the moment you checked.
Figure 3.14: The Select Columns dialog box in Task Manager allows you to monitor additional important
statistics about the processes that are running on your domain controller.
In addition, you can see which of these processes belong to an application. To do so, click the
Applications tab, right-click one of the applications on the Applications page, then click Go To
Process. Doing so will take you to the associated application’s process on the Process tab. This
feature helps you associate applications with their processes.
Highlighting a process in Task Manager, then clicking End Process, stops that process from running.
This feature is useful because it allows you to stop processes that don’t provide any other means of
being stopped. However, use this method only as a last resort because the process stops
immediately and doesn’t have a chance to clean up its resources. Using this method to stop
processes may leave domain controller resources unusable until you restart. It may also cause data
to be lost or corrupted.
Figure 3.15: The Computer Management utility allows you to view the processes that are running on your
domain controller as well as the path and file name information associated with each process.
Figure 3.16: The % Processor Time counter gives you the ability to view the amount of time that the processor
spends doing real work.
If the % Processor Time counter is consistently high, there may be a bottleneck on the CPU. I
recommend that this counter consistently stay below 85 percent. If it pushes above that value,
you need to find the process that is using a high percentage of the processor. If there is no
obvious CPU “hog,” consider adding another processor to the domain controller or reducing that
domain controller’s workload. Reducing the workload might involve stopping services, moving
databases, removing directory services, and so on.
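As a sketch of how you might automate that 85 percent rule against sampled counter values (the function name, threshold default, and sample figures are illustrative assumptions, not part of any Windows API):

```python
def cpu_bottleneck(samples, threshold=85.0):
    """True if the average % Processor Time across the samples
    exceeds the threshold, suggesting a sustained CPU bottleneck."""
    return sum(samples) / len(samples) > threshold

# % Processor Time sampled every 15 seconds (invented values)
print(cpu_bottleneck([92.0, 88.5, 90.1, 95.3, 87.9]))  # True: find the CPU hog
print(cpu_bottleneck([20.0, 35.5, 15.2, 42.0, 28.7]))  # False: CPU is fine
```

Averaging several samples rather than acting on a single spike is what makes the check reflect a "consistently high" condition.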
Interrupts/sec Counter
The Interrupts/sec counter measures the rate of service requests from the domain controller’s I/O
devices. This counter is the average number of hardware interrupts that the processor is receiving
and servicing each second. If this value increases without an associated increase in system
response, there could be hardware problems on one of the I/O devices. For example, a network
interface card (NIC) installed in the domain controller could go bad and cause an excessive
amount of hardware interrupts. To fix the problem, you need to replace the offending network
card’s driver or the physical card.
The Interrupts/sec counter doesn’t include deferred procedure calls; they’re counted separately.
Instead, this counter tracks the activity of hardware devices that generate interrupts, such as the
system clock, mouse, keyboard, disk drivers, NICs, and other peripheral devices. (For example,
the system clock interrupts the CPU every 10 milliseconds—ms.) When an interrupt occurs, it
suspends the normal thread execution until the CPU has serviced the interrupt.
During normal operation of the domain controller, there will be hundreds or thousands of
interrupts per second. System Monitor scales the counter down for display, by default to one-hundredth
of the actual value. Thus, if the domain controller has 560 interrupts in one second, the value is shown as 5.6 on the
graph. Figure 3.17 displays the Interrupts/sec counter using System Monitor.
Figure 3.17: The Interrupts/sec counter allows you to view the impact the hardware I/O devices have on the
performance of the domain controller.
In System Monitor, you can make changes to the graphic display. To do so, right-click anywhere on
the graph, then choose Properties from the shortcut menu. The System Monitor Properties dialog box
appears, containing several tabs that let you change the display and affect the graph and data. For example, if
you want to change the graph’s scale, click the Graph tab and change the Vertical Scale parameters.
To confirm the change, click Apply, then OK.
Unfortunately, it’s difficult to suggest a definite threshold for this counter because this number
depends on the particular processor type in use and the exact role and use of the domain
controller. I therefore recommend that you establish your own baseline for this counter and use it
as a comparison over time. Doing so will help you know when a hardware problem occurs.
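Because the right number varies by processor and role, the baseline comparison can be scripted. This is a minimal sketch with a hypothetical helper and invented sample figures; it flags a reading that sits well above the recorded baseline:

```python
from statistics import mean, stdev

def exceeds_baseline(current, baseline, k=3.0):
    """Flag a counter value more than k standard deviations above
    the mean of the baseline samples recorded during normal operation."""
    return current > mean(baseline) + k * stdev(baseline)

# Interrupts/sec baseline gathered while the DC behaved normally (invented)
baseline = [520, 560, 540, 555, 530, 545]
print(exceeds_baseline(570, baseline))   # within normal variation
print(exceeds_baseline(1800, baseline))  # suspicious: check I/O hardware
```

The same pattern works for any counter whose healthy range you've established over time.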
Figure 3.18: The Processor Queue Length counter indicates how congested the processor is.
I recommend that your domain controller not have a sustained Processor Queue Length of
greater than two threads. If the number of threads goes above two, performance slows, as does
responsiveness to the users. The domain controller shown in the figure could be in trouble,
especially if this type of activity is sustained. There are several ways to alleviate the slowdown:
you can replace the CPU with a faster processor, add more processors, or reduce the workload. In
some situations, the Processor Queue Length counter will increase if the system is paging heavily;
adding RAM could be the solution in this case. To
determine whether you need more RAM, monitor the paging counters.
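The triage logic described here, separating a true CPU bottleneck from heavy paging, can be sketched as follows (the function name and both thresholds are illustrative assumptions, not fixed rules):

```python
def diagnose_queue(queue_length, pages_per_sec, queue_limit=2, paging_limit=1000):
    """Rough triage: a long processor queue combined with heavy paging
    points at RAM, not the CPU."""
    if queue_length <= queue_limit:
        return "ok"
    return "add RAM" if pages_per_sec > paging_limit else "add/upgrade CPU"

print(diagnose_queue(1, 200))   # queue within the recommended limit
print(diagnose_queue(5, 3000))  # long queue plus heavy paging: memory
print(diagnose_queue(5, 100))   # long queue, little paging: processor
```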
Figure 3.19: The % Disk Time counter allows you to view how busy a physical disk drive is, and the % Idle
Time counter tracks the percentage of time a drive is idle.
Figure 3.19 shows that, as you might expect, % Disk Time and % Idle Time basically mirror
each other. I recommend that if the value for % Disk Time is consistently above 70 percent, you
consider reorganizing the domain controller to reduce the load. However, if the domain
controller is a database server, the threshold can go as high as 90 percent. The threshold value
depends on the type of server that has been implemented and what has caused the disk I/O. For
example, if VMM is paging heavily, it can drive up the % Disk Time counter. The simplest
solution is to add memory.
Figure 3.20: The Disk Reads/sec and the Disk Writes/sec counters show how the domain controller is
handling disk requests.
Using these counters, watch for spikes in the number of disk reads when your domain controller
is busy. If you have the appropriate amount of memory on your domain controller, most read
requests will be serviced from the system cache instead of hitting the disk drive and causing disk
reads. You want at least an 80 percent cache hit rate, which means that only 20 percent of read
requests are forced to the disk. This recommendation is valid unless you have an application that
reads a lot of varying data at the same time—for example, a database server is by nature disk-
intensive and reads varying data. Obtaining a high number of cache hits with a database server
may not be possible.
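The 80 percent goal can be checked from the raw numbers. This is a hypothetical helper; the arithmetic simply follows the definition above (reads served without touching the disk divided by total read requests):

```python
def cache_hit_pct(read_requests, disk_reads):
    """Percentage of read requests satisfied from the system cache,
    i.e. without causing a physical disk read."""
    return 100.0 * (read_requests - disk_reads) / read_requests

# 1,000 read requests during the interval, 150 of which hit the disk
print(f"{cache_hit_pct(1000, 150):.0f}% cache hits")  # 85%: above the goal
print(f"{cache_hit_pct(1000, 300):.0f}% cache hits")  # 70%: consider more RAM
```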
Figure 3.21: The Current Disk Queue Length counter represents the number of outstanding read and write
requests. Using this counter, you can monitor the performance of the queue for the disk drives.
If the disk drive is under a sustained load, this counter will likely be consistently high. In this
case, the read and write requests will experience delays proportional to the length of this queue
divided by the number of spindles on the disks. For decent performance, I recommend that the
value of the counter average less than 2.
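The queue-length-divided-by-spindles estimate works out as follows (a minimal sketch with invented figures):

```python
def per_spindle_queue(queue_length, spindles):
    """Outstanding requests per spindle; delays grow roughly in
    proportion to this value, so aim for an average below 2."""
    return queue_length / spindles

print(per_spindle_queue(8, 4))  # 2.0: borderline for a four-spindle array
print(per_spindle_queue(3, 4))  # 0.75: healthy
```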
Because gathering disk counters can cause a modest increase in disk-access time, WS2K3 doesn’t
automatically activate all the disk counters when it starts up. By default, the physical disk counters are
on, and the logical disk counters are off. The physical disk counters monitor the disk driver and how it
relates to the physical device. The logical disk counters monitor the information for the partitions and
volumes that have been established on the physical disk drives.
To start the domain controller with the logical disk counters on, you use the DISKPERF utility. At the
command prompt, type DISKPERF -YV. This sets the domain controller to gather counters for both
the logical disk devices and the physical devices the next time the system is started. For more
information about using the DISKPERF utility, type DISKPERF /? at the command prompt.
This counter allows you to monitor the performance of the disk drives as they start to fill. This
task is important because as a disk drive runs out of space, each write request takes longer to
find free space, so the drive completes less work in the same amount of time and overall disk
performance slows. As the drive fills, it works progressively harder to service requests; this
condition is often called thrashing. To minimize thrashing, leave at least 10 percent of the disk free.
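The 10 percent rule is easy to check programmatically. A minimal sketch, assuming all you need is the total and free byte counts (the path and sample numbers are illustrative):

```python
import shutil

def low_on_space(total_bytes, free_bytes, min_free_pct=10.0):
    """True when free space has dropped below the minimum percentage."""
    return 100.0 * free_bytes / total_bytes < min_free_pct

# Check a live volume via the standard library
usage = shutil.disk_usage("/")
print(low_on_space(usage.total, usage.free))

# Pure-number example: a 500 GB drive with only 30 GB (6%) free
print(low_on_space(500, 30))  # True: thrashing risk
```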
Figure 3.22: Task Manager provides a quick look at network utilization for each installed network adapter.
The advantage of the last two counters is that they break out the values for traffic sent and
received. I recommend that once you’ve monitored these counters, you compare the results with
your domain controller’s total network throughput. To do so, establish a baseline of data rates
and averages. Establishing a baseline allows you to know what to expect from the domain
controller. If a potential problem or bottleneck in network throughput occurs, you can recognize
it immediately because you can compare it against the baseline you’ve established.
You can also make some estimates as to where a bottleneck exists if you know the network and
bus speeds of the domain controller. If the data rate through the card is approaching the network
limit, segmenting and adding a card may help. If the aggregate data rate is approaching the bus
speed, it may be time to split the load for the domain controller and add another one or go to
clustering.
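That estimate can be sketched as a simple comparison of measured throughput against the link and bus capacities (the function name, capacity figures, and the 80 percent "approaching" cutoff are all assumptions for illustration):

```python
def bottleneck(nic_bytes_per_sec, link_capacity, bus_capacity, headroom=0.8):
    """Rough guess at the next bottleneck. All rates are in bytes/sec;
    'headroom' is the utilization fraction treated as 'approaching'."""
    if nic_bytes_per_sec > headroom * link_capacity:
        return "segment the network / add a NIC"
    if nic_bytes_per_sec > headroom * bus_capacity:
        return "split the load / cluster"
    return "ok"

# 100 Mbps link is roughly 12.5 MB/s; a PCI bus roughly 133 MB/s (nominal figures)
print(bottleneck(11_000_000, 12_500_000, 133_000_000))  # near the link limit
print(bottleneck(2_000_000, 12_500_000, 133_000_000))   # plenty of headroom
```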
Figure 3.23: The Bytes Total/sec, Bytes Sent/sec, and Bytes Received/sec counters allow you to monitor the
domain controller’s network adapter.
Other companies have taken a somewhat different approach. For example, NetPro offers
DirectoryTroubleshooter. This tool starts by aggregating an entire domain's worth of
performance data into one place and layering basic health information on top of it, but it also
looks beyond real-time performance statistics into configuration items that you might not
otherwise be exposed to. For example,
DirectoryTroubleshooter can let you know if certain domain configuration parameters are set
outside ranges typically used for best performance. The tool can also run a number of “best
practices” reports to highlight areas of your domain controllers that might be performing
acceptably but that could be reconfigured to run better or provide more server capacity.
DirectoryAnalyzer performs a related set of services, helping to filter through the general mass
of performance data and event log entries and create specific alerts that highlight what you need
to focus on to prevent and repair AD problems.
Other tools, such as Winternals’ Insight for Active Directory, come at troubleshooting from the
other direction. Insight provides a way to look at low-level AD activity, almost serving as a
“network monitor” for what’s going on inside of AD. Using Insight, you can see each and every
operation that AD performs in real-time. When troubleshooting complex problems such as
replication, for example, this ability is invaluable.
Make no mistake: You need to be looking at performance information, especially when
something goes wrong with AD. However, the distinction between information and raw data
is crucial. Third-party tools can often provide a level of aggregation and intelligence on top of
Windows’ built-in, data-driven tools to help you spot problems more quickly and solve them
more efficiently.
Windows’ built-in auditing functionality logs events to the Windows event logs (specifically, the
Security Log, because security is what auditing is designed to address). Thus, all of the earlier-
mentioned caveats about the event logs hold true, particularly the inability to easily pull all of those
events into one place.
Other tools exist to help answer the “Who changed what?” question. One is TripWire, a
configuration management tool that handles some basic AD configuration information. TripWire
is an excellent tool for determining who changes what on a single server; it’s not as capable
when it comes to the highly distributed AD. However, because many domain controller issues
result not from AD-specific changes but rather from per-server changes (someone removing a
memory module, for example), a tool such as TripWire can be a valuable troubleshooting aid. It
can even alert you when key configuration elements change, helping you spot problems before
they actually occur. Similar tools, such as Configuresoft Enterprise Configuration Manager
(ECM), can also help you better manage the configuration on single servers and can even enforce
a desired configuration state on those machines, helping to ensure that problems don’t occur due
to misconfigurations.
Summary
As a network administrator, a critical part of your job is making sure that each and every domain
controller hosting AD is functioning properly. To accomplish this task, you need to properly
monitor each of these Windows domain controllers, which, in turn, means watching over the
critical OS components and hardware subsystems.
To help you monitor a domain controller and its subsystems, Windows provides several utilities,
and this chapter discussed the most important ones: Task Manager, the Performance console, and
Event Viewer. Using these utilities and third-party tools, you can watch server resources and
subsystems in real time while they work to support the requests by users, applications, and other
servers.
Chapter 4
Third-Party Tools
It’s a shame that Windows doesn’t come with every tool you could possibly need to troubleshoot
AD. However, Windows' gaps in this area give third-party software vendors a rich
field to work with, and several vendors have created products designed to make troubleshooting
easier and more effective.
DirectoryAnalyzer
DirectoryAnalyzer from NetPro was one of the first AD monitoring tools on the market, and it
performs real-time monitoring and alerting on all aspects of the AD infrastructure. Instead of
monitoring individual domain controllers, it monitors the directory as a whole. It does so by
monitoring all domain controllers and directory processes at once as a background process. If a
problem occurs at any level in the directory, DirectoryAnalyzer alerts, or notifies, users. If the
problem is critical, the tool’s integrated knowledge base contains descriptions and
troubleshooting methods that will help you solve it.
DirectoryAnalyzer monitors the individual structures and components of AD—replication,
domains, sites, Global Catalogs (GCs), operations master roles, and DNS (inasmuch as it relates
to AD). Each of these components is vital to the operation of AD. DirectoryAnalyzer can
monitor and alert based on specific conditions and problems in each of the individual structures.
The alerts are then recorded at the DirectoryAnalyzer client or console for viewing.
Alerts have two levels of severity—warning and critical. Warning alerts indicate that a
predetermined threshold has been met in one of the directory structures. Warning alerts help you
identify when and where problems may occur. Critical alerts indicate that a predetermined error
condition has been met. Critical alerts are problems that need your immediate attention; if you
ignore them, you could lose directory functionality or even the directory itself.
By clicking Current Alerts under View Status in the left pane, you can display all of the alerts
with their associated type, time, and description. Figure 4.1 shows the Current Alerts screen in
DirectoryAnalyzer. The alerts have been recorded for the AD domain controllers, directory
structures, and directory processes.
Figure 4.1: DirectoryAnalyzer allows you to monitor the entire directory for problems.
You can also send alerts to enterprise management systems using Simple Network Management
Protocol (SNMP). Doing so allows you to integrate DirectoryAnalyzer alerts with management
consoles such as Hewlett-Packard’s HP OpenView and Tivoli. Alerts can also be recorded in the
event logs of the Windows system and viewed using the Event Viewer utility.
DirectoryAnalyzer logs all alert activity to a history database. You can export the database and
analyze alert activity over time using a variety of formats, such as Microsoft Excel, Hypertext
Markup Language (HTML), Dynamic HTML (DHTML), and Rich Text Format (RTF). You can
also identify trends in the data, finding cycles or periods of high and low alert activity.
ChangeAuditor needs to be installed on each domain controller for maximum effectiveness, but
events are collected into a central database for reporting and analysis.
DirectoryTroubleshooter
NetPro’s DirectoryTroubleshooter is a kind of super-performance monitor with built-in
intelligence. It monitors literally hundreds of AD-related configuration settings, performance
values, and other aspects of AD, and reports to you on potential problem areas (see Figure 4.3).
This functionality allows you to quickly focus your troubleshooting efforts on areas with a
known problem rather than shooting blind and spending hours analyzing areas of AD that aren’t
having a problem.
Figure 4.3: DirectoryTroubleshooter displays a great deal of AD configuration information in one window.
DirectoryTroubleshooter can also help fix problems. It includes a set of jobs, which can perform
tasks such as configure recovery options on a server, start AD defragmentation, troubleshoot the
File Replication System (FRS), and so forth. These jobs can be targeted to run on multiple
servers, helping automate troubleshooting and repair.
Figure 4.4: Insight for Active Directory displays real-time diagnostic information.
Pop-up “tool tips” display further explanations for messages, making AD’s otherwise difficult
LDAP traffic easier to translate. Using Insight, you can see exactly what AD is doing at any
given moment, and often spot problems in operations such as replication by interpreting AD’s
own internal traffic.
AppManager Suite
The AppManager Suite from NetIQ Corporation is a suite of management products that manages
and monitors the performance and availability of Windows. One of these management products
allows you to monitor the performance of AD. For example, AppManager verifies that
replication is occurring and is up-to-date for the directory by monitoring the highest Update
Sequence Number (USN) value for each domain controller. In addition, inbound and outbound
replication statistics are tracked, as are failed synchronization requests for the directory.
AppManager also allows you to monitor the number of directory authentications per second and
the cache hit rate of name resolution. Using this tool, you can monitor and track errors and
events for trust relationships. You can also log errors and events to enterprise management
systems using SNMP. Thus, SNMP traps are generated and routed to a configured network
manager.
In addition, you can run a set of prepackaged management reports that allow you to
further analyze current errors and events. You can also set up this utility to send email and pager
alerts when an event is detected.
Built-In Tools
In this section, I’ll discuss System Monitor, Event Viewer, and REPADMIN. These tools are
included with Windows and provide basic monitoring for key aspects of the OS and AD.
System Monitor
For the domain controller in AD, one of the main monitoring utilities is System Monitor. This
utility allows you to watch the internal performance counters that relate to the directory on the
domain controller. The directory performance counters are software counters that the developers
of AD have programmed into the system.
Using System Monitor, you can monitor current directory activity for the domain controller.
Once you’ve installed AD on a server, several performance counters—for replication activity,
DNS, address book, LDAP, authentication, and the database itself—measure the performance of
the directory on that computer.
Chapter 3 discussed how to launch and use System Monitor, so there is no need to repeat that
information. Instead, I’ll focus on how to use some of the more important performance counters
that are available for AD. Remember, System Monitor tracks all of its counters in real time. For
this reason, it is a good practice to establish a baseline of normal operation that you can compare
the real-time values against. When adding AD counters to System Monitor, if you don’t
understand the meaning of any counter, highlight it, then click Explain. The Explain Text dialog
box appears and provides a description of the counter.
You can also graph the performance counters and set alerts against them. The alerts will appear
in the Event Viewer.
Event Viewer
To view and analyze the events that have been generated by a Windows domain controller, you
can use the Event Viewer. This utility allows you to monitor the event logs generated by
Windows. By default, there are three event logs: the application log, the system log, and the
security log.
In addition, after you install AD, three more logs are created:
• Directory service log—Contains the events that are generated by AD on the domain
controller. You can use this log to monitor activity or investigate any directory problems.
By default, the directory records all critical error events.
• DNS server log—Contains the events generated by the DNS service installed on your
domain controller. For example, when the DNS service starts or stops, it writes a
corresponding event message to this log. More critical DNS events are also logged—for
example, if the service starts but cannot locate initializing data, such as zones or other
startup information stored in the domain controller’s registry or AD. The DNS log exists
only if the DNS service is running on the server. The DNS service typically runs on only
a few domain controllers in the forest.
• FRS log—Contains events generated by file replication on the domain controller. FRS is
a replication engine used to replicate files among different computers simultaneously. AD
uses this service to replicate Group Policy files among domain controllers.
Depending on how you configure your AD installation, you may have one or all of these logs on
your domain controller. Figure 4.5 shows the Event Viewer startup screen on a domain controller
on which AD with DNS has been installed.
Figure 4.5: The Event Viewer startup screen lists additional event logs that have been created for AD.
Replication Diagnostics
The Replication Diagnostics tool is simply referred to as REPADMIN. It’s a command-line
utility that allows you to monitor and diagnose the replication process and topology in AD. It
also provides several switches that you can use to monitor specific areas of replication. For
example, you can force replication among domain controllers and view the status.
During normal replication, the Knowledge Consistency Checker (KCC) manages and builds the
replication topology for each naming context on the domain controller. The replication topology
is the set of domain controllers that share replication responsibility for the domain. REPADMIN
allows you to view the replication topology as seen by the domain controller. If needed, you can
use REPADMIN to manually create the replication topology, although doing so isn’t usually
beneficial or necessary because the replication topology is generated automatically by the KCC.
You can also view the domain controller’s replication partners, both inbound and outbound, and
some of the internal structures used during replication, such as the metadata and up-to-date
vectors.
You can install the REPADMIN.EXE utility from the support tools folder on the Windows
installation CD-ROM. Running the SETUP program launches the Support Tools Setup wizard,
which installs this tool along with many other useful support tools to the Program Files\Support
Tools folder. Figure 4.6 shows the interface for REPADMIN (the Win2K version is shown; the
WS2K3 version works identically).
Figure 4.6: The REPADMIN utility allows you to view the replication process and topology.
For details about monitoring the hardware components of the domain controller, refer to Chapter 3.
Using DirectoryAnalyzer
Many third-party tools, such as those I discussed earlier, provide you with an easy way to
monitor all of the domain controllers in your forest from one management console. For example,
in DirectoryAnalyzer, click Browse Directory By Naming Context; the directory hierarchy is
displayed. If you expand the naming contexts, you see all of the associated domain controllers.
To see the alerts for just one domain controller, select a domain controller object, then click
Current Alerts. The alerts that are displayed have exceeded a warning or critical threshold and
show the severity, subject, associated type, time, and description. Figure 4.7 shows an example
of using DirectoryAnalyzer to view all alerts for each domain controller.
Figure 4.7: DirectoryAnalyzer allows you to monitor all the domain controllers in your forest for problems
and see the alerts that have been recorded for each domain controller.
To see the alerts and other information for each domain controller, you can also use the Browse
Directory By Site option. It allows you to browse the directory layout according to sites and their
associated domain controllers. In addition, it permits you to view the status of each site and the site
links.
DirectoryAnalyzer is an extremely useful utility because it monitors all of the domain controllers
in the AD forest as a background process and allows you to periodically view the results. It also
monitors the most critical directory structures and processes—for example, the configuration and
activity for the domain partitions, GC partitions, FSMO roles, sites, DNS, the replication
process, and the replication topology.
In addition to viewing the alerts from the domain controllers, you can click any alert and see a
more detailed description of the problem. If you don’t understand the alert, you can double-click
it; the Alert Details dialog box will appear and provide more description, as Figure 4.8 shows.
Figure 4.8: DirectoryAnalyzer provides more information about an alert in the Alert Details dialog box.
Once you’ve been notified of the alert and viewed more information about it in the Alert Details
dialog box, you can use the integrated knowledge base to help resolve the problem. The
knowledge base provides you with a detailed explanation of the problem, helps you identify
possible causes, then helps you remedy or repair the problem. To access the knowledge base,
click More Info in the Alert Details dialog box or choose Help, Contents in the console. Figure
4.9 shows an example of the information available in the knowledge base.
Figure 4.9: DirectoryAnalyzer’s in-depth knowledge base helps you find solutions to problems in AD.
Domain controllers are the workhorses of AD. They manage and store the domain information
and accept special functions and roles. For example, a domain controller can store a domain
partition, store a GC partition, and be assigned as a FSMO role owner. Domain controllers, in
turn, allow the directory to manage user interaction and authentication and oversee replication to
the other domain controllers in the forest.
In addition to displaying alerts for each domain controller, DirectoryAnalyzer displays detailed
configurations. For example, when you choose Browse Directory By Naming Context, you see
several icons for each domain controller. An icon that includes a globe indicates that the domain
controller stores a GC partition. When an icon displays small triangles, it indicates that the
domain controller is also providing the DNS service. An icon that displays both a globe and
small triangles indicates that the domain controller hosts both a GC partition and the DNS service.
If you select a domain controller and then click the DC Information tab, you can view detailed
information about how the domain controller is operating and handling the directory load. Figure
4.10 shows the DC Information pane in DirectoryAnalyzer.
Figure 4.10: You can view detailed information about a domain controller using the DC Information pane in
DirectoryAnalyzer.
DirectoryAnalyzer provides a high-level summary of how each domain and its associated
domain controllers are functioning. Click Browse Directory By Naming Context to see a high-
level status of all the domain controllers in a domain. To view the status for a particular domain,
select it, then click the DC Summary tab. Figure 4.11 shows the DC Summary pane, which uses
green, yellow, and red icons to indicate the status of each domain controller in a domain.
Figure 4.11: The DC Summary pane in DirectoryAnalyzer provides a high-level status of all domain
controllers in a domain.
You can also quickly view where the domain controller resides, whether it is a GC server, and
who manages the computer. If any of the domain controllers aren’t showing a green (clear) status
icon, there is a problem that you need to investigate and fix.
Table 4.1: A few of the NTDS performance counters that allow you to track how a domain controller is
responding to replication traffic, LDAP traffic, and authentication traffic.
NTDS counters enable you to monitor the performance of AD for the selected domain controller.
You can view these counters under the NTDS object in System Monitor (see Figure 4.12). By
default, System Monitor is started when you choose Start, Administrative Tools, Performance
Console.
Figure 4.12: NTDS performance counters allow you to monitor and track load and performance of the AD
implementation on each domain controller.
Using DirectoryAnalyzer
DirectoryAnalyzer allows you to monitor the alerts for each domain in AD and the associated
domain controllers. These alerts monitor the domain controllers, replicas, Group Policies, trust
relationships, DNS, and other activity for a domain. If you see any critical alerts, you need to
investigate and fix the problems.
To view the alerts for a domain, click Browse Directory By Naming Context. Select a domain,
then click the Current Alerts tab. The display shows the current alerts for that domain (see Figure
4.13).
Figure 4.13: DirectoryAnalyzer allows you to monitor each domain partition for problems.
In addition to displaying alerts for each domain, DirectoryAnalyzer allows you to view
configuration information. Using the Naming Context Information tab, you can view the current
number of alerts that are active for the following areas: Naming Context (or Domain), Replica,
DNS Server, and DC Server.
The Naming Context Information tab also displays the number of domain controllers for the
domain and whether the domain supports mixed mode. When a domain supports mixed mode, it
allows replication and communication with down-level domain controllers and clients to occur.
In addition, you can see which domain controllers in the domain are performing the FSMO roles
and perform a FSMO consistency check. And finally, you can view all the trust relationships that
exist for the domain. Figure 4.14 shows the Naming Context Information pane in
DirectoryAnalyzer.
Figure 4.14: The Naming Context Information pane in DirectoryAnalyzer allows you to see detailed
information for a domain.
If necessary, you can relocate the NTDS.DIT database on a domain controller using the NTDSUTIL
utility, which is pre-installed.
AD's underlying database engine, the Extensible Storage Engine (ESE), provides a set of
database performance counters that allow you to monitor the database in depth. These counters
provide information about the performance of the database cache, database files, and database
tables, and they help you monitor and determine the health of the database on each domain
controller. By default, database performance counters aren't installed on domain controllers.
You can view and monitor database counters using the System Monitor utility. Table 4.2 gives
you a general description of the more useful database performance counters and explains how to
use them to track the activity of the low-level database on each domain controller.
• Cache % Hits—Tracks the percentage of database page requests that were serviced from
memory; a cache hit is a request that is serviced from memory without causing a file-read
operation. This counter indicates how database requests are performing. Its value should
be at least 90 percent. If it's lower, database requests are slow for the domain controller,
and you should consider adding physical memory to create a larger cache.
• Cache Page Faults/sec—Tracks the number of requests per second that cannot be serviced
because no pages are available in the cache; when no pages are available, the database
cache manager allocates new pages for the database cache. This counter indicates how the
database cache is performing. I recommend that the computer have enough memory to
always cache the entire database, so the value of this counter should be as low as
possible. If the value is high, you need to add more physical memory to the domain
controller.
• File Operations Pending—Tracks the number of pending requests issued by the database
cache manager to the database file; the value is the number of read and write requests
that are waiting to be serviced by the OS. This counter indicates how the OS handles
read/write requests to the AD database. I recommend that the value for this counter be as
low as possible. If the value is high, you need to add more memory or processing power
to the domain controller. This condition can also occur if the disk subsystem is
bottlenecked.
• File Operations/sec—Tracks the number of requests per second issued by the database
cache manager to the database file; the value is the read and write requests per second
that are serviced by the OS. This counter indicates how many file operations have
occurred for the AD database. The value should be appropriate for the purpose of the
domain controller. If you think that the number of read and write operations is too high,
add memory or processing power to the computer; adding memory for the file system
cache reduces file operations.
• Table Open Cache Hits/sec—Tracks the number of database tables opened per second by
using the cached schema information. This counter indicates how the AD database is
performing. The value should be as high as possible for good performance; if the value is
low, you may need to add more memory.
Table 4.2: Some of the more useful database performance counters, which allow you to monitor the database
for the domain partition that stores all of the AD objects and attributes.
Once users query and retrieve the distinguished name from the GC, they can issue a search on
their local domain controller, and LDAP will chase the referrals to the domain controller that
stores the real object information. In addition, universal group membership is stored in the GC.
Because universal groups can deny access to resources, a user's membership in these groups must
be discovered during logon to build the logon access token. The requests made to the GC are
automatic and not seen by the user.
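The logon step described above can be sketched as follows. The data structures and names here are invented for illustration; real token building goes through the OS security subsystem and queries the GC over LDAP, not a dictionary lookup:

```python
# Sketch: why universal group membership must come from the GC at logon.
# The group tables and names below are hypothetical.

domain_groups = {"alice": {"Domain Users", "Sales"}}          # known to the local DC
gc_universal_groups = {"alice": {"AllEmployees-Universal"}}   # known only to a GC

def build_logon_token(user):
    # The local DC supplies domain group memberships...
    groups = set(domain_groups.get(user, set()))
    # ...but universal groups must be looked up in the Global Catalog,
    # because a universal group could deny access to a resource.
    groups |= gc_universal_groups.get(user, set())
    return groups

print(build_logon_token("alice"))
```

If the GC lookup were skipped, the token would omit "AllEmployees-Universal", and any deny placed on that group would not be enforced.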
You can use DirectoryAnalyzer to monitor the GC partition and how it’s performing. It monitors
and tracks the following conditions:
• Domain Controller: Global Catalog Load Too High—Indicates that the domain controller
that stores the GC partition has too much traffic. This traffic is LDAP traffic coming from
workstations and servers.
• Domain Controller: Global Catalog Response Too Slow—Indicates that the domain
controller that stores the GC partition isn’t responding in time to queries and other traffic.
• Replica: GC Replication Latency Too High—Indicates that replication is taking too long
to synchronize the GC stored on the domain controller. If replication latency (the time it
takes to replicate changes to all GCs in the forest) is too high, an alert is generated.
• Site: Too Few Global Catalogs in Site—Indicates that there aren’t enough GC servers in
the site.
Figure 4.15 shows how DirectoryAnalyzer monitors and tracks alerts for the GC.
Figure 4.15: DirectoryAnalyzer allows you to monitor the GC partition that exists on various domain
controllers throughout the forest.
Two operations masters manage forest-wide operations and therefore hold forest-specific FSMO
roles:
• Schema master—Responsible for schema extensions and modifications in the forest
• Domain naming master—Responsible for adding and removing domains in the forest
Three operations masters manage domain operations and therefore hold domain-specific FSMO roles:
• Infrastructure master—Updates group-to-user references in a domain
• RID master—Assigns unique security IDs in a domain
• PDC emulator—Provides PDC support for down-level clients in a domain.
The three domain-specific FSMO roles exist in every domain. Thus, an AD forest with a total of 3
domains would have 11 FSMO roles in all: 9 domain-specific roles and 2 forest-wide roles.
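The role arithmetic generalizes to any forest: three domain-specific roles per domain plus the two forest-wide roles. A trivial sketch:

```python
# Total FSMO roles in a forest: three per domain plus the two
# forest-wide roles (schema master and domain naming master).
def total_fsmo_roles(domains: int) -> int:
    return 3 * domains + 2

print(total_fsmo_roles(3))  # 11 roles for a three-domain forest, as above
```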
Because there is only one of each of the forest-specific FSMO roles, it’s extremely important that
you constantly monitor and track the activity and health of the operations masters. If any of them
fail, the directory loses functionality until the computer is restarted or another appropriate
domain controller is assigned the role.
To monitor operations masters, you can use DirectoryAnalyzer. It monitors, checks the status of,
and alerts on several types of conditions and situations relating to operations masters, such as
which domain controllers are holding the FSMO roles. Click Browse Directory By Naming
Context, and click the Naming Context Information tab. Under Operations Master Status, you
see which domain controller is holding which FSMO role. Figure 4.16 shows the status of the
FSMO roles in the Naming Context Information pane.
Figure 4.16: DirectoryAnalyzer displays which domain controllers are holding which FSMO roles for the
naming context.
You can also use the Naming Context Information pane (shown in Figure 4.13) to check the
consistency of the FSMO roles across all of the domain controllers on the network.
DirectoryAnalyzer monitors what each domain controller reports for the FSMO assignments. If
not all of the domain controllers report the same values for all of the operations masters, the
word No appears beside Operations Master Consistent.
To investigate the problem, click Details. The Operations Master Consistency dialog box
appears, indicating that operations master information is inconsistent. It displays the names of
the domain controllers and which domain controller holds each role. In Figure 4.17, the domain
controller COMP-DC-04 has inconsistent information about the true owner of the PDC emulator
role because it shows domain controller COMP-DC-01 as the owner when it should be COMP-
DC-03. Thus, the owner of the PDC operations master is inconsistent.
Figure 4.17: DirectoryAnalyzer allows you to monitor and check consistency for each operations master.
In addition to showing the status and consistency checks, DirectoryAnalyzer monitors and
displays alerts for each operations master. The alerts that are monitored and tracked provide
information about the availability of the FSMO roles. To monitor the availability of the FSMO
role holders, you can click Current Alerts in the bar to the side of the main screen. To display the
alerts for a domain or each domain controller, click Browse Directory By Naming Context.
The alerts indicate that the domain controller that holds the operations master isn’t responding.
This lack of response could mean that the domain controller and AD are down and not
responding. It could also mean that the domain controller no longer has network connectivity,
which could indicate DNS or Internet Protocol (IP) addressing problems. Finally, this alert could
simply mean that the domain controller or the directory that is installed is overloaded and
responding too slowly. Figure 4.18 shows how DirectoryAnalyzer monitors and tracks alerts for
each operations master.
Figure 4.18: DirectoryAnalyzer monitors and tracks the availability of each FSMO role holder.
Monitoring Replication
AD is a distributed directory made up of one or more naming contexts, or partitions. Partitions
are used to distribute the directory data on the domain controllers across the network. The
process that keeps partition information up to date is called replication. Monitoring replication is
critical to the proper operation of the directory. Before I discuss how to monitor replication,
however, I need to describe what it is and how it works.
In AD, replication is a background process that propagates directory data among domain
controllers. For example, if an update is made to one domain controller, the replication process is
used to notify all of the other domain controllers that hold copies of that data. In addition, the
directory uses multimaster replication, which means that there is no single source (or master)
that holds all of the directory information. Through multimaster replication, changes to the
directory can occur at any domain controller; the domain controller then notifies the other
servers.
Because AD is partitioned, not every domain controller needs to communicate or replicate with
each other. Instead, the system uses a set of connections that determines which domain
controllers need to replicate to ensure that the appropriate domain controllers receive the updates.
This approach reduces network traffic and replication latency (the time to replicate a change to
all replicas). The set of connections used by the replication process is the replication topology.
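As a simplified sketch of a connection set, consider a ring of one-way inbound connections, which is roughly how the KCC seeds an intra-site topology (the real KCC also adds extra connections to cap the number of replication hops; this version is illustrative only):

```python
# Sketch: a minimal replication topology as a ring of one-way inbound
# connections, rather than a full mesh. Simplified; the real KCC also
# adds shortcut connections to limit replication hops.

def ring_topology(dcs):
    """Return {dc: source} pairs: each DC gets one inbound connection."""
    return {dc: dcs[i - 1] for i, dc in enumerate(dcs)}

dcs = ["DC1", "DC2", "DC3", "DC4"]
topology = ring_topology(dcs)
print(topology)       # each DC pulls from its predecessor in the ring
print(len(topology))  # 4 connections instead of 12 for a full mesh
```

The point of the sketch is the traffic reduction: n domain controllers need only n ring connections, not the n×(n−1) a full mesh would require.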
Schema Partition
The schema partition contains the set of rules that defines the objects and attributes in AD. This
set of rules is used during creation and modification of the objects and attributes in the directory.
The schema also defines how the objects and attributes can be manipulated and used in the
directory.
The schema partition is global; thus, every domain controller in the forest has a copy, and these
copies need to be kept consistent. To provide this consistency, the replication process in the
directory passes updated schema information among the domain controllers to the copies of the
schema. For example, if an update is made to the schema on one domain controller, replication
propagates the information to the other domain controllers, or copies of the schema.
Configuration Partition
The configuration partition contains the objects that define the logical and physical structure of
the AD forest. These objects include sites, site links, trust relationships, and domains. Like the
schema partition, the configuration partition exists on every domain controller in the forest and
must be exactly the same on each.
Because the configuration partition exists on every domain controller, each computer has some
knowledge of the physical and logical configuration of the directory. This knowledge allows
each domain controller to efficiently support replication. In addition, if a change or update is
made to a domain controller and its configuration partition, replication is started, which
propagates the change to the other domain controllers in the forest.
Domain Partition
The domain partition contains the objects and attributes of the domain itself. This information
includes users, groups, printers, servers, organizational units (OUs), and other network resources.
The domain partition is copied, or replicated, to all of the domain controllers in the domain. If
one domain controller receives an update, it needs to be able to pass the update to other domain
controllers holding copies of the domain.
A read-only subset of the domain partition is replicated to GC servers in other domains so that
other users can access its resources. This setup allows the GC to know what other objects are
available in the forest.
As I’ve mentioned, the values in the up-to-dateness vector can determine which updates need to
be sent to the destination domain controller. For example, if the destination domain controller
already has an up-to-date value for an object or attribute, the source domain controller doesn’t
have to send the update for it. To view the contents of the up-to-dateness vector for any domain
controller, type the following command at a command prompt:
REPADMIN /showvector <NC name>
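The filtering that the up-to-dateness vector makes possible can be sketched like this. The data shapes are simplified for illustration; real vectors track update sequence numbers (USNs) per originating domain controller:

```python
# Sketch: using an up-to-dateness vector to skip updates the destination
# already has. The USNs and vector layout are simplified for illustration.

def updates_to_send(source_updates, dest_utd_vector):
    """source_updates: list of (originating_dc, usn, change).
    dest_utd_vector: highest USN the destination has seen per originating DC."""
    return [u for u in source_updates
            if u[1] > dest_utd_vector.get(u[0], 0)]

updates = [("DC1", 101, "user added"), ("DC1", 205, "password set"),
           ("DC2", 17, "group renamed")]
vector = {"DC1": 150, "DC2": 17}   # destination is current through these USNs

print(updates_to_send(updates, vector))  # only ("DC1", 205, ...) is sent
```

The destination already holds everything up through USN 150 from DC1 and USN 17 from DC2, so only the one newer change crosses the wire.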
To help resolve conflicts during replication, AD attaches a unique stamp to each replicated value.
Each stamp is replicated along with its corresponding value. To ensure that all conflicts can be
resolved during replication, the stamp is compared with the current value on the destination
domain controller. If the stamp of the value that was replicated is larger than the stamp of the
current value, the current value (including the stamp) is replaced. If the stamp is smaller, the
current value is left alone.
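Tuple comparison models this "larger stamp wins" rule directly. In AD, a stamp consists of a version number, an originating timestamp, and the originating domain controller's GUID, compared in that order; the values below are invented for illustration:

```python
# Sketch: "larger stamp wins" conflict resolution. A stamp here is
# (version, originating timestamp, originating DC GUID), compared in
# that order, which Python tuple comparison models directly.

def resolve(current, incoming):
    """Each argument is (stamp, value); the larger stamp wins."""
    return incoming if incoming[0] > current[0] else current

current  = ((2, 1000, "dc-guid-a"), "Building 1")
incoming = ((3,  900, "dc-guid-b"), "Building 2")  # higher version wins

print(resolve(current, incoming)[1])
```

Note that the higher version wins even though its timestamp is older; the timestamp and GUID only break ties at the earlier positions.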
Although you can disable the KCC and create connection objects by hand, I strongly recommend
that you use the KCC to automatically generate the replication topology. The reason is that the KCC
simplifies a complex task and has a flexible architecture, which reacts to changes you make and any
failures that occur.
However, if your organization has more than 100 sites, you may need to manually create the
replication topology; in environments that have more than 100 sites, the KCC doesn’t scale well. In
extremely large organizations, the KCC will often spend too much time and processing power trying
to calculate the replication topology, with the result that the topology will never be properly generated
and replication won’t work properly between sites.
The KCC uses the following components to manage the replication topology:
• Connections—The KCC creates connection objects in AD that enable the domain
controllers to replicate with each other. A connection is defined as a one-way inbound
route from one domain controller to another. The KCC manages the connection objects
and reuses them where it can, deletes unused connections, and creates new connections if
none exist.
• Servers—Each domain controller in AD is represented by a server object. The server has
a child object called NTDS Settings, which stores the server's inbound connection objects
from source domain controllers. Connection objects are created in two
ways—automatically by the KCC or manually by an administrator.
• Sites—The KCC uses sites to define the replication topology. Sites define the sets of
domain controllers that are well connected in terms of speed and cost. When changes
occur, the domain controllers in a site replicate with each other to keep AD synchronized.
If the domain controllers are local (intra-site topology), replication starts as needed—with
no concern for speed or cost—within 5 minutes of an update occurring. If the two domain
controllers are separated by a low-speed network connection (inter-site topology),
replication is scheduled as needed. Inter-site replication occurs only on a fixed schedule,
regardless of when updates occur.
• Subnets—Subnets help the KCC identify groups of computers and domain controllers
that are physically close or on the same network.
• Site links—Site links must be established among sites so that replication among sites can
occur. Unless a site link is in place, the KCC cannot automatically create the connections
among sites, and replication cannot take place. Each site link contains the schedule that
determines when replication can occur among the sites that it connects.
• Bridgehead servers—The KCC automatically designates a single server for each naming
context, called the bridgehead server, to communicate across site links. You can also
manually designate bridgehead servers when you establish each site link. Bridgehead
servers perform site-to-site replication; in turn, they replicate to the other domain
controllers in each site. Using this method, you can ensure that inter-site replication
occurs only among designated bridgehead servers. Thus, bridgehead servers are the only
servers that replicate across site links, and the rest of the domain controllers are updated
within the local sites.
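The scheduling difference between the intra-site and inter-site topologies can be sketched as follows. The 5-minute delay comes from the text; the schedule check is a hypothetical simplification:

```python
# Sketch: when replication of a pending update may start. Intra-site
# replication fires shortly after a change; inter-site replication waits
# for the site link's schedule regardless of when the update occurred.

INTRA_SITE_DELAY_MIN = 5  # from the text: within 5 minutes of an update

def can_replicate(same_site, minutes_since_update, link_schedule_open):
    if same_site:
        return minutes_since_update >= INTRA_SITE_DELAY_MIN
    return link_schedule_open  # inter-site: only on the fixed schedule

print(can_replicate(True, 6, False))     # intra-site, delay elapsed
print(can_replicate(False, 600, False))  # inter-site, link schedule closed
```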
Using DirectoryAnalyzer
DirectoryAnalyzer allows you to monitor replication among domain controllers and report any
errors or problems. It allows you to track the following problems and issues:
• Replication Cycle—The time during which the requesting domain controller receives
updates from one of its replication neighbors. You can view the successful replication
cycle as well as any errors that occurred during that time.
• Replication Latency—The elapsed time between an object or attribute being updated
and the change being replicated to all the domain controllers that hold copies. If
replication latency is too high, DirectoryAnalyzer issues an alert.
• Replication Topology—The paths among domain controllers used for replication.
DirectoryAnalyzer evaluates whether the topology is transitively closed (meaning that it
doesn't matter on which domain controller an update occurs); a transitively closed
topology ensures that an update is replicated to all other domain controllers.
• Replication Failures—Occur when a domain controller involved in replication doesn’t
respond. Each time there are consecutive failures from the same domain controller, an
alert is issued. Many things can cause failures—for example, a domain controller may be
too busy updating its own directory information from a bulk load.
• Replication Partners—Sets of domain controllers that replicate directly with each other.
DirectoryAnalyzer monitors domain controllers and pings them to make sure that each is
still alive and working. If a replication partner doesn’t respond, an alert is issued.
• Replication Conflict—Occurs when two objects or attributes are created or modified at
exactly the same time on two domain controllers on the network. AD resolves this
conflict automatically, and DirectoryAnalyzer issues an alert so that you’ll know that one
of the updates was ignored by replication.
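Two of the alert conditions above can be modeled roughly like this. The threshold values are arbitrary examples, not DirectoryAnalyzer's actual (configurable) thresholds:

```python
# Sketch: two of the replication alert conditions described above.
# The threshold numbers are illustrative only.

LATENCY_LIMIT_MIN = 60  # alert if latency exceeds this many minutes
FAILURE_LIMIT = 3       # alert after this many consecutive failures

def replication_alerts(latency_min, consecutive_failures):
    alerts = []
    if latency_min > LATENCY_LIMIT_MIN:
        alerts.append("Replication Latency Too High")
    if consecutive_failures >= FAILURE_LIMIT:
        alerts.append("Replication Failures")
    return alerts

print(replication_alerts(90, 1))
```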
DirectoryAnalyzer is a unique utility because it allows you to browse AD for information on, for
example, the replication cycle and replication partners. Figure 4.19 shows the Replication
Information pane, which displays the last successful replication cycle for each domain controller,
replication partners, and any errors that occurred during replication.
Figure 4.19: DirectoryAnalyzer allows you to view the replication cycle and replication partners for each
domain controller.
Using DirectoryAnalyzer, you can monitor and track the replication process for errors. If a
problem occurs, the utility will issue an alert to indicate what type of problem has occurred. You
can double-click the alert to see more detailed information, then use the knowledge base to find
troubleshooting methods to help you solve the problem. The Current Alerts screen displays the
more recent alerts that have been logged for replication (see Figure 4.20).
Figure 4.20: The Current Alerts screen in DirectoryAnalyzer allows you to view the most recent alerts for the
replication process.
You can also view the replication-related alerts that have been stored in the Alert History file in
DirectoryAnalyzer. To display these alerts, on the Current Alerts screen, choose Reports, Alert
History. On the Report page, select one of the report options to specify what alerts you want to
include. Then select Preview to display the report on the screen. You can print the report or
export it to a file. Figure 4.21 illustrates an Alert History report.
Figure 4.21: Using DirectoryAnalyzer, you can produce a report of replication-related alerts.
Setting up Auditing
Auditing is controlled on a per-object basis within the domain. To view this feature, you can, for
example, open Active Directory Users and Computers, and right-click a domain. Select
Properties from the context menu, select the Security tab, then click Advanced, and select the
Auditing tab.
If the Security tab isn’t visible, ensure that Advanced Features is selected on the console’s View
menu, then try again.
As Figure 4.22 shows, you can define which actions generate audit messages. To fully enable
auditing, audit all success and failure events for the special Everyone group. Although doing so
will produce the maximum amount of useful information for troubleshooting, it will create a log
entry for practically every event that occurs within AD. This volume of events can quickly
overfill the Security event log and create more data than you can readily utilize—a downside of
AD’s auditing capabilities.
Figure 4.23: Viewing auditing events in the Security event log viewer.
For example, suppose you were having a problem with replication to a specific site. There are, of
course, a number of potential causes for this problem. However, if replication used to work, you
can expect that one or more changes occurred to create the problem. Such a change might be as
simple as a loss of network connectivity (such as a WAN link that's down), or it might be a
configuration change. To begin the troubleshooting process, ChangeAuditor
gives you a good first place to look; as Figure 4.24 shows, you can quickly spot configuration
changes—such as the addition of a new site link—which might be having an impact on your
environment’s AD replication.
Figure 4.24: Spotting configuration changes can lead to a solution more quickly.
ChangeAuditor can also spot changes to the registry and file system. Windows' built-in auditing
functionality can log these changes, but they can be incredibly difficult to detect in the
enormous mass of data that the Security log would contain if you audited events related to those
OS components.
Summary
Before you can accurately troubleshoot AD, you must be able to effectively monitor it for
problems. Thus, you must be able to monitor the directory that has been distributed across
domain controllers on the network. You can do so by using the monitoring tools described in this
chapter. These tools allow you to watch the directory components individually and as they
interact with each other. For example, you can monitor the domain controllers, the domain
partition, the GC partition, the operations masters, and the replication process and topology.
Monitoring these components ensures the health of the directory as a system. In addition, you
can use auditing tools to perform effective and efficient troubleshooting of AD problems—
saving time and energy that can be better spent on other administrative tasks.
Chapter 5
Figure 5.1: A red X on the Local Area Connection icon indicates that the network cable is disconnected from
your domain controller.
Figure 5.2: An unsuccessful TCP/IP configuration and network connection, shown by using IPCONFIG.
Listing 5.1 shows a well-connected LAN. Notice that the IP addresses are displayed with
appropriate values.
C:\> ipconfig /all
Windows Server 2003 IP Configuration
Host Name . . . . . . . . . . . . : cx266988-S
Primary DNS Suffix. . . . . . . . : company.com
Node Type . . . . . . . . . . . . : Hybrid
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
Search List . . . . . . . . . . . : company.com
If you want to save the results of running IPCONFIG for further analysis, you can capture the
results in a text file. At the command line, enter the following command:
ipconfig /ALL > <local_drive>:\<text_file.txt>
There are many advanced features and switches available with IPCONFIG. To view the available
switches, enter the following command at the command line:
IPCONFIG /?
If everything looks normal when you run IPCONFIG, go on to test the TCP/IP connection.
Next, ping the IP address of the remote domain controllers on the remote subnet as follows:
PING <remote_domain_controller1_address>
PING <remote_domain_controller2_address>
PING <remote_domain_controller3_address>
In the PING statements, the remote domain controller address is represented as the domain name
(that is, REMOTE.COMPANY.COM) or the IP address of the domain controller (that is,
20.0.0.20). If the PING command fails, verify the address of each remote domain controller and
check whether each remote domain controller is operational (generally, you’ll also want to ping
the IP address directly, to ensure that name resolution isn’t causing the ping to fail). In addition,
check the availability of all of the gateways or routers between your domain controller and the
remote one.
In addition to pinging the domain controllers, you need to ping the IP address of the DNS server.
If this command fails, verify that the DNS server is operational and the address is correct.
Figure 5.3: Running the domain controller connectivity test to troubleshoot the communication path among
domain controllers in the forest.
After the test is completed, the results are displayed at the bottom of the dialog box:
• Destination—Shows the name of each destination domain controller you selected.
• Test—Shows the type of test that was performed. The type of test varies according to the
services that have been assigned to the domain controller.
• Time—Shows the amount of time (in milliseconds) it took to perform each test. If a test
is performed in less than 10 milliseconds, it’s displayed as < 10 ms; otherwise, the actual
time is displayed.
• Result—Shows whether a test was successful. If the test failed, this column displays a
brief description of why.
Figure 5.4: Running the domain connectivity test to troubleshoot the communication between the domain
controller in the source domain and the domain controllers in the destination domain.
After the test is completed, the results are displayed at the bottom of the dialog box:
• Destination—Shows the name of each destination domain/domain controller you
selected.
• Test—Shows the type of test that was performed. The type of test varies according to the
services that have been assigned to the domain controller.
• Time—Shows the amount of time (in milliseconds) it took to perform each test. If a test
is performed in less than 10 milliseconds, it’s displayed as < 10 ms; otherwise, the actual
time is displayed.
• Result—Shows whether a test was successful. If the test failed, this column displays a
brief description of why.
Figure 5.5: Running the site connectivity test to troubleshoot the communication between a site/domain
controller and the domain controllers in the destination site.
After the test is completed, the results are displayed at the bottom of the dialog box.
• Destination—Shows the name of each destination site/domain controller you selected.
• Test—Shows the type of test that was performed. The type of test varies according to the
services that have been assigned to the domain controller.
• Time—Shows the amount of time (in milliseconds) it took to perform each test. If a test
is performed in less than 10 milliseconds, it’s displayed as < 10 ms; otherwise, the actual
time is displayed.
• Result—Shows whether a test was successful. If the test failed, this column displays a
brief description of why.
In WS2K3, not every domain controller contains your AD-integrated DNS zone. DNS can be
configured to replicate this information only to those domain controllers that are actually acting as
DNS servers. Doing so reduces the amount of replication required to keep DNS current on every
domain controller.
Figure 5.6: Using Event Viewer to track DNS errors that occur on the selected domain controller.
If the domain controller is a DNS server, an additional log tracks all of the DNS basic events and
errors for the DNS service on the server. For example, the DNS Server log monitors and tracks
the starts and stops for the DNS server. It also logs critical events, such as when the server starts
but cannot locate initializing data—for example, zones or boot information stored in the registry
or (in some cases) AD. Figure 5.7 shows how you can access the DNS Server log in Event
Viewer.
Figure 5.7: Using the DNS Server log in Event Viewer to track the errors for all DNS events that occur on a
domain controller that supports a DNS server.
Using PING
Another simple method for checking whether DNS records have been registered is to determine
whether you can look up the names and addresses of network resources by using the PING
utility. For example, you can check the names using PING as follows:
PING COMPANY.COM
If this command works, the DNS server can be contacted by using this basic network test. Even
if the command doesn’t work, the PING utility will show you the results of the name resolution
process. For example, typing:
PING SERVER2
might produce the output:
Pinging server2 [192.168.0.103] with 32 bytes of data:
which proves that the name Server2 was resolved to an IP address; you can then check that the IP
address is correct.
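You can reproduce the same resolution step programmatically with the OS resolver. This sketch resolves localhost so that it works without a live DNS server; substitute a name such as server2 to test your own DNS:

```python
# Sketch: the name-resolution step PING performs, done directly with the
# OS resolver. "localhost" is used so the example doesn't depend on a
# live DNS server; substitute your own host name to exercise DNS.
import socket

def resolve(name):
    try:
        return socket.gethostbyname(name)  # returns an IPv4 dotted quad
    except socket.gaierror:
        return None  # same situation as PING failing to resolve the name

print(resolve("localhost"))
```

A None result corresponds to PING reporting that it could not find the host, pointing you at name resolution rather than connectivity.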
Using NSLOOKUP
Next, you need to verify that the DNS server is able to listen to and respond to basic client
requests. You can do so by using NSLOOKUP, a standard command-line utility provided in
most DNS-service implementations, including Windows. NSLOOKUP allows you to perform
query testing of DNS servers and provides detailed responses as its output. This information is
useful when you troubleshoot name-resolution problems, verify that resource records (RRs) are
added or updated correctly in a zone, and debug other server-related problems.
To test whether the DNS server can respond to DNS clients, use NSLOOKUP as follows:
NSLOOKUP
Once the NSLOOKUP utility loads, you can perform a test at its command prompt to check
whether the host name appears in DNS. Listing 5.2 shows the output you can receive.
> company.com
Server: ns1.company.com
Address: 250.45.87.13
Name: company.com
Address: 250.65.123.65
Listing 5.2: A sample command and output received by using NSLOOKUP.
The output of this command means that DNS contains the A record and the server is responding
with an answer: 250.65.123.65. Next, verify whether this address is the actual IP address for your
computer. You can also use NSLOOKUP to perform DNS queries, examine the contents of zone
files on the local and remote DNS servers, and start and stop the DNS servers. If the record for
the requested server isn’t found in DNS, you receive the following message:
The computer or DNS domain name does not exist
To manage the database, Windows provides a garbage-collection process designed to free space
in the AD database. This process runs on every domain controller in the enterprise with a default
lifetime interval of 12 hours. The garbage-collection process first removes “tombstones” from
the database. Tombstones are remains of objects that have been deleted. (When an object is
deleted, it’s not actually removed from the AD database. Instead, it’s marked for deletion at a
later date. This information is then replicated to other domain controllers. When the time expires
for the object, the object is deleted.) Next, the garbage-collection process deletes any
unnecessary log files. Finally, it launches a defragmentation thread to claim additional free
space.
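The timing rules above can be sketched in a few lines. The 60-day tombstone lifetime used here was the Windows default of this era; both it and the 12-hour garbage-collection interval are configurable, and the dates are purely illustrative:

```python
from datetime import datetime, timedelta

# Sketch of the tombstone timing described above. The 60-day lifetime and
# 12-hour collection interval are the configurable Windows defaults.

TOMBSTONE_LIFETIME = timedelta(days=60)
GC_INTERVAL = timedelta(hours=12)

def is_collectible(deleted_at, now):
    """A tombstone may be purged once its lifetime has expired."""
    return now - deleted_at >= TOMBSTONE_LIFETIME

now = datetime(2004, 3, 1)
print(is_collectible(datetime(2003, 12, 1), now))  # True  (about 90 days old)
print(is_collectible(datetime(2004, 2, 20), now))  # False (about 10 days old)
```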
Above the directory database is a database layer that provides an object view of the database
information by applying the schema to the database records. The database layer isolates the
upper logical layers of the directory from the underlying database system. All access to the
database occurs through this layer instead of allowing direct access to the database files. The
database layer is responsible for creating, retrieving, and deleting the individual database records
or objects and associated attributes and values.
In addition to the database layer, AD provides a directory service agent (DSA), an internal
process in Windows that manages the interaction with the database layer for the directory. AD
provides access using the following protocols:
• Lightweight Directory Access Protocol (LDAP) clients connect to the DSA using LDAP
• Messaging Application Programming Interface (MAPI) clients connect to the directory
through the DSA using the MAPI remote procedure call (RPC) interface
• Windows clients that use NT 4.0 or earlier connect to the DSA using the Security
Account Manager (SAM) interface
• AD domain controllers connect to each other during replication using the DSA and a
proprietary RPC implementation
DSASTAT also allows you to specify the target domain controllers and additional operational
parameters by using the command line or an initialization file. DSASTAT determines whether
domain controllers in a domain have a consistent and accurate image of their own domain.
In addition, DSASTAT compares the attributes of replicated objects. You can use it to compare
two directory trees across replicas in the same domain or, in the case of a Global Catalog (GC),
across different domains. You can also use it to monitor replication status at a much higher level
than monitoring detailed transactions. In the case of GCs, DSASTAT checks whether the GC
server has an image that is consistent with the domain controllers in other domains. DSASTAT
complements the other replication-monitoring tools, REPADMIN and REPLMON, by ensuring
that domain controllers are up to date with one another.
DCDIAG is intended to perform a fully automatic analysis with little user intervention. Thus, you
usually don’t need to provide too many parameters to it on the command line. DCDIAG doesn’t work
when run against a Windows workstation or server—it’s limited to working only with domain
controllers.
DCDIAG consists of a set of tests that you can use to verify and report on the functional
components of AD on the computer. You can use this tool on a single domain controller, a group
of domain controllers holding a domain partition, or across a site. When using DCDIAG, you can
collect either a minimal amount of information (confirmation of successful tests) or data for
every test you execute. Unless you’re diagnosing a specific problem on only one domain
controller, I recommend that you collect only the severe errors for each one.
DCDIAG allows you to run the following tests to diagnose the status of a domain controller:
• Connectivity test—Verifies that DNS names for the domain controller are registered. It
also verifies that the domain controller can be reached by using TCP/IP and the domain
controller’s IP address. DCDIAG checks the connectivity to the domain controller by
using LDAP and checks that communications can occur by using an RPC.
• Replication test—Checks the replication consistency for each of the target domain
controllers. For example, this test checks whether replication is disabled and whether
replication is taking too long. If so, the utility reports these replication errors and
generates errors when there are problems with incoming replica links.
• Topology integrity test—Verifies that all domain controllers holding a specific partition
are connected by the replication topology.
• Directory partition head permissions test—Checks the security descriptors for proper
permissions on the directory partition heads, such as the schema, domain, and
configuration directory partitions.
• Locator functionality test—Verifies that the appropriate SRV RRs are published in DNS.
This test also verifies that the domain controller can recognize and communicate with
operations masters. For example, DCDIAG checks whether the locator can find a primary
domain controller (PDC) and GC server.
• Inter-site health test—Identifies and ensures the consistency of domain controllers among
sites. To do so, DCDIAG performs several tests, one of which identifies the inter-site
topology generator and identifies the bridgeheads for each site. This test determines
whether a bridgehead server is functioning; if not, the utility identifies and locates
additional backup bridgeheads. In addition, this test identifies when sites aren’t
communicating with other sites on the network.
• Trust verification test—Checks explicit trust relationships—that is, trusts between two
domains in the forest. DCDIAG cannot check transitive trusts (Kerberos V5
trust relationships). To check transitive trusts, you can use the NETDOM utility.
For more information about the NETDOM utility, refer to the resource kit documentation or The
Definitive Guide to Windows 2000 and Exchange 2000 Migration (Realtimepublishers), a link to which
can be found at http://www.realtimepublishers.com. You can download the WS2K3 resource kit from
http://www.microsoft.com/downloads/details.aspx?FamilyID=9d467a69-57ff-4ae7-96ee-
b18c4790cffd&displaylang=en.
Using NTDSUTIL
The Directory Services Management utility (NTDSUTIL.EXE) is a command-line utility included
in Windows that you can use to troubleshoot and repair AD. Although Microsoft designed the
utility to be used interactively via a command-prompt session (launched simply by typing
NTDSUTIL at any command prompt), you can also run it by using scripting and automation.
NTDSUTIL allows you to troubleshoot and maintain various internal components of AD. For
example, you can manage the directory store or database and clean up orphaned data objects that
were improperly removed.
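Because NTDSUTIL accepts its menu commands as quoted arguments, scripted runs typically just assemble one long command line. The sketch below only builds such a command line; actually executing it (for example, with `subprocess.run`) requires a domain controller, and the `files`/`info` sequence shown is the one this chapter uses later:

```python
# Sketch: NTDSUTIL can be scripted by passing each menu command as a
# quoted argument. This function only assembles the command line; it
# does not execute anything.

def ntdsutil_command(menu_commands):
    # Each element becomes one quoted argument to NTDSUTIL.
    return "ntdsutil " + " ".join('"%s"' % c for c in menu_commands)

# Non-interactive equivalent of: ntdsutil -> files -> info -> quit -> quit
cmd = ntdsutil_command(["files", "info", "quit", "quit"])
print(cmd)  # ntdsutil "files" "info" "quit" "quit"
```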
You can also maintain the directory service database, prepare for new domain creations, manage
the control of the FSMOs, purge meta data left behind by abandoned domain controllers (those
removed from the forest without being uninstalled), and clean up objects and attributes of
decommissioned or demoted servers. At each NTDSUTIL menu, you can type help for more
information about the available options (see Figure 5.8).
Figure 5.8: Viewing a list of available commands in the utility and a brief description of each.
This command works identically on Win2K and WS2K3, and can be used in mixed-version
environments with no problems.
Figure 5.9: Using the info command in NTDSUTIL to display the location and size of AD database files.
Using NTDSUTIL, you can relocate or move AD database files from one location to another on the
disk or move the database files from one disk drive to another in the same domain controller. You can
also move just the log files from one disk to another to free space for the data files (see “Moving the
AD Database or Log Files” later in this chapter).
Figure 5.10: Using the Integrity option in NTDSUTIL to examine the AD database on a domain controller.
To troubleshoot and repair the AD database, you can use the Integrity option only while the domain
controller is in Directory Service Restore mode.
When you run the Semantic Checker, it performs the following checks:
• Reference Count Check—Counts the number of references in the database tables and
matches the results with the values that are stored in the data file. This operation also
ensures that each object has a globally unique identifier (GUID) and distinguished name
(DN). For a previously deleted object, this operation ensures that the object has a deleted
time and date but doesn’t have a GUID or DN.
• Deleted Object Check—Ensures that the object has a time and date as well as a special
relative distinguished name (RDN), given when the object was originally deleted.
• Ancestor Check—Ensures that the DN tag is equal to the ancestor list of the parent—
could also be stated as a check that the DN of the object minus its RDN is equal to its
parent’s DN.
• Security Descriptor Check—Ensures that there is a valid descriptor and that the
discretionary access control list (DACL) isn’t empty.
• Replication Check—Verifies that there is an up-to-dateness vector in the directory
partition and checks to see that every object has meta data.
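The Ancestor Check can be illustrated on distinguished-name strings: stripping an object's RDN must yield its parent's DN. (Real AD stores DNs as tag lists rather than strings, and real DNs can contain escaped commas, so this is a deliberate simplification; the names are made up.)

```python
# Simplified illustration of the Ancestor Check: an object's DN minus
# its RDN should equal its parent's DN. Escaped commas in RDNs are
# ignored here for brevity.

def parent_dn(dn):
    """Drop the leading RDN from a distinguished name."""
    return dn.split(",", 1)[1].strip()

def passes_ancestor_check(object_dn, expected_parent_dn):
    return parent_dn(object_dn) == expected_parent_dn

print(passes_ancestor_check(
    "CN=Don Jones,OU=Authors,DC=company,DC=com",
    "OU=Authors,DC=company,DC=com"))  # True
```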
Like the Integrity option described earlier, you can run the Semantic Checker option only when
the domain controller is in Directory Service Restore mode. To run in this mode, restart the
domain controller. When you’re prompted, press F8 to display the Advanced Options menu.
Select Directory Service Restore mode and press Enter, then log on using the administrator
account and password that you assigned during the DCPROMO process.
To run the Semantic Checker option, select Start, Programs, Accessories, Command Prompt. In
the Command Prompt window, type
NTDSUTIL
then press Enter. At the ntdsutil prompt, type
semantic database analysis
then press Enter. Next, type
verbose on
This command displays the Semantic Checker. To start the Semantic Checker without having it
repair any errors, type
go
To start it and have it repair any errors that it encounters in the database, enter
go fixup
The commands to this point appear in the Command Prompt window as follows:
I:>NTDSUTIL
ntdsutil: semantic database analysis
semantic checker: verbose on
Verbose mode enabled.
semantic checker: go
Figure 5.11 shows the results of using the NTDSUTIL Semantic Checker.
Figure 5.11: Using the NTDSUTIL Semantic Checker option to check the consistency of the contents of the
directory database.
Again, the output of this command will be identical on Win2K and WS2K3, making it easier for
administrators working in mixed-version environments.
Before you manually remove the NTDS Settings object for any server, check that replication has
occurred after the domain controller has been demoted. Using the NTDSUTIL utility improperly can
result in partial or complete loss of AD functionality. (For a description of how to check whether
replication has occurred, see Chapter 4.)
To clean up the meta data, select Start, Programs, Accessories, Command Prompt. At the
command prompt, type
NTDSUTIL
then press Enter. At the ntdsutil prompt, type
metadata cleanup
then press Enter. Based on the options returned to the screen, you can use additional
configuration parameters to ensure that the removal occurs correctly.
Before you clean up the metadata, you must select the server on which you want to make the
changes. To connect to a target server, type
connections
then press Enter. If the user who is currently logged on to the computer running NTDSUTIL
doesn’t have administrative permissions on the target server, alternative credentials need to be
supplied before making the connection. To supply alternative credentials, type the following
command, then press Enter:
set creds <domain_name user_name password>
Next, type
connect to server <server_name>
then press Enter. You should receive confirmation that the connection has been successfully
established. If an error occurs, verify that the domain controller you specified is available and
that the credentials you supplied have administrative permissions on the server. When a
connection has been established and you’ve provided the right credentials, type
quit
then press Enter, to exit the Connections menu in NTDSUTIL. When the Meta Data Cleanup
menu is displayed, type
select operation target
and press Enter. Type
list domains
then press Enter. A list of domains in the forest is displayed, each with an associated number. To
select the appropriate domain, type
select domain <number>
and press Enter (where <number> is the number associated with the domain of which the
domain controller you’re removing is a member). The domain you select determines whether the
server being removed is the last domain controller of that domain.
Next, type
list sites
then press Enter. A list of sites, each with an associated number, is displayed. Type
select site <number>
and press Enter (where <number> is the number associated with the site of which the server
you’re removing is a member). You should receive a confirmation, listing the site and domain
you chose. Once you receive a confirmation, type
list servers in site
and press Enter. A list of servers in the site, each with an associated number, is displayed. Type
select server <number>
and press Enter (where <number> is the number associated with the server you want to remove).
You receive a confirmation, listing the selected server, its DNS host name, and the location of
the server’s computer account that you want to remove.
After you’ve selected the proper domain and server, type
quit
to exit the current NTDSUTIL submenu. When the Meta Data Cleanup menu is displayed, type
remove selected server
and press Enter. You should receive confirmation that the server was removed successfully. If
the NTDS Settings object has already been removed, you may receive the following error
message:
Error 8419 (0x20E3)
The DSA object couldn’t be found
Type
quit
at each menu to quit the NTDSUTIL utility. You should receive confirmation that the connection
disconnected successfully.
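The interactive walkthrough above can also be expressed as a single scripted NTDSUTIL invocation, with each menu command passed as a quoted argument. The sketch below only assembles that command line (the server name is hypothetical, and the domain, site, and server numbers come from the `list` output on your own system); as the chapter warns, verify replication before running anything like it:

```python
# Sketch: assemble the metadata-cleanup sequence above as one scripted
# NTDSUTIL command line. Nothing is executed; the target DC name and the
# selection numbers are placeholders you must confirm interactively first.

def metadata_cleanup_command(target_dc, domain_no, site_no, server_no):
    steps = [
        "metadata cleanup",
        "connections",
        "connect to server %s" % target_dc,
        "quit",
        "select operation target",
        "list domains",
        "select domain %d" % domain_no,
        "list sites",
        "select site %d" % site_no,
        "list servers in site",
        "select server %d" % server_no,
        "quit",
        "remove selected server",
        "quit", "quit",
    ]
    return "ntdsutil " + " ".join('"%s"' % s for s in steps)

cmd = metadata_cleanup_command("dc1.company.com", 0, 0, 2)
print(cmd)
```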
To move the directory database file or log files, locate the drive containing the directory and log
files. The directory database (NTDS.DIT) and log files are located in the NTDS folder on the
root drive by default. (However, the administrator may have changed their locations during the
DCPROMO process.) Next, select Start, Programs, Accessories, Command Prompt. In the
Command Prompt window, type
NTDSUTIL
then press Enter. At the ntdsutil prompt, enter the word
files
The utility displays the file maintenance category. The commands to this point should appear as
follows:
I:>NTDSUTIL
ntdsutil: files
file maintenance:
At the file maintenance prompt, enter the word
info
to display the location of the AD database files, log files, and other associated files. Note the
location of the database and log files.
To move the database files to a target disk drive, type the following command at the ntdsutil
prompt:
MOVE DB TO %s (where %s is the target folder on another drive)
To move the log files to a target disk drive, type the following command at the ntdsutil prompt.
(The target directory where you move the database file or log files is specified by the %s
parameter. The Move command moves the files and updates the registry keys on the domain
controller so that AD restarts using the new location.)
MOVE LOGS TO %s (where %s is the target folder on another drive)
To quit NTDSUTIL, type
quit
twice to return to the command prompt, then restart the domain controller normally.
Completely back up AD on the domain controller before you execute the Move command. In addition,
back up AD after you move the directory database file and log files; restoring the directory database
will then retain the new file location.
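One easy pre-flight check before issuing the Move command is confirming that the target volume has room for the files. A small sketch (the path and file size are illustrative, and `shutil.disk_usage` reports free space for the whole volume containing the path):

```python
import shutil

# Sketch: confirm the target volume can hold the database or log files
# before running MOVE DB TO / MOVE LOGS TO. The 25 percent margin is an
# arbitrary safety cushion, not a Windows requirement.

def enough_space(target_path, required_bytes, margin=1.25):
    """True if the volume holding target_path has room plus a margin."""
    free = shutil.disk_usage(target_path).free
    return free >= required_bytes * margin

# For example, a 400 MB NTDS.DIT:
print(enough_space(".", 400 * 1024 * 1024))
```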
To repair the AD database file, select Start, Programs, Accessories, Command Prompt. In the
Command Prompt window, type
NTDSUTIL
then press Enter. At the ntdsutil prompt, enter the word
files
The utility displays the file maintenance category. At the file maintenance prompt, enter the
word
repair
The commands to this point should appear as follows:
I:>NTDSUTIL
ntdsutil: files
file maintenance: repair
As soon as the repair operation has completed, run the NTDSUTIL Semantic Checker on the
database. Figure 5.12 shows the results of using the NTDSUTIL Repair option.
Figure 5.12: Using NTDSUTIL as a last resort to repair the directory database files.
if that link is down or the “fast” domain controller is unavailable, a domain controller over a
slower link may respond first, and all pass-through authentications occur over the slow link.
There is a built-in mechanism in Windows that tracks how long authentication takes over the
existing secure channel. If pass-through authentication takes longer than 45 seconds, that fact is
noted. If two such authentications exceed that limit, a rediscovery process begins, the current
secure channel is broken, and the trusting domain’s PDC once again sends out logon requests to
all known trusted domain controllers. However, because this mechanism tracks only those
communications that last longer than 45 seconds, users may see a 40-second delay every time
they attempt to use a resource without a secure-channel reset taking place.
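The trigger behavior described above can be modeled in a few lines. This is only a sketch of the logic the chapter describes (note a slow authentication, reset after two of them); the actual mechanism is internal to the Netlogon service:

```python
# Sketch of the secure-channel rediscovery trigger: authentications
# slower than 45 seconds are noted, and the second one causes a reset.
# This models the documented behavior; the real logic lives in Netlogon.

SLOW_THRESHOLD = 45.0  # seconds

class SecureChannelMonitor:
    def __init__(self):
        self.slow_count = 0

    def record_auth(self, seconds):
        """Return True when a rediscovery/reset should be triggered."""
        if seconds > SLOW_THRESHOLD:
            self.slow_count += 1
        if self.slow_count >= 2:
            self.slow_count = 0   # channel is reset; start counting again
            return True
        return False

m = SecureChannelMonitor()
print(m.record_auth(50))  # False -- first slow auth is only noted
print(m.record_auth(40))  # False -- under threshold, never counted
print(m.record_auth(52))  # True  -- second slow auth triggers rediscovery
```

This also shows why users can see repeated 40-second delays indefinitely: authentications just under the threshold are never counted, so no reset ever occurs.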
You can run the NLTEST utility on the trusting domain controller to break and re-initialize a
secure channel and to obtain information about an existing trust relationship (for example, when
the secure-channel password was last changed). You can also use NLTEST to restart the
discovery process for a new trusted domain controller. The syntax of NLTEST is:
NLTEST /sc_query:<account_domain>
Where <account_domain> is the name of the trusted domain. This command returns the name of
the trusted domain controller with which the trusting domain controller has a secure channel. If
that domain controller is unacceptable, use the following syntax:
NLTEST /sc_reset:<account_domain>
Schema Master
If the domain controller holding the forest-wide schema master role fails, you or your directory
administrators won’t be able to modify or extend the AD schema. Schema modifications
typically occur when you install directory-enabled applications such as management utilities that
rely on the directory for information. These applications try to modify or extend the current
schema with new object classes, objects, and attributes. If the applications being installed cannot
communicate with the domain controller that has been designated as the schema master,
installation will fail.
The schema master solely controls the management of the directory schema and propagates
updates to the schema to the other domain controllers as modifications occur. Because only
directory administrators are allowed to make changes, the schema operations master isn’t visible
to directory users and doesn’t affect them.
RID Master
If the domain controller that stores the RID master role fails or stops communicating, domain
controllers in the same domain cannot obtain the RIDs they need. (A RID is the relative
identifier that forms the unique portion of an object's security ID, or SID.) Domain controllers
use RIDs when they create users, groups, computers,
printers, and other objects in the domain; each object is assigned a RID. The RID master role
allocates blocks of RIDs to other domain controllers in its domain. As I mentioned at the
beginning of this section, there is only one RID master role per domain.
If a domain controller has remaining (unassigned) RIDs in its allocated block, the RID master role
doesn’t need to be available when new object accounts are created.
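That behavior can be sketched as follows: each domain controller draws a block of RIDs from the RID master and keeps creating objects from its local pool even while the master is down, failing only when the pool is exhausted. The classes are illustrative (the 500-RID default block size here is an assumption from the Windows documentation of this era):

```python
# Sketch of RID pooling: a DC requests a block of RIDs from the RID
# master, then consumes them locally. Object creation fails only when
# the pool is empty AND the master is unreachable.

class RidMaster:
    def __init__(self, block_size=500):   # 500 is the usual default block
        self.next_rid = 1000
        self.block_size = block_size
        self.online = True

    def allocate_block(self):
        if not self.online:
            raise RuntimeError("RID master unavailable")
        block = list(range(self.next_rid, self.next_rid + self.block_size))
        self.next_rid += self.block_size
        return block

class DomainController:
    def __init__(self, master):
        self.master = master
        self.pool = master.allocate_block()

    def create_object(self):
        if not self.pool:
            self.pool = self.master.allocate_block()  # fails if master down
        return self.pool.pop(0)

master = RidMaster(block_size=2)
dc = DomainController(master)
master.online = False          # the RID master fails...
print(dc.create_object())      # 1000 -- still works from the local pool
print(dc.create_object())      # 1001
# A third create_object() would now raise: the pool is empty.
```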
Infrastructure Master
If the domain controller that stores the infrastructure master role fails, a portion of AD won’t
function properly. The infrastructure master role controls and manages the updates to all cross-
domain references, such as group references and security identifier (SID) entries in access
control lists (ACLs). For example, when you add, delete, or rename a user who is a member of a
group, the infrastructure master controls the reference updates. There is always only one
infrastructure master role in each domain in a forest.
Because only one domain controller is assigned to perform this role, it’s important that it doesn’t
fail. However, if it does, the failure is not visible to network users. In fact, it's visible only to
directory administrators who have recently moved or renamed a large number of object
accounts. In addition, having only one domain controller assigned to this role makes it a
potential single point of failure.
If you force a transfer of the infrastructure master role from its original domain controller to another
domain controller in the same domain, you can transfer the role back to the original domain controller
after you’ve returned it to production.
It is strongly recommended that you not put the infrastructure master role on any domain controller
that is also acting as a GC server, unless you have only one domain in your forest. For more
information about FSMO placement rules and best practices, see Microsoft Product Support Services
article 223346, at http://support.microsoft.com.
PDC Emulator
If the PDC emulator fails or no longer communicates, users who depend on its service are
affected. These are down-level users from NT 4.0, Windows 98, and Windows 95. The PDC
emulator is responsible for changes to the SAM database, password management, account
lockout for down-level workstations, and communications with the domain controllers.
If you force a transfer of the PDC emulator role from its original domain controller to another domain
controller in the same domain, you can transfer the role back to the original domain controller after
you’ve returned it to production.
For the Active Directory Schema snap-in to be listed as available, you must have already
registered the Schmmgmt.dll file. If it doesn't appear as an option, follow these steps to register it:
select Start, Run, type
regsvr32 schmmgmt.dll
in the Open box, and click OK. A message will be displayed confirming that the registration was
successful.
Determining the forest’s domain naming master role holder requires you to select Start, Run,
type
mmc
then click OK. On the Console menu, click Add/Remove Snap-in, click Add, double-click
Active Directory Domains and Trusts, click Close, and then click OK. In the left pane, click
Active Directory Domains and Trusts. Right-click Active Directory Domains and Trusts, and
click Operations Master to view the server holding the domain naming master role in the Forest.
Although these methods certainly work, they aren’t necessarily the easiest. The following
sections describe some additional methods for determining FSMO role holders on your network.
Using NTDSUTIL
NTDSUTIL is a tool included with all editions of Windows Server—it is the only tool that shows
you all the FSMO role owners. To view the role holders, select Start, click Run, type
cmd
in the Open box, then press Enter. Type
ntdsutil
and then press Enter. Type
domain management
and then press Enter. Type
connections
and then press Enter. Type
connect to server <server_name>
where <server_name> is the name of the domain controller you want to view, then press Enter.
Type
quit
and then press Enter. Type
select operation target
and then press Enter. Type
list roles for connected server
and then press Enter.
Using DCDIAG
Another method involves the use of the DCDIAG command. On a domain controller, run the
following command
dcdiag /test:knowsofroleholders /v
Note that the /v switch is required. This operation lists the owners of all FSMO roles in the
enterprise known by that domain controller.
Figure 5.13: Using a third-party utility to determine which domain controller in your forest holds a particular
FSMO role.
If the current operations master role holder domain controller is online and accessible or can be
repaired and brought back online, it’s recommended that you transfer the role using NTDSUTIL’s
transfer command rather than the seize command. For more information about seizing and
transferring FSMO roles, see Microsoft Product Support Services articles 255504 and 223787 at
http://support.microsoft.com.
Figure 5.14: Checking for consistency of the operations masters on domain controllers.
Figure 5.15: Using DirectoryAnalyzer to view the replication partners for each domain controller.
You can also use the Replication Administration (REPADMIN) utility to monitor the current
links to other replication partners for a specific domain controller, including the domain
controllers that are replicating to and from the selected domain controller. Viewing these links
shows you the replication topology as it exists for the current domain controller. By viewing the
replication topology, you can check replication consistency among replication partners, monitor
replication status, and display replication meta data. To use REPADMIN to view the replication
partners for a domain controller, enter the command
REPADMIN /SHOWREPS
During normal operation, the Knowledge Consistency Checker (KCC) generates automatic replication
topology for each directory partition on the domain controllers. You don’t need to manually manage
the replication topology for normal operation.
In addition to tracking replicated changes, many third-party utilities constantly evaluate replication
latency across all domain controllers. If the latency exceeds the specified threshold, the utility will
generate an administrative alert and/or generate a log entry reporting the condition.
To force replication among replication partners, you can use REPADMIN to issue a command to
synchronize the source domain controller with the destination domain controller by using the
object GUID of the source domain controller. To accomplish the task of forcing replication, you
need to find the GUID of the source server. Enter the following command to determine the
GUID of the source domain controller:
REPADMIN /SHOWREPS <destination_server_name>
You can find the GUID for the source domain controller under the Inbound Neighbors section of
the output. First, find the directory partition that needs synchronization and locate the source
server with which the destination is to be synchronized. Then note the GUID value of the source
domain controller. Once you know the GUID, you can initiate or force replication by entering
the following command:
REPADMIN /SYNC <directory_partition_DN> <destination_server_name>
<source_server_objectGUID>
The following example shows how to run this command to initiate replication between DC1 and
DC2 of the domain partition called COMPANY.COM. The replication is forced from the source
domain controller, DC1, to the destination domain controller, DC2. To perform the replication,
use the following command:
REPADMIN /SYNC DC=COMPANY,DC=COM DC2 d2e3ffdd-b98c-11d2-712c-0000f87a546b
If the command is successful, the REPADMIN utility displays the following message:
REPLICASYNC() FROM SOURCE: d2e3ffdd-b98c-11d2-712c-0000f87a546b,
TO DEST: DC2 IS SUCCESSFUL.
Optionally, you can use the following switches at the command prompt:
• /FORCE—Overrides the normal replication schedule
• /ASYNC—Starts the replication event without waiting for the normal replication to finish
You’ll typically force replication only when you know that the destination domain controller has
been down or offline for a long time. It also makes sense to force replication to a destination
domain controller if network connections haven’t been working for a while.
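The two manual steps above (find the source GUID in the Inbound Neighbors output, then build the /SYNC command) lend themselves to a small script. The sample text below only imitates REPADMIN's output format, which varies by version, so treat the pattern match as a starting point rather than a guaranteed parser:

```python
import re

# Sketch: pull the source DC's object GUID out of REPADMIN /SHOWREPS
# output and assemble the /SYNC command. SAMPLE imitates the Inbound
# Neighbors section; real output differs across versions.

SAMPLE = """
==== INBOUND NEIGHBORS ====
DC=COMPANY,DC=COM
    Site1\\DC1 via RPC
        objectGuid: d2e3ffdd-b98c-11d2-712c-0000f87a546b
        Last attempt was successful.
"""

GUID_RE = re.compile(r"objectGuid:\s*([0-9a-f-]{36})", re.IGNORECASE)

def sync_command(partition_dn, dest_server, showreps_output):
    match = GUID_RE.search(showreps_output)
    if not match:
        raise ValueError("no objectGuid found in REPADMIN output")
    return "REPADMIN /SYNC %s %s %s" % (partition_dn, dest_server,
                                        match.group(1))

print(sync_command("DC=COMPANY,DC=COM", "DC2", SAMPLE))
```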
REPLMON provides a view only from the domain controller perspective. Like REPADMIN,
you can install it from the \Support\Tools folder on the Windows CD-ROM. REPLMON has two
options that you’ll find helpful when monitoring AD:
• Generate Status Report—Generates a status report for the domain controller. The report
includes a list of directory partitions for the server, the status of the replication partners
for each directory partition, and the status of any Group Policy Objects (GPOs). It also
includes the status of the domain controllers that hold the operations master roles, a
snapshot of performance counters, and the registry configuration of the server.
• Show Replication Topologies—Displays a graphical view of the replication topology.
This option can also display the properties of the domain controller and any intra-site or
inter-site connections that exist for the domain controllers.
When the KCC generates this error message, it’s in a mode in which it doesn’t remove any
connections. Normally, the KCC cleans up old connections from previous configurations or redundant
connections. Thus, you might find that there are extra connections during this time. The solution is to
correct the topology problem so that the spanning tree can form.
It’s a good idea to use change-management information proactively in addition to using it reactively.
For example, you might use the object-population information to analyze and plan network capacity
and to predict future trends and infrastructure needs. This information is invaluable for management
reports and IT budget planning.
Summary
Troubleshooting AD means identifying and analyzing problems that occur and repairing them in
the various systems. The troubleshooting process is mostly about isolating and identifying a
problem. To troubleshoot AD, you first check to see whether the domain controllers in the forest
can communicate with each other. Next, you need to ensure that AD has access to DNS and that
DNS is working properly. After you verify that DNS is working, you need to check that the
individual domain controllers and operations masters are working properly and supporting the
directory functions. Last, you need to verify that replication is working and that no consistent
errors are being generated. The ability to quickly assess what is causing a problem and
effectively develop a solution will help ensure smooth IT performance that successfully supports
the business.
Chapter 6
Design Goals
Every AD design has goals such as easy user and group management, proper application of
GPOs, user response time, and so forth. In addition to these goals, you simply need to include
troubleshooting and auditing. Auditing is such an important troubleshooting tool—auditing can
often tell you what has changed recently, which is often a good place to start troubleshooting—
that creating an auditable design is a big part of creating a design that lends itself more readily to
troubleshooting. Some specific design goals to consider include:
• Performance—You can overload a domain controller by placing too much of an auditing
burden on it, so your overall design needs to accommodate the level of auditing you plan
to do. If you’ll be auditing a lot, you might need more domain controllers so that domain
controllers can handle the auditing load as well as their normal duties. Monitoring is also
a concern: If you plan to monitor your domain controllers on a regular basis, expect that
monitoring to place some additional (albeit marginal) overhead on the domain
controllers, and design accordingly.
• Access to information—You will need to decide who will be performing auditing and
troubleshooting and ensure that information is easily accessible to those individuals. For
example, planning to use event log consolidation tools might help bring critical
information in front of the right people more quickly, making troubleshooting and
auditing more efficient and effective.
• Tools—You will undoubtedly turn to tools outside of Windows for many of your
auditing and troubleshooting needs because Windows isn’t well-equipped for either task
on a large scale. Tools often bring their own requirements and overhead, and your overall
AD design should acknowledge and accommodate those needs. In other words, make
your troubleshooting and auditing tools a part of your design so that they will work more
efficiently and effectively.
Performance Considerations
Performance often takes the biggest hit when you begin to implement auditing. The reason is not
so much that auditing a single event consumes much computing power; rather, a single domain
controller can easily generate hundreds or even thousands of events per minute, especially
during busy periods such as the morning logon rush. Carefully planning your auditing can help
minimize the performance impact, and implementing additional resources—such as domain
controllers—can help minimize the impact on end-user response times.
For example, if you have 10,000 users and 10 domain controllers, everything might be working
great. Turn on a high level of auditing, however, and all 10 domain controllers might become
just a bit slower to respond to user requests than you would prefer. Adding another couple of
domain controllers can help pick up the slack. Each domain controller will have fewer users to
handle and will generate commensurately fewer auditing events during any given period.
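The arithmetic behind this sizing decision can be sketched in a few lines. This is only an illustration of the chapter's 10,000-user example; the function name and the assumption that audit volume scales linearly with the number of users per domain controller are mine, not the chapter's.

```python
def users_per_dc(total_users, dc_count):
    """Average number of users each domain controller must service."""
    return total_users / dc_count

# The chapter's example: 10,000 users spread across 10 domain controllers.
before = users_per_dc(10_000, 10)   # 1,000 users per DC

# Adding a couple more domain controllers spreads the same load thinner.
after = users_per_dc(10_000, 12)

# Fewer users per DC means commensurately fewer audit events per DC
# during any given period, assuming audit volume tracks user activity.
reduction = 1 - after / before
print(f"{before:.0f} -> {after:.0f} users/DC ({reduction:.0%} fewer per DC)")
```

The same back-of-the-envelope calculation works in reverse: if a pilot shows each domain controller can absorb a given auditing load comfortably, you can estimate how many domain controllers a full rollout requires.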
Why monitor performance at all? It can be a great way to spot problems before they become
severe. For example, a server with steadily declining performance can be noticed and dealt with
before performance declines to the point at which the server is useless. Performance values can
also provide obvious input into troubleshooting activities, especially where AD considerations
such as replication and database utilization are concerned.
Overauditing
Overauditing is perhaps the most common mistake administrators make when implementing
auditing. You must carefully consider exactly what you need to audit, and audit only that—and
nothing more. For example, some organizations audit login or access failures because a
significant number of failures can be a clear indicator that your systems are under attack.
However, if you’re not going to do anything more than a cursory, manual review of the event
logs from time to time, you’re not really achieving your stated goal of detecting attacks by
auditing for these types of events. Thus, the computing power going toward logging those
failures is essentially wasted. Consider whether there is a better way to obtain the information
that auditing might provide. For example, Figure 6.1 shows the Active Directory Sites and
Services console with the root container configured to audit all success and failure events for all
users, for all possible actions.
Figure 6.1: Auditing everything, for all users, by using the Active Directory Sites and Services console.
From a pure troubleshooting perspective, this configuration is attractive because you'll get
very detailed information about everything that happens in the console. If someone adds a site
link, changes a site link bridge, and so on, an event will report that the action took place. If a
problem occurs, you can jump right into Event Viewer to see what changes have recently
been made. Figure 6.2 shows just such an event—someone has accessed a site configuration
object.
However, enabling this level of auditing can generate an enormous number of events. Although
useful for troubleshooting, the overhead created might not be worth the benefit of having these
events simply for diagnostics. Third-party tools that provide the same information can come in
handy in such situations. For example, NetPro ChangeAuditor for Active Directory collects
similar information with a bit more detail and logs it to a separate database. Because the
Windows event logging system isn't involved, the overhead is lower, yet you still get useful
diagnostic information when you need to troubleshoot a problem. In this fashion, third-party
tools can help collect useful auditing and troubleshooting information without the need to go
overboard with Windows' built-in auditing capabilities.
Overmonitoring
People don’t realize how much overhead monitoring can place on a server. For example, running
System Monitor against a remote system can add a measurable amount of overhead; running it
all day, every day can reduce response times if you don’t specifically plan for the monitoring in
your design and compensate for that overhead. For example, using System Monitor configured
as Figure 6.3 shows produced about a 2 percent overhead on the system being monitored. Not a
huge amount, but definitely something you want to be aware of and plan for.
Figure 6.3: Monitoring isn’t free. This chart produced about 2 percent processor overhead on the monitored
system.
This area is another in which tools other than those bundled with Windows may do a better job.
For example, Microsoft Operations Manager (MOM) provides round-the-clock monitoring, but it
does so by sampling performance counters at regular intervals rather than continuously
monitoring them and drawing graphical charts based on their values. In fact, most enterprise-
class monitoring tools from companies such as NetPro, Microsoft, Argent Software, and NetIQ
can gather more monitoring information while producing less overhead than Windows' built-in
performance monitoring tools.
Design Considerations
Once you’ve addressed your performance concerns in your design, you can begin thinking about
how your AD design will support troubleshooting and auditing needs. Specific questions—such
as Who will use auditing information and how?—will drive specific design decisions that affect
your overall AD design.
Windows’ built-in tools for AD auditing and performance monitoring aren’t highly centralized.
Audit events, for example, are scattered across the event logs of every domain controller;
performance information can’t be readily centralized without significant overhead because
System Monitor is the only real built-in way to collect this information. Windows doesn’t
provide any built-in means of centralizing network-related documentation, particularly as it
relates to changes. Third-party tools often provide much more flexibility, allowing information to
be consolidated in central databases, made accessible through Web interfaces to distributed staff
members, and so forth. For these reasons, you’ll often find that third-party solutions will become
an indispensable part of your overall AD design.
For example, Microsoft offers a tool that doesn't strictly fall into the category of "third-party":
the Microsoft Audit Collection Service (ACS), which has not been released as of this writing.
The tool is an agent-based service that collects security events from multiple servers and funnels
them into a central SQL Server database for filtering, reporting, searching, and so forth. Many
third-party manufacturers offer solutions that perform a similar function; NetIQ, for example,
has several products (notably the AppManager suite) designed to collect event log entries into a
single database, and MOM performs a similar task as part of its feature set.
Figure 6.4: Filtered events allow everyone who uses auditing information to focus on their specific job tasks
without being distracted by extraneous events.
Because many tools store events in their own databases (either collecting events through their
own interfaces, as ChangeAuditor does, or consolidating Windows events, as ACS does), your
options for providing this information to the people who need it are broader. You can, for
example, make auditing information available to individuals who aren't administrators and
might not normally have access to, say, the Security log; doing so allows you to define the
security in your environment more granularly.
This script is designed to be saved as a file with a .WSF filename extension. You can run it from a
command line and use the /? argument to see its syntax and usage instructions. You can download
this script from the ScriptVault at http://www.ScriptingAnswers.com.
End If
Loop
oTS.Close
WScript.Echo "Complete"
]]>
</script>
</job>
</package>
Listing 6.1: A VBScript-based script that archives the security logs from several computers listed in a text
file.
Third-party tools, which rely on more flexible databases and their own storage formats, can
generally provide more flexible long-term storage options, if needed. For example, ACS is
designed to store security events for years in a SQL Server database. If long-term event storage
is a part of your organization’s needs, make sure these needs are accounted for in your AD
design.
Design Guidelines
There are specific design considerations and suggested best practices for creating an AD
environment that lends itself to troubleshooting and auditing. The following sections explore
these considerations.
In addition, make tool selections based on your troubleshooting needs. For example, tools such
as ChangeAuditor are essentially reactive tools, providing you with information to diagnose a
problem that already exists. Other tools, such as NetPro’s SecurityManager, are somewhat more
proactive in nature because they continually monitor your environment for specific types of
changes, alert you to those changes, and, in some cases, restore the environment to a
preconfigured state, undoing the change. Although primarily designed to help maintain a secure
environment, these types of tools can have a valuable troubleshooting function by catching
potentially damaging changes and calling your attention to them immediately. All of these tools
have system requirements that need to be considered in your overall AD design.
At a minimum, decide on a tool set that offers the following capabilities:
• Collects information regarding changes to the environment as well as any other auditing
information necessary to support business policies and regulatory compliance. Tools that
offer this functionality include the Windows Event Viewer, NetPro ChangeAuditor,
MOM, and NetIQ AppManager. You might employ a collection of tools to meet this
need: ChangeAuditor, for example, is very specific to AD, while MOM can be extended
through management packs to handle specific Microsoft server products such as
Microsoft Exchange Server and Microsoft SQL Server.
• Makes the relevant information available to the correct people at the right time. Generally
speaking, this capability will involve centralizing information in a single database and
providing interfaces for various users to filter and view the information. Microsoft ACS
offers security event consolidation and reporting; MOM also helps centralize and manage
Windows events. Many third-party tools, such as ChangeAuditor, utilize their own events
and database rather than relying on Windows-generated event logs. A key purchasing
decision for any tool should be the ability to give non-administrative users access to
events (or reports of events) as needed to support their job tasks.
• Collects not only audit-style events (such as the events generated by Windows for its
event logs) but also performance data. Windows lacks a robust built-in means of
collecting, consolidating, and working with performance data. Tools such as NetPro
DirectoryTroubleshooter (which is specific to AD) and NetIQ AppManager are more
adept at collecting performance-related information and making it readily accessible to
troubleshooters. When it comes to AD troubleshooting, ensure that the tool you select
presents information in a way that enhances the troubleshooting process; a tool built
specifically for AD is often able to do this better than a tool that covers Windows in a
broader sense or covers multiple server products.
If a Security tab isn’t visible, especially in Active Directory Users and Computers, select Advanced
Features from the console’s View menu and try again.
Figure 6.5: Configuring auditing on the domain in Active Directory Users and Computers.
Regardless of the tools you use, make configuration maintenance easier by documenting your
desired configuration. Once you’ve decided how your tools should be configured, which events you’ll
audit, and so forth, having this documentation will allow a junior administrator or auditor to periodically
confirm that the environment is properly configured. In addition, this documentation will enable you to
more easily reconfigure a misconfigured environment.
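That documented baseline can even be made machine-checkable. The sketch below compares observed audit settings against a documented baseline and reports drift; the specific categories, setting strings, and names are illustrative assumptions, not a prescription from the chapter.

```python
# Hypothetical documented baseline: which audit categories should be
# enabled, and at what level. Real category names will vary by policy.
BASELINE = {
    "Account Logon": "Success, Failure",
    "Directory Service Access": "Success, Failure",
    "Policy Change": "Success",
}

def audit_drift(current):
    """Return the categories whose observed setting doesn't match the
    documented baseline, as {category: (observed, wanted)}."""
    return {cat: (current.get(cat, "Not configured"), wanted)
            for cat, wanted in BASELINE.items()
            if current.get(cat) != wanted}

# A junior administrator pastes in the currently observed settings:
observed = {"Account Logon": "Success, Failure",
            "Directory Service Access": "Success",
            "Policy Change": "Success"}
print(audit_drift(observed))
# {'Directory Service Access': ('Success', 'Success, Failure')}
```

An empty result means the environment matches the documentation; anything else tells the administrator exactly which setting to reconfigure.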
Basically, any performance information related to AD is worth collecting if your tools can do so
without creating an unnecessary or undesirable performance burden on your domain controllers.
Preventing Trouble
Troubleshooting, of course, begins the moment you learn of a problem. Auditing can provide
useful troubleshooting information by helping you quickly determine what has changed in your
environment because changes are a major source of problems. However, an even more effective
method is to avoid problems as much as possible.
One way to avoid problems is to avoid change. Changes cause problems simply because AD
environments are so complex: Making a change without fully considering the ramifications can
often break things or at least cause them to work less than optimally. Change, however, is the
one constant you’ll always have in the IT industry, so although it is a great idea to avoid
unnecessary changes, you will never be able to avoid change entirely.
The trick is to manage change so that it is never unexpected, never ad hoc, and never made
without being thoroughly thought through. The IT Infrastructure Library (ITIL—read more at
http://www.ogc.gov.uk/index.asp?id=2261) is a library of best practices for IT management,
and it offers a set of best practices for managing change to help avoid the problems that often
accompany it.
All changes begin with a change request being submitted and categorized. Urgent changes are
sent directly for development rather than being reviewed, but other changes go through a review
process that considers the risks of the change, its business benefits, and so forth. Lower-priority
changes are sent through a Change Advisory Board (CAB), which will often package approved
changes for scheduled implementation (perhaps once each month) along with a group of other
approved changes. The CAB considers past change documentation when making its analysis and
tries to group changes for release so that high-risk changes aren't bundled together—thereby
presenting fewer risks at the same time. Changes are also reviewed by an Executive Action
Board (EAB), which considers areas such as business impact. Urgent changes might bypass the
CAB and go directly to the EAB, where they can be approved for implementation more quickly
or, if the EAB considers the change to be lower priority, queued for the CAB's next review
meeting.
Once approved, the change is developed by a technical staff member, then reviewed for potential
problems by a peer. Once it passes peer review, the change is deployed into a test environment to
check for additional potential problems. If all goes well in testing, the change is scheduled for
deployment. Prior to deployment, affected systems are backed up in case of a problem; then the
change is deployed to the production environment. The change is immediately reviewed for
accuracy, effectiveness, and problems. If necessary, the change is rolled back and the problem
analysis is documented for future review. At that point, the change goes back to the EAB or
CAB for further consideration. Changes that don't cause a production problem are retained, and
the environment's documentation is updated to reflect the change as a part of the baseline
environment.
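One way to reason about the review flow just described is as a simple state machine. The states and transitions below are a loose paraphrase of the chapter's description, not ITIL's formal terminology, and the names are illustrative only.

```python
# Allowed transitions in a simplified change-management flow.
TRANSITIONS = {
    "submitted":   ["categorized"],
    "categorized": ["cab_review", "eab_review", "development"],  # urgent -> dev
    "cab_review":  ["approved", "rejected"],
    "eab_review":  ["approved", "cab_review"],   # EAB may defer to the CAB
    "approved":    ["development"],
    "development": ["peer_review"],
    "peer_review": ["testing"],
    "testing":     ["scheduled"],
    "scheduled":   ["deployed"],
    "deployed":    ["reviewed"],
    "reviewed":    ["baselined", "rolled_back"],
    "rolled_back": ["cab_review", "eab_review"],  # back for reconsideration
}

def is_valid_path(states):
    """Check that each consecutive pair of states is an allowed transition."""
    return all(b in TRANSITIONS.get(a, []) for a, b in zip(states, states[1:]))

# The happy path: from request all the way to an updated baseline.
happy = ["submitted", "categorized", "cab_review", "approved", "development",
         "peer_review", "testing", "scheduled", "deployed", "reviewed",
         "baselined"]
print(is_valid_path(happy))  # True
```

Modeling the flow this way makes the key property explicit: there is no legal transition that skips review, testing, or post-deployment verification, which is exactly the discipline the process is designed to enforce.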
The purpose of this process is threefold:
• To manage risk up-front by considering changes from a technical and business
perspective, and categorizing changes so that higher-priority ones receive precedence.
• To manage risk by technically reviewing changes. The idea is that more eyes on the
problem will be more likely to spot potential problems before they occur.
• To maintain a well-documented environment by ensuring that changes eventually become
a part of the environment’s overall documentation. This documentation helps to drive
decisions regarding future changes.
Figure 6.8: IntaChange’s calendar view shows upcoming changes and their details.
Another tool, the Elite Change Management System, provides similar functionality, including
features such as file attachments (allowing you to, for example, attach new network diagrams to
a change, indicating how the network configuration will be modified by the change).
On an AD-specific front, NetPro offers ChangeManager. This tool not only helps track and
manage the change management process but also helps to automate the actual implementation of
changes. Changes can be tracked through each phase of the process, and you can quickly see
which changes are pending approval, have been approved but not implemented, have been
implemented, and so forth. ChangeManager incorporates a review process to help prevent
changes that might cause problems and, most importantly, allows you to quickly identify
recently made changes. If a problem occurs, you'll know exactly what changed recently,
allowing you to focus your troubleshooting efforts on the most likely cause right away.
Perhaps the most interesting feature of ChangeManager—and a real benefit of having a change
management tool that is AD-specific—is the ability for lower-level administrators to have
ChangeManager invoke approved changes. This functionality allows a complex AD change to be
proposed, reviewed, and approved, all by senior administrators, while allowing a lower-level
administrator to actually have the tool implement the change at an appropriate time. The benefit
of this workflow is that senior administrators—the ones you trust most to design accurate
changes for your environment—can focus on design and architecture; lower-level administrators
can be trusted to implement even complex changes because ChangeManager performs the actual
implementation for them, according to the senior administrators’ designs.
Summary
This chapter explains some of the design (or redesign) goals you should keep in mind for AD to
facilitate both troubleshooting and auditing. We’ve explored performance and recordkeeping
concerns that will be an issue for regulated companies and organizations. This chapter also
introduced change management, a process that seeks to avoid problems by carefully managing
the changes that are introduced into the environment. To aid in this process, you can employ
tools that can help automate an effective change management process and make it less of a
cumbersome business process and more of a practical tool to avoid the need to troubleshoot in
the first place. Ultimately, this scenario is your goal—keep AD from having problems, and if
problems do occur, solve them as quickly as possible.
Enterprises are relying more and more on the smooth operation of AD. Of course, even the best
designed and maintained AD environment can run into problems. This guide has shown you how
to monitor AD to spot problems early as well as how to test various aspects of AD to locate
problems when they occur. As our industry matures, we’re finding new and creative ways to
recognize problems, prevent them from happening, and fix them when they do occur. Some of
these new techniques include auditing, which isn’t immediately obvious as a troubleshooting
tool. However, the primary function of auditing is to keep track of what has changed, which is
the first step you will take in almost any troubleshooting scenario.
Other new techniques for preventing problems and reducing troubleshooting time include careful
change management and control so that only planned and tested changes are introduced into your
environment. As changes are usually the culprit, you can prevent problems by preventing
problem-causing changes.
The number of tools available to help with AD troubleshooting, change management, change
auditing, and other tasks is constantly growing. Now that AD is in its second generation, it’s a
more stable and mature product, and third-party manufacturers are producing robust, mature
tools to help keep AD humming along smoothly. Getting serious about troubleshooting means
putting the right management procedures in place to manage change, the right tools in place to
help, and the right know-how—which you’ve got, now—to quickly address any problems that
arise.
Content Central
Content Central is your complete source for IT learning. Whether you need the most current
information for managing your Windows enterprise, implementing security measures on your
network, learning about new development tools for Windows and Linux, or deploying new
enterprise software solutions, Content Central offers the latest instruction on the topics that are
most important to the IT professional. Browse our extensive collection of eBooks and video
guides and start building your own personal IT library today!