Exchange Server 2010 Design and Architecture at Microsoft
Introduction
    Reasons for Microsoft IT to Upgrade
    Directory Infrastructure
    Messaging Infrastructure
    Engineering Lab
    Pilot Projects
    Production Rollout
    Message Routing
    Unified Messaging
User Education
Deployment Planning
    Introducing Exchange Server 2010 into the Corporate Production Environment
Best Practices
    Planning and Design Best Practices
Conclusion
Note: For security reasons, the sample names of forests, domains, internal resources,
organizations, and internally developed security file names used in this paper do not
represent real resource names used within Microsoft and are for illustration purposes only.
At the beginning of the new millennium, Microsoft IT operated a messaging environment that
included approximately 200 Exchange Server 5.5–based servers in more than 100 server
locations with approximately 59,000 users. The changes continued with the upgrade to
Microsoft Exchange 2000 Server, released in October 2000. Exchange 2000 Server integrated
so tightly with the TCP/IP infrastructure, Windows, and Active Directory that Microsoft IT
could no longer manage the messaging environment as an isolated infrastructure.
The shift of Microsoft IT toward a service-focused IT organization was also noticeable in the
designs and service level agreements (SLAs) that Microsoft IT established with the rollout of
Exchange Server 2003, released in October 2003. Microsoft IT designed the Exchange
Server 2003 environment for the scalability and availability requirements of a fast-growing
company. The consolidation included upgrades to the network infrastructure and the
deployment of large Mailbox servers to support 85,000 mailboxes in total. Business-driven
SLAs demanded 99.99 percent availability, including both unplanned outages and planned
downtime for maintenance, patching, and so forth. To comply with the SLAs, Microsoft IT
deployed almost 100 percent of the Mailbox servers in server clusters by using Windows
Clustering and a highly available, shared storage subsystem based on storage area network
(SAN) technology.
In 2006, before Exchange Server 2007, the environment had grown to include 130,000
mailboxes that handled 6 million internal messages and received 1 million legitimate
message submissions from the Internet daily. On average, every user sent and received
approximately 100 messages daily, amounting to an average e-mail volume per user per day
of 25 megabytes (MB). As the demand for greater mailbox limits increased, new technologies
and cost-efficient storage solutions such as direct-attached storage (DAS) were necessary to
increase the level of messaging services in the corporate environment.
Microsoft deals with the same business challenges as most other large multinational
companies. It has many remote workers, growing demands to cut costs in the current
economic environment, and a need to provide increased efficiencies through technology.
For the deployment of Exchange Server 2010, Microsoft IT defined the following key
objectives:
Figure 1 illustrates the locations of the data centers that contain Mailbox servers in the
corporate production environment. Concerning the WAN backbone, it is important to note that
Microsoft IT deliberately designed the network links to exceed capacity requirements. Only
10 percent of the theoretically available network bandwidth is dedicated to messaging traffic.
The vast majority of the bandwidth is for non-messaging purposes to support the Microsoft
product development groups.
Network Infrastructure
Each Microsoft data center is responsible for a region defined along geographical
boundaries. Within each region, network connectivity between offices and the data center
varies widely. For example, a high-speed metropolitan area network (MAN) based on Gigabit
Ethernet and Synchronous Optical Network (SONET) connects more than 70 buildings to the
Redmond data center. These are the office buildings on and off the main campus in the
Puget Sound area. In other regions, such as Asia and the South Pacific, Internet-connected
offices (ICOs) are more dominant. Broadband connectivity solutions, such as digital
subscriber line (DSL) or cable modems, provide significant cost savings over leased lines as
long as the decrease in performance and maintainability is acceptable. Microsoft IT uses this
type of connectivity primarily for regional sales and marketing offices.
[Figure legend: Data center (Active Directory, Exchange Server, data storage and backup)]
Regional data centers and main campus: The main campus and regional data
centers are connected in a privately owned WAN based on frame relay,
asynchronous transfer mode (ATM), clear channel ATM links, and SONET links.
Office buildings with standard or high availability requirements: Office buildings
connect to regional data centers over fiber-optic network links with up to eight
wavelength division multiplexing (WDM) channels per fiber pair.
Regional offices with up to 150 employees: Regional offices use a persistent
broadband connection or leased line to a local Internet service provider (ISP) and then
access their regional data centers through transparent virtual private network over
Internet Protocol security (VPN/IPsec) tunnels.
Mobile users: Mobile users use a dial-up or non-persistent broadband connection to a local
ISP, and then access their mailboxes through VPN/IPsec tunnels, or by using Microsoft
Exchange ActiveSync®, remote procedure call (RPC) over Hypertext Transfer Protocol
(RPC over HTTP, also known as Outlook Anywhere), or Microsoft Outlook Web Access
over HTTP Secure (HTTPS) connections.
Directory Infrastructure
Like many IT organizations that must keep business units strictly separated for legal or other
reasons, Microsoft IT has implemented an AD DS environment with multiple forests. Each
forest provides a clear separation of resources and establishes strict boundaries for effective
security isolation. At Microsoft, some forests exist for legal reasons, others correspond to
business divisions within the company, and yet others are dedicated to product development
groups for early pre-release deployments and testing without interfering with the rest of the
corporate environment. For example, by maintaining separate product development forests,
Microsoft IT can prevent uncontrolled AD DS schema changes in the Corporate forest.
Corporate: Seventy percent of the resources used at Microsoft reside in this forest. The
Corporate forest includes approximately 155 domain controllers. The AD DS database is
35 GB in size, with more than 180,000 user objects in the directory.
Corporate Staging: Microsoft IT uses this forest to stage software images, gather
performance metrics, and create deployment documentation.
Exchange Development: Microsoft uses this forest for running pre-release Exchange
Server versions in a limited production environment. Users within this forest use beta or
pre-beta versions in their daily work to help identify issues before the release of the
product. Microsoft IT manages and monitors this forest, while the Exchange Server
development group hosts the mailboxes in this forest to validate productivity scenarios.
Extranet: Microsoft IT has implemented this forest to provide business partners with
access to corporate resources. There are approximately 30,000 user accounts in this
forest.
MSN®: MSN is an online content provider through Internet portals such as MSN.com.
Microsoft IT manages this forest jointly with the MSN technology team.
Note: Microsoft IT maintains a common global address list (GAL) across all relevant forests
that contain Exchange Server organizations by using AD DS GAL management agents,
available in Microsoft Identity Lifecycle Manager 2007.
Note: Microsoft IT does not use domains to decentralize user account administration. The
human resources (HR) department centrally manages the user accounts, including e-mail
address information, in a separate line-of-business (LOB) application. The HR system
provides advanced business logic not readily available in AD DS to enforce consistency and
compliance. It is the authoritative source of user account information, synchronized with
AD DS through Identity Lifecycle Manager 2007.
Four sites contain Mailbox servers. The remaining Active Directory sites in the environment
contain infrastructure servers, domain controllers to handle authentication requests from
client workstations, and LOB applications, but no Exchange servers related to the
production implementation of Exchange Server.
Note: The Active Directory site topology at Microsoft mirrors the network layout of the
corporate production environment, with ADSITE_REDMOND as the hub site in a hub-and-
spoke arrangement of sites and site links.
Microsoft IT continues to use the dedicated Exchange site in its Exchange Server 2007 and
Exchange Server 2010 environment for the following reasons:
Warning: Implementing dedicated Active Directory sites for Exchange Server increases the
complexity of the directory replication topology and the required number of domain
controllers in the environment. To maximize the return on investment (ROI), customers
should weigh the business and technical needs of their own environment when considering
dedicated Active Directory sites.
Messaging Infrastructure
The design of an Exchange Server 2007 organization relies on the Active Directory site
structure, with the availability of configuration settings that enable the administrator to adjust
site connector costs to optimize traffic flow for messaging.
As depicted in Figure 3, at the end of the Exchange Server 2007 migration, four primary sites
held Exchange servers: Redmond, Dublin, Singapore, and Sao Paulo.

[Figure 3: Exchange Server 2007 messaging topology. Separate Exchange organizations reside in
the Corporate and Extranet Active Directory forests. The regional data centers host combinations
of Mailbox, public-folder, Client Access, Hub Transport (with antivirus), and Unified Messaging
servers, ranging from 31 Mailbox servers in the largest site down to a single multiple-role
server in Sao Paulo, while Edge Transport servers in the Silicon Valley perimeter network handle
inbound and outbound Internet mail.]
One major change that occurred between the implementation of Exchange Server 2007 and
the beginning of the Exchange Server 2010 environment was the introduction of Microsoft
Forefront Online Protection for Exchange for Internet mail filtering.
Figure 4 illustrates how Microsoft IT aligned the Exchange Server 2010 design and
deployment processes from assessment and scoping through full production rollout. The
individual activities correspond to the phases and milestones outlined in the Microsoft
Solutions Framework (MSF) Process Model.
[Figure 4: Design and deployment activities (assessment and scoping, deployment planning
exercises, pilot projects, and production rollout, supported by the engineering lab and
pre-release deployments and TAP) mapped to the MSF Process Model phases: Envisioning (vision
and scope approved), Planning (project plans approved), Developing (scope complete),
Stabilizing (release readiness approved), and Deploying (deployment complete).]
The next sections discuss key activities that helped Microsoft IT determine an optimal
Exchange Server 2010 architecture and design for the corporate production environment.
The messaging engineers based their design decisions on specific productivity scenarios, the
scalability and availability needs of the company, and other requirements defined during the
assessment and scoping phase. For example, Microsoft IT decided to use DAGs to eliminate
single points of failure in the Mailbox server configuration and Just a Bunch of Disks (JBOD)
to reduce storage costs while at the same time increasing mailbox quotas up to 5 GB through
thin provisioning. The deployment planning exercises helped to identify required hardware
and storage technologies that Microsoft IT needed to invest in to achieve the desired
improvements.
One of the major components of those technologies was a shift to JBOD. JBOD is a disk
technology that does not take advantage of disk-based redundancy, like redundant array of
independent disks (RAID), to help protect the data stored on disk. In past versions of
Exchange Server, as in most enterprise applications today, JBOD was not an option, because
the software offered limited high-availability features and was not architected to be
resilient to storage errors. With Exchange Server 2010, Microsoft addressed these problems
directly. The product group made two investments to support JBOD: further improvements to
the input/output (I/O) profile of the database and new resiliency options.
The IOPS requirement for the Exchange database decreased by 70 percent compared with the
IOPS required in Exchange Server 2007. This is a significant achievement, considering that
Exchange Server 2007 already represented a 70 percent reduction from the requirements of
Exchange Server 2003; combined, Exchange Server 2010 requires roughly 9 percent
(0.3 × 0.3) of the IOPS of Exchange Server 2003. In addition to the reduction in IOPS, the
I/O load was smoothed out to be uniform, due to changes in the internal database
architecture.
Resiliency was first enhanced through the ability to replicate up to 16 copies of a
database to different Mailbox servers. With the potential for so many copies in any
implementation, the Exchange Server product group implemented a feature that makes
the mounted copy of the database more resilient to disk errors. Instead of failing over the
database every time a disk error is discovered, the Exchange Server 2010
database can perform an automatic page restore. This process identifies what portion of the
database is inaccessible because of the disk errors and copies that page or pages from one
of the other replicated copies of the database to an unused portion of the disk. This process
allows the disks to have a higher occurrence of errors than the software would otherwise
tolerate before failing over to a secondary copy of the data.
These changes, along with many other enhancements to the database, make it possible to run
a highly critical enterprise application like Exchange Server 2010 on extremely low-cost
storage without compromising a fail-safe design.
Engineering Lab
The Messaging Engineering team maintains a lab environment that simulates the corporate
production environment in terms of technologies, topology, product versions, and
interoperability scenarios, but without production users. The engineering lab includes
examples of the same hardware and storage platforms that Microsoft IT uses in the corporate
production environment. It provides the analysis and testing ground for the messaging
engineers to validate, benchmark, and optimize designs for specific scenarios; test
components; verify deployment approaches; and estimate operational readiness.
Testing in the engineering lab helps the messaging engineers ensure that the conceptual and
functional designs scale to the requirements and scope of the deployment project. For
example, code instabilities or missing features of beta products might require Microsoft IT to
alter designs and integration plans. Messaging engineers can verify the capabilities of
chosen platforms and work with the product groups and hardware vendors to make sure that
the deployed systems function as expected, even when running pre-release versions.
Pilot Projects
An important task of the Messaging Engineering team is to document all designs, which the
messaging engineers pass as specifications to the technical leads in the Systems
Management team for acceptance and implementation. The messaging engineers also assist
the technical leads during pilot projects and server installations and help develop a set of
build documents and checklists to give operators detailed deployment instructions for the
full-scale production rollout.
With this in mind, the Messaging Engineering team learns about the various features of the
products and compares them against the business and technical needs of the company. All
features that the product group adds to the product can therefore be validated by a separate
group to ensure that they are usable. During the Exchange Server 2010 development cycle,
engineers evaluated every feature in preparation for making future recommendations on
architecture and design. Some of the features met the design goals and needs of the
company and were integrated into the architecture, whereas other features did not meet the
needs of the company and were not implemented.
A good example of a feature that was not integrated into the architecture is distribution list
moderation. Although this feature has many uses, it did not readily fit into the existing use of
distribution lists and culture at Microsoft. It was important for the Messaging Engineering
team to determine how the feature functioned and whether it would enhance any of the
existing use cases or design goals. This evaluation also helps ensure that future
implementation of the feature will be less time consuming.
A good example of a feature that was integrated into the architecture is single item recovery.
As part of the move to a backup-free environment, single item recovery was a key
component to the overall architecture and required extensive testing in the engineering lab
and in pilot environments.
Note: Although many organizations are similar, business needs can vary. Every organization
must evaluate product features against its own business and technical needs.
Production Rollout
The server designs that the Messaging Engineering team creates include detailed hardware
specifications for each server type, which the Infrastructure Management team and the Data
Center Operations team at Microsoft IT use to coordinate the procurement and installation of
server hardware in the data centers. The Data Center Operations team builds the servers,
installs the operating systems, and joins the new servers to the appropriate forest before the
Exchange Systems Management team takes over for the deployment of Exchange
Server 2010 and related components. To achieve a rapid deployment, Microsoft IT
automated most of the Exchange Server deployment steps by using Exchange Management
Shell scripts.
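As an illustration of this automation, a scripted deployment might combine the unattended Exchange setup with Exchange Management Shell cmdlets for post-installation configuration. The server name, database name, and paths below are hypothetical examples, and the exact steps Microsoft IT scripted are not documented here; this is a minimal sketch of the approach:

```powershell
# Unattended installation of the Hub Transport, Client Access, and
# Mailbox roles (run from the Exchange Server 2010 installation media).
.\Setup.com /mode:Install /roles:HubTransport,ClientAccess,Mailbox

# Post-installation configuration via the Exchange Management Shell.
New-MailboxDatabase -Name "MDB01" -Server "EX-MBX-01" `
    -EdbFilePath "D:\MDB01\MDB01.edb" -LogFolderPath "E:\MDB01"
Mount-Database -Identity "MDB01"
```

Capturing such steps in scripts lets the Data Center Operations team hand off identically configured servers for every rollout wave.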
The most important consideration regarding Role Based Access Control (RBAC), the new
permission model in Exchange Server 2010, is that it does not affect the Exchange
Server 2003 or Exchange Server 2007 permission models, which need to coexist with it
during the migration.
Microsoft IT used universal security groups to enable cross-forest administration and placed
administrators in these groups for use in both the Exchange Server 2007 permission model
and the Exchange Server 2010 permission model. This allowed for an easier transition.
After considering the groups that administrators were members of, Microsoft IT had to
consider the components of RBAC. These components are depicted in Figure 5 and are
defined as follows:
Management role: A management role defines the cmdlets and parameters that are
available to a person or group assigned to the role. Granting rights to manage or view
objects associated with a cmdlet or parameter requires adding a management role entry for
that cmdlet or parameter to the role.
Management role entry: A management role entry is a specific assignment of a single
cmdlet or parameter to a management role. The accumulation of all the management
role entries defines the set of cmdlets and parameters that the management role can run.
Management role assignment: A management role assignment is the assignment of a
management role to a user or a universal security group. After a management role is
created, it is not usable until the role is assigned to a user or group.
Management role scope: A management role scope is the scope of influence or impact
of the person or group assigned a management role. When assigning a management role,
management scopes can be used to target which servers, organizational units, filters,
and so on the role applies to.
Management role group: A management role group is a special security group that
contains mailboxes that are members of the role group. Users can be added to and
removed from the group membership to gain or lose the assignments that have been given to
the group. The combination of all the roles on a role group defines everything that users
added to a role group can manage in the Exchange organization.
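Taken together, these components are wired up with a handful of Exchange Management Shell cmdlets. The role, scope, group, and member names below are hypothetical illustrations, not the actual Microsoft IT configuration:

```powershell
# Create a custom management role derived from a built-in parent role.
New-ManagementRole -Name "Seattle Recipient Admins" -Parent "Mail Recipients"

# Define a management scope that limits the role's impact to mailboxes
# in one organizational unit.
New-ManagementScope -Name "Seattle Users" `
    -RecipientRoot "contoso.com/Seattle" `
    -RecipientRestrictionFilter { RecipientType -eq "UserMailbox" }

# Create a role group; this implicitly creates a role assignment that
# binds the role to the group within the custom write scope.
New-RoleGroup -Name "Seattle Admins" -Roles "Seattle Recipient Admins" `
    -CustomRecipientWriteScope "Seattle Users"

# Users added to the role group receive all of its role assignments.
Add-RoleGroupMember -Identity "Seattle Admins" -Member "kim"
```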
To determine which roles to use, Microsoft IT conducted a planning exercise to map the
existing team functionality and groups with the roles that are available out of the box in
Exchange Server 2010. After creating a table and assigning the various groups to the roles,
Microsoft IT determined that it would be able to use the out-of-the-box roles, groups, and
assignments for its implementation, with a few exceptions and modifications to the built-in
role group memberships. The following exceptions were not transferred back into the product
because the product group deemed them specific to the operations of the IT environment at
Microsoft:
Helpdesk: The original idea was to use the built-in View-Only role. However, Microsoft
IT determined that this role would provide too much insight into the environment,
because almost all aspects of the environment could be viewed. This might have confused
staff who were not familiar with Exchange and might have disclosed confidential or
sensitive data (for example, journal and transport rules). Microsoft IT decided instead
to create a custom role that allows only a finite number of cmdlets.
Executive support staff: This team requires nearly unlimited rights on the mailboxes
of executives. In the existing Exchange Server 2007 model, this team is granted both
Exchange Organization Administrator rights and elevated AD DS rights, and can therefore
access any mailbox in the organization. A custom Exchange Server 2010 role provides a
way to grant the necessary tasks within a limited scope (the executives).
Exchange Control Panel modifications: The out-of-the-box rights were too broad for
Microsoft IT's deployment. To conform Exchange Control Panel access to internal
security requirements, Microsoft IT restricted access to the Exchange Control Panel by
using RBAC.
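A restricted role of the kind described for the Helpdesk can be built by deriving a custom role from a built-in one and then stripping it down to a small allow-list of view-only cmdlets. The role name, cmdlet selection, and group below are hypothetical; the actual Microsoft IT role definition is not published:

```powershell
# Derive a custom role from the built-in Mail Recipients role.
New-ManagementRole -Name "Helpdesk View" -Parent "Mail Recipients"

# Remove every role entry except a finite allow-list of Get-* cmdlets.
$allowed = @("Get-Mailbox", "Get-Recipient", "Get-MailboxStatistics")
Get-ManagementRoleEntry "Helpdesk View\*" |
    Where-Object { $allowed -notcontains $_.Name } |
    ForEach-Object {
        Remove-ManagementRoleEntry "$($_.Role)\$($_.Name)" -Confirm:$false
    }

# Assign the trimmed role to the Helpdesk security group.
New-ManagementRoleAssignment -Role "Helpdesk View" -SecurityGroup "Helpdesk Staff"
```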
There are many advantages to using a uniform management tool. The biggest advantage to
Microsoft IT is that it does not require special installation and configuration on all workstations
or servers that will administratively interact with Exchange Server. For Microsoft IT, this is
important because of its goal to automate as much of the operations and maintenance of
Exchange Server as possible. Many scripts that run on a regular basis to verify that the
system is healthy require little input. With Windows PowerShell remoting, these
scripts can run on any system where Windows PowerShell 2.0 is installed. This is
especially important during testing of beta code because of the rapidly changing nature of the
administrative tools, and it has contributed to the cost savings that Microsoft IT has achieved
through its implementation of Exchange Server 2010.
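With remoting, an administrator's workstation needs only Windows PowerShell 2.0; the Exchange cmdlets are proxied from a server. The following sketch shows the standard connection pattern; the server URL is a hypothetical example:

```powershell
# Connect to a remote Exchange Server 2010 server from a workstation
# that has no Exchange management tools installed.
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "http://ex-hub-01.contoso.com/PowerShell/" `
    -Authentication Kerberos

# Import the session; only the cmdlets permitted by the user's RBAC
# roles become available locally as proxy functions.
Import-PSSession $session

# Run Exchange cmdlets as if they were local, then clean up.
Get-ExchangeServer
Remove-PSSession $session
```

Because RBAC filters the imported cmdlet set per user, the same connection pattern works for operators, helpdesk staff, and scripts alike.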
Message Routing
Agile companies that heavily rely on e-mail in practically all areas of the business, such as
Microsoft, cannot tolerate messages that take hours to reach their final destinations.
Information must travel fast, reliably, predictably, and in accordance with security guidelines.
For Microsoft, this means “90 seconds, 99 percent of the time”—that is, 99 percent of all
messages in the corporate production environment must reach their final destination in 90
seconds worldwide. This SLA does not apply to messages that leave the Microsoft
environment, because Microsoft IT cannot guarantee message delivery across external
systems.
Microsoft established this mail-delivery SLA during the Exchange Server 2003 time frame,
and it was equally important during the design of Exchange Server 2010. Another important
design goal was to increase security in the messaging backbone by means of access
restrictions to messaging connectors, data encryption through Transport Layer Security
(TLS), and messaging antivirus protection based on Microsoft Forefront Security for
Exchange Server.
During the Exchange Server 2007 implementation, the topology and mail flow were built on
the updated messaging architecture. This architecture is still sufficient for the needs of the
organization, with the exception of some changes to the Internet-facing mail flow described in
the "Internet Mail Connectivity" section later in this paper.
Message routing at Microsoft consists of internal mail routing and mail routing to and from the
Internet. All mail routing within the Microsoft environment happens across the Hub Transport
servers. All mail routing to and from the Internet happens across the Hub Transport servers
in Redmond only and the Forefront Online Protection for Exchange environment. All of these
connections are encrypted through TLS, including the message relays to business partners.
That leaves a number of new Information Rights Management (IRM) features to consider in
the design, such as IRM Search, and integration with Active Directory Rights Management
Services (AD RMS).
The features that Microsoft IT turned on by default or configured to work during the initial
implementation included the following:
In the AD DS replication topology, this dedicated Exchange site is a tail site.
ADSITE_REDMOND is the Active Directory hub site that is used as the AD DS replication
focal point, yet this site does not contain any Hub Transport servers. Accordingly, at this
point, Exchange Server 2010 cannot use ADSITE_REDMOND as a hub site for message
routing purposes. By default, Exchange Server 2010 interprets the Microsoft IT Exchange
organization as a full-mesh topology where Exchange Server 2010 Hub servers in each
region connect to each other via a single Simple Mail Transfer Protocol (SMTP) hop. This
does not match the AD DS replication and network topology. Exchange Server 2010 uses the
full-mesh topology in this concrete scenario, because along the IP site links, all message
transfer paths appear to be direct.
Figure 6 illustrates this situation by overlaying the Active Directory site topology and the
default message routing topology.
The routing topology depicted in Figure 6 works because Exchange Server 2010 can transfer
messages directly between the sites that have Hub Transport servers (such as
ADSITE_SINGAPORE to ADSITE_DUBLIN), yet all messages travel through the Redmond
location according to the physical network layout. In this topology, all bifurcation of
messages, sent to recipients in multiple sites, occurs at the source sites and not at the latest
possible point along the physical path, which would be Redmond.
For example, if a user in Sao Paulo sends a single message to recipients in the sites
ADSITE_REDMOND-EXCHANGE, ADSITE_DUBLIN, and ADSITE_SINGAPORE, the
source Hub Transport server in Sao Paulo establishes three separate SMTP connections,
one SMTP connection to Hub Transport servers in each remote site, to transfer three
message copies. Hence, the same message travels three times over the network from Sao
Paulo to Redmond.
Based on such considerations, it is a logical conclusion that the design of an ideal Exchange
Server 2010 environment takes the implications of dedicated Active Directory sites into
account. On one hand, it is beneficial to keep the message routing topology straightforward
and keep the complexities associated with maintaining and troubleshooting message transfer
minimal. On the other hand, Microsoft IT had to weigh the benefits of eliminating the
dedicated Redmond Exchange site ADSITE_REDMOND-EXCHANGE against the impact of
such an undertaking on the overall deployment project in terms of costs, resources, and
timelines.
This approach ensured that Exchange Server 2010 could bifurcate messages traveling
between regions closer to their destination. By configuring the ExchangeCost attribute on
Active Directory site links, which Exchange Server 2010 adds to the Active Directory site link
definition, Microsoft IT was able to perform the message flow optimization without affecting
the AD DS replication topology. The ExchangeCost attribute is relevant only for Exchange
Server 2007 and Exchange Server 2010 message routing decisions between sites, not
AD DS replication.
Microsoft IT performed the following steps to optimize message routing in the corporate
production environment:
1. To establish a hub/spoke topology between all sites that have Exchange servers,
Microsoft IT created three additional Active Directory IP site links.
2. Microsoft IT specified a Cost value of 999 (highest across the AD DS topology) for these
new IP site links so that AD DS does not use these site links for directory replication.
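The Exchange-specific cost in these steps is applied with the Set-AdSiteLink cmdlet, which affects only Exchange routing decisions and leaves the AD DS replication cost untouched. The site link name and cost value below are hypothetical examples:

```powershell
# Assign an Exchange-specific routing cost to an Active Directory IP
# site link; AD DS replication continues to use the site link's Cost.
Set-AdSiteLink -Identity "DUBLIN-SINGAPORE" -ExchangeCost 10

# Review the resulting Exchange routing costs across all site links.
Get-AdSiteLink | Format-Table Name, ExchangeCost
```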
Based on the Exchange-specific Active Directory/IP site link topology, Exchange Server 2010
routes messages in the Corporate forest as follows:
Messages to a single destination: The source Hub Transport server selects the final
destination as the next hop and sends the messages directly to a Hub Transport server
in that site. For example, in the Dublin to Singapore mail routing scenario, the network
connection passes through Redmond, but the Hub Transport servers in
ADSITE_REDMOND-EXCHANGE do not participate in the message transfer.
Messages to an unavailable destination: If the source Hub Transport server cannot
establish a direct connection to a destination site, the Hub Transport server backs off
along the least-cost routing path until it establishes a connection to a Hub Transport
server in an Active Directory site. This is a Hub Transport server in
ADSITE_REDMOND-EXCHANGE, which queues the messages for transmission to the
final destination upon restoration of network connectivity.
Messages to recipients in multiple sites: Exchange Server 2010 delays message
bifurcation if possible. In the optimized topology, this means that Hub Transport servers
first transfer all messages that have recipients in multiple sites to a Hub Transport server
in ADSITE_REDMOND-EXCHANGE. The Hub Transport server in
ADSITE_REDMOND-EXCHANGE then performs the bifurcation and transfers a
separate copy of the message to each destination. For example, Exchange Server 2010
transfers a single message copy from ADSITE_SAO PAULO to
ADSITE_REDMOND-EXCHANGE, where bifurcation occurs, before transferring
individual message copies to each destination site. Again, Exchange Server 2010
transfers only a single copy per destination site. Within each site, the receiving Hub
Transport servers may bifurcate the message further as necessary for delivery to
individual recipients.
Note: Microsoft IT did not configure the REDMOND-EXCHANGE site as a hub site in the
routing topology by using the Set-AdSite cmdlet to force all messages between regions to
travel through the REDMOND-EXCHANGE site, requiring an extra SMTP hop on Hub
Transport servers in that site. Microsoft IT found no compelling reason to force all message
traffic through the North American Hub Transport servers. Establishing a hub site is useful if
tail sites cannot communicate directly with each other. In the Microsoft IT corporate
production environment, this is not an issue. For more information about hub site
configurations, see the topic “Understanding Message Routing” at
http://technet.microsoft.com/en-us/library/aa998825.aspx.
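For illustration only, the following sketch shows how a hub site would be designated if one were needed. Microsoft IT did not apply this configuration; the site name is one of the sample names used in this paper.

```powershell
# Hypothetical example only: Microsoft IT did NOT enable this in production.
# Designating a site as a hub site forces messages whose least-cost routing
# path passes through that site to stop at a Hub Transport server there.
Set-AdSite -Identity "REDMOND-EXCHANGE" -HubSiteEnabled $true

# Verify the setting.
Get-AdSite -Identity "REDMOND-EXCHANGE" | Format-List Name,HubSiteEnabled
```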
In addition to encrypting messaging traffic internally, Microsoft IT helps protect its internal
messaging environment by restricting access to incoming SMTP submission points. This
helps Microsoft IT minimize mail spoofing and ensure that unauthorized SMTP mail
submissions from rogue internal clients and applications do not affect corporate e-mail
communications.
To accomplish this goal, Microsoft IT removes all default receive connectors on the Hub
Transport servers and configures custom receive connectors by using the New-
ReceiveConnector cmdlet to accept only authenticated SMTP connections from other Hub
Transport servers and the Forefront Online Protection for Exchange environment. To meet
the needs of internal SMTP applications and clients, Microsoft IT established a separate
SMTP gateway infrastructure based on Exchange Server 2010 Hub Transport servers that
enforces mail submission access controls, filtering, and other security checks.
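The receive connector lockdown described above can be sketched as follows. The server name, connector name, and IP range are hypothetical illustrations, not the values Microsoft IT uses.

```powershell
# Sketch of the approach described above; HUB01 and the remote IP range are
# hypothetical illustrations.
# Remove the default receive connectors on a Hub Transport server.
Get-ReceiveConnector -Server "HUB01" | Remove-ReceiveConnector -Confirm:$false

# Create a custom connector that accepts only authenticated SMTP connections
# from other Hub Transport servers.
New-ReceiveConnector -Name "Internal Hub Submissions" -Server "HUB01" `
    -Usage Custom -Bindings "0.0.0.0:25" `
    -RemoteIPRanges "10.10.0.0/16" `
    -AuthMechanism Tls,ExchangeServer `
    -PermissionGroups ExchangeServers
```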
Furthermore, Microsoft IT deployed Forefront Security for Exchange Server on all Hub
Transport servers to implement messaging protection against viruses. Despite the fact that
internal messages and messages from the Internet might pass through multiple Hub
Transport servers, performance-intensive antivirus scanning occurs only once. Forefront
Security for Exchange Server adds a security-enhanced antivirus header to each scanned
message, so further Hub Transport servers do not need to scan the same message a second
time. This avoids processing overhead while maintaining an effective level of antivirus
protection for all incoming, outgoing, and internal e-mail messages.
The multiple-role approach provides the benefits of a reduced server footprint and can help
minimize the hardware costs. For example, Microsoft IT deploys multiple-role servers in Sao
Paulo for the Hub Transport server role, Client Access server role, and Unified Messaging
server role to use hardware resources efficiently. Similar to the Exchange Server 2007
design, Microsoft IT consolidated server roles in this location due to moderate workload.
However, the Mailbox servers in Sao Paulo are single-role servers to help ensure that the
Mailbox role can take advantage of all the resources available.
Microsoft IT based its decisions about whether to combine Exchange server roles on the
same hardware or separate them onto dedicated servers on capacity, performance, and
availability demands. Mailbox servers are prime examples of systems that have high capacity,
performance, and availability requirements at Microsoft. Accordingly, Microsoft IT deployed
the Mailbox servers in all regions in a single-role design. This single-role design enabled
Microsoft IT to eliminate single points of failure in the Mailbox server configuration by using
DAGs. In Redmond, Dublin, and Singapore, Microsoft IT also used the single-role design for
the remaining server roles, because these regions include a large number of users and
multiple Mailbox servers.
During the initial production deployment, it was important to standardize on a small number
of server models to increase cost efficiency. Microsoft IT used the following hardware per server role:
Client Access Two quad-core Intel Xeon X5470, 3.3 gigahertz (GHz), with 16 GB of
memory.
Hub Transport Two quad-core Intel Xeon X5470, 3.3 GHz, with 16 GB of memory.
Mailbox Two quad-core Intel Xeon X5470, 3.3 GHz, with 32 GB of memory.
Unified Messaging Two quad-core Intel Xeon X5470, 3.3 GHz, with 16 GB of
memory.
Table 1 shows the number of servers and the technologies per server role that Microsoft IT
uses in the corporate production environment to implement load balancing and fault
tolerance.
Server role    Redmond    Dublin    Singapore    Sao Paulo    Technology
Determining these requirements is a difficult process. To make this process easier, Microsoft
IT and the Exchange Server product group agreed on several initial goals. These goals
included the following requirements:
Increase the mailbox quota to 5 GB With reliance on e-mail increasing every year, a
much larger mailbox size is necessary to sustain long-term growth.
Move to low-cost storage The Exchange Server product group built Exchange
Server 2010 to support low-cost storage architectures such as JBOD. Successfully
implementing this internally is key.
Remove third-party backups The Exchange Server product group built Exchange
Server 2010 to use native protection methodologies to provide backup and restore
functionality. This also needed to be proven in a production environment.
To meet these simple goals, Microsoft IT devised a unique solution. The goal of 5-GB
mailboxes was much larger than the average mailbox in the environment, which was around
890 MB at the end of 2008. This meant that a large gap existed between the current size of
the mailboxes and the goal. To take advantage of this gap, Microsoft IT took a thin-
provisioning approach to mailbox sizing.
This risk of data loss has prevented most modern enterprise applications from taking
advantage of this lower-cost disk option in production implementations. To specifically focus
on lowering the cost of storage for customers, the Exchange Server product group modified
the built-in database resiliency to accommodate JBOD-style architectures. (These changes
are described in more depth in the section titled "Optimization of the Storage Design for
Reliability and Recoverability" later in this paper.) The new features in the product underwent
joint testing in the pre-production environment, the product group's labs, and the Messaging
Engineering team's engineering lab. Ultimately, JBOD was implemented across the
enterprise.
Thin Provisioning
Microsoft IT defines thin provisioning as provisioning storage based on expected utilization as
opposed to maximum possible utilization. Thin provisioning enables Microsoft IT to balance
projected costs (possible infrastructure footprint in the future to deliver on the promise) and
committed cost (current infrastructure footprint procured today). Determining a method for
building a storage system to accommodate this careful balance requires a mature IT
organization, and it also requires a long-term understanding of mailbox profiles and growth
rates.
Internally, Microsoft has implemented a procedure for regularly collecting statistics and
trending them in an internal database. This process involves several custom scripts that use
built-in cmdlets such as Get-MailboxStatistics. During a six-month planning stage of
implementing thin provisioning, this trending procedure revealed that the average mailbox
received 100–150 messages per day and that the average growth rate was roughly 60 MB per
user, per month. If employees continued to use e-mail in the same manner that they had
during those six months, it was reasonable to assume that mailboxes would continue
to grow at the same rate.
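A collection pass of this kind might be sketched as follows. The output path is a hypothetical illustration, and Get-MailboxStatistics is the built-in cmdlet named above.

```powershell
# Sketch of one trending collection pass; the CSV path is a hypothetical
# illustration. TotalItemSize.Value.ToMB() assumes a local Exchange
# Management Shell session.
Get-Mailbox -ResultSize Unlimited |
    Get-MailboxStatistics |
    Select-Object DisplayName, ItemCount,
        @{Name = "SizeMB"; Expression = { $_.TotalItemSize.Value.ToMB() }} |
    Export-Csv -Path "D:\Trends\MailboxStats-$(Get-Date -Format yyyyMMdd).csv" `
        -NoTypeInformation
```

Comparing successive snapshots of SizeMB per mailbox yields the monthly growth rate used for the projection below.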
Projecting growth of 60 MB per user, per month over three years results in roughly 1.8 GB of
additional mail which, added to the existing average size of 890 MB, yields a total of 2.69
GB. This meant that if the mail profile and growth rates remained the same, the average
mailbox size would be around 2.69 GB per user over the next three years, regardless of what
the mailbox limit was. Microsoft IT added another 500 MB per user based on the following
factors:
It was important for Microsoft IT to continue to monitor the growth of mail to ensure that the
growth after moving to Exchange Server 2010 matched the anticipated growth defined
previously. This means constant monitoring and comparing against the baseline to provide
enough time to purchase additional servers and storage to accommodate growth beyond the
anticipated numbers.
Storage Design
The mailbox is one of the very few components in an Exchange Server 2010 organization
that cannot be load-balanced across multiple servers. Each individual mailbox is unique and
can reside in only one mailbox database on one active Mailbox server. It follows that the
mailbox store is one of the most critical Exchange Server components that directly affect the
availability of messaging services.
With previous versions of Exchange Server, Microsoft IT relied on SAN or DAS solutions to
provide the necessary configuration for its mailbox clusters. SAN provided a higher level of
availability due to the robust architecture built around the physical disks, and it enabled
Microsoft IT to achieve the number of disks required for I/O throughput and scalability. DAS
solutions with redundant disk technologies, when combined with high-availability features
such as CCR, also provided a higher level of availability and scalability. In fact, by using SAN
and DAS with previous versions of Exchange Server, Microsoft IT was able to achieve 99.99
percent availability. These were both good solutions. However, the desire to continue to
lower the per-gigabyte cost while increasing the size of the mailbox and maintaining a high
level of availability required rethinking the solution.
Microsoft IT defined the following goals for the new storage design:
Maintain 99.99 percent availability at the service level, while putting better
measurements in place for that availability and increasing the services offered.
Increase Mailbox server resiliency by removing the two-server dependency and
distributing the redundancy across more than two servers.
Reduce storage infrastructure costs and increase mailbox quotas.
Remove backups from the environment entirely by using many copies of the data, lower-
cost JBOD arrays, and single item recovery.
To standardize the storage layout for Exchange Server 2010, help ensure reliability, and
provide scalability, Microsoft IT implemented a DAS storage device that was low cost, easy to
configure, and simple to replace. The architecture is based on 1-terabyte, 3.5-inch, 7,200-
RPM (revolutions per minute) serial-attached SCSI (SAS) drives. These drives are combined
into two shelves of 35 drives each for a total of 70 drives per JBOD array. The low cost of the
disks and the cost saved by removing the overhead of RAID are key to the overall reduction
in cost of the Exchange infrastructure. The cost savings are directly attributable to the
reduction in IOPS and the increased data resiliency in Exchange Server 2010.
Figure 7 shows an example 10-node DAG connected to five JBOD arrays. This is the
standard building block used for all DAGs. The universality of the design allows for easy
modification as the environment changes. Each JBOD array contains a total of 70 spindles;
these are split in half, with 35 on each shelf. The servers themselves are connected to one
half of the shelf.
To determine the required number of disks per database logical unit number (LUN), Microsoft
IT considered the following factors:
To understand the amount of memory required, the Exchange Server product group and
Microsoft IT performed tests to determine how much memory the early releases of Exchange
Server 2010 were using. Initial tests demonstrated that the operating system required roughly
2 GB of memory, whereas Exchange Server 2010 required roughly 4 GB of memory while
running the maximum number of active databases in the design (16 databases). Taking the
two options for memory, 32 GB or 64 GB, and subtracting the memory required for the base
operation of the server left either 24 GB for the Extensible Storage Engine (ESE) cache or 58
GB. Dividing these options by the desired number of mailboxes per server provided either 8
MB or 20 MB of ESE cache per user.
During the time that Microsoft IT evaluated its available options, the Exchange Server
product group also obtained results from its initial performance testing in its labs. The
Exchange Server product group found that mailbox profiles that transfer (send/receive) 100
messages per day would require 6 MB of ESE cache, and mailbox profiles that transfer 150
messages per day would require 9 MB of ESE cache. The observed average was 120
messages per day, which meant that the 8 MB of ESE cache available in
the 32-GB memory configuration would be sufficient to handle the desired workload.
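The sizing arithmetic above can be checked with a short calculation. The figure of roughly 3,000 mailboxes per server is an assumption inferred from the stated per-user cache values; the paper itself states only the resulting cache figures.

```powershell
# Worked check of the memory sizing described above. The mailbox count per
# server (~3,000) is an assumption, not a figure stated in the paper.
$mailboxesPerServer = 3000
$eseCache32GB = 24GB   # 32 GB minus memory reserved for the OS and Exchange
$eseCache64GB = 58GB   # 64 GB minus the same reserved memory

"{0:N1} MB per user (32-GB option)" -f (($eseCache32GB / $mailboxesPerServer) / 1MB)
"{0:N1} MB per user (64-GB option)" -f (($eseCache64GB / $mailboxesPerServer) / 1MB)

# Required cache, interpolated from the product group's lab results
# (6 MB at 100 messages/day, 9 MB at 150 messages/day) at the observed
# average of 120 messages/day: 6 + 3 * (20/50) = 7.2 MB, which fits within
# the ~8 MB available in the 32-GB configuration.
$requiredMB = 6 + (9 - 6) * (120 - 100) / (150 - 100)
```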
Considering both the cost difference and the performance requirements, Microsoft IT
designed 32 GB into its Mailbox server architecture.
DAGs allow for single database failovers and up to 16 copies of any database. The
opportunity to take advantage of this type of failover capability, combined with a low-cost
storage architecture like JBOD, requires some critical design decisions to provide an optimal
storage design.
Microsoft IT uses the following features to help ensure reliability and recoverability at the
storage level:
Currently, the system is averaging database disk utilization of around 600 GB. If System
Center Operations Manager finds that a disk is over the 750-GB quota, Microsoft IT manually
runs a set of scripts to move the mailboxes to other databases within the DAG, stub the
database, and create a new database on the same server. This process of stubbing is not
simple, but it is required to ensure a clean migration of mailboxes and re-creation of the
database. The process includes the following steps:
1. Verify that all existing copies of the databases can be mounted.
4. Move all users off the database and distribute evenly into the existing databases. This
process includes a simple algorithm that spreads the mailboxes to the databases with
the lowest number of existing mailboxes.
5. Verify that the databases contain mailboxes by using Exchange Management Shell
cmdlets:
9. Delete the log files (and *.chk files) manually from all copies of the database (active and
passive).
10. Mount the active database. At this point, the log generation number has reset.
13. Delete the .edb files from all copies of the database (active and passive).
14. Mount the database by using an Exchange Management Shell cmdlet and force the
creation of a new .edb file:
Mount-Database -Force
To monitor the number of active databases per server, Microsoft IT runs a weekly custom
script that queries the servers to determine the number of active databases. After the script
finds the number of active databases per server in the DAG, it divides the total number of
active databases in the DAG by the number of active nodes, and then moves databases as
necessary to even the load.
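The reporting portion of such a script might be sketched as follows. The DAG, database, and server names are hypothetical illustrations, and the move step is shown for a single database only.

```powershell
# Sketch of the weekly rebalancing logic described above; names are
# hypothetical illustrations.
$dag = Get-DatabaseAvailabilityGroup "DAG-REDMOND-01"
foreach ($server in $dag.Servers) {
    # Count the mounted (active) databases on each DAG node.
    $activeCount = (Get-MailboxDatabaseCopyStatus -Server $server.Name |
        Where-Object { $_.Status -eq "Mounted" }).Count
    "{0}: {1} active databases" -f $server.Name, $activeCount
}

# A database on an overloaded node can then be activated on a lightly
# loaded node:
# Move-ActiveMailboxDatabase "DB042" -ActivateOnServer "MBX07" -Confirm:$false
```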
To use server hardware efficiently while providing the required redundancy level through
DAG, Microsoft IT implemented 11-node DAGs in Redmond and 16-node DAGs in Singapore
and Dublin. Underneath the Exchange Server configuration, Exchange Server sets up a
Majority Node Set (MNS) server cluster with file-share witnesses. Each cluster node stores a
copy of the MNS quorum on the local system drive and keeps it synchronized with the other
nodes.
The file-share witness feature enables the use of a file share that is external to the DAG as
an additional vote to determine the status of the database on each of the nodes that are
participating in the DAG. This helps to avoid an occurrence of network partition within the
cluster, also known as split-brain syndrome. Split-brain syndrome occurs when all networks
designated to carry internal cluster communications fail, and nodes cannot receive heartbeat
signals from each other.
To enable the file-share witness feature, Microsoft IT specifies a file share by using the
WitnessDirectory property on the DAG configuration to specify a directory on a Hub
Transport server. For DAGs that are extended beyond a single site, Microsoft IT uses an
alternate file-share witness on the Hub Transport servers in the secondary site by using the
AlternateWitnessServer property.
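The witness configuration described above can be sketched as follows. The DAG name, server names, and directory path are hypothetical illustrations.

```powershell
# Sketch of the witness configuration described above; all names are
# hypothetical illustrations. The witness directory resides on a Hub
# Transport server, with an alternate witness in the secondary site.
Set-DatabaseAvailabilityGroup -Identity "DAG-REDMOND-01" `
    -WitnessServer "HUB01" -WitnessDirectory "D:\DAG-REDMOND-01" `
    -AlternateWitnessServer "HUB09" -AlternateWitnessDirectory "D:\DAG-REDMOND-01"
```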
Microsoft IT uses three network adapters in each DAG node. The first adapter connects the
node to the public network where Outlook clients connect for public folders and where the
Hub Transport and Client Access servers connect to interact with mailboxes. The Microsoft
Exchange Replication Service on the cluster nodes uses the second adapter on a private
network dedicated to database replication.
Another important component that is necessary to ensure high availability in the DAG-based
Mailbox server configuration is the transport dumpster feature on Hub Transport servers. This
feature enables messages to be redelivered to a mailbox database in the event of a lossy
failure. It is important to ensure that Hub Transport servers have the appropriate capacity to
handle a transport dumpster. Mailbox servers might request redelivery if a failover occurs to a
passive database before the most recent transactions have been replicated. To configure the
transport dumpster, Microsoft IT uses the Set-TransportConfig cmdlet with the following
parameters:
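A configuration of this kind might resemble the following sketch. The specific values are hypothetical illustrations, not the parameters Microsoft IT uses.

```powershell
# Sketch only; the values shown are hypothetical illustrations. The dumpster
# size caps redelivery queues per database, and the time limit bounds how
# long messages are retained for redelivery after a lossy failover.
Set-TransportConfig -MaxDumpsterSizePerDatabase 18MB -MaxDumpsterTime 7.00:00:00
```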
Note: Microsoft IT uses the same hardware configuration on all DAG nodes to maintain the
same performance level after a failover to a passive database copy.
Estimates comparing the cost of a backup environment with the increased cost of the
Mailbox servers showed a net savings in server and storage hardware.
The decision required careful planning, because removing backups from the environment
does not eliminate the need to restore e-mail messages or the cost associated with
restoration. The decision therefore required changes to help ensure the restoration of individual
mail messages and the recovery of messaging system functionality. The main architectural
elements that enabled this decision included the use of DAGs, single item recovery, and
large mailbox sizes.
In addition to the basic process of mail restoration, the need for system-wide protection was a
major component to the decision process to remove backups completely. In Microsoft IT’s
previous implementation of Microsoft Exchange, the risk of losing all copies of data and
causing a system-wide failure with no recovery mechanism has traditionally been mitigated
with a backup copy. The move away from this form of mitigation by removing backups
requires a new mitigation technique against the risk of losing all online copies of the data.
Microsoft IT currently mitigates this risk through the rigorous monitoring and disk replacement
strategy outlined in the previous section, "Mailbox Server Configuration." Without a reliable
procedure for closely monitoring disk failures and managing the disk replacement and data
rebuild process, Microsoft IT would not be able to mitigate the risk and could not move to a
backup-free environment.
Note: While the risk of data loss and the impact of that data loss are greater when no backup
copy exists, it is important to understand that closely monitoring disk failures regardless of
the backup strategy or underlying storage technology is critical to running a Microsoft
Exchange environment. If failed disks are not discovered and replaced immediately, the
likelihood of a required restore from backup and incurring the cost associated with those
restores rises significantly.
For the rollout of Exchange Server 2010 in the corporate production environment, Microsoft
IT defined the following backup and recovery requirements:
Initially, Microsoft IT implemented all three options. Every mailbox database is built as a
member of a DAG. Every mailbox database has single item recovery enabled. Each
database initially had three up-to-date copies and one lagged copy.
After implementing the solution, Microsoft IT re-evaluated the use of the lagged database
copy to meet its fourth goal. The team found that the DAG safely met this goal, and a better
option was to change the fourth lagged copy to simply be a fourth copy of the database.
Microsoft IT never implemented any form of traditional or third-party backups in its Exchange
Server 2010 environment. The first design that Microsoft IT tested in the pre-production
environment included a lagged copy. The environment that Microsoft IT tested during this
time frame consisted of the minimum recommended number of copies for an environment
that had no backup (three copies), plus one lagged copy. The lagged copy provided a point in
time that could be recovered from and offered several benefits to Microsoft, including
protection from store or database logical corruption.
Despite the benefits of being able to have all recovery data on separate disks, this model did
not meet the reduction in administrative overhead that Microsoft required. Important
considerations included:
The ability to run all databases on any available disk, rather than dedicating a subset of
disks to a copy used only for recovery. This approach helps ensure a
cost-effective balance of high availability and resource utilization.
Uniformity across all DAG nodes helps ensure that no server or set of databases is
treated separately or processed with a different set of rules.
With the previously outlined goals in mind, Microsoft IT also tested the second option for
running a backup-free environment in the pre-production environment. This method would
rely solely on the new features in the mailbox dumpster and the single item recovery
capability. To understand how these new Mailbox dumpster features and single item recovery
can be used in place of a third-party backup system, it is important to first understand how
the Mailbox dumpster works.
Unlike the first version of the mailbox dumpster, the new version is no longer simply a view of
the database. The mailbox dumpster in Exchange Server 2010 is implemented as a folder
called Recoverable Items, located within the Non-IPM subtree of the mailbox. The
folder has three subfolders: Deletions, Versions, and Purges.
Note: Exchange Server 2010 includes the capability for each mailbox to also maintain an
archive mailbox. There is a dumpster for both the primary mailbox and the archive mailbox.
Data deleted in the primary mailbox is placed in the primary mailbox dumpster, whereas data
deleted in the archive mailbox is placed in the archive mailbox dumpster.
Exchange Server 2010 includes the ability to preserve data within the mailbox for a period of
time. An administrator can enable this feature on a per-mailbox basis by setting the
SingleItemRecoveryEnabled parameter to $true on the Set-Mailbox cmdlet. The deleted item
retention window determines the period for which the deleted data is maintained. The default
period is 14 days in Exchange Server 2010 and is configurable per database or per mailbox.
To meet the 30-day recoverability requirement, Microsoft IT increased the default to 30 days.
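The two settings described above can be sketched as follows. The mailbox and database names are hypothetical illustrations.

```powershell
# Per-mailbox: enable single item recovery and extend deleted item retention
# to 30 days (the mailbox name is a hypothetical illustration).
Set-Mailbox -Identity "user01" -SingleItemRecoveryEnabled $true `
    -RetainDeletedItemsFor 30.00:00:00

# Per-database: raise the default 14-day retention window to 30 days.
Set-MailboxDatabase -Identity "DB042" -DeletedItemRetention 30.00:00:00
```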
This ability to preserve data is different from previous versions of Exchange Server because
the data cannot be completely deleted (hard-deleted) until the deletion time stamp is past the
deleted item retention window. Even if the end user attempts to hard-delete the data, the data
must be retained. With the new version of the Mailbox dumpster, instead of the messages
being hard-deleted, they are moved from the Recoverable Items\Deletions folder to the
Recoverable Items\Purges folder. All items that are marked to be hard-deleted go to this
folder when single item recovery is enabled. The Recoverable Items\Purges folder is not
visible to end users, meaning that they do not see data retained in this folder in the
Recover Deleted Items tool.
When the message deletion time stamp has exceeded the deleted item retention window,
Records Management will hard-delete the item. Figure 8 provides a visual representation of
this behavior.
In addition to preventing data from being hard-deleted before the deleted item retention
window has expired, the short-term retention features also enable versioning functionality. As shown in
Figure 8, when an item is changed, a copy-on-write occurs to preserve the original version of
the item (depicted in step five of Figure 8). The original item is placed in the Recoverable
Items\Versions folder. The employee does not see this folder.
For messages and posts (IPM.Note* and IPM.Post*), copy-on-write captures changes in
the subject, body, attachments, senders/recipients, and sent/received dates.
For other types of items, copy-on-write occurs for any change to the item, except for
moves between folders and read/unread status changes.
Drafts are exempt from copy-on-write to prevent excessive copies when drafts are auto-
saved.
The data stored in the Recoverable Items\Versions folder is indexed and discoverable for
compliance officers.
Ultimately, single item recovery was the method that Microsoft chose to use in the corporate
environment. It not only met the basic needs outlined previously (such as reduced
administrative overhead, removal of third-party backups, and improved SLAs) but also
enabled Microsoft IT to provide better compliance during day-to-day operations.
Moving from disk backups to a backup-free environment gave Microsoft IT the following
advantages:
When litigation hold is enabled, records management ceases hard-deleting dumpster data,
and the following occur:
When the end user deletes an item from the Deleted Items folder or shift-deletes an item
in Outlook or Outlook Web App from any folder (soft delete), the item is moved to the
Recoverable Items\Deletions folder. There, it can be recovered in the Recover Deleted
Items view in Outlook or Outlook Web App.
When the end user hard-deletes data from the Recover Deleted Items view (hard delete
from the Recoverable Items\Deletions folder), the item is moved to the Recoverable
Items\Purges folder.
When records management hard-deletes data from the Recoverable Items\Deletions
folder (because data has reached the age limit for deleted items), the item is moved to
the Recoverable Items\Purges folder.
When the end user modifies an item, a copy of the original item is placed in the
Recoverable Items\Versions folder based on the criteria discussed previously.
Also, when litigation hold is enabled, the properties RecoverableItemsWarningQuota and
RecoverableItemsQuota (defined either on the mailbox database or on the mailbox) are
ignored. Data is therefore not hard-deleted or prevented from being stored in the dumpster.
Employees can access the Recover Deleted Items tool and recover items on their own.
An administrator recovers mail items that the Recover Deleted Items tool cannot access
and either restores them directly to the mailbox or exports them to a PST file that is
provided to the employee.
In past implementations of Exchange Server, the former method of restoration was limited
because of the limited number of items that were presented for recovery in the Recover
Deleted Items tool. The process for the latter was long and arduous, taking one to many
weeks to complete.
Moving to the new version of the mailbox dumpster as described earlier in this paper
removes both of the problems with the earlier implementations of Exchange Server. The
amount of information that employees can directly recover through the Recover Deleted
Items tool has increased. The required procedure for recovering items that are not available
in the Recover Deleted Items tool has been simplified and is now measured in days. For all
information in the Recoverable Items\Deletions folder (which is the majority of the mail), the
employees recover these items on their own.
Messages that are stored in the Recoverable Items\Purges folder and the Recoverable
Items\Versions folder require a separate process because employees cannot discover or
recover these items on their own.
The Exchange Client Support (ECS) team provides this service at Microsoft. The members of
ECS are in the Discovery Management role. They use the Exchange Control Panel to
perform discovery searches through an easy-to-use search interface. When ECS receives a
request for a single item recovery, it takes the following actions:
1. A member of ECS uses the Exchange Control Panel to target a search against the
mailbox to locate the data in question. The framework to perform the search enables the
administrator to use Advanced Query Syntax. The administrator then extracts the
recovered data and places it in the discovery mailbox, in a folder that bears the
employee's name and the date/time that the search occurred.
18. The administrator opens the discovery mailbox via Outlook Web App or Outlook and
verifies that the content recovered is the content that the end user requested.
19. The administrator uses the Exchange Management Shell to run Export-Mailbox
against the discovery mailbox. The administrator exports the data to a PST file and
provides it to the employee to load in his or her mailbox, or the administrator helps the
employee load the data in his or her mailbox.
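The final export step might be sketched as follows. The mailbox name and folder path are hypothetical illustrations, and later service packs replace this cmdlet with New-MailboxExportRequest.

```powershell
# Sketch of the export step described above; the discovery mailbox name and
# PST folder path are hypothetical illustrations. Export-Mailbox requires a
# workstation with the appropriate Outlook components installed.
Export-Mailbox -Identity "DiscoveryMailbox01" `
    -PSTFolderPath "D:\Recovery\user01-recovered"
```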
In Exchange Server 2010, the Client Access server role supports the traditional functionality
that includes Outlook Web App, Exchange ActiveSync, POP3, and IMAP4 protocols.
Beginning in Exchange Server 2007, the Client Access server role also provides access to
free/busy data by using the Availability service and enables certain clients to download
automatic configuration settings from the Autodiscover service. For example, users might
work with Outlook Web App in a Web browser session or synchronize mobile devices by
using the Exchange ActiveSync protocol. Users can also work with the full Microsoft Office
Outlook 2007 or Microsoft Outlook 2010 client over Internet connections, accessing their
mailboxes through Outlook Anywhere.
In addition to this functionality, Exchange Server 2010 has moved MAPI access to mailboxes
off the Mailbox servers themselves and onto the Client Access server. This concept
is known as the RPC Client Access service. Moving MAPI onto the Client Access server
enables a single, common connection point for clients, which reduces the reconnection
delay that clients experience during database failovers.
To provide Microsoft employees with reliable access to their mailboxes from practically any
location that has network access, Microsoft IT defined the following requirements for the
deployment of Client Access servers:
Establish flexible and scalable client access points for each geographic region that can
accommodate the existing mobile user population at Microsoft and large spikes in mobile
messaging activity.
Preserve common URL namespaces (such as https://mail.microsoft.com) that were
established during Exchange Server 2003 rollout for all mobile messaging clients within
each individual geographical region.
Deploy all mobile messaging services on a common standardized Client Access server
platform.
Establish a unified load-balancing strategy for internal and external access.
Provide seamless backward compatibility with mailboxes still on the Exchange
Server 2007 platform during transition and provide support for cross-forest access to
enable availability of free/busy information.
Administrators can use specific cmdlets to individually configure all other messaging services
internally and externally (Outlook Web App, Exchange Web Services, Exchange Control
Panel, OAB, and Exchange ActiveSync) to use separate URLs. Microsoft IT consolidated all
of these namespaces so that mail.microsoft.com is used for /owa, /ews, /ecp, /oab, and
/Microsoft-Server-ActiveSync.
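Consolidation of this kind can be sketched with the per-service virtual directory cmdlets. The Client Access server name is a hypothetical illustration; the URL is the sample namespace used in this paper.

```powershell
# Sketch of namespace consolidation; CAS01 is a hypothetical illustration.
# Each service's external URL is pointed at the common namespace.
$url = "https://mail.microsoft.com"
Set-OwaVirtualDirectory "CAS01\owa (Default Web Site)" -ExternalUrl "$url/owa"
Set-WebServicesVirtualDirectory "CAS01\EWS (Default Web Site)" `
    -ExternalUrl "$url/ews/exchange.asmx"
Set-EcpVirtualDirectory "CAS01\ecp (Default Web Site)" -ExternalUrl "$url/ecp"
Set-OabVirtualDirectory "CAS01\OAB (Default Web Site)" -ExternalUrl "$url/oab"
Set-ActiveSyncVirtualDirectory "CAS01\Microsoft-Server-ActiveSync (Default Web Site)" `
    -ExternalUrl "$url/Microsoft-Server-ActiveSync"
```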
The namespace for the RPC Client Access Service is new to Exchange Server 2010 and is
controlled through the Client Access server array. Microsoft IT has consolidated all Client
Access server arrays by region so that all mailbox databases in Redmond point to
rpc://outlook.redmond.corp.microsoft.com, whereas the mailbox databases in the other
regions point to similarly formatted FQDNs. These FQDNs are also configured on the
hardware load balancer.
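A regional array configuration of this kind might be sketched as follows. The array name, site name, and Mailbox server naming pattern are hypothetical illustrations; the FQDN is the sample name used in this paper.

```powershell
# Sketch of the regional Client Access server array configuration; the array
# name, site, and server name pattern are hypothetical illustrations.
New-ClientAccessArray -Name "Redmond CAS Array" `
    -Fqdn "outlook.redmond.corp.microsoft.com" -Site "REDMOND-EXCHANGE"

# Point every Redmond mailbox database at the array FQDN, which is also
# configured on the hardware load balancer.
Get-MailboxDatabase | Where-Object { $_.Server -like "MBX-RED*" } |
    Set-MailboxDatabase -RpcClientAccessServer "outlook.redmond.corp.microsoft.com"
```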
The move to a hardware load balancer enables all services (internal and external) to be
managed in a similar way, reducing administrative overhead and complexity in the
environment. This move also enables the load-balancing scheme to be optimized for each
protocol. For example, HTTP affinity options range from cookie-based affinity to Secure
Sockets Layer (SSL) ID. The load-balancing configuration for some of the protocols such as
RPC/HTTP has been made stateless, which further improves the ability to scale.
Note: Microsoft IT uses a split Domain Name System (DNS) configuration to accommodate
the registration of internal Client Access servers, provide load balancing, and provide fault
tolerance for internal client connections.
Figure 9 illustrates the load-balancing configuration that Microsoft IT established for internal
and external messaging clients.
Microsoft IT established this topology during the Exchange 2000 Server time frame, which
means that Microsoft employees became accustomed to these URLs over many years.
Preserving these URLs during the transition to Exchange Server 2010 and providing
uninterrupted mobile messaging services through these URLs was correspondingly an
important objective for the production rollout.
Preserving the existing URL namespaces during the transition from Exchange Server 2007 to
Exchange Server 2010 was not as straightforward as the transition from Exchange
Server 2003 to Exchange Server 2007. Microsoft IT devised the following strategy to ensure
a smooth transition:
1. Create new namespaces for the Exchange Server 2007 namespaces in both internal
and external zones. This step moved the Exchange Server 2007 namespace from
mail.microsoft.com to owa.microsoft.com.
2. Deploy the Exchange Server 2010 Client Access servers in the corporate production
environment in each data center and Active Directory site where Exchange Server 2010
Mailbox servers were planned.
3. Create DNS records for the Exchange Server 2010 Client Access arrays and the SCP
records for the new Exchange Server 2010 Client Access servers.
4. Reconfigure the Exchange Server 2007 Client Access servers to use a new namespace.
This step includes the configuration of new SSL certificates for the Exchange
Server 2007 Client Access servers that include the new namespace.
5. Create a rule on the hardware load balancer to publish the new URL,
owa.microsoft.com.
6. Test access to Exchange Server 2007 and Exchange Server 2010 resources through
Client Access servers in all locations by manually pointing clients to them.
7. Modify the DNS records for the existing namespace to point to the Virtual IP of the new
Exchange Server 2010 Client Access server cluster. At this point, the Autodiscover SCP
records point to Exchange Server 2010 Client Access servers.
8. Modify external DNS to point the external namespace to the new Exchange
Server 2010 Client Access server rule on the hardware load balancer for external
network users.
9. Move mailboxes from Exchange Server 2007 servers to new Exchange Server 2010
servers and enable new mobile messaging features for transitioned users.
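Repointing the Autodiscover SCP records in this sequence can be done per server with a single cmdlet; the server name below is a hypothetical placeholder:

```powershell
# Sketch: repoint the Autodiscover SCP for a new Exchange Server 2010
# Client Access server at the preserved namespace (CAS01 is illustrative).
Set-ClientAccessServer -Identity "CAS01" `
    -AutoDiscoverServiceInternalUri "https://mail.microsoft.com/Autodiscover/Autodiscover.xml"
```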
As illustrated in Figure 10, Microsoft IT heavily focused the deployment on dedicated servers
with varying ratios of Mailbox servers to Client Access servers in order to establish flexible
and scalable messaging services that can accommodate large spikes in user activity. In Sao
Paulo only, Microsoft IT deployed a multiple-role server that hosts Hub Transport and Unified
Messaging server roles in addition to the Client Access server role, because of the moderate
number of users in that region.
Client Access servers only communicate directly with Exchange Server 2010 Mailbox servers
in their local Active Directory site. For requests to Mailbox servers in remote Active Directory
sites, Client Access servers must proxy or redirect the request to a Client Access server that
is local to the target Mailbox server. Microsoft IT prefers to redirect Outlook Web App users.
Keeping client connections local within each geographic region minimizes the impact of
WAN latency on client response times.
Within the corporate production environment, Microsoft IT generates four different OABs
according to each geographical region. This approach of generating and distributing OABs
locally helps Microsoft IT minimize OAB traffic over WAN connections while providing users
with relevant address information in cached or offline mode. Accordingly, Microsoft IT
configured an Exchange 2010 Mailbox server in each geographic region as the OAB
Generation (OABGen) server and local Client Access servers as hosts to download the
regional OABs. The Exchange File Distribution Service, running on each Client Access
server, downloads the OAB in the form of XML files from the OABGen server into a virtual
directory that serves as the distribution point for Outlook 2007 and Outlook 2010 clients.
Outlook 2007 and Outlook 2010 clients can determine the OAB download URL by querying
the Autodiscover service.
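Designating the regional OABGen server and its Client Access distribution points can be sketched as follows; the OAB and server names are hypothetical:

```powershell
# Sketch: host a regional OAB on a designated OABGen Mailbox server and
# publish it through the region's Client Access servers (names illustrative).
Move-OfflineAddressBook -Identity "Redmond OAB" -Server "RED-MBX01"
Set-OfflineAddressBook -Identity "Redmond OAB" -VirtualDirectories `
    "RED-CAS01\OAB (Default Web Site)","RED-CAS02\OAB (Default Web Site)"
```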
Outlook 2007 and Outlook 2010 clients on the Internet access the OAB virtual directory
through the hardware load balancers. Within the internal network, Outlook 2007 and 2010
clients can access the OAB virtual directory on Client Access servers directly without the
need to go through the hardware load balancers. Outlook 2003 clients can still download the
OABs from the public folder by using MAPI and RPCs directly or, in the case of external
Outlook 2003 clients, by using MAPI and RPCs through Outlook Anywhere.
Note: The Exchange Server 2010 Availability service is a Web service on Client Access
servers that provides Outlook 2010 clients with access to the Availability API. Outlook Web App
uses the Availability API directly and does not use the Availability service.
Figure 11 shows the cross-forest availability architecture that Microsoft IT established in the
corporate production environment. Although clients that use Outlook 2003 continue to work
with free/busy items in public folders as usual, Client Access servers communicate differently
to process availability requests from users who have mailboxes on Exchange Server 2010
and who work with Outlook 2010 or Outlook Web App. If the target mailbox is in the local
Active Directory site, Client Access servers use MAPI to access the calendar information
directly in the target mailbox. If the target mailbox is in a remote Active Directory site, the
Client Access server proxies the request via HTTP to a Client Access server in the target
mailbox’s local site, which in turn accesses the calendar in the user’s mailbox.
[Figure 11 residue: the diagram shows the REDMOND-EXTRANET forest and the
ADSITE_SINGAPORE site, with Mailbox, Client Access, multiple-role, and Exchange
Server 2003 Mailbox servers connected through InterOrg replication.]
Note: Client Access servers can use an organization-wide access method (-AccessMethod
OrgWideFB) for communication with remote Client Access servers in non-trusted forests.
However, Microsoft IT does not use this access method to avoid the need for maintaining
special system account credentials in each forest. For more information about cross-forest
availability lookups, see “Configure the Availability Service for Cross-Forest Topologies” at
http://technet.microsoft.com/en-us/library/bb125182.aspx.
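A per-user free/busy address space of the kind described above can be sketched as follows. The forest name is a hypothetical example, and this reflects the trusted-forest variant that avoids stored credentials rather than the OrgWideFB method:

```powershell
# Sketch: enable per-user free/busy lookups against a trusted remote
# forest without maintaining special account credentials.
# The forest name is illustrative only.
Add-AvailabilityAddressSpace -ForestName "dogfood.example.com" `
    -AccessMethod PerUserFB -UseServiceAccount $true
```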
Secure channel planning Moving to RPC Client Access Service means that the
number of secure channels required for authentication by the Client Access server
increases. As the number of client connections increases, an organization must monitor
and plan for the number of secure channels.
Measurement for server purchases Purchasing the correct number of Client Access
servers can be a difficult task. The Exchange Server product group has provided specific
guidance on planning for this number. However, an organization must measure internal
usage of Exchange ActiveSync, Outlook Anywhere, and other technologies to determine
the proper number of servers.
Deployment cutovers The procedure described previously has proven to be the most
straightforward method for cutting over Client Access server functions without negatively
affecting users and functionality.
Unified Messaging
Microsoft IT has provided users with unified messaging capabilities for many years, starting
with Exchange 2000 Server. The Microsoft voice-mail infrastructure before Exchange
Server 2007 relied on various private branch exchange (PBX) telephony devices and a third-
party unified messaging product that communicated with Exchange servers to deliver voice
mail to users’ inboxes. The third-party product required a direct physical connection to the
PBX device for each location that provided unified messaging services. This requirement
meant that Microsoft IT placed third-party Unified Messaging servers in the same physical
location as the PBX.
For this migration of unified messaging services worldwide, Microsoft IT defined the following
requirements:
Provide high-quality, next-generation VoIP services with clear voice-mail playback and
voice conversations at all locations.
Provide additional language packs and the opportunity for end users to help protect their
voice mail.
Increase security through encrypted VoIP communication.
Maintain a unified messaging environment that allowed for the ongoing rollout of Office
Communications Server 2007 R2 to continue.
Ensure smooth migration between third-party unified messaging systems and Exchange
Server 2010-based unified messaging.
Reduce administrative overhead by educating and enabling users for self-service.
Reduced server footprint and costs Unified Messaging servers must reside in the
same Active Directory sites as Hub Transport and Mailbox servers. Deploying Unified
Messaging servers in a decentralized way would require a decentralization of the entire
messaging environment, with associated higher server footprint and operational costs.
Optimal preparation for future communication needs Centralizing Unified
Messaging servers in four locations makes deployment of technological advancements
in VoIP technology a straightforward process. It is also easier to integrate and maintain
centralized Unified Messaging servers in a global unified communications infrastructure.
Figure 12 illustrates the Microsoft IT Unified Messaging server deployment in the corporate
production environment.
After it chose to deploy Unified Messaging servers in the four regional data centers, Microsoft
IT faced the goal of providing high voice quality for all unified messaging users. To achieve
this, Microsoft IT distributed Unified Messaging servers according to
the number of users that each region supports. For example, for Australia and Asia, Microsoft
IT determined that the two Unified Messaging servers deployed with Exchange Server 2007
were not adequate to handle the increased workload of voice-mail preview. This meant
deploying a third Unified Messaging server.
The connectivity requirements to PBXs at Microsoft locations vary according to the call load.
Microsoft IT deployed these connections years ago as part of a voice-mail solution. Existing
PBXs have T1 Primary Rate Interface (PRI) or Basic Rate Interface (BRI) trunks grouped
logically as a digital set emulation group. The T1 trunks can use channel-associated signaling
where signaling data is on each channel (24 channels for T1), or Q.SIG where there are 23
channels and a dedicated channel for signaling.
The VoIP gateway decisions depend on the type of telephony connection. VoIP gateways
support specific signaling types and trunk sizes. Microsoft IT considers the signaling types
and the size of the trunk, and then ensures that the combination meets the user load.
Figure 13 outlines the method for providing redundancy and load balancing in Exchange
Server 2010 Unified Messaging.
[Figure 13 residue: Redmond connects through dual T1 trunks, with primary and secondary
SMDI links, to the Unified Messaging and Mailbox servers; Silicon Valley uses single T1
trunks with primary and secondary SMDI links; other locations use 8-line digital set
emulation gateways for additional ports. All sites rely on Active Directory.]
Microsoft IT decided that the minimum level of scalability and flexibility in the unified
messaging environment required at least two VoIP gateways communicating with at least two
Unified Messaging server partners. Microsoft IT based this decision on a few considerations.
First, using two VoIP gateways and two Unified Messaging servers helps ensure that if one
telephony link or network link fails at any time, users can still receive unified messaging
services.
After deciding to use a minimum of two VoIP gateway devices by using two Unified
Messaging servers as communication partners, Microsoft IT considered the type and
capacity of VoIP gateway to use. Microsoft IT followed several technical and business
requirements for making VoIP gateway selections for each location, as follows:
Connectivity type For Microsoft IT, the connectivity type came down to two choices of
digital connections: PRI T1 or BRI emulated as a digital set. Microsoft IT eliminated
analog connections immediately because of cost and scalability factors. For small sites,
Microsoft IT uses BRI (PIMG80PBXDNI); for large sites, Microsoft IT uses either
TIMG300DT or TIMG600DT. Whereas TIMG300DT supports a single T1 for each
device, TIMG600DT supports dual T1s. Microsoft IT varied the number of T1s depending
on usage, employing dual T1s in Redmond. Other sites used BRI trunks emulated as a
digital set, with either 8 or 16 lines per gateway, depending on user load.
Simplified Message Desk Interface (SMDI)/signaling integration Intel gateways
provide a standard, supported SMDI integration, which is a decision factor for Microsoft
IT. To accomplish SMDI integration with Intel gateways, Microsoft IT connected multiple
gateways to the same SMDI link by using two primary gateways and multiple secondary
gateways. By doing this, Microsoft IT can switch over from one primary gateway to
another, enabling gateway firmware updates with no service interruption.
Secure protocols In the unified messaging environment, all traffic that uses SIP can
use Mutual Transport Layer Security (MTLS). This includes the traffic between VoIP
gateway devices and the Unified Messaging servers.
Trusted local area networks (LANs) To help prevent network sniffing and reduce
overall security risks, Microsoft IT places VoIP gateways on a virtual LAN (VLAN)
separate from the corporate production environment. This makes traffic access possible
only for authorized individuals who have physical access to VoIP gateways. Moreover,
Unified Messaging servers communicate only with gateways explicitly listed in the dial
plan.
In addition to these security measures, Microsoft IT enforces general security practices such
as using strong authentication methods and strong passwords.
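The gateway registration and encrypted-SIP settings described above can be sketched as follows; the gateway, dial plan, and host names are hypothetical:

```powershell
# Sketch: register a VoIP gateway so that Unified Messaging servers accept
# its traffic, associate it with the dial plan through a hunt group, and
# require mutual TLS on the dial plan. All names are illustrative.
New-UMIPGateway -Name "RED-GW01" -Address "red-gw01.corp.microsoft.com"
New-UMHuntGroup -Name "RED-GW01 Hunt Group" `
    -UMDialPlan "Redmond Dial Plan" -UMIPGateway "RED-GW01"
Set-UMDialPlan -Identity "Redmond Dial Plan" -VoIPSecurity Secured
```

Because Unified Messaging servers only talk to gateways listed in the dial plan, the `New-UMIPGateway` step doubles as an allow list.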
Among the major considerations during the migration, Microsoft IT defined the following
goals for the design of Internet mail connectivity:
Increase security by using Microsoft Forefront Protection 2010 for Exchange Server in
combination with Forefront Online Protection for Exchange.
Adopt robust spam-filtering and virus-filtering methods by using the Forefront Online
Protection for Exchange environment.
Adopt Forefront Protection 2010 for Exchange Server running on the Hub Transport role
to filter virus messages by using more than one virus engine.
Develop a fault-tolerant system that balances both incoming and outgoing message
traffic.
During the migration to Exchange Server 2010, Microsoft IT updated the design and moved
the incoming/outgoing e-mail functionality to reside only in the Redmond site. Additionally,
Microsoft IT removed the Exchange Edge servers and migrated the outgoing connectors to
the Hub Transport servers in the respective sites. This enables greater scalability and
reduces management overhead while still maintaining rigorous security due to using
Forefront Online Protection for Exchange as the Internet-facing presence. The benefits from
this model include:
The outgoing relays are members of the local Active Directory site, allowing for easier
administration.
The additional overhead of managing another server type was removed from the
environment.
Security is still maintained because the Forefront Online Protection for Exchange layer
acts as the Internet presence.
Figure 14 shows the Internet mail connectivity and topology with Exchange Server 2010.
[Figure 14 residue: FOPE delivers mail through routers and BIG-IP hardware load balancers
to the Client Access and Hub Transport servers, Mailbox servers, and domain controllers
on Corpnet.]
Externally, for incoming message transfer from the Internet, Microsoft IT points its mail
exchanger (MX) records to the Forefront Online Protection for Exchange. The Forefront
Online Protection for Exchange infrastructure provides the initial tier of e-mail spam filtering.
From the Forefront Online Protection for Exchange environment, each accepted domain is
entered and configured to have mail routed to a hardware load balancer that sits in front of
the Hub Transport servers. This method enables Microsoft IT to take advantage of the
Forefront Online Protection for Exchange infrastructure to help ensure high availability of the
incoming/outgoing messaging platform.
Note: In addition to DNS host (A) and MX records for round-robin load balancing, Microsoft IT
maintains Sender ID (Sender Policy Framework, or SPF) records. The Sender ID framework
relies on SPF records to identify messaging hosts that are authorized to send messages for
a specific SMTP domain, such as microsoft.com. Internet mail hosts that receive messages
from microsoft.com can look up the SPF records for the domain to determine whether the
sending host is authorized to send mail for Microsoft users. To enable this feature while
using Forefront Online Protection for Exchange, Microsoft has pointed its MX records at
mail.messaging.microsoft.com and modified its SPF records to include "v=spf1
include:spf.messaging.microsoft.com -all". This configuration ensures that incoming e-mail goes to
Forefront Online Protection for Exchange and microsoft.com approves outgoing mail from
Forefront Online Protection for Exchange.
Note: The security of the Forefront Online Protection for Exchange environment is
maintained separately by the Forefront Online Protection for Exchange team and is not
covered in this paper. More information is available on the Microsoft Online Services Web
site at http://www.microsoft.com/online/exchange-hosted-services/filtering.mspx.
Server Hardening
Moving the servers that relay to the Forefront Online Protection for Exchange and eventually
to the Internet to inside the network also removed the need to have a large number of server
hardening methodologies applied to the servers. During the Exchange Server 2007 time
frame, methodologies like separate network interface cards (NICs) and port lockdowns were
required because the Exchange Server 2007 Edge servers were in the perimeter network.
With Exchange Server 2010 and the decommissioning of the Edge Transport role, Microsoft
IT reconfigured the requirements for hardening to match the requirements for every other
server inside the network. The hardening standards for servers inside the network are still
significant, but they differ from the requirements that apply outside the network.
2. Directory lookups are configured by synchronizing the Microsoft directory system with
the Forefront Online Protection for Exchange environment. After a message has passed
the connection filter, it is qualified against the e-mail addresses that are valid within the
organization.
3. The filter for file name extensions evaluates each message that passes through the first
two filters and ensures that if the message contains an attachment, the attachment does not
use one of the 50 file name extensions that Microsoft does not accept.
4. Virus scanning uses all five scan engines and is configured with maximum certainty. This
check blocks only messages that are highly likely to contain a virus.
After a message passes the Forefront Online Protection for Exchange checks, it moves to the
internal Hub Transport servers. Microsoft IT considered the available options and
implemented Forefront Server for Exchange to provide a second layer of transport protection.
After initial testing of both Forefront Online Protection for Exchange and Forefront Server for
Exchange, Microsoft IT decided to use the spam protection in Forefront Online Protection for
Exchange and the virus protection in Forefront Server for Exchange.
Internet incoming mail For incoming mail from the Internet to the internal Exchange
Server organization, Microsoft IT configured the Forefront Online Protection for
Exchange environment to send messages to redundant hardware load balancers that sit
in front of the Hub Transport servers in Redmond. The Hub Transport servers are
configured with one receive connector that accepts anonymous SMTP connections only
from specific IP addresses.
Cross-forest incoming mail For incoming mail from the separate forests such as the
Dogfood forest, Microsoft IT configured one receive connector that accepts anonymous
SMTP connections as trusted from the Hub Transport servers in those Exchange
organizations.
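A connector of the kind described for incoming FOPE mail can be sketched as follows; the connector name, server name, and IP range are hypothetical placeholders:

```powershell
# Sketch: a receive connector that accepts anonymous SMTP only from the
# FOPE delivery ranges, behind the hardware load balancer.
# Names and the IP range are illustrative only.
New-ReceiveConnector -Name "From FOPE" -Server "RED-HUB01" -Usage Custom `
    -Bindings "0.0.0.0:25" -RemoteIPRanges "203.0.113.0/24" `
    -PermissionGroups AnonymousUsers
```

Restricting `RemoteIPRanges` to the filtering service's published addresses is what keeps the anonymous permission group from exposing the Hub Transport servers to arbitrary Internet hosts.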
The team responsible for user education was involved with the planning of the Exchange
Server 2010 implementation from the beginning. Initially, the user-education team built a
Microsoft SharePoint® site where information such as FAQ and announcements could be
posted, and interaction with the employees could occur through feedback forms and user
forums. The Web site content was intended for those who wanted a lot of information or were
confused by the generalizations made in other forms of communications. The other forms of
communication included e-mail messages to all employees, e-mail messages to application
owners who connected to Exchange Server, and announcements on the company’s internal
portal.
Engaging with application owners early is an important part of the successful migration of
existing applications from Web-based Distributed Authoring and Versioning (WebDAV) to
Exchange Web Services. The user-education team began communications with the
application owners several months before the pilot in production and offered support and
deep technical assistance in the support forums on the SharePoint site. During the migration
itself, the user-education team handled communications regarding timelines, coordination of
test accounts, and eventual coordination of migrating the applications.
Educating end users is equally important and requires careful planning to ensure that the
right information is delivered to end users in a relevant time frame. The user-education team
sent three e-mail messages to end users. The first message described the project, the
timeline, and the new features that would be available (such as Exchange Control Panel).
This message also contained links and references to the SharePoint site where more
information was available.
The user-education team sent the second e-mail message directly to users before the
migration. This message contained more specifics about the features that would be available
and what to expect during the migration. This message included details about how access to
e-mail would not be interrupted during the process thanks to Online Mailbox Moves, yet users
would get a pop-up message at the end of the migration, at which point they should close
and re-open Outlook. This level of instruction and explanation likely prevented a significant
number of Helpdesk calls regarding what was going to occur during the migration.
The third and final e-mail message contained a welcome to Exchange Server 2010 along
with instructions for reviewing the new features (to increase user productivity). It also
reinforced the non-human and human support options (to reduce the overall cost of the
implementation).
The thorough communication effort resulted in fewer Helpdesk calls than previous
implementations of Exchange Server. Despite the number of new features that were
implemented (such as MailTips, Exchange Control Panel, and Message Tagging), calls
related to assistance with new features did not increase.
The high-level deployment plan that the Messaging Engineering team recommended for the
transition to Exchange Server 2010 in the corporate production environment included the
following phases:
Note: In addition to business and technical requirements, the Messaging Engineering team
had to consider several unique software issues because the designs were based on beta
versions of Exchange Server 2010. Important features, such as the new versions of OAB
and messaging records management (MRM), in addition to the related software products,
such as Forefront Security for Exchange Server, were not yet ready for deployment when
Microsoft IT started the production rollout. The Microsoft IT deployment plans reflect these
dependencies, which became obsolete with the release of Exchange Server 2010.
Figure 15 illustrates how Client Access servers support users with mailboxes on Exchange
Server 2010 and Exchange Server 2007.
Figure 15. Coexistence of Exchange Server 2010 and Exchange Server 2007 Mailboxes
As described earlier, redirection is a large portion of the Client Access server strategy for
Microsoft IT. To fully make the transition from Exchange Server 2007 to Exchange
Server 2010, Microsoft IT gave special consideration to the redirection methodology for
Exchange Server 2010. Redirection in Exchange Server 2010 back to Exchange Server 2007
is different from how Exchange Server 2007 handled communication with Exchange
Server 2003. This is due to the removal of WebDAV support and the associated earlier virtual
directories (that is, /public/* and /exchange/*).
These steps are necessary to complete the move from Exchange Server 2007 to Exchange
Server 2010 as the public-facing version of Exchange Server. It is important to understand
that Exchange Server 2010 Client Access server does not support rendering mailbox data
from earlier versions of Exchange.
For Outlook Web App, Exchange 2010 Client Access server follows one of four scenarios
depending on the target mailbox's version and/or location:
If the Exchange Server 2007 mailbox is in the same Active Directory site as Exchange
Server 2010 Client Access server, Exchange Server 2010 Client Access server will
silently redirect the session to the Exchange Server 2007 Client Access server.
If the Exchange Server 2007 mailbox is in another Internet-facing Active Directory site,
Exchange Server 2010 Client Access server will manually redirect the user to the
Exchange Server 2007 Client Access server.
If the Exchange Server 2007 mailbox is in an Active Directory site that is not Internet
facing, Exchange Server 2010 Client Access server will proxy the connection to the
Exchange Server 2007 Client Access server.
If the mailbox is Exchange Server 2003, Exchange Server 2010 Client Access server will
silently redirect the session to a pre-defined URL.
For Exchange ActiveSync, Exchange Server 2010 Client Access server does not support
rendering mailbox data from earlier versions of Exchange Server. Exchange Server 2010
Client Access server follows one of four scenarios depending on the target mailbox's version
and/or location, and device capabilities:
If the Exchange Server 2007 mailbox is in the same Active Directory site as Exchange
Server 2010 Client Access server and the device supports Autodiscover, Exchange
Server 2010 Client Access server will notify the device to synchronize with Exchange
Server 2007 Client Access server.
If the Exchange 2007 mailbox is in the same Active Directory site as Exchange
Server 2010 Client Access server and the device does not support Autodiscover,
Exchange Server 2010 Client Access server will proxy the connection to Exchange
Server 2007 Client Access server.
If the Exchange 2007 mailbox is in an Active Directory site that is not Internet facing,
Exchange Server 2010 Client Access server will proxy the connection to the Exchange
Server 2007 Client Access server.
After the Client Access servers were fully deployed, Microsoft IT considered the task of
enabling some of the additional features, such as the ability to update the version of Microsoft
Office Outlook Mobile on Windows Mobile® 6.1 phones.
After Microsoft IT decided to move to a model where spam scanning was handled in the
Forefront Online Protection for Exchange environment, it was crucial for Microsoft IT to
maintain virus-scanning on the internal Hub Transport servers. Accordingly, Microsoft IT
began to test Forefront Security for Exchange Server as soon as the software was available
in a stable version. The criteria that Microsoft IT defined for the deployment in the corporate
production environment included the following points:
The antivirus solution works reliably (that is, without excessive failures) and with
acceptable throughput according to the expected message volumes.
The software can find known viruses in various message encodings, formats, and
attachments.
If the software fails, message transport must halt so that messages do not pass the Hub
Transport servers until they are scanned.
After Microsoft IT completed these tests successfully, Microsoft IT deployed Forefront
Security for Exchange Server. The new Exchange Server 2010 Hub Transport servers then
superseded the Hub Transport servers running Exchange Server 2007. After Microsoft IT completed the transition
of mail routing, it decommissioned the Exchange Server 2007 Hub Transport servers.
The final switch to Exchange Server 2010 in the messaging backbone did not require many
further changes. One important step was to optimize the message-routing topology by using
Exchange Server-specific Active Directory site connectors in order to implement a hub/spoke
topology along the physical network links.
Use multiple-core processors and design storage based on both capacity and I/O
performance During testing, Microsoft IT determined that multi-core processors
provide substantial performance benefits. However, processing power is not the only
factor that determines Mailbox server performance. It is also important to design the storage
subsystem for both capacity and I/O performance.
Increasing user productivity also entails maintaining availability levels for messaging services
according to business requirements and SLAs. To accomplish this goal, Microsoft IT heavily
focuses on single-role Mailbox server deployments with Exchange Server 2010. Dedicated
Mailbox servers support database availability groups. Microsoft IT uses database availability
groups to increase resiliency against storage-level failures, and uses single-role and
multiple-role server deployments in smaller sites to maintain resiliency during unexpected
outages.
http://www.microsoft.com
http://www.microsoft.com/technet/itshowcase
The information contained in this document represents the current view of Microsoft Corporation on the issues
discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it
should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the
accuracy of any information presented after the date of publication.
This white paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS,
IMPLIED, OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT.
Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under
copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or
transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for
any purpose, without the express written permission of Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights
covering subject matter in this document. Except as expressly provided in any written license agreement from
Microsoft, the furnishing of this document does not give you any license to these patents, trademarks,
copyrights, or other intellectual property.
Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses,
logos, people, places, and events depicted herein are fictitious, and no association with any real company,
organization, product, domain name, e-mail address, logo, person, place, or event is intended or should be
inferred.
Microsoft, Active Directory, ActiveSync, Forefront, MSN, Outlook, SharePoint, Windows, Windows Mobile,
Windows PowerShell, and Windows Server are either registered trademarks or trademarks of Microsoft
Corporation in the United States and/or other countries.