Office 365 for IT Pros (Companion Volume for 2023 Edition)
Mastering Microsoft 365 Office Applications
Published by Tony Redmond (https://office365itpros.com)
© Copyright 2015-2022 by Tony Redmond.
All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means
without the written permission of the authors.
The example companies, organizations, products, domain names, email addresses, logos, people, places, and
events depicted herein are fictitious. No association with any real company, organization, people, domain
name, email address, logo, person, place, or event is intended or should be inferred. The book expresses the
views and opinions of the authors. The information presented in the book is provided without any express,
statutory, or implied warranties. The authors cannot be held liable for any damages caused or alleged to be
caused either directly or indirectly by this book.
Although the authors are members of Microsoft’s Most Valuable Professional (MVP) program, the content of this
book solely represents their views and opinions about Office 365 and any other technologies mentioned in the
text and is not endorsed in any way by Microsoft Corporation.
Please be respectful of the rights of the authors and do not make copies of this eBook available to
others.
Ninth (2023) edition. Previous editions:
• Office 365 for Exchange Professionals (May 2015 and September 2015).
• Office 365 for IT Pros (3rd edition – June 2016).
• Office 365 for IT Pros (4th edition – June 2017).
• Office 365 for IT Pros (5th edition – July 2018).
• Office 365 for IT Pros (6th edition – July 2019).
• Office 365 for IT Pros (7th edition – July 2020).
• Office 365 for IT Pros (8th edition – July 2021).
This is the companion volume for Office 365 for IT Pros (2023 edition). Its content is valuable, but we do not
update it as often as we do for material in the main book.
This update of the 2023 companion volume was published in July 2022.
The photo on the front cover is of a sunset over an island near Dubrovnik, Croatia, taken by Tony Redmond in
May 2022.

Table of Contents
Chapter 1: Introduction .............................................................................................................................................................................. 6
Welcome to the Companion Volume .............................................................................................................................................. 6
Office 365 History .................................................................................................................................................................................... 6
Technical steps along the path to the cloud................................................................................................................................. 7
Wave 14: Office 365 launches .......................................................................................................................................................... 10
Wave 15: Cloud-Ready by Design .................................................................................................................................................. 11
Wave 16: Now in production ........................................................................................................................................................... 12
The Next Wave ....................................................................................................................................................................................... 13
Microsoft 365 ......................................................................................................................................................................................... 13
Chapter 2: Exchange Mailbox Migration .......................................................................................................................................... 15
Migration approaches for Exchange Online .............................................................................................................................. 15
Managing a Migration Project ......................................................................................................................................................... 19
Preparing to Migrate to Exchange Online .................................................................................................................................. 20
Cutover Migration ................................................................................................................................................................................ 26
Staged Migration .................................................................................................................................................................................. 29
Hybrid Migration ................................................................................................................................................................................... 30
Post-Migration Tasks ........................................................................................................................................................................... 31
Other Exchange Online Migration Types .................................................................................................................................... 33
Data Migration with the Office 365 Import Service ................................................................................................................ 35
Migration of legacy public folders ................................................................................................................................................. 39
Migration of modern public folders from on-premises Exchange.................................................................................... 42
Public folder migration methodologies ....................................................................................................................................... 43
Migrating public folders to Office 365 Groups ......................................................................................................................... 45
Migrating legacy email archives ..................................................................................................................................................... 45
Running a Cutover Migration .......................................................................................................................................................... 47
Running a Staged Migration ............................................................................................................................................................ 57
Chapter 3: Managing Office 365 Addressing ................................................................................................................................. 63
Email Address Policies......................................................................................................................................................................... 63
Address Lists ........................................................................................................................................................................................... 68
Offline Address Book (OAB) ............................................................................................................................................................. 71
Hierarchical Address Book (HAB) ................................................................................................................................................... 73
Address Book Policies ......................................................................................................................................................................... 76
Chapter 4: Managing Hybrid Connections ...................................................................................................................................... 80

Hybrid Workloads ................................................................................................................................................................................. 80
Hybrid Exchange Architecture ......................................................................................................................................................... 81
The Exchange Hybrid Configuration Wizard .............................................................................................................................. 96
Considerations for a Hybrid Exchange Deployment ............................................................................................................... 99
Preparing for a Hybrid Exchange Configuration .................................................................................................................... 105
Configuring a Hybrid Exchange Connection............................................................................................................................ 106
Managing a Hybrid Exchange Deployment ............................................................................................................................. 110
Life After Hybrid .................................................................................................................................................................................. 111
Chapter 5: Managing Hybrid Recipients ........................................................................................................................................ 115
User Mailboxes .................................................................................................................................................................................... 115
Preserving Mailboxes for ex-Employees .................................................................................................................................... 123
Shared Mailboxes ............................................................................................................................................................................... 128
Archive Mailboxes............................................................................................................................................................................... 130
Hybrid Public Folders ........................................................................................................................................................................ 133
Groups ..................................................................................................................................................................................................... 134
Email Addresses ................................................................................................................................................................................... 140
Directory-Based Edge Blocking ..................................................................................................................................................... 142
Moving Mailboxes .............................................................................................................................................................................. 144
Recovering Soft-deleted Mailboxes in a Hybrid Environment ......................................................................................... 153
Chapter 6: Office 365 Analytics .......................................................................................................................................................... 156
Fitbit for the Office ............................................................................................................................................................................. 156
MyAnalytics (Viva Insights) ............................................................................................................................................................. 156
Using the Insights Dashboard ....................................................................................................................................................... 160
Workplace Analytics........................................................................................................................................................................... 174
Company Culture is Critical............................................................................................................................................................. 175
Chapter 7: Exchange Online ................................................................................................................................................................ 177
Public Folders ....................................................................................................................................................................................... 177
Differences with On-Premises Public Folders .......................................................................................................................... 179
Public Folder Moderation ................................................................................................................................................................ 184
Public Folder Clients .......................................................................................................................................................................... 184
Public Folders and Compliance ..................................................................................................................................................... 187
Discovery mailboxes .......................................................................................................................................................................... 187
Inbox Rules ............................................................................................................................................................................................ 187
Calendar Sharing ................................................................................................................................................................................. 190
Resource Mailboxes ........................................................................................................................................................................... 192
Exchange Online Mailbox Retention Policies........................................................................................................................... 197

Office 365 and Groups ...................................................................................................................................................................... 216
Managing POP and IMAP Clients ................................................................................................................................................. 218
Managing the Focused Inbox ........................................................................................................................................................ 220
Reporting Exchange Administrative Audit Data with PowerShell ................................................................................... 226
Reporting Mailbox Audit Data with PowerShell ..................................................................................................................... 228
Exchange DLP policies ...................................................................................................................................................................... 230
Creating a New DLP policy for Exchange .................................................................................................................................. 233
Building out an Exchange DLP policy ......................................................................................................................................... 239
Hybrid Exchange DLP ........................................................................................................................................................................ 240
The Big Funnel Mailbox Index ....................................................................................................................................................... 240
Using the Search-Mailbox Cmdlet ............................................................................................................................................... 241
Chapter 8: Office 365 Information .................................................................................................................................................... 251
Importing PSTs into Office 365 ..................................................................................................................................................... 251
Microsoft Forms .................................................................................................................................................................................. 260
Sway ......................................................................................................................................................................................................... 270
Deprecated SharePoint Online and OneDrive For Business Features ............................................................................ 273
PowerShell for Power Apps............................................................................................................................................................. 274
Chapter 9: Directory Synchronization ............................................................................................................................................. 276
The Basics of Directory Synchronization ................................................................................................................................... 276
Azure AD Connect Technical Concepts ...................................................................................................................................... 277
Installing Azure AD Connect .......................................................................................................................................................... 286
Customizing Azure AD Connect.................................................................................................................................................... 289
Managing and Monitoring Directory Synchronization ........................................................................................................ 291
Upgrading Azure AD Connect ....................................................................................................................................................... 295
Chapter 10: The Hybrid Configuration Wizard ............................................................................................................................ 297
Starting the HCW ................................................................................................................................................................................ 297
Server Detection .................................................................................................................................................................................. 298
Cutover or Hybrid ............................................................................................................................................................................... 299
Trusts and Domains ........................................................................................................................................................................... 300
Minimal or Full Hybrid ...................................................................................................................................................................... 301
Mail Routing ......................................................................................................................................................................................... 303
Organization ......................................................................................................................................................................................... 307
Starting the Configuration .............................................................................................................................................................. 308
Completing Configuration .............................................................................................................................................................. 311
Modern Hybrid Architecture .......................................................................................................................................................... 312
Chapter 11: Active Directory Federation Services ...................................................................................................................... 314

Configuring Active Directory Federation Services ................................................................................................................. 314
Restricting access to Office 365 through AD FS ..................................................................................................................... 322
Enabling password updates through AD FS ............................................................................................................................ 326
Enabling "Persistent SSO"................................................................................................................................................................ 326
Chapter 12: Delve .................................................................................................................................................................................... 327
Mastering Information ...................................................................................................................................................................... 327
Delve Browser App ............................................................................................................................................................................. 332
Privacy and Security ........................................................................................................................................................................... 340
Delve and Exchange Online ............................................................................................................................................................ 344
Hybrid Delve ......................................................................................................................................................................................... 345
Chapter 13: Basic Mobile Management ......................................................................................................................................... 346
Mobile Connectivity to Exchange Online .................................................................................................................................. 346
Configuring Mobile Devices and Applications for Exchange ActiveSync ..................................................................... 347
Mobile Device Mailbox Policies .................................................................................................................................................... 349
Managing Mobile Device Associations ...................................................................................................................................... 356
Establishing an ActiveSync Policy for Your Organization ................................................................................................... 361
Microsoft 365 Basic Mobility and Security ............................................................................................................................... 362
Chapter 14: Stream Classic .................................................................................................................................................................. 371
Stream Architecture ........................................................................................................................................................................... 371
Stream User Functionality ............................................................................................................................................................... 374
Stream Administration ...................................................................................................................................................................... 382
Microsoft 365 Groups and Stream ............................................................................................................................................... 386

Chapter 1: Introduction
Welcome to the Companion Volume
In 2018, the fourth edition of Office 365 for IT Pros reached 1,150 pages (even after trimming, the seventh
exceeds 1,200 pages). The main book had swollen over the years to cater for developments inside Office 365,
such as the introduction of Planner, Teams, and Stream. As you might expect, some of the material included in
the book since 2015 had aged a little, and some of it was not as interesting to everyone. So, we took the
decision to create this companion volume and use it as a home for information that we think is still valuable
and should be shared but might not warrant a place in the main book.
Our focus remains on the content in the main book and we do not try to keep this material updated.
Sometimes, topics are updated, but we don’t guarantee how current the information presented here is. With
that in mind, use the material as guidance and check what the latest situation is with an online search.

Office 365 History


Office 365 is not Microsoft’s first cloud platform. In fact, Microsoft got into the cloud application game in
2005 when it started to provide a managed service for Exchange to some customers. The first public
information about this effort appeared in October 2007 when Microsoft announced “Exchange Labs.” Among
other features, Exchange Labs supported 5 GB mailboxes and used SSL to secure client communications. The
commercial launch followed in March 2008 with Business Productivity Online Services (BPOS).
BPOS included Exchange Online, SharePoint Online, Office Communications Server Online (an ancestor of
Skype for Business), Forefront (anti-virus), and Office Live Meeting. Collectively, this set of applications was
known as Microsoft Online Services.
The technology used by BPOS was an adapted form of the on-premises server software sold at the time to
customers as Exchange 2007, SharePoint 2007, and so on. The big difference between Office 365 and BPOS is
the stability and robustness of the current platform, largely due to the maturity of the workloads now running
inside Office 365 combined with a highly developed automation framework to orchestrate operations. In a
nutshell, BPOS took software designed to be deployed in a traditional on-premises environment and adapted
it to function in a multi-tenant infrastructure accessed through the Internet. The problem was that the software
was not designed to cope with the stresses and pressures generated by large-scale multi-tenant operations.
Things broke often.
Office 365 is built around software designed with the unique demands of cloud-scale operations in mind, so its
delivery and reliability record is much better than BPOS ever managed. Still, BPOS proved to be extraordinarily
useful to Microsoft in educating engineers about how to build software to function in cloud environments,
even if its reputation suffered due to the relatively poor performance against the Service Level Agreements
(SLAs) negotiated with customers. The value of attributes such as automation, simplification, and
standardization quickly became apparent as Microsoft worked through the trials and tribulations of BPOS.
What is certain is that without the experience gained from real-life customer deployments of BPOS, Microsoft
could not have achieved as much as it has since with Office 365.

A remnant of Exchange Labs: If you look at the properties of a mailbox, you’ll see that the
LegacyExchangeDN property still has its roots in Exchange Labs. For example, you might see a value like
this:

/o=ExchangeLabs/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=yourdomain.onmicrosoft.com-52094eea20

Apart from Exchange Labs, the other bit of Exchange history is the fact that an administrative group is still
specified. Administrative groups appeared as a unit of server management in Exchange 2000 and were
phased out in Exchange 2007. However, the LegacyExchangeDN property goes back even further to the
X.500-like directory structure used by the first generation of Exchange (4.0 to 5.5 in 1996-99). All of this
goes to show that you can’t really discard history too easily.
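If you want to check this for yourself, Exchange Online PowerShell displays the value quickly. Here is a
minimal sketch, assuming the ExchangeOnlineManagement module is installed and using a placeholder mailbox
identity:

# Connect to Exchange Online and show the LegacyExchangeDN value for a mailbox
Connect-ExchangeOnline
Get-Mailbox -Identity "someuser@yourdomain.onmicrosoft.com" | Select-Object DisplayName, LegacyExchangeDN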

Although BPOS provided a healthy dose of customer reality to Microsoft’s engineers, you cannot take too
many risks with a production offering. Microsoft therefore continued to operate “Exchange Labs” alongside
BPOS to conduct experiments at scale and learn just what it took to transform Exchange from a somewhat
stodgy but powerful email and collaboration server into software that could function in the cloud. Exchange
Labs was the playground while BPOS attempted – sometimes quite well – to deliver a commercial offering,
even if it was handicapped by its enterprise heritage. Together, BPOS and Exchange Labs constituted the
greenhouse for what later evolved into Exchange Online and an operations framework that permeates
throughout Office 365.
During the same period, as it figured out how to execute its initial cloud deployment strategy, the
Exchange product group made a big bet on PowerShell by using it for many administrative operations in
Exchange 2007. Exchange was the first major Microsoft server application to use PowerShell so extensively
and did so at a time when “Monad” (the original code name) was often derided as a poor combination of
UNIX-style scripting and impenetrable syntax. As it turned out, PowerShell has made a huge contribution to
the success of Exchange in the cloud and Microsoft uses PowerShell scripts to automate many common
administrative operations that run inside Office 365.

Technical steps along the path to the cloud


Important as PowerShell is to Exchange Online, it’s not the only area of technical innovation that has allowed
Microsoft to transform the enterprise-centric traditional-deployment model used by Microsoft on-premises
server products into today's Office 365. Many threads have come together to deliver that work, many of which
started in the 2003-2005 timeframe. Although you might delight in the notion of a grand technical plan that
has come together to deliver Office 365, it’s much more of a case of blessed serendipity allied to some great
engineering and visionary work delivered over a series of server versions.
The two base Office 365 workloads are Exchange and SharePoint. The two products share a common history
in that SharePoint Portal Server 2001, the first version of SharePoint, used the Exchange ESE database engine.
SharePoint subsequently moved to the SQL database engine and has used it since. Each of the two workloads
has its own history and timeline in progressing from a product embedded in an on-premises ecosystem to
becoming a cornerstone of Office 365. Table 1-1 outlines the areas of technical innovation in different
versions of Exchange and notes why each area is critical to what we use today in Office 365.
• PowerShell (introduced in Exchange 2007): The basis for automation of many common management operations.
Remote PowerShell was introduced in Exchange 2010 and is used with Exchange Online and other applications.
• Autodiscover (Exchange 2007): Allows clients to connect to Exchange Online without knowing details of
server names and so on.
• ActiveSync (Exchange 2003 SP1): Although its importance to Office 365 is much reduced given the prominence
of Outlook Mobile for iOS and Android, ActiveSync is still used by many ISVs to connect mobile clients to
Exchange Online.
• Role Based Access Control (RBAC) (Exchange 2010): Allows granular access to user and administrator
functionality. Also used to control the display of UI in OWA and the Office 365 administration portals.
• Outlook Web App (Exchange 5.0): The original Exchange web client goes back to 1997 and has evolved
dramatically since to keep pace with web developments. Outlook Web App supports premium access to Exchange
Online from Chrome, IE, Firefox, and Safari browsers and downgraded access for other browsers.
• RPC over HTTP (Exchange 2003) and MAPI over HTTP (Exchange 2013 SP1): RPC over HTTP (Outlook Anywhere)
removed the requirement for VPN connectivity to email servers across the Internet. Without it, you would have
to create a VPN to connect to Exchange Online. MAPI over HTTP is a more modern and effective replacement.
• Exchange Web Services (EWS) (Exchange 2007): Allows programmatic access to items in the Exchange Store
without having to resort to the far more complicated MAPI API. Although the Microsoft Graph is a more modern
API for Exchange along with other Office 365 components, EWS persists and is used in migration and backup
products for Office 365.
• Extensible Storage Engine (ESE) (Exchange 4.0): ESE is the engine that lies at the heart of the Exchange
Information Store and has been extended and refined for more than 20 years to arrive at the point where
100 GB-plus mailboxes are usable. Another important aspect is the work done in the 2004-2012 timeframe to
drive the storage I/O profile of Exchange from being a fat slob to a svelte service, enabling the exploitation
of low-cost storage that delivers massive mailboxes at a very low price point.
• High Availability (Exchange 2007): The first HA implementation allowed just two copies of a database. The
introduction of the Database Availability Group (DAG) in Exchange 2010 expanded this to 16 copies and
introduced features like the lagged copy, single page patching, automatic failover, and so on. Exchange 2013
continued to improve matters with database autoreseed and greater automation to manage lagged database
copies, including the introduction of the Replay Lag Manager. The HA features allow Exchange Online to operate
with a basic model of four database copies spread across two (or more) datacenters and ensure that the 99.9%
SLA is met or exceeded.
• Managed Availability (Exchange 2013): Some self-healing capabilities were introduced in Exchange 2010, but
Managed Availability took the idea that servers could monitor their own health and take action when required
to fix a failed component to a new level. Even if on-premises administrators don't like its influence over
servers very much, Exchange Online might not be manageable without this degree of automation.
• Workload management (throttling) (Exchange 2010): Multi-tenant environments operate on a fair usage basis.
In other words, the workload of a single tenant should not be able to unduly affect others. Workload
management makes this so.
• Modern public folders (Exchange 2013): Without a modernized version, on-premises customers who had invested
heavily in the cockroaches of Exchange would never be able to move to the cloud.
Table 1-1: Technical innovations in Exchange that help Exchange Online work
Some technology developed for Office 365 is difficult to move to on-premises versions. The automatic
filtering of inbound email performed by the Focused Inbox and the Teams, Planner, and Delve Analytics
applications are examples of Office 365 software that will probably never run in a pure on-premises
environment. On the other hand, as demonstrated by the hybrid features in SharePoint 2016 (hybrid sites,
search, and OneDrive sites), it is possible to take data from on-premises servers and process it in the cloud.
Several reasons can prevent the transfer of cloud-based software to on-premises deployments, including:
• The technology is complex to deploy and requires substantial effort to sustain in production.
Integrating different pieces of software together so that they all work as planned is often difficult
when software changes all the time. Traditional IT discipline focuses on structured updates performed
in change windows, something that doesn't work quite so well given the need to execute updates
across many moving parts.
• The technology requires a high cost in infrastructure (servers, network, storage, and automation) to
deploy and keep running, which implies a high cost barrier. Applications that depend on machine
learning and artificial intelligence are often difficult to deploy in an on-premises environment because
of the resources they consume.
• The technology needs to be fine-tuned on an ongoing basis to improve its performance and, in some
cases, accuracy. This work usually requires engineers to be able to make frequent and ongoing
software changes.
• Less importantly, the technology requires a skill set that might not be feasible to expect within
customer environments.

Remember, Microsoft is able to operate and develop Office 365 by employing the full resources of the
company in addition to a massive financial commitment to build out the infrastructure. It’s almost inevitable
that some components developed for and implemented first in Office 365 will prove just too complex to
transfer, but the good thing is that Microsoft has built a strong track record of transfer from cloud to on-
premises and nothing indicates that this trend will not continue.

The importance of technology transfers to on-premises versions: Although on-premises customers


often complain – sometimes bitterly – about the way that features show up in Exchange Online and not in
the latest on-premises update, it is undeniable that Microsoft has done a good job of transferring
technology developed to help run Exchange Online at scale to on-premises customers. Most of the work
transferred to date has been directed to Exchange 2013; even more is included in Exchange 2016.
Although the nature of cloud services means that they will always be ahead of on-premises equivalents,
the real question is how quickly on-premises customers will deploy the new versions to take advantage of
the technology transfer. Traditionally, new server versions take several years before they reach general
deployment across the entire customer base.

Wave 14: Office 365 launches
Microsoft put all of the experience gained in BPOS to advantage when it designed Office 365. When launched,
Office 365 used the “Wave 14” set of Office server products that shipped to customers as Exchange 2010,
SharePoint 2010, Lync 2010, and so on. The big difference was that Microsoft had had several years of
operational experience to better understand the demands of the cloud. More automation was incorporated
into the products, the code base was simplified, the economics were better, and great attention was paid to
all aspects of design, build, and operation.
Another important point in the evolution of Office 365 was the adoption of a software development method
based on the DevOps concept. In effect, this means that the engineering group responsible for the development
of Exchange is the go-to team for problems. In other words, if code fails, it is the engineer who wrote
or maintains the code who has to wake up from blissful slumber to fix the problem. There’s no doubt that the
direct association between code quality and responsibility for maintenance influenced the way that engineers
created features. It’s obviously important for engineers to take personal pride in creating the best possible
code at all times, but it becomes terribly personal for an engineer when they are hauled in at 3am to fix an
irritating bug.
Some initial glitches occurred in Office 365 that interrupted service to customers in August and September
2011, but broadly speaking the performance, reliability, and scalability of the service has proven to be
excellent. We discuss how Office 365 measures performance against service level agreements in Chapter 1 of
the main book.
Exchange 2010 introduced several important technical advances that have contributed greatly to the
subsequent success of Exchange Online. The Database Availability Group (DAG) is the most important
because it allows Office 365 to operate a highly available infrastructure for mailboxes. Exchange Online now
spans several thousand DAGs positioned in Office 365 datacenters around the world; each database has four
copies including a lagged copy; and the high availability features built into the DAG allows work to be
transferred quickly and dependably to another server should a problem arise. It also underpins the concept of
“native data protection”, meaning that Exchange Online does not use traditional backups to protect data.
Instead, a combination of Exchange features such as user-driven recoverable items, single item retention, and
multiple database copies protect user data so that it can be recovered in the case of inadvertent loss.
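To make the native data protection model more concrete, here is a minimal sketch of how an administrator can
check the relevant mailbox settings, assuming a connected Exchange Online PowerShell session and a placeholder
mailbox identity:

# Check whether single item recovery is enabled and how long deleted items are retained
Get-Mailbox -Identity "someuser@yourdomain.onmicrosoft.com" |
  Select-Object DisplayName, SingleItemRecoveryEnabled, RetainDeletedItemsFor

# If required, extend deleted item retention to the Exchange Online maximum of 30 days
Set-Mailbox -Identity "someuser@yourdomain.onmicrosoft.com" -RetainDeletedItemsFor 30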
Apart from making sure that mailboxes stay online, the DAG also contributes to the economics of Office 365
by allowing the use of inexpensive JBOD disks for the massive amount of storage needed to allow users to keep
all the information they want inside massive mailboxes. The economics are such now that it is much cheaper
to allow users as much storage as they want rather than invest in the time to manage storage. Gigabytes of
storage cost a fraction of a penny per month (amortized over 24 or 36 months) when bought in the quantities
consumed by cloud datacenters, so the fact that someone is using a 100 GB mailbox that costs a few cents for
the storage is nothing when put into context with their monthly payment. The same is true for other services
like OneDrive for Business or the consumer email services where the providers are happy to have users
consume large amounts of storage in return for the chance to sell other services. Of course, cloud providers
incur massive additional costs other than storage, but it is interesting to see how storage has become so
cheap and plentiful in such a short time and the influence this has had on data management.
Since its introduction, the DAG has steadily added features to improve its ability to support low-cost disks. For
instance, single page patching arrived in Wave 14 to allow Exchange to detect and patch corrupt pages that
appear in both active and passive database copies. Without single page patching, human administrators
would have to take problematic databases offline and fix them manually. Traditionally, this would have been
done by restoring a backup copy, but these don’t exist inside Office 365. Database autoreseed is another
example. Introduced in Wave 15, this feature allows administrators to set aside disk space that Exchange can
use to build a new copy of a failed database. The rebuild happens automatically, which is exactly what you
need when the use of low-cost disks makes it easy to predict that disk and controller outages will be the most
common form of failure inside Office 365 datacenters. And, as it turns out, they are.
Microsoft also introduced the Mailbox Replication Service (MRS) in Wave 14. Not quite as exciting or as
technically compelling as the DAG, MRS still plays an enormous role through its ability to move mailboxes
from on-premises servers to Office 365. The transfer is highly automated, batch-driven, happens in the
background, and includes automatic delta synchronization to maintain the copied mailboxes in a current state
until the switchover occurs. If MRS didn’t exist, it would be very much harder for large companies to transfer
mailboxes to Office 365.
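To illustrate how administrators typically invoke MRS for onboarding, here is a hedged sketch that assumes a
hybrid deployment, an existing migration endpoint, and a CSV file listing the mailboxes to move; all names are
placeholders:

# Create and start a batch move of on-premises mailboxes to Exchange Online
New-MigrationBatch -Name "Finance-Batch1" `
  -SourceEndpoint "Hybrid Migration Endpoint" `
  -CSVData ([System.IO.File]::ReadAllBytes("C:\Temp\Finance-Batch1.csv")) `
  -TargetDeliveryDomain "yourdomain.mail.onmicrosoft.com" `
  -AutoStart

# MRS copies the data and keeps it synchronized in the background; check progress with:
Get-MigrationBatch -Identity "Finance-Batch1" | Format-List Status, TotalCount, SyncedCount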

Wave 15: Cloud-Ready by Design


In late 2013, Microsoft launched the next generation of Office 365 based on the “Wave 15” set of Office
servers. By this time the developers had gained an enormous amount of operational experience from BPOS
and the first iteration of Office 365 and had factored it into the development of Exchange 2013. Wave 15
marked the first time that Office 365 moved ahead of on-premises products in terms of introducing new code
into production. From this point on, new features appeared in the cloud first and then in an on-premises
release.
A common code base was reintroduced to unite the cloud and on-premises versions of Exchange and a new
supportability model was introduced where Microsoft shipped a cumulative update to Exchange on-premises
customers every quarter that contained bug fixes and new features proven in Office 365. On-premises
customers didn’t get every new feature because some depended on non-Exchange components (see below)
but a continual flow of information from “the service” (the term used by Microsoft employees to refer to
Office 365) is used by the developers to improve and refine on-premises Exchange.
Managed Availability is a Wave 15 feature that is also a good example of how Microsoft has transferred
technology from the cloud to on-premises Exchange. Today, Microsoft operates over 200,000 Exchange
servers inside its Office 365 datacenters. It would be impossible to have human administrators monitor the
mailbox and other servers at the scale used by Exchange Online and be expected to detect and take rapid
action when problems occur. Indeed, given the need for humans to sleep and our tendency to lose interest in
boring and repetitive actions, many issues that occur on servers would go unnoticed. Managed Availability
gathers a vast amount of health signals from its probes running on every Exchange server, decides whether
the data indicates a problem, and responds to any problem that is found, all without human oversight or
intervention. The idea is that Managed Availability should be able to take care of routine and common issues,
leaving the most complex and difficult problems for humans to solve.
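Although the probes and responders running inside Office 365 are not exposed to tenants, the same Managed
Availability machinery ships in on-premises Exchange 2013 and later, where you can query it. A minimal sketch,
assuming an on-premises server named EXCH01 and the Exchange Management Shell:

# Summarize the state of the health sets that Managed Availability monitors on a server
Get-HealthReport -Identity EXCH01 | Format-Table HealthSet, State, AlertValue

# Drill into the individual monitors for one health set, for example OWA
Get-ServerHealth -Identity EXCH01 -HealthSet "OWA" | Format-Table Name, AlertValue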
Features such as database autoreseed, namespace rationalization, and the simplification of the DAG are other
examples of how Office 365 has influenced the evolution of on-premises Exchange. Customers have also
gained through the ten-year development effort to transform the disk and storage requirements of Exchange
from a point where deployments required expensive “enterprise-class” disks to today where inexpensive
JBOD-style storage is the norm. Microsoft had to change Exchange to be less of an I/O hog to make it feasible
to use Exchange for cloud-based email. After all, if you need expensive disks, you won’t be able to offer users
50 GB mailboxes at the kind of price points that Office 365 charges today.
The change that occurred in client focus is also worth noting. Whereas Outlook remains the single most
popular client used to connect to Exchange Online, its Windows-centric development model means that it is
slow to adapt and change, especially in the context of a fast-moving cloud service. The problem is simple: it
takes Microsoft far more time to update the Outlook user interface to introduce new features and even longer
for customers to deploy the new software to user desktops than it does to make a change to the browser
components that drive Outlook Web App.
Microsoft transformed Outlook Web App in the Wave 15 release. Some of the updates seemed retrograde at
the time because functionality was reduced and performance was poor, but change was necessary to deliver a
user interface that was capable of working on PCs, tablets, and “candy bar” smartphones. Over time, missing
features have largely been restored alongside a range of new features and performance has steadily
improved. Indeed, the rate of change in Outlook Web App and the difference that opened up between the
version available to Office 365 users and that provided to on-premises Exchange customers made it obvious
that Outlook Web App is regarded almost as an experimentation platform for Office 365. In other words,
Outlook Web App is the client that Microsoft can use to introduce new features in a rapid manner, even when
those features are not fully complete. Outlook is the more popular client, but it lags in the functionality stakes
and is likely to always do so until customers adopt the "Click to Run" variant and accept the fact that desktop
user interfaces are liable to change on almost a monthly basis – at least, when connected to Office 365.
After the transition to the Wave 15 products was complete, signs of the growing maturity of the platform could
be seen in a growing concentration on technology designed to function across the service. New features are
developed for applications like Exchange but an increasing effort is dedicated to functionality that draws upon
multiple parts of Office 365 such as Office 365 Groups, the Security and Compliance Center, and Delve or
support functions like Unified Auditing. The provision of a suite of REST-based APIs gives programmers a
consistent method to access and exploit various forms of Office 365 data, including the signals representing
user activities that are accumulated in the Microsoft Graph.

Wave 16: Now in production


The Wave 16 set of the on-premises Office products first shipped to customers in October 2015. This wave
includes Exchange 2016 and SharePoint 2016 as well as the Office 2016 desktop suite. Although the new
generation of server products underpinned increased reliability and robustness in the base workloads, the
appearance of “cloud-only” applications is the most interesting development during this period. Teams,
Planner, Flow, and PowerApps are some of the examples of major change within Office 365 that will never
appear on-premises.
During this time, Microsoft consolidated its cloud properties to bring Outlook.com to the Exchange Online
infrastructure. The consumer and business email applications share the same servers, storage, and software
base with functionality delivered to end users being controlled by different license tiers. OneDrive and
OneDrive for Business also share many components. A good example of the value of shared infrastructures is
seen in the transfer of functionality between consumer and business applications, such as OneDrive’s Restore
Files feature and Exchange’s Encrypt email feature, both of which are available across consumer and business
platforms.
Another major change is the move away from workload-specific compliance. Where SharePoint had the
eDiscovery Center and Exchange had its eDiscovery searches, the Office 365 Security and Compliance Center
became the focus for platform-wide compliance functionality. Not only does the new technology work for the
base workloads and new applications, it introduces new features like manual disposition, event-based
retention, and supervisory policies.
The Security and Compliance Center has taken over some functionality previously managed by Exchange like
anti-malware. This is also part of a trend to move functionality away from workload administrative portals to
Office 365 administration portals. You still need to use the Exchange Administration Center or SharePoint
Administration Center (now refreshed), but not as often as you used to, and you will visit them less in the
future.
Finally, this wave marks the beginning of the transition from Skype for Business Online to Teams and the
Microsoft Phone System. The roots for Skype for Business Online are in a long line of on-premises servers. The
new voice and video platform is cloud-only and shared with the Skype consumer application. The new
platform is more adaptable to the changing needs of businesses for voice and video communications, but the
transition to Teams will take some years yet.

The Next Wave


With so much happening in Wave 16, speculation inevitably turns to what might happen in the next wave of
product development. For on-premises customers, the answer lies in the 2019 generation of the Exchange,
SharePoint, and Skype for Business servers (all of which went into preview for on-premises customers in July
2018). These products remain good at what they do and deliver good value to customers if they simply want
an email server, a document management server, or a communications server.
The difference in the cloud is integration. Office 365 is a fabric that gives development groups a toolset to
build new applications and functionality around. Teams is a great example of how to bring components from
across Microsoft’s cloud properties together to create a new application. The intermingling of components
from different places within Office 365 to create new apps or embellish existing apps is a trend we can expect
to continue.
The role of Exchange and SharePoint, the two basic workloads in Office 365, diminished in some respects as
Office 365 evolved. Exchange was the king in the early days of Office 365 because email was the first and
easiest workload to move to the cloud. SharePoint followed as migration tools matured and customers
became more used to the idea of managing documents in the cloud. Both applications came from a position
on-premises where they sat at the center of ecosystems, surrounded by people and other applications. It is
very different inside Office 365.
Today, Exchange is no more than an email server for Office 365 that happens to provide a convenient way to
store some information (like Teams compliance records). SharePoint manages documents for other
applications and itself. The focus is no longer on Exchange or SharePoint; it has shifted to the integration of
the base workloads into other applications. Thus, we see Exchange deliver shared mailboxes to Teams and
Groups, and SharePoint give Teams and Planner a convenient place to store documents uploaded to these
applications.
Exchange and SharePoint are still extraordinarily important to Office 365 and every Office 365 administrator
should understand how to manage these applications. This need will continue, but might become less critical
as time goes by as Microsoft automates and simplifies cloud operations. There’s lots more to learn and master
in your Office 365 journey, including:
• Azure Active Directory (basic operations, plus extended functionality like conditional access policies).
• Azure Information Protection.
• Teams.
• Planner.
• Flow and PowerApps.
• Enterprise Mobility and Security, including Intune.
• PowerShell and the Microsoft Graph to automate/script operations.
When we set out to write the first edition of this book in 2014, none of the topics listed above apart from
PowerShell were covered. Now, they’re fundamental parts of the Office 365 landscape. It’s enough to keep
everyone busy.

Microsoft 365
From a business perspective, the bundling of Office 365 into Microsoft 365 is the most important influence on
Office 365 for the immediate future. Microsoft has invested heavily in cloud infrastructure to build out its
datacenters and networks to support Office 365 and Azure. The need exists to achieve a return on that
investment, and that means that Microsoft must continue to grow the number of paid subscriptions for Office
365 and increase the annual revenue for each subscription. Growing the revenue per seat is done by
convincing customers to upgrade their subscriptions to a higher-priced plan (from E3 to E5, for example) or
by buying add-ons for specific functionality. Many of the new features being added to Office 365 now require
E5 licenses, and a growing gap is developing between the functionality available to E3 and E5 tenants, intended to justify the extra cost of E5 licenses.
Convincing Office 365 customers to embrace Microsoft 365 is another example of driving extra revenue, and
to support the activity, you’ll see that Microsoft constantly emphasizes the value to customers of deploying
Office 365, Enterprise Mobility and Security, and Windows 10 together. Marketing and engineering support to
illustrate the benefits of Microsoft 365 will appear in a continual flow to convince customers to embrace the
program. If you want to continue using Office 365 on its own, you can, but a time might come when all you
can buy is Microsoft 365.

Chapter 2: Exchange Mailbox Migration
A green field deployment of the core services of Office 365 is straightforward because you don’t need to deal
with legacy infrastructure or data. Organizations that have an on-premises infrastructure usually need to plan
before they can migrate anything to Office 365. Because the migration methodology and tools are well
established and flexible enough to meet the requirements of almost any scenario, email is often the first and
easiest workload to move to the cloud.
Identity management is a key component of any Office 365 deployment. Many questions need to be
answered about how identity will be managed during and after the migration. Of course, identity
management is important from a security perspective, but it is also important to consider how it impacts the
end user experience. Chapter 3 (in the main book) examines the different identity models available for Office
365. You should understand the material presented there before you choose a migration method. Take your
time on these matters. It is possible that a specific identity model will cause you problems with your preferred
migration method. Or you might choose a migration method, reach the end of your migration project, and
discover that your ongoing identity needs have not been met. Be prepared to be flexible in your decision
making, and if in doubt, always consider the user experience implied in your chosen approach. A poor user
experience means a poor perception of the project outcome, even if the technical execution of the project
goes well.
This chapter examines the decision-making process for choosing a migration method, reviews the cutover and
staged migration processes, and provides an overview of hybrid configurations and other non-Exchange
migration methods. Hybrid configurations are also covered in much more detail in chapter 4. You may well
find that Hybrid is the best approach for your organization, but it is still well worth your time to read and
understand the other options that are available, so that your decision is an informed one. Let’s begin with a
look at the different migration approaches for Exchange Online.

Migration approaches for Exchange Online


Office 365 supports a variety of migration methods. The choice of migration method is often influenced by a
wide range of factors such as the chosen identity model, the number of objects (e.g. mailboxes, contacts,
public folders) involved in the migration, the amount of data to be moved to Office 365, the version of
Exchange (if any) running on-premises, long-term migration or co-existence requirements, whether the
organization uses non-Exchange email servers, and even the budget available to spend on the migration
project.
The migration methods that are available can be summarized as:
• Cutover migration.
• Staged migration.
• Hybrid configuration.
• PST-based migration.
• IMAP migration.
• Third party migration tools.

The best place to start is with the business requirements for the migration project. Business requirements
should include factors such as the need to complete the migration by a particular date, whether a back-out
option for the migration needs to be included, or if some email workload will remain on-premises. As you will
see, each migration method has different benefits and constraints, and they may not all suit the business
requirements of the project.
Technical requirements are considered next. These often eliminate some of the migration methods and allow
the organization to zero in on the feasible approaches. Figure 2-1 provides an example of the decision-
making process you can work through based on your technical requirements to understand the available
migration methods for your scenario. Even if you find that you meet the technical requirements of a migration
method, you should continue to research the actual processes involved in performing that migration, because
you might still discover some undesirable element that steers you in another direction.

Figure 2-1: Decision tree for choosing a migration method


The decision-making process begins by determining which version(s) of Exchange exist in the environment (if
any) because the version(s) of Exchange in use can narrow the available migration methods, or at least those
provided by Microsoft. A specific question is whether Exchange 2003 or 2007 servers exist within the
organization. These are now very old servers, so it is unsurprising that the need to migrate data from these
servers would limit the available options.
Hosted Exchange providers complicate matters further, because the hosting provider usually prevents a customer from performing the types of configuration and preparation that are required for the built-in migration methods.
Unless the Hosted Exchange provider is very cooperative, a third-party migration tool might be needed to
migrate from a Hosted Exchange service to Office 365. That said, third party migration tools have their own
set of requirements and limitations, which will vary depending on the product, not to mention the additional
cost involved, which needs to be factored into your project budget.
Table 2-1 summarizes the built-in migration methods available for different versions of on-premises
Exchange. The Exchange versions listed refer to the highest version of Exchange in the organization. For
example, if an organization runs a mixed Exchange Server 2010 and 2007 organization, then they can use the
migration methods supported for Exchange 2010 and are not limited to the options available for Exchange
2007. Some additional requirements and constraints are mentioned in Table 2-1 that will be explained in more
detail later.
Exchange Version   Cutover                          Staged   Hybrid                               IMAP
Exchange 2003      Yes (if under 2,000 mailboxes)   Yes      No (unless a Hybrid server running   Yes
                                                             Exchange 2010 is deployed)
Exchange 2007      Yes (if under 2,000 mailboxes)   Yes      No (unless a Hybrid server running   Yes
                                                             Exchange 2010 or 2013 is deployed)
Exchange 2010      Yes (if under 2,000 mailboxes)   No       Yes                                  Yes
Exchange 2013      Yes (if under 2,000 mailboxes)   No       Yes                                  Yes
Exchange 2016      Yes (if under 2,000 mailboxes)   No       Yes                                  Yes
Table 2-1: Available migration methods for on-premises Exchange versions
In addition to the built-in migration methods, you can consider:
• Migrating user PSTs using the Office 365 Import Service, Microsoft’s PST Collection tool, or a third-
party migration tool. PST-based migrations are discussed later. Note: The PST collection tool is no
longer under active development by Microsoft. Third-party migration utilities are now the
recommended method.
• Third party migration tools that use protocols like Exchange Web Services (EWS) to ingest data into
Exchange Online mailboxes. Third party tools often provide solutions to very complex migration
scenarios that built-in migration methods cannot handle.

Note: Although IMAP is included in Table 2-1, it is the least preferable migration method when you
migrate from an Exchange server, and is generally only suitable when migrating from non-Exchange
platforms such as Gmail or Yahoo!. We discuss the limitations of IMAP migrations later.
The next consideration is whether there are more than 2,000 mailboxes. Organizations with fewer than 2,000
mailboxes are supported for cutover, staged and hybrid migrations, while organizations with more than 2,000
mailboxes are only supported for staged or hybrid migrations. The 2,000-mailbox limit does not mean that
organizations with fewer than 2,000 mailboxes should automatically choose a cutover migration. For example, if
the organization wants to migrate their users in smaller batches instead of one big batch then a cutover
migration is not suitable.

Real World: 2,000 mailboxes is the threshold specified by Microsoft in terms of support for cutover
migrations. The logistics involved in handling an outage for such a large number of users, as well as the
desk-side support needed to assist with reconfiguring Outlook profiles and mobile devices after the
cutover, may simply make a cutover migration too risky and complex for the organization. In fact, many
experienced Office 365 consultants consider the practical limits of both the cutover and staged migration
methods to be as few as 150 mailboxes. Organizations larger than 150 mailboxes should give strong
consideration to using a hybrid migration instead of a cutover or staged migration.
When cutover is either not possible or not desirable for an Exchange 2003/2007 organization, the remaining
options are staged and hybrid migrations. For an Exchange 2003 environment an Exchange 2010 server can
be deployed to create a Hybrid configuration. If you want to use an Exchange 2013 or Exchange 2016 server
to host the hybrid configuration, you will have to complete a full migration to Exchange 2010 first. For an
Exchange 2007 organization at least one server running either Exchange 2010 or Exchange 2013 must be
installed to provide the hybrid functionality. Both staged and hybrid options require the implementation of
directory synchronization. Without directory synchronization, your migration options are limited to the use of
third party migration tools.
An organization migrating from Exchange on-premises can use the free Hybrid license available from
Microsoft (see this discussion for details). This license allows an Exchange 2016, 2013 or Exchange 2010 SP3
server to be deployed in the organization to facilitate a hybrid connection with Office 365 (depending on the
supported versions that can co-exist in an organization). The Hybrid license can’t be used for a server that
hosts mailboxes, but the server can be used during the migration to Office 365 and retained afterwards to
manage the Exchange attributes of the on-premises Active Directory objects. A server assigned with a Hybrid
license can also be used as an SMTP relay server for applications or devices on the corporate network.
If the implementation of a Hybrid server is not possible, for example due to server capacity constraints, then a
staged migration is the way forward. The staged migration method is not available for organizations that run
Exchange 2010 or later. Exchange 2010 or later environments with fewer than 2,000 mailboxes to migrate can
still choose to perform a cutover migration. However, as we’ve already discussed, large cutover migration
projects can be logistically very difficult to perform.
Given that Exchange 2010 (or later versions) is capable of hybrid configuration with Office 365 you should
give strong consideration to using the hybrid approach instead of a staged or cutover migration. Although
hybrid is the most complex of all the migration options in terms of initial setup and configuration, it delivers
the best user experience during the migration. Hybrid configurations allow the on-premises Exchange
organization and Office 365 to function as though they are the same environment with seamless mail flow, a
shared address book and calendar free/busy federation. In fact, most users would not even be aware that they
are working in a hybrid configuration with mailboxes deployed both on-premises and in Office 365. A hybrid
configuration is also the only option that allows mailboxes to be off-boarded from Office 365 to Exchange on-
premises without using third party migration tools. Cutover and staged migrations can’t off-board or roll back
to an on-premises server without significant effort and the risk of data loss.
Hybrid configurations require directory synchronization of the on-premises Active Directory objects into Azure
Active Directory, so that they can be used in Office 365. If for some reason the organization can’t implement
directory synchronization, then the choices are limited to third party migration tools.
Finally, businesses using non-Exchange email platforms such as Gmail or Google Apps for Business can’t use
the cutover, staged or hybrid options. For those businesses Microsoft provides the IMAP migration option to
move mailboxes to Office 365. Alternatively, a third-party migration tool can be used. If you’re engaging the
Microsoft Onboarding Center or an outside consultant to assist with the migration, they will usually
recommend a specific tool or method for the migration, which will probably be the tool for which they have
most experience.
All aspects of the decision-making process require careful consideration. Beyond the technical considerations
are also other factors such as whether the migration project will be handled in-house or by an external
consultant, whether extra training is required for IT staff to understand new features such as hybrid
configurations, and whether funding is available to pay for third party migration tools if built-in migration
options can’t be used.
If you want some specific recommendations to get you started, it is generally recommended to use:
• Hybrid configuration for Exchange 2010 or later.
• PST-based migration if a non-Exchange email system can extract data to PST files.
• IMAP or third-party tools for any non-Exchange email systems that can’t extract data to PST files.
• Third party solutions for very complex migration scenarios.

Note: Before you finalize your decision on which migration method to use it is strongly recommended
that you read through the example migration scenarios from start to finish, so that you can learn about
any risks or timing issues that you need to be prepared for. Do not start a migration before you have read
through the process from start to finish at least once, and you understand the support implications of your
migration method and ongoing identity management. You should also consider creating a test
environment and signing up for a separate Office 365 trial tenant so that you can perform a hands-on test
run of your chosen migration method.

Managing a Migration Project


Every migration project needs some degree of project management ranging from a large project with a
dedicated project management team to a small project that you self-manage. Before you launch into the
details of figuring out the various configuration and migration tasks it is wise to start with a project planning
exercise.
During the planning phase, you should hold a kick-off meeting with the stakeholders of the project. This will
ensure that everyone has a chance to discuss what the Office 365 migration will mean for them, to identify any key success criteria, and to flag any concerns or potential issues that they foresee with the migration. It’s
always helpful to know what the stakeholders’ biggest worries are so that you can address them with a
technical solution or simply by providing more communication about that matter.
The planning phase is also the opportunity to collect information about the current environment such as the
size and number of user mailboxes, shared mailboxes, public folders, other mail-enabled objects such as
contacts and distribution groups, delegates, and any applications or systems that rely on email, and network
connectivity for the sites involved. If your migration is for more workloads than just email, then you can also
collect information about file shares, SharePoint sites, or Skype for Business configurations that will be
needed. The information collected during this stage can also feed into the decision-making process for
choosing an identity model and migration method.
After your planning is complete, you should begin communicating with end users about what they can expect
from the migration, and give them some advance warning about anything they will need to do as part of the
migration. Good communication can be the difference between a successful project and one that is
considered unsuccessful. A flawless technical migration will still attract criticism from end users if they are
surprised by outages during the migration or different user experiences afterwards.
Develop a checklist of migration tasks to follow throughout your migration project. This will help you to avoid
missing any crucial steps that might slip your mind during the busy parts of the migration. Every
organization’s checklist will be different in some way, and if you’re doing multiple migration projects for
different customers you will develop a good checklist of your own over time. Microsoft also publishes a
checklist for Office 365 deployments that can be used as a starting point for creating your own.
Finally, ensure that you create a test plan based on the information that you collected during the planning
phase. A good test plan describes how services are used in your organization, beyond their basic functionality.
For example, testing that email works when sending to or from external addresses confirms that the basic
functionality works, while testing that email notifications from your CRM work correctly is a test that is
relevant to your specific organization. The more scenarios you can think of for your organization that are
more than just basic functionality, the more robust your test plan will be. You can use this test plan during the
migration project to ensure that important business systems continue to work correctly, and to verify that end
users will have the experience they are expecting post-migration.

Preparing to Migrate to Exchange Online
Before you can migrate to Office 365, you need to sign up for a tenant. At this stage, you can opt to sign up
for a free trial that will run for 30 days, or you can decide to start paying immediately. The free trial period has
full functionality, and is a chance to try out features that you are unsure whether you will need. For example,
you can sign up for an Enterprise E5 trial, but then you might discover during the first 30 days that you only
need E1 or E3 licenses instead.
If you are unsure of your ability to work with Office 365 and need to gain experience, it’s best to create a
separate trial tenant at the start and use it to experiment with different services and settings. Then, after
you’ve acquired sufficient experience, you can create a tenant that you intend to use for production services,
and begin to prepare it for the migration of on-premises workload.
During the signup process, you’ll be asked where your organization is located. Selecting the right country for
your organization will determine where your Office 365 tenant is located around the globe. For organizations
that span multiple geographic regions you should choose the country where most of your end users will be
located. In Chapter 1 (main book) you learned that Microsoft operates many datacenters in different regions
to provide Office 365 service to end users. Your proximity to your datacenters is a factor in the quality of the
user experience for your Office 365 services, but not a defining one. A more important consideration for
organizations that reside in countries with strict data sovereignty regulations, for example Germany, is
ensuring that the Microsoft datacenters hosting your services are the appropriate ones.
As you move through the signup process you will be asked to create the first administrator account for your
tenant. This step also involves choosing the service domain (or tenant name) for your tenant, which is the
*.onmicrosoft.com domain assigned to every Office 365 tenant. The service domain will be visible to your
end users in the SharePoint Online URL, Skype for Business meeting invites, and OneDrive for Business
libraries, so choose a name that aligns with your organization’s company name or brand. The service domain
name must be unique within Office 365. Unfortunately, if another tenant already has the name that you would
like to use there is no way to get the name for yourself.
The service domain also cannot be changed later. Some customers make the unfortunate error of signing up
for a tenant with a service domain such as “contosotest.onmicrosoft.com”, then begin using it for production
services before they realize it cannot be changed. A similar naming problem can occur when the company
goes through a rebranding, or is acquired by another company. If it is important to your organization to
change tenant names, the only options are to perform an off-boarding migration to on-premises servers, then
migrate to a new Office 365 tenant, or alternatively to perform a tenant-to-tenant migration. Both options
involve considerable effort and cost, and are best avoided unless necessary. It's possible that in future
Microsoft will develop a tenant renaming process that makes the situation easier for customers, but
considering the complexity of Office 365 and the number of integrated services involved it could be quite
some time before that capability arrives.

Real World: Companies that operate with multiple parent-child companies and different brand identities
have some challenges with Office 365 tenant naming to consider. If the companies do not share an email
domain or any other data and resources, then it’s feasible to consider separate Office 365 tenants.
However, if there are shared domains and resources, and a single Office 365 tenant is deemed necessary,
then a common approach is to use a single parent company as the tenant name, such as
“contosoholdings.onmicrosoft.com”.

Adding Domain Names


Migrating to Exchange Online usually means moving across an existing email domain to Office 365, but some
migration scenarios can also involve migrating to a new domain name, for example when a portion of a
company is divested and moves to its own corporate brand. In either case, the domain names that will be
used by email recipients need to be added to Office 365. Exchange administrators will be familiar with this
requirement from managing Accepted Domains in on-premises Exchange environments. For Office 365 the
domain names are used by multiple services, not just by Exchange Online, so they are managed through the
Office 365 admin center and are referred to as “vanity domains”, or simply as “domains”. A domain name can
only be verified in one Office 365 tenant at a time.
Domains are added to a tenant by logging in to the Office 365 admin center and navigating to Settings and
then Domains. Click the Add domain button to start the wizard that will guide you through the process. As
part of this process you’ll need to validate the domain by adding a record to the DNS zone to prove your
ownership and control. There are two validation records offered: an MX record and a TXT record. Adding an
MX record that points to Office 365 at this early stage of a migration is likely to cause disruption to your
existing mail flow, so I recommend that you use the TXT record method instead.
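If you want to confirm that the TXT verification record has propagated before asking the wizard to verify the domain, you can check it from PowerShell. The following check is a simple sketch that assumes the office365itpros.com domain used elsewhere in this chapter and requires the Resolve-DnsName cmdlet (Windows 8/Windows Server 2012 or later); Microsoft’s verification records take the form MS=msXXXXXXXX.
[PS] C:\> Resolve-DnsName -Name office365itpros.com -Type TXT | Where {$_.Strings -like "MS=ms*"} |
   Format-Table Name, Strings, TTL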
Microsoft can host the DNS for your Office 365 domains. You have the option to use Office 365 DNS services,
or to continue managing your own DNS records with your existing host. For domains that are already in use
by an organization there are no immediate advantages to moving the DNS hosting to Office 365, but it’s an
option you might consider if you are adding a brand-new domain to Office 365 in the future, or if you want to
move all aspects of your messaging service to Office 365.
As additional tasks after adding domain names to your Office 365 tenant, you can consider:
• Adding administrator user accounts. Having multiple administrators in Office 365 is quite common,
especially in larger IT organizations. As explained in Chapter 5 (main book), you can use different
administrator roles to control the access that administrators have.
• Adding user accounts. This will depend on the identity model and migration approach you are using.
For example, the cutover, staged, and hybrid migration approaches all have specific steps for
provisioning users in Office 365, while other methods such as third-party tools might require you to
manually provision the user accounts. You should check the requirements of your chosen migration
method first, before you create any accounts, so that you don’t cause errors and unnecessary rework
later.

Configuring the On-Premises Infrastructure


The migration service for cutover and staged migration methods uses Outlook Anywhere to connect to an on-
premises Exchange server and synchronize the mailbox contents. Outlook Anywhere (also called RPC-over-
HTTP in Exchange 2003) is not enabled by default on Exchange 2010 or earlier, so you need to review your
server configuration and enable Outlook Anywhere. A valid third-party SSL certificate will also be required for
your server, if one is not already installed. Self-signed certificates simply won’t work. A suitable certificate is a nominal cost from most certificate authorities such as DigiCert. You do not need to spend thousands of dollars on a new,
high-end SSL certificate just to satisfy the requirements of a migration to Exchange Online.
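As an illustration of the Outlook Anywhere step, a command along the following lines enables the feature on an Exchange 2010 server from the Exchange Management Shell. The server name and external hostname are examples only and must match your own environment and SSL certificate; Exchange 2003 and 2007 use different steps to enable RPC-over-HTTP.
[PS] C:\> Enable-OutlookAnywhere -Server EX2010 -ExternalHostname mail.office365itpros.com -DefaultAuthenticationMethod Basic -SSLOffloading $false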
Outlook Anywhere requires TCP port 443 (HTTPS) to be open on your firewall and NAT or port forwarding
to the on-premises Exchange server. The hybrid migration method uses Exchange Web Services (EWS), which
also operates over HTTPS. EWS is used by several third-party tools as well, but you should always check the
documentation provided by those vendors for specific firewall requirements. If you have not already opened
firewall access for services such as Outlook Web Access and ActiveSync, then you should review your firewall
configuration at this stage and make any necessary changes to allow the HTTPS connections.
The Exchange Remote Connectivity Analyzer (ExRCA) can be used to test the connection to Outlook Anywhere
or Exchange Web Services so that you can verify that your on-premises infrastructure is configured correctly
before you attempt any further migration steps.

Some organizations that do not already have HTTPS open for external connections to an on-premises
Exchange server may resist the notion of opening firewall ports to the entire world. In such cases the firewall
access can be restricted to the IP address ranges for Office 365. Using IP address filtering may be an
acceptable temporary measure while the migration is performed, but it becomes difficult to maintain this
configuration over a long period due to the rate of change that occurs with the Office 365 IP address ranges.
Firewalls that can only filter based on IP addresses may require changes as often as weekly. Firewalls that can
filter based on FQDNs, or DNS names, will require far less maintenance on an ongoing basis.

Creating Migration Service Accounts


For cutover and staged migrations, the Office 365 migration service needs a set of user credentials to connect
to your on-premises organization and access mailboxes. It is recommended to create a dedicated service
account for this purpose. Service account requirements for Hybrid scenarios are covered in chapter 4.
The first step is to create a new service account in the on-premises Active Directory with a meaningful name,
for example “O365Migration”. Make sure that this account has a strong password, and set the password to
never expire so that you can avoid problems if you have a password policy in Active Directory that would
expire the password before the migration is complete.
One method suggested by Microsoft’s TechNet documentation to grant the service account the necessary
permissions is to add it to the Domain Admins group in Active Directory. However, this makes the account
very powerful and increases the risks if the credentials are compromised, so it is not recommended. A lower
risk approach that aligns with best practices is to grant the user permissions only to the mailboxes in
Exchange. You can perform this on each mailbox, or on each mailbox database. The advantage of doing it at
the database level is that any new mailboxes created after the permissions are granted will automatically
inherit the required permissions, while the per-mailbox method requires you to manually add the permissions
to any new mailboxes created later.
To grant the service account permissions for a mailbox database, you can run this command.
[PS] C:\> Get-MailboxDatabase | Add-ADPermission -User NRU\O365Migration -ExtendedRights Receive-As

Applying the permissions at the database level is simpler. However, if you decide to grant the service account permissions for each mailbox instead, you can run a single PowerShell command.
[PS] C:\> Get-Mailbox -ResultSize Unlimited | Add-MailboxPermission -User NRU\O365Migration -AccessRights FullAccess

Note: If you are migrating from Exchange 2003, PowerShell is not available to run the commands shown
above. Instead, you can use the Exchange System Manager console to add the permissions to each
database.

Reducing the Migration Load


The more data that is migrated to Exchange Online the longer it will take. To enable a speedier migration, you
can undertake a clean-up process to reduce the overall size of mailbox data that will need to be migrated.
Emptying deleted items from mailboxes is a quick win, and in some customer environments I’ve worked in this
has reduced the overall load by as much as 20%. Aged data is also a prime candidate for clean-up. If emails
older than a certain number of years are no longer required, then retention policies can be used to remove
them from the on-premises mailboxes ahead of the planned migration.
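To see where a clean-up will deliver the most benefit, a quick report of the largest on-premises mailboxes is a useful starting point. For example, run from the Exchange Management Shell:
[PS] C:\> Get-Mailbox -ResultSize Unlimited | Get-MailboxStatistics |
   Sort-Object TotalItemSize -Descending | Select-Object -First 20 DisplayName, ItemCount, TotalItemSize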
One of the most common causes of skipped items during a migration is the size of the item itself. Office 365
has an advertised, default per-item size limit of 25 MB, which amounts to approximately 35 MB once the
combined size of the message contents, file attachments, and other metadata is considered. Administrators
can increase this up to 150 MB (see the mail flow chapter in the main book) to allow larger email attachments
to be sent and received.
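For example, assuming you decide that 150 MB is the right limit for your tenant, the existing Exchange Online mailboxes can be updated with a command like the one below (see the mail flow chapter in the main book for how to change the defaults applied to newly created mailboxes):
[PS] C:\> Get-Mailbox -ResultSize Unlimited | Set-Mailbox -MaxSendSize 150MB -MaxReceiveSize 150MB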
The maximum item size limit for mailbox migrations using the Mailbox Replication Service (MRS) is 150 MB.
MRS-based migrations include hybrid migrations, but not the other migration methods such as cutover,
staged, IMAP, or most migrations that use third party tools, which are limited to the general message size
limit that the administrator has configured. For example, if you configure a size limit for Exchange Web
Services (EWS), then any EWS-based tool is subject to that limit. For this reason, you should always ask
vendors of migration software to describe size limits for their software. You can use a PowerShell script such
as the Exchange Large Items Compliance script to scan existing on-premises mailboxes for items that need
attention before the migration begins.

Reviewing Email Addresses


Recipients in Exchange Online can have a maximum of 400 proxy addresses, which consist of the primary
SMTP address and any secondary SMTP addresses that are configured on the recipient. While this is a
reasonable limit, it may pose a problem for some on-premises customers who move SMTP addresses for
departed users to one single mailbox as a type of "catch all" mailbox for any future email received at that
address. Recipients with more than 400 proxy addresses will cause errors during a migration or when directory
synchronization is implemented.
To find recipients with more than 400 proxy addresses you can run the following PowerShell command.
[PS] C:\> Get-Recipient -ResultSize Unlimited | Where {($_.EmailAddresses).Count -gt 400}

Reviewing Shared Mailboxes


As the name suggests, a shared mailbox is one that is used by multiple users to receive or send email. You
have probably seen shared mailboxes used for situations such as an IT help desk, an HR department, payroll enquiries, and reception desks. A shared mailbox also has a calendar, so it can be used to coordinate team
schedules or anything else where multiple people may need to look at a common calendar (except for
meeting rooms which should use a room mailbox, and pool equipment which should use an equipment
mailbox). Shared mailboxes do not consume an Office 365 license, and they can’t be logged into directly. This
is a good thing, since most organizations would not want to pay for extra licenses for shared mailboxes unless
they have a need for licensed features such as in-place hold or unlimited archive mailboxes, but it means they
are not a way to get a “free” license for generic/shared user accounts (e.g. a shared “Reception” account).
The licensing point is important, because in a staged Office 365 migration you will end up with mailboxes that
were shared mailboxes on-premises being migrated as regular user mailboxes into Exchange Online. User mailboxes will work without a license for 30 days, after which they will be deleted. Fortunately, you can convert
them from a user mailbox to a shared mailbox without assigning a license. To retrieve a list of shared
mailboxes in an Exchange 2007 organization use the Get-Mailbox cmdlet.
[PS] C:\> Get-Mailbox -RecipientTypeDetails Shared

Unfortunately, it is quite common for on-premises Exchange environments to have shared mailboxes that
were not originally created as shared mailboxes, and are simply user mailboxes instead. In those cases, the
PowerShell command above will not return all the mailboxes that would qualify as “shared” mailboxes. A
manual review of the mailboxes in the organization is recommended. If you have a large number of shared
mailboxes to convert, Oliver Moazzezi published an article to explain an easy method to handle this with
PowerShell.
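Once a migrated shared mailbox lands in Exchange Online as a user mailbox, the conversion back to a shared mailbox is a single command per mailbox. For example, assuming a migrated mailbox called Payroll:
[PS] C:\> Set-Mailbox -Identity Payroll -Type Shared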

Reviewing Mail-Enabled Groups
For mail-enabled groups in the on-premises Active Directory (whether they are security groups or distribution
groups) the migration process results in new groups being created in Office 365. For cutover and staged
migration scenarios, the newly created groups in Office 365 will be restricted from receiving email from
external senders, as this is the default setting for new groups. This setting may be desirable if your mail-
enabled security and distribution groups are for internal use only, but for any groups that need to receive
email from external senders you will need to manually change the setting in Office 365. To avoid this issue,
you can use the Exchange Management Shell to identify groups that do not require sender authentication so
that you can plan to remediate them during the migration project.
[PS] C:\> Get-DistributionGroup | Where {$_.RequireSenderAuthenticationEnabled -eq $false}

In cutover and staged migration scenarios, you need to change the setting in Office 365 after the group has
been created by the cutover migration batch. You can make the change before changing your MX records to
point to Exchange Online. For Hybrid migrations the group will synchronize to the cloud with the same setting
as the on-premises object.

Real World: In some on-premises organizations other configuration factors such as SMTP gateways and
load balancers can cause the RequireSenderAuthenticationEnabled setting to be overridden, and a group
may still receive emails from external senders even when sender authentication is required. When those
factors are removed from the equation after the migration to Exchange Online you may see email being
rejected unexpectedly. You can resolve these on a case by case basis, but if the risk is considered too high
for your organization then you can consider removing the sender authentication requirement from all
groups for an initial period after the migration, then reviewing the groups and turning it back on for those
groups intended for internal usage only.
During a cutover migration, the Office 365 migration service will automatically provision any mail-enabled
distribution groups in the cloud. However, it is not able to automatically provision mail-enabled security
groups. The solution for cutover migration scenarios is to create an empty mail-enabled security group in
Office 365 for each on-premises mail-enabled security group before you begin the migration. You can locate mail-
enabled security groups in your on-premises organization by using PowerShell.
[PS] C:\> Get-DistributionGroup | Where {$_.GroupType -like "*SecurityEnabled*"}

Name      DisplayName   GroupType                    PrimarySmtpAddress
----      -----------   ---------                    ------------------
IT Team   IT Team       Universal, SecurityEnabled   ITTeam@office365itpros.com

In the example above, one mail-enabled security group has been located. To create a new mail-enabled security group in Office 365, log in to the Admin center and navigate to the Groups section, or use the Exchange Administration Center. In the Exchange Administration Center, navigate to Recipients and then Groups. Click
the icon to create a new group, and choose Security Group. Give the new group the same name and alias as
the on-premises group, but do not assign the same email address. Instead use the *.onmicrosoft.com domain
for your tenant to assign the email address for now. If you have many mail-enabled security groups to migrate
you can use this PowerShell script to automate the creation of the new groups in Office 365.
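As a sketch of what such automation does for a single group, the equivalent New-DistributionGroup command run in an Exchange Online PowerShell session looks like this (the group details match the example above, and the tenant service domain office365itpros.onmicrosoft.com is an assumption):
[PS] C:\> New-DistributionGroup -Name "IT Team" -Alias ITTeam -Type Security -PrimarySmtpAddress ITTeam@office365itpros.onmicrosoft.com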

Reviewing calendar delegates


Over time, users in your organization might grant other people delegate access to their mailbox. If those
delegates later leave the company and their mailboxes are disabled, the risk exists that a number of stale delegate entries will remain on mailboxes. These entries can cause the cutover migration process to fail for
that mailbox user. Staged and Hybrid migrations do not have the same issue. It is a good idea to review
delegate access to mailboxes and to clean up unwanted or unused permissions before the migration
proceeds. Finding mailbox users with delegates is a simple task in PowerShell. For Exchange 2007, use the
Get-MailboxCalendarSettings cmdlet.
[PS] C:\> Get-MailboxCalendarSettings | Where {$_.ResourceDelegates} |
Format-List Identity, ResourceDelegates

Identity : office365itpros.com/ExchangeUsers/Elizabeth.Holloway
ResourceDelegates : {office365itpros.com/ExchangeUsers/Steve Heppful,
office365itpros.com/ExchangeUsers/Alice.Mullins}

For Exchange 2010 or later, use the Get-CalendarProcessing cmdlet instead.

[PS] C:\> Get-Mailbox -RecipientTypeDetails UserMailbox -ResultSize Unlimited | Get-CalendarProcessing |
   Where {$_.ResourceDelegates} | Format-List Identity, ResourceDelegates

However, the output generated by the cmdlets doesn’t tell you which of the delegates may be a stale entry.
Some additional PowerShell is required for that task to locate delegate entries that are not mail-enabled. For
instance, you could use the PowerShell scripts Find-StaleDelegates.ps1 or Glen Scales’ Reverse Delegate
Permissions and Rights Report.
Like delegates, any object in Active Directory that is being migrated to Office 365 can’t be configured with a
link to a Manager that is not also going to be migrated. Again, it is a simple task to locate users with a
Manager that is not also mail-enabled. An example of this is the Find-StaleManagers.ps1 script.
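As a rough sketch of the kind of check these scripts perform, the following loop reports mailbox users whose Manager does not resolve to a mail-enabled recipient. Treat it as a starting point rather than a polished report:
[PS] C:\> Get-User -ResultSize Unlimited -RecipientTypeDetails UserMailbox | Where {$_.Manager} | ForEach {
   If (-not (Get-Recipient -Identity $_.Manager.ToString() -ErrorAction SilentlyContinue)) {
      Write-Host $_.Name "has a manager that is not mail-enabled:" $_.Manager } }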

Managing Out of Office Settings


Any Out of Office (OOF) messages configured for mailboxes will not be migrated to Office 365 during a
cutover migration. For staged migrations, if a user has turned on Out of Office prior to the migration starting,
it will be enabled on the cloud mailbox, but the Out of Office message will be blank and will need to be
manually added again. You should communicate this issue to your end users so that they know to recreate
their Out of Office message after their mailbox has been migrated. For users who will be absent during the
migration you can configure the Out of Office settings for them. Refer to the Exchange Online chapter of the
main book for more details.
Remote mailbox moves that occur during a Hybrid migration preserve the OOF settings and message, but in
some cases where the OOF message is configured with an expiry date it will not turn off automatically if the
user is migrated to Exchange Online. To fix this use OWA or PowerShell to manually set the OOF message
again, save that change, and then turn it off again.
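In either case, an administrator can set or re-save a user’s automatic reply from PowerShell instead of OWA. The mailbox identity and message text below are purely illustrative:
[PS] C:\> Set-MailboxAutoReplyConfiguration -Identity Kim.Akers -AutoReplyState Enabled -InternalMessage "I am out of the office until 1 August." -ExternalMessage "I am out of the office until 1 August."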

Disabling Unified Messaging


Before beginning a cutover or staged migration to Office 365, any mailboxes that are enabled for Unified
Messaging need to have UM disabled. After this is performed the user will not be able to use any Unified
Messaging features of the on-premises Exchange server. If Unified Messaging is important to your
organization a Hybrid migration may be the better choice. You can find the UM-enabled mailboxes with
PowerShell by running the Get-Mailbox or Get-UMMailbox cmdlets.
[PS] C:\> Get-Mailbox -ResultSize Unlimited | Where {$_.UMEnabled}

Name            Alias           ServerName   ProhibitSendQuota
----            -----           ----------   -----------------
Alan.Reid       Alan.Reid       ex2007srv    unlimited
Elise.Daeth     Elise.Daeth     ex2007srv    unlimited
Michelle.Peak   Michelle.Peak   ex2007srv    unlimited

[PS] C:\> Get-UMMailbox

Name            UMEnabled   Extensions   UMMailboxPolicy   PrimarySMTPAddress
----            ---------   ----------   ---------------   ------------------
Alan.Reid       True        {12345}      Default Policy    Alan.Reid@office36...
Elise.Daeth     True        {12346}      Default Policy    Elise.Daeth@office...
Michelle.Peak   True        {12347}      Default Policy    Michelle.Peak@offi...

To disable the mailboxes for UM, use the Disable-UMMailbox cmdlet. The following command disables all the
UM-enabled mailboxes in the organization. If you want to disable just a single mailbox, use the Identity
parameter to pass the alias of the mailbox to Disable-UMMailbox.
[PS] C:\> Get-UMMailbox -ResultSize Unlimited | Disable-UMMailbox

Confirm
Are you sure you want to perform this action?
Disabling UM mailbox "office365itpros.com/ExchangeUsers/Alan.Reid".
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help
(default is "Y"):a

Name            UMEnabled   Extensions   UMMailboxPolicy   PrimarySMTPAddress
----            ---------   ----------   ---------------   ------------------
Alan.Reid       False       {}                             Alan.Reid@office36...
Elise.Daeth     False       {}                             Elise.Daeth@office...
Michelle.Peak   False       {}                             Michelle.Peak@offi...

Reviewing MX Records in DNS


MX records control where the email servers for other organizations will send email for your domains. At the
start of your migration project your current MX records will point to your on-premises Exchange server’s
public IP address, or perhaps they point to the IP address of a third-party email filtering product or service
that you use. You can review your current MX records using nslookup or a tool such as MXToolbox. When you
add the domain name for your organization to your Office 365 tenant a series of DNS records are provided by
Microsoft for you to add to your DNS zone. One of those DNS records is the new MX value that will direct
email to your Exchange Online mailboxes instead of the on-premises environment. You can find the MX
record by logging in to the Office 365 portal with your administration account, and navigating to the Domains
section.
Don’t change your MX records yet. The MX record change is performed during or after your mailbox
migrations, depending on the migration method you are using. But it is important to check the existing MX
records to determine what the time-to-live (TTL) value is. The TTL value tells other DNS servers how long they
should cache the DNS record. When it comes time to change your MX record during the migration, you will
want that change to take effect almost immediately. As such, you should lower the TTL value of your existing
MX records to 5 minutes (300 seconds) or less. Doing this step early in your migration ensures that the
change has well and truly taken effect by the time you need to change your MX record to point to Exchange
Online.

Cutover Migration
A cutover migration migrates all the existing mailboxes, contacts, and distribution groups from an on-
premises Exchange organization into Exchange Online. Cutover migrations do not include public folders or
dynamic distribution groups. Cutover migrations can be performed for up to 2,000 mailboxes; however, even
companies smaller than 2,000 mailboxes may find the logistics of a cutover migration do not suit them, and
would prefer to use a different method such as a staged or hybrid migration.
One of the main logistical challenges with a cutover migration is the need to reconfigure every user’s Outlook
profile to connect to Exchange Online. Unlike the staged and hybrid configurations there is no method for
handling this automatically with a cutover migration, at least not within the migration process itself. Instead
the IT team needs to either manually reconfigure every user’s Outlook profile, provide instructions to end
users to reconfigure it themselves, or develop an in-house script or solution to handle it automatically. Any of
these choices imposes a significant burden for a cutover migration on larger organizations. However, for very
small businesses the extra effort is negligible.
Another challenge with cutover migrations is the network bandwidth requirements. A low bandwidth
connection to the internet will slow down the migration of the mailbox contents, and will also cause problems
on the first day after the cutover occurs, because every user is resynchronizing their mailbox contents in their
new Outlook profile. Outlook 2013 and 2016 have an adjustable setting that controls the amount of mailbox
content that is synchronized to the offline cache; however, even with that setting turned down to a very short time, a large group of users can still saturate a network with Outlook synchronization traffic. Figure 2-2 shows the internet usage for a small business that performed a cutover migration: the significant increase in utilization on the day of the cutover is clearly visible, and it cost the company a day’s productivity.
Had they elected to go with a Hybrid migration instead they would have incurred a slightly higher project
cost, but not impacted productivity for the entire company on cutover day.

Figure 2-2: The impact of a cutover migration on internet bandwidth


The cutover migration is performed as a single batch move that includes all mailboxes, mail contacts, and
distribution lists. Unless you hide objects by setting their HiddenFromAddressListsEnabled property to $True, no
option exists to selectively migrate objects from an on-premises organization to the cloud. The effect of
setting HiddenFromAddressListsEnabled is that an object is not included in the migration batch.
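For example, to exclude a departed user’s mailbox from the cutover batch, you could hide it in the on-premises Exchange Management Shell before creating the batch (the mailbox name is illustrative):
[PS] C:\> Set-Mailbox -Identity "Departed User" -HiddenFromAddressListsEnabled $true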
Because Exchange includes all visible mailboxes, contacts, and distribution lists in the migration batch, it is
advisable that you perform a review and clean-up of on-premises objects, particularly mailboxes, before
commencing the migration. There is little point in spending time and bandwidth to move a mailbox for a
departed staff member if you no longer need to keep the data. However, if the data is still needed, then the
departed staff member’s mailbox can be moved to the cloud and made into an inactive mailbox, which does
not require an Office 365 license. See the Exchange Online chapter in the main book for more details about
inactive mailboxes.
The cutover migration approach also makes it impossible to conduct a pilot migration, in which a small group
of pilot users are migrated first to test whether everything is working correctly. A pseudo-pilot migration can
be performed, in which some users log on to their Exchange Online mailboxes via OWA or Outlook to test
basic functionality. But they can’t use the mailboxes for day to day email tasks while the MX records in DNS
are still routing email for the domain to the on-premises Exchange environment. If you have a need to
perform a true pilot migration, then a staged or Hybrid migration will need to be used instead of a cutover.
Network bandwidth is not the only factor that will impact the speed that the cutover migration batch
processes mailbox data. The performance of the servers in Exchange Online is also a factor, and could cause
your migration batch to slow or stall periodically. As such, it is not simply a matter of estimating the migration
timeframe based on the amount of data and your maximum network bandwidth, although that is a
reasonable starting point. Given the difficulty in predicting exactly how long the synchronization of mailbox
data will take, scheduling the final cutover tasks is something of a grey area. You might plan to perform the
final cutover on a particular weekend, only to discover that the initial synchronization is still running through
that weekend. A flexible approach to scheduling the final cutover is required, and may require you to wait a
few days or a week between the end of the initial synchronization and when you’re able to actually finalize the
cutover.
The high-level process for a cutover migration is as follows:
• Users and mailboxes are automatically provisioned in Office 365 to match the on-premises objects. It
is important not to pre-populate your Office 365 tenant with accounts that already exist in the on-
premises environment as this will cause the cutover migration batch to fail due to the existing objects
in the cloud.
• The migration batch performs an initial synchronization of mailbox content that includes email
messages, contacts and calendar items. Non-content items such as inbox rules and calendar
permissions are not included. The administrator can be notified when this initial synchronization has
completed, but it should also be monitored along the way in case there are individual failures or
errors. No option exists to allow an administrator to prioritize certain mailboxes over other ones, nor
is there much point in trying to do so. The cutover migration by its very nature involves waiting for all
the mailboxes to be synchronized first, so the order in which they are synchronized is of little
importance.
• After the initial synchronization has completed, Exchange Online continues to perform an incremental
synchronization every 24 hours to update online mailboxes with any changes to the on-premises
mailboxes. This process of incremental synchronization continues for up to 90 days unless an
administrator acts to complete or abort the migration. If nothing happens, the migration job is
automatically stopped after 90 days and is then removed after a further 30 days. It is not
recommended to allow the incremental synchronization process to continue for a long period of time.
Even though new mailbox items are synchronized, some changes to existing items are not. For
example, if a user moves a lot of messages out of their inbox into a sub-folder, after the cutover is
completed and the user connects to their Exchange Online mailbox they may notice that the items are
back in their inbox.
• When the organization is ready to complete the cutover migration the MX records for the domains
are updated in DNS, and then a series of other supporting tasks are also performed to complete the
migration.
Because the cutover migration process is essentially just a data synchronization followed by an MX record
change, the rollback process involves simply changing the MX record to point back to the on-premises
Exchange server. However, the data synchronization process is one way only. Cutover migrations do not
provide a way to synchronize new data in Exchange Online mailboxes back to the on-premises mailboxes. This
means that a rollback from a cutover migration risks the loss of new mail items that have arrived in the
Exchange Online mailboxes after the MX record was pointed at Office 365. It doesn’t mean that a rollback is
impossible, only that it carries the risk of data loss unless the effort is made to locate new mailbox items and
manually migrate them back to the on-premises environment, for example by exporting them from Exchange
Online to a PST file and then importing them on-premises. If the lack of a seamless rollback process is a
concern for you, then it is recommended to look at a Hybrid migration instead. Hybrid is the only approach
that allows off-boarding of mailboxes from Exchange Online to an on-premises Exchange server.
A cutover migration involves a reasonable amount of technical work, but there is also some non-technical
work required. For example, you need to communicate to end users when the migration will occur and what it
means to them, and perhaps also request that they clean-up their mailboxes to reduce the amount of data
that is being migrated. You might also advise them to back up any critical messages, contacts or calendar
items to a PST file to ensure they can still access them if a problem occurs. And you will also need to
communicate to them any changes that they need to understand or make to access Office 365 after the
migration completes, such as reconfiguring mobile devices, new URLs for accessing services such as OWA for
Office 365, as well as new functionality that they can access after the migration.
At the end of a cutover migration the users provisioned in Office 365 are cloud identities. The most significant
impact on your end users is that they have a separate password to manage, in effect giving them two sets of login credentials: one for on-premises login and one for Office 365 login. An SSO solution such as AD FS or password synchronization will improve the user experience by allowing them to log on to Office
365 services with the same credentials they use for the on-premises environment. Some organizations
implement SSO immediately after the cutover and before the end users log on to their Exchange Online
mailboxes for the first time so that separate passwords are not required.
Retrofitting directory synchronization after a cutover migration is not a simple task, and gets more
complicated as more time passes after the migration. It also requires that an on-premises Exchange server be
retained for managing mail attributes for users, which is not ideal for organizations that want to reduce the
number of servers they manage. If you do want directory synchronization and SSO in place after your
migration then I recommend you consider a staged or Hybrid migration instead, each of which implements
directory synchronization at the beginning of the project rather than at the end.

Staged Migration
A staged migration can be used to migrate some or all the existing user and resource mailboxes from an on-
premises Exchange 2003 or 2007 organization into Exchange Online. Staged migrations can be performed for
organizations of any size, allowing them to migrate mailboxes to Office 365 over a period of several weeks or
months, with the eventual result of all mailboxes being hosted in Office 365.
A staged migration is not available to organizations running Exchange 2010 or later. Those organizations must
choose between performing a cutover migration or establishing a Hybrid configuration for the migration if
they want to use built-in migration methods.
A staged migration uses directory synchronization to provision users, distribution groups and contacts in
Office 365 by synchronizing them from the on-premises Active Directory. This makes the process more
selective than the cutover migration which does not permit the migration of a subset of on-premises objects,
because the directory synchronization can be filtered to only those objects that the organization wants to
sync to the cloud, leaving other objects such as user accounts for departed staff members behind.
The high-level process for a staged migration is:
• Users and other object types are provisioned in Office 365 by directory synchronization. The
organization has the option to use password synchronization or AD FS as a single sign-on (SSO)
solution as well. Password synchronization is built in to Azure AD Connect, so it can be implemented
without any additional infrastructure required, and it provides a good end user experience because
your users are not required to manage two sets of credentials for on-premises and Office 365 logins.
AD FS requires the deployment of at least one, and often as many as six additional servers into the
organization, depending on the requirements.
• Groups of on-premises mailboxes are added to migration batches. Batches are defined in CSV files
that can contain up to 2000 rows, which is a very large batch. For the early stages of your migration it
is wise to use small batches to test out the process and ensure that everything is working well. When
Office 365 processes a migration batch it mail-enables the cloud user account and synchronizes the
mailbox data.

• The migration process immediately, or within a few minutes, configures mail forwarding on the on-
premises mailbox so that new email messages are delivered to the Exchange Online mailbox. At this
stage, the user should not use the on-premises mailbox, as it will be the Exchange Online mailbox that
receives new email. Furthermore, if the migration process has finished processing a folder, no
subsequent changes to that folder will be migrated, because the staged migration process does not
do multiple migration passes of the same folders. The user can be switched to using the Exchange
Online mailbox, but should expect to not see all their mailbox content until the migration batch has
completed.
• The on-premises mailbox user is converted to a mail-enabled user when the migration batch is
completed, which is the end of the migration for that group of mailbox users. The conversion to a
mail user allows Autodiscover to connect the Outlook profile to the Exchange Online mailbox. You
can then repeat the process for additional batches until they are all migrated to Office 365.
A staged migration has similar logistical challenges to a cutover migration, in that you will need to manually
reconfigure users’ Outlook profiles to connect to Exchange Online. As the Outlook profiles resynchronize their
offline caches there is a similar impact on network bandwidth as well. However, being able to control the
batch sizes during your migrations means you can limit the number of users in each batch to a manageable
size.
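As a concrete illustration of the batch step described above, the following is a minimal PowerShell sketch, run
from Exchange Online PowerShell, that creates an Outlook Anywhere migration endpoint and a staged
migration batch from a CSV file. The endpoint name, administrator address, and CSV path are placeholders
rather than values taken from this chapter, so substitute your own.
[PS] C:\> New-MigrationEndpoint -ExchangeOutlookAnywhere -Name OnPremEndpoint -Autodiscover
          -EmailAddress admin@contoso.com -Credentials (Get-Credential)

[PS] C:\> New-MigrationBatch -Name StagedBatch1 -SourceEndpoint OnPremEndpoint
          -CSVData ([System.IO.File]::ReadAllBytes("C:\Migration\Batch1.csv"))

[PS] C:\> Start-MigrationBatch -Identity StagedBatch1

Get-MigrationBatch and Get-MigrationUser report the progress of the batch and of the individual mailboxes
within it.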
A staged migration accommodates a very simple level of co-existence between the on-premises organization
and Exchange Online. During a staged migration, the users with mailboxes in Office 365 can send and receive
emails from on-premises users, and vice versa. They can also see each other in their Global Address List (GAL).
However, they are not able to share calendar free/busy information, nor are they able to access each other’s
mailboxes as delegates. These limitations mean it is recommended to move batches of users to Office 365
based on teams or departments, so that the people who are used to collaborating with each other can
continue to do so. It also means that staged migrations are not suitable for very long migration projects, or
for scenarios where an organization will require a permanent co-existence between on-premises and the
cloud. In those cases, a hybrid configuration is the recommended approach.
Although directory synchronization is a requirement for a staged migration, it is not an ongoing requirement
after the migration has been completed. An organization may choose to keep directory synchronization in
place, which can be convenient for password synchronization and other ongoing administration so that
changes can be made to on-premises objects and synchronized to the cloud, rather than having to make the
same changes in both the on-premises and cloud environments separately. However, if the organization plans
to completely remove their on-premises Active Directory and transition to a full cloud model then they can
simply deactivate directory synchronization and perform ongoing identity administration using Office 365
management tools.

Real World: To remain in a supported configuration, if you want to maintain the ability to manage the
email attributes for users and other objects in your on-premises Active Directory, and have those changes
synchronize to the cloud, then an on-premises Exchange server should be retained for those administrative
tasks. If you choose to remove the on-premises Exchange server, and use ADSIEdit or third-party tools to
manage the email attributes in Active Directory, then Microsoft will not provide support for any issues that
may arise.

Hybrid Migration
A Hybrid configuration can be used as both a migration method, and as a permanent state for the Exchange
environment. Hybrid configurations are available for Exchange organizations that contain at least one
Exchange 2010 SP3 or later server. For Exchange organizations that do not already contain an Exchange 2010
or later server, a free Exchange 2016, 2013, or 2010 license can be obtained from Microsoft solely for enabling
the Hybrid functionality, but not for hosting mailboxes.
Hybrid configurations require directory synchronization, and it is recommended to also use password
synchronization or AD FS for a single sign-on experience for end users. Without either of those SSO solutions
in place the end user experience when on-boarding or off-boarding mailboxes, as well as during other
Exchange server interactions such as Autodiscover lookups, will be very poor.
Hybrid configurations allow for rich co-existence between the on-premises Exchange organization and Office
365, including:
• Remote mailbox moves that are nearly seamless to end users.
• A unified Global Address List.
• Sharing of Free/busy calendar information.
• The ability to migrate mailboxes back from Office 365 (offboarding).
This rich co-existence makes Hybrid configurations ideal for many migration scenarios and avoids many of the
issues that staged and cutover migrations create. Hybrid is also ideal as a permanent state for organizations
that want to have some mailboxes on-premises and some in the cloud, with the freedom to move mailboxes
back and forth as required. In fact, Hybrid is the only supported scenario in which off-boarding from Office
365 is made possible. Without a Hybrid configuration, off-boarding often relies on third party migration tools
instead. You can learn more about Hybrid configurations in Chapter 4.

Post-Migration Tasks
At the completion of a cutover, staged or Hybrid migration there are several post-migration tasks that need to
be performed.

Updating MX Records in DNS


For cutover and staged migrations, you change the MX record after all mailboxes have been migrated to
Exchange Online. For Hybrid migrations, the MX record can be changed at any time after the Hybrid
configuration is created and secure mail flow is working. Some organizations choose to change the MX record
in a Hybrid migration immediately, so that Exchange Online Protection can be used for email hygiene for the
entire organization, while others choose to change the MX record later in the project when most or all the
mailboxes have been migrated.
When you are ready to make the change, log in to your DNS host or server’s control panel and update the MX
record to the value provided by Microsoft. A low TTL value of 5 minutes or less should be used for the MX
records in case the need exists to roll them back to their previous values quickly. After making the DNS
change you can check DNS again to verify that the change has taken effect, and log in to Outlook Web App or
Outlook to send and receive some test messages to verify that mail is flowing correctly.
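To confirm that the new MX record has propagated, you can query a public DNS resolver from PowerShell with
Resolve-DnsName. The domain name and resolver address here are examples only; the answer should return
the mail.protection.outlook.com host that Microsoft supplies for your domain.
[PS] C:\> Resolve-DnsName -Name contoso.com -Type MX -Server 8.8.8.8
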
For cutover migrations, the MX record change will be simple to test; email either goes to the on-premises
mailbox or the Exchange Online mailbox. For staged and Hybrid scenarios, be aware that mail flow will appear
to work fine even if other email servers on the Internet are still sending messages to your on-premises
Exchange server, because the messages will simply be forwarded to the relevant cloud mailbox. There are a
few ways you can verify whether messages go directly to Office 365 or whether they still route through the on
premises server. The first is to look at the message headers of an email message. Headers can be challenging
to read in their raw form, for example when viewed in Outlook. It is much easier to copy and paste the
headers into the Message Analyzer tool available as part of the Microsoft Remote Connectivity Analyzer.
Another option is to install the Message Header Analyzer app for Outlook 2013 or later.
The results will include a table showing all the hops that the message passed through from sender to
recipient. Review the table to verify that the message did not pass through your on-premises Exchange server.
Some mail servers may continue to send new messages to the on-premises Exchange server due to DNS
propagation delays or caching. One of the ways that you can determine this is by running a message tracking
log search. A simple search for log entries in the last 24 hours will show you whether any mail traffic is
continuing to flow through the server.
[PS] C:\> Get-MessageTrackingLog -Start (Get-Date).AddHours(-24)

If you do see some results in that query that look like user-generated email, then you may simply need to wait
longer and try again. However, if you’re still seeing message tracking results long after you changed your MX
records in DNS then there may be some other cause of the email traffic, such as a device or application on
your network that is using the on-premises server for SMTP relay, or simply due to health mailbox probes if
the server is running Exchange 2013 or later. Of course, if you are not planning to decommission the on-
premises server and instead plan to keep it running for ongoing management and SMTP purposes, then you
can expect to see some email traffic still flowing through it. I still recommend you look closely at it though, to
make sure that it isn’t something nefarious like a spammer trying to exploit your Exchange server to send their
spam campaigns.
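One quick way to separate genuine user traffic from health mailbox probes is to filter the tracking results on
the sender address, as in this sketch (the property names are those returned by Get-MessageTrackingLog):
[PS] C:\> Get-MessageTrackingLog -Start (Get-Date).AddHours(-24) -ResultSize Unlimited |
          Where-Object {$_.Sender -notlike "HealthMailbox*"} |
          Select-Object Timestamp, Sender, Recipients, MessageSubject
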

Configuring Autodiscover
The reconfiguration of Outlook profiles for your end users depends on Autodiscover working correctly. The
Autodiscover service provides configuration details to Outlook clients, telling them exactly how to configure
themselves to connect to Exchange or Office 365.
For cutover migrations, Autodiscover is updated to point to Exchange Online when all the mailboxes have
been migrated. For staged migrations Autodiscover configuration depends on whether you plan to keep the
on-premises Exchange server or decommission it. If you plan to retain an on-premises Exchange server, the
Autodiscover records in Active Directory and DNS point to that on-premises server. The mailboxes have been
converted to mail users, so the on-premises Exchange server will return an Autodiscover response that
instructs the client to connect to Office 365. If Autodiscover was previously working, and worked during the
various migration batches that you ran, then likely you won’t need to modify anything. If you plan to
decommission the on-premises Exchange server, the Autodiscover records need to be modified to point at
the Office 365 servers.
For Hybrid deployments, the Autodiscover configuration should continue to point to the on-premises server if
mailboxes exist on-premises. Autodiscover for Hybrid scenarios is covered in more detail in Chapter 4.
When Autodiscover records are ready to be updated, you can once again refer to the Domains section of the
Office 365 admin center to retrieve the DNS records that are required for each domain name. At this stage,
the Autodiscover CNAME record for Exchange Online is the one to implement.
Add the record to your public DNS zone, making sure to remove any other Autodiscover records that might
also exist. If you also host a DNS zone for your email domain on your on-premises domain controllers, then
you should check for Autodiscover records in that zone as well, and change those to the new Office 365 value.
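A quick check from PowerShell confirms that the public Autodiscover record now resolves to Exchange Online;
the domain used here is a placeholder, and the CNAME should point to autodiscover.outlook.com.
[PS] C:\> Resolve-DnsName -Name autodiscover.contoso.com -Type CNAME
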
If an on-premises Exchange server is being retained, and you want the Autodiscover configuration to point to
Exchange Online, then the Autodiscover SCP also needs to be updated. On the on-premises Exchange server
use the Exchange Management Shell to update the SCP by running Set-ClientAccessServer.
[PS] C:\> Set-ClientAccessServer –Identity <servername>
-AutoDiscoverServiceInternalUri $Null

This step is not required if you are migrating from Exchange 2003, as Exchange 2003 does not include the
Autodiscover service.

Other Post-Migration Tasks


• Configuring Other DNS Records: Office 365 requires a series of other DNS records to also be added to
your public DNS zone for services such as Skype for Business, and for Sender Policy Framework (SPF)
records to help combat spam. You will see these records in the Office 365 admin portal in
the Domains section. While you are already managing your DNS zone for the MX and Autodiscover
changes, it is a good opportunity to also add the remaining DNS records that Office 365 requires.
• Convert Shared Mailboxes: As discussed earlier, for some migration methods any shared mailboxes
are initially created in Office 365 as regular user mailboxes, each of which consumes a license. To avoid
paying for unnecessary licenses, convert those mailboxes to shared mailboxes (see the sketch after this
list). Managing shared mailboxes is covered in the Exchange Online chapter in the main book.
• Remove restrictions from distribution groups: As described earlier, the distribution groups in your
Office 365 tenant may be created with default restrictions that prevent external senders from sending
email to them. The sender authentication requirement should be disabled for any groups that need to
receive external email.
• Assign licenses to Office 365 accounts: The accounts that are provisioned as part of the migration
process have a 30-day grace period to become licensed. If a license is not assigned to the accounts in
that time, the unlicensed accounts will be disabled and removed, so it’s essential to assign the
licenses (for the appropriate plans) to the new accounts. Managing Office 365 licenses is covered in
Chapter 4 of the main book.
• Decommission on-premises Exchange servers: This is optional, and only performed if you are
planning to transition to a cloud identity model and not continue with directory synchronization or
single sign-on (SSO). Do not simply shut down the server. Instead, you must ensure that you follow
the correct procedures to uninstall the Exchange software from the server, which will cleanly remove it
from the Active Directory environment.
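As a sketch of the recipient-related tasks in this list, the following Exchange Online PowerShell commands
convert a mailbox to a shared mailbox, allow external senders to email a distribution group, and list accounts
that still need licenses. The identities are placeholders, and the unlicensed-account check assumes the
MSOnline module is installed and connected.
[PS] C:\> Set-Mailbox -Identity "Front Desk" -Type Shared

[PS] C:\> Set-DistributionGroup -Identity "Customer Support" -RequireSenderAuthenticationEnabled $False

[PS] C:\> Get-MsolUser -UnlicensedUsersOnly
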

Other Exchange Online Migration Types


Cutover, staged and Hybrid migrations are not the only options available to organizations that are migrating
email to Office 365. Let’s take a brief look at the other migration options that are also available to choose
from.

IMAP Migration
Staged, cutover and hybrid migrations utilize Outlook Anywhere or Exchange Web Services to connect from
Office 365 to the on-premises Exchange server during the migration. For customers migrating from non-
Exchange servers, those protocols are not available, and other methods must be used instead. IMAP4 is
widely supported by email server platforms and therefore serves as the lowest common denominator
protocol for connectivity. Office 365 supports migrations that use the IMAP protocol to extract
information from source mailboxes which is then pushed to Office 365 mailboxes. The migration works in a
similar way to cutover migrations, in that an initial synchronization pass occurs and then multiple incremental
synchronizations can run to keep updating the Exchange Online mailboxes with the latest new items from the
source mailboxes. However, the user accounts in Office 365 must be provisioned separately, either by
manually creating them or by directory synchronization, as the IMAP migration process does not provision
them for you.
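Besides the Data migration page described below, an IMAP migration can also be set up from Exchange Online
PowerShell. This sketch assumes Gmail as the source; the endpoint name and CSV path are illustrative, and the
CSV contains the EmailAddress, UserName, and Password columns required for IMAP batches.
[PS] C:\> New-MigrationEndpoint -IMAP -Name GmailEndpoint -RemoteServer imap.gmail.com
          -Port 993 -Security Ssl

[PS] C:\> New-MigrationBatch -Name ImapBatch1 -SourceEndpoint GmailEndpoint
          -CSVData ([System.IO.File]::ReadAllBytes("C:\Migration\ImapUsers.csv"))

[PS] C:\> Start-MigrationBatch -Identity ImapBatch1
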

Figure 2-3: Starting a data migration from an IMAP-enabled email service
Once everything is ready, a cutover-style migration is executed by updating MX records and connecting email
clients and devices to the Exchange Online mailboxes. As with other cutover migrations, this process can be a
logistical challenge, can place a big burden on network connectivity as email clients synchronize with their
new mailboxes, and there is no rollback without the risk of data loss.
IMAP migrations can be initiated from commonly used email services such as Gmail, Yahoo, Outlook.com and
Hotmail by navigating to Users and then Data migration in the Office 365 admin center (Figure 2-3). Each
service has some preparation steps in which it is generally recommended to enable two-factor authentication
for the external email account, and then generate an app password that is used by the data migration service
to authenticate to the mailbox over IMAP.
When you select a service such as Gmail, the settings for the migration are automatically configured with the
IMAP server settings for that service. If the email service that you migrate data from is not one of the listed
options, you can choose Other email sources and manually configure your own IMAP settings. You can
choose to migrate data to any licensed Exchange Online mailbox user. However due to the nature of this
migration type you will need to know the account passwords, or app passwords as mentioned earlier, for the
accounts (Figure 2-4) on the other email service. The passwords are necessary so that the data migration
process can log on to them to retrieve data, unless the external service you are migrating from supports
administrator or impersonation access to all the mailboxes.

Figure 2-4: Configuring the email address and password for accessing IMAP email services
Because IMAP4 is exclusively an email protocol, any migration based on this protocol will have some natural
limitations. The process will only migrate mail items. No calendars, contacts, notes or other item types will be
migrated. There is also a limit of 500,000 items per mailbox, and 50,000 total mailboxes. Despite these
limitations, it is good that the IMAP migration method exists, giving non-Exchange customers a reasonable
pathway to Office 365.

Third Party Office 365 Migration Tools


Apart from the built-in migration tools available with Office 365 there is also a healthy ecosystem of third
party migration tools available. These tools often solve problems that the built-in migration methods don’t
adequately cover, such as better support for migrating from hosted Exchange providers and non-Exchange
platforms, the ability to perform staged-style migrations with and without directory synchronization in place,
and automated desktop reconfiguration. They also often include additional tools to help with the discovery,
planning and overall management of your migration project.
Typically priced per-mailbox and operating in a software-as-a-service (SaaS) model, they are well worth
considering and testing if you have a scenario that doesn’t fit the template of a typical Office 365 migration.
Many market leaders in this space have similar offerings, but often have minor differences in their feature sets
that can make them more suitable to specific scenarios. Just as it’s important to understand, and preferably to
test, the built-in migration methods, it is also important to take the time to perform an evaluation of any
third-party tools that you’re considering using. Trial licenses can be used to run a test migration of some
mailbox data to a trial Office 365 tenant so that you can verify that the tools meet your requirements. For
complex scenarios, the migration tool vendor or one of their partners can assist you with the initial setup so
that you can then drive the migration yourself.

Data Migration with the Office 365 Import Service
Organizations that have a legacy of PST usage face challenges in moving the information contained in PSTs
into Office 365. PST files are a useful container for moving content from one place to another, particularly for
scenarios such as migrating between two Exchange organizations that are unable to connect or co-exist in
any way, but they are not a suitable location to store archived email data as a permanent, long term solution.
The PST file format itself simply has too many technical, security, and compliance issues. Compliance is a big
driver for organizations to rid themselves of PST files today, especially after the high-profile hack of Sony
Pictures Entertainment in 2014 which resulted in sensitive data being leaked from PST files.
With generous mailbox quotas and archiving features in Office 365 it makes sense to move data that has
accumulated in user PSTs to Exchange Online mailboxes or archives, especially if you want to ensure that the
data is available for compliance and eDiscovery purposes. At the very least, it removes the expense and
burden of storing and backing up all that PST file data on-premises.
Office 365 customers have a choice of tools to migrate PST data into Exchange Online mailboxes. Microsoft
makes the PST Collection Tool available to tenants. This is a free utility program to discover, collect, and
import PST files into Office 365. However, the most recent innovation is the Office 365 Import Service, which is
capable of processing much more than PSTs. Given that PSTs are still popular in many Exchange deployments,
it’s natural that PSTs are the focus of much of the ingestion done by the Office 365 Import Service.
The Office 365 Import Service is available through the Data Migration section of the Office 365 admin portal.
Before using the Import Service, you must collect user PST files from your environment. Once the PSTs are
ready, you can either upload them to Microsoft over the internet or, if you have a large amount of data that
makes a network transfer unfeasible, package the files on 3.5 inch SATA II/III drives (currently limited to 4 TB
capacity) and ship the drives to a Microsoft datacenter. Once the data is in Microsoft datacenters it can be
imported at very high speed into Exchange Online mailboxes. The mailboxes that are used as the target for
PST imports can be an Exchange Online primary or archive mailbox, or an Inactive mailbox. Inactive mailboxes
are covered in the Exchange Online chapter of the main book.
The Office 365 Import Service is not available to every Office 365 tenant. Currently the network upload service
is available for tenants hosted in the United States, Canada, Brazil, the United Kingdom, Europe, India, East
Asia, Southeast Asia, Japan, Republic of Korea, and Australia. The drive shipping service is available in the United
States, Europe, India, East Asia, Southeast Asia, Japan, Republic of Korea, and Australia. The list of countries
will change over time, so you should double check where the Import Service is available when you plan your
project. Government customers also need to consider whether they are willing to use the public Azure cloud
as the staging ground for the data that will be imported (bring-your-own-key encryption is available, which
should satisfy some government customers).

No Drive Shipping for SharePoint: Microsoft used to support the drive shipping method for the Office
365 Import Service to move data from on-premises SharePoint to SharePoint Online. Support ended in
July 2017 and the approved and supported method is now the SharePoint Migration Tool.
You can test the Import Service process by uploading a few PSTs to see how the import process works and to
satisfy yourself that the procedure is suitable for your organization. Of course, technology is only one part of
the solution. The more challenging issue might be to track down all the PSTs that lurk on personal drives and
to persuade users that they really can trust the cloud to store their most secret data. That might sound simple,
but discovering and collecting all those PST files on your network is a big challenge. The PST Collection Tool
uses an agent that you can deploy to your network computers to discover PST files, and some third-party
tools also provide agents to assist with the discovery process. Even when the PST files are all accounted for,
disconnecting them from users' Outlook profiles so that they can be copied and uploaded will be a purely
manual task without the assistance of non-Microsoft tools.
Other challenges include the deduplication of information held in PSTs (which the Import Service does not
do), attribution of PST files to users (which the Import Service itself also does not do), and the speed at which
PST information can be ingested by Office 365 (some third-party technologies claim to be faster than the
Import service, depending on the network conditions of the environment). And for some organizations it is
simply preferable to hand over responsibility for the import process to Microsoft by shipping the data on hard
drives instead of attempting a network transfer.
Apart from importing PSTs that have been accumulated by users, Microsoft also intends the Import service to
persuade customers to move data back to Exchange from third-party archiving solutions such as Veritas
Enterprise Vault. The idea is that you can export data from these repositories (using tools provided by the
archive vendor or other ISVs) to PSTs and then upload or send these PSTs to Microsoft for ingestion into user
primary or archive mailboxes. This is all possible, but you should be wary of plunging into a migration project
without considering:
• The potential impact on users as the location of their information moves from the old repository to
Office 365 (including a potential requirement for client software updates)
• How the old archive information (such as Enterprise Vault “stubs”) is handled as data is migrated to
Office 365
• How information in the PSTs is collected, verified, and tracked so that a complete chain of custody
record is retained to prove the immutability of the migrated data. Without a believable chain of
custody record, any information extracted from a repository and imported to Office 365 might not be
viable if required for the purposes of litigation.
• Deduplication of information held in PSTs before the information is imported to Office 365 (both to
reduce the overall processing requirement and to ensure that you don’t swamp Office 365 with piles
of duplicate content).

The fact is that right now the Import service is a functional tool that will work for many organizations, but
doesn’t handle some of the considerations above that third-party solutions are designed to handle. Let’s look
at each of the migration methods for PST data available to the Office 365 Import Service.

Migrating PST Data with the Drive Shipping Method


When you use the “drive shipping” method to transfer data to Microsoft, the data on the drives should be
protected by BitLocker encryption by following Microsoft’s instructions, and you must also prepare a mapping
file that associates each PST with a user account. Each drive is prepared using a special Azure Import/Export
tool that creates a journal file for the drive containing the drive ID and the BitLocker key used to protect the
data. Microsoft provides guidance about the specifications of the hard drives that are supported. The drives
must be 3.5-inch SATA II or III, and can’t be larger than 6 TB. Each drive must be formatted as a single NTFS
volume. Microsoft has tested the process with Western Digital “Green” drives of 1 TB, 2 TB, or 4 TB capacity.
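For reference, the PST mapping file is a simple CSV. The column names shown here reflect the format
documented by Microsoft at the time of writing, and the folder, file names, and mailbox addresses are invented
examples; always check the current sample mapping file before building your own.
Workload,FilePath,Name,Mailbox,IsArchive,TargetRootFolder
Exchange,PSTs,annb.pst,ann.bishop@contoso.com,FALSE,/Imported
Exchange,PSTs,kima.pst,kim.akers@contoso.com,TRUE,
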
When the drives arrive at Microsoft, their contents are loaded into Azure and made available to the tenant
administrator. At this point, the tenant administrator can invoke an import job to start importing the data
from the PSTs into the target mailboxes. The administrator who runs the job must hold the RBAC Mailbox
Import Export role and have access to the journal files created when the drives were prepared. Once launched, the import job runs
on Azure to process the PSTs found on the drive and uses the mapping file to move content from the PSTs
into the target mailboxes. The mapping file can direct information to either primary or archive mailboxes. You
can monitor progress of the import job from the Office 365 Admin console. The drive shipping method is
currently priced at $2 USD per gigabyte.
Aside from the cost of the Import Service itself, the customer is also responsible for the costs of shipping the
hard drives to and from Microsoft. Hard drives shipped to Microsoft may pass through more than one
country, so they might be subject to customs inspection or other regulations about the shipping of data
across borders. You should be careful not to ship hard drives in a way that might cause them to be seized by
another country. Even though the data is encrypted using BitLocker, it’s best to avoid such a situation.

Migrating PST Data with the Network Transfer Method


If you have a high-capacity connection to the Internet and don't have a lot of PSTs to process, you can
consider moving the PSTs over the network to a Microsoft datacenter instead. Microsoft recommends that the
network transfer method should not be used for more than 1 TB of PST data, and suggests that the drive
shipping method is typically faster for such large data sets, but you can make your own judgement call on
whether you’re willing to upload that much data over your network. In this scenario, the data is still protected
because:
• It is uploaded over HTTPS, so it is encrypted in transit
• It is uploaded to storage that is encrypted at rest, and you have the keys
Once the data is uploaded, you can then run an import job to process the PSTs and transfer the content to
Exchange Online mailboxes. The network transfer method is free, although you are responsible for your own
network transfer costs. The PST data is stored in Azure for 60 days after the most recent upload. With an
ingestion rate of approximately 1 GB per hour according to Microsoft, a 60-day window allows for the import
of about 1.4 TB of data, assuming you are running import jobs almost continually. You can understand why
they recommend a maximum overall data size of 1 TB.
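As a rough sketch of the upload step, the modern azcopy v10 syntax looks like the following; older Import
Service documentation references the AzCopy.exe /Source: /Dest: form instead. The local folder, storage
account, and SAS token are placeholders that the Import wizard provides.
[PS] C:\> azcopy copy "C:\PSTExport" "https://<storageaccount>.blob.core.windows.net/ingestiondata?<SAS token>" --recursive
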
If you have a requirement for additional encryption of the individual PST files, you can use Azure Rights
Management to encrypt the files before they are uploaded. This follows a similar process to uploading
unencrypted files, but uses the Office 365 Import Tool to perform the encryption and upload, instead of
AzCopy, which simply performs the upload. For a detailed walk-through of migrating PST files using
the Office 365 Import Service, refer to the bonus guide included with your download of this eBook, or
available at https://practical365.com/office-365-bonuses.

Migrating PST Data with Third Party Tools


The Office 365 Import Service is relatively new. It represents a serious push by Microsoft to provide a toolset
that is capable of ingesting many different forms of data into Office 365. Before the Import Service became
available a broad range of third party tools filled this need. Table 2-2 lists a number of PST acquisition and
processing tools that are available from third party ISVs. These tools are designed to move information out of
third-party repositories and prepare the data for ingestion into Office 365. Some of the tools include project
management functionality to allow you to plan and track the progress of the extraction, preparation, and
transmission of the PSTs to Office 365, as well as perform additional tasks such as analysis of PST data to
attribute it to owners, de-duplication of PST data, and disconnecting PSTs from Outlook profiles as part of the
migration process.
Software Vendor     Tool
Quadrotech          PST FlightDeck
MessageOps          Office 365 Exchange Migration Tool
BitTitan            MigrationWiz
Sherpa Software     Mail Attender for Exchange
Nuix                Intelligent Migration
C2C (Barracuda)     PST Enterprise
Table 2-2: PST Acquisition and Processing tools
All the companies mentioned in Table 2-2 have pre-sales assistance, evaluation programs, and can provide
assistance for larger or more complex migration scenarios through their own or partner consulting services.
The order of the companies is random and no recommendation or guarantee is extended as to their ability to
help your migration project be successful. Quadrotech provides a free eBook containing in-depth coverage of
different aspects of PST migration projects that might be helpful to those planning PST eradication projects. The
best approach is to conduct some research to gather information (such as the eBook mentioned above) to
identify viable tools that meet your requirements. Once you have a list, you can obtain test copies of the
software from the vendors (or local agents) to verify that the products work as expected in your environment.

Migrating Non-Email Data into Office 365


Microsoft’s Office 365 Import Service is not just intended for importing email data contained within PST files
into mailboxes in the cloud. Microsoft has developed a specification to allow third party developers to extract
information from other data sources and package it in a form that can be ingested by the Office 365
import service. Microsoft has used this approach to create import facilities for on-premises SharePoint lists,
libraries, and sites and traditional file shares. Another free tool from Microsoft can move data from on-
premises SharePoint sites and file shares to SharePoint Online.
Third party tools can also handle extraction of information from:
• Social networks such as Twitter, Facebook and Yammer.
• Instant messaging services such as Yahoo Messenger and Google Talk.
• Cloud document storage services such as DropBox.
• Cloud applications such as SalesForce.
A big driver for supporting these services is to meet the compliance requirements of businesses. If your
business has compliance policies for email communications, then the chances are good they would also like
those same policies to apply to other communications channels such as social networks and instant
messaging. Microsoft partners with archiving vendors to help make these integrations possible.

Retaining data after import
In many cases the archive data that you import into Office 365 through the Import Service or other methods
will include content that is quite old. PST files could hold items that were created more than a decade ago, but
without a means of analyzing the data you’ll need to apply your own judgement as to how old (and how
valuable) the data might be.
What’s important to consider is how much of that data you want to preserve after it has been imported into
Office 365. If your tenant uses retention policies to control the age of items that can be stored in Exchange
Online mailboxes, then you should also expect any imported data that is older than the retention period to be
deleted or archived by the Managed Folder Assistant the next time the mailbox is processed. To prevent
unexpected removal of data, the Office 365 Import Service applies a retention hold to any mailbox to which it
has imported data so that the mailbox owner can review the imported data and apply retention tags to
folders or specific items to make sure that they are retained. The retention hold remains in place on the
mailbox until it is removed by an administrator. If you're importing data using third party tools, then you
should consider manually applying retention holds to the target mailboxes before you begin importing data.
Retention policies are explained in more detail in the Compliance chapter of the main book.
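A retention hold can be enabled, and later removed, with a single Exchange Online PowerShell command per
mailbox; the mailbox name here is an example.
[PS] C:\> Set-Mailbox -Identity "Kim Akers" -RetentionHoldEnabled $True

[PS] C:\> Set-Mailbox -Identity "Kim Akers" -RetentionHoldEnabled $False
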
Another approach is to only migrate the desired data from the PST files. For organizations that have
Enterprise E5 licenses, the Office 365 Import Service offers the ability to filter the PST contents by date, so you
can let the Import Service automatically ignore older data you do not need to migrate to Exchange Online.
Filtering out the data during the import process doesn't save you any network bandwidth as the PST files still
need to be uploaded to Azure storage for processing, but it does avoid potential complications with the amount
of PST data exceeding available mailbox quotas.

Migration of legacy public folders


Newer Office 365 customers who have never used Exchange on-premises are relatively unlikely to adopt
public folders as their choice for collaboration and sharing. Other options such as Office 365 Groups and
Teams exist within Office 365 and Microsoft has no plans to enhance public folders functionality in any
significant way in the future. Most public folders running inside Exchange Online will therefore originate in an
on-premises database on an Exchange 2007, Exchange 2010, Exchange 2013, or Exchange 2016 mailbox
server.
You cannot move from legacy public folders to Exchange Online until all mailboxes are on Exchange
2013/Exchange 2016 or Exchange Online. For this reason, the migration of old public folders is usually the last
task on a deployment list. The methods and tools used to migrate legacy public folders are well documented
online and do not need repeating here. What is clear is that the up-front work to analyze the legacy folders
and their hierarchy is the critical step in the process and, because of the need to manually check folders,
ownership, and usage, this work can absorb much effort over a long time.
The steps involved in the standard approach to public folder migration are below. It is sensible to read up on
this topic before beginning, to learn from the experiences reported by others who have been through a
migration. This is a process that flexes and changes (more detail is available online). For instance, the current
limit for public folder migration to Exchange Online is 250,000 folders, but once you move folders across to
Office 365, Exchange Online supports up to 500,000 folders in the public folder hierarchy.
First, download the Microsoft scripts to create the CSV files consumed in the migration process. You then
prepare for the migration by running the Export-PublicFolderStatistics.ps1 script to generate a list of all the
public folders found in your environment.
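The statistics script runs in the Exchange Management Shell on an on-premises server. The parameter order
shown here (output file first, then the source server) matches Microsoft's documentation at the time of writing,
and the file path and server name are placeholders; check the script's help before running it.
[PS] C:\> .\Export-PublicFolderStatistics.ps1 C:\PFMigration\Stats.csv ex2010svr.contoso.com
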

Examine the CSV output file created by the script and decide which public folders are not needed. Ideally, you
should remove all redundant and unused public folders at this point. You might need to preserve the data
from the folders you remove by using Outlook to copy the data to a PST file. It is also a good idea to look for
any public folder that is larger than 2 GB. If you find some, consider splitting the content across multiple
folders to make it easier to migrate. Again, this is a great opportunity to remove some old data from the
hierarchy.
It doesn’t make sense to migrate old and unused public folder data to Exchange Online as it will only extend
the migration timeframe (because more data must be transferred over the Internet) to move the obsolete and
unwanted data into the cloud. That data must then be managed in Exchange Online. Overall, it is a much
better idea to do the work to examine the legacy public folder hierarchy and prune it as hard as you can
before you start the migration. Unfortunately, there are no automated tools to help with this work as only
humans can decide whether to keep or discard data, whether folders are still in use or obsolete, or whether
any business value is contained in items stored in a public folder. No one looks forward to reviewing a public
folder hierarchy that holds tens of thousands of folders, but this work needs to happen if you are to migrate a
clean and efficient hierarchy to the cloud. After pruning and cleaning up the public folder hierarchy, you
should run Export-PublicFolderStatistics.ps1 again to generate a new CSV file. Give the CSV file a quick check
to verify that it contains the data you expect.
Now run the PublicFolderToMailboxMapGenerator.ps1 script. This script takes the CSV file that you created in
the earlier step and uses the data about public folders to generate a mapping between the folders in your
hierarchy and a set of public folder mailboxes to which the migration process will move the public folders.
After the script completes, you need to examine the CSV output that it creates and decide whether the
distribution of folders is good enough or you can improve it. For instance, you might decide to use some
extra public folder mailboxes as migration targets. This is a manual process and it can be exhausting to review
the distribution of thousands of folders.
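The map generator script expects the maximum public folder mailbox size in bytes, the statistics file created in
the previous step, and the path of the mapping file to create; the values below are illustrative and should be
checked against the script's help.
[PS] C:\> .\PublicFolderToMailboxMapGenerator.ps1 10000000000 C:\PFMigration\Stats.csv
          C:\PFMigration\PFMapping.csv
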
Before you can move public folders to Exchange Online, you must create a migration endpoint for the
Mailbox Replication Service to use. The endpoint points to an on-premises public folder server that will act as
the source for the migration and includes the credentials of an account that has enough permissions to
perform administrative activities. This example shows how to create an endpoint:
[PS] C:\> New-MigrationEndpoint –Name PublicFolderEndpoint
–RPCProxyServer pfserver.contoso.com –Credentials (Get-Credential)
–SourceMailboxLegacyDN $source_remoteMailboxLegacyDN
-PublicFolderDatabaseServerLegacyDN $source_remotePublicFolderServerLegacyFQDN
–Authentication Basic

The next step is to create the set of target public folder mailboxes in Exchange Online. You must use the same
names as contained in the CSV file because these mailboxes are the target for the migration. You do not have
to limit the set of public folder mailboxes to that suggested by the script as it is perfectly acceptable to create
some extra mailboxes and then change the CSV mapping file so that some public folders are directed to the
additional mailboxes. The HoldForMigration parameter tells Exchange Online that these mailboxes are not yet
in use and will not be until the migration is complete. The command to create a new public folder mailbox is
like this:
[PS] C:\> New-Mailbox -PublicFolder 'PF Mailbox 1' -HoldForMigration:$True

Once the public folder mailboxes are ready, you can launch the public folder migration request. Create a new
migration batch that points to a migration endpoint for the source public folder server in the on-premises
environment. Note that the CSV mailbox mapping file is also an input for the migration batch. When the
batch is ready, you can start it.

[PS] C:\> New-MigrationBatch –Name PublicFolderMigrationJob
–CSVData (Get-Content "PFMapping.csv" -Encoding Byte)
–SourceEndpoint "PublicFolderEndPoint" -NotificationEmails Admin@Office365ITPros.com

[PS] C:\> Start-MigrationBatch –Identity PublicFolderMigrationJob

Exchange uses the Mailbox Replication Service (MRS) to transfer all the data found in legacy public folders to
new public folder mailboxes created in Exchange Online. The batch migration procedure creates a separate
migration job for each target public folder mailbox. Obviously, the more public folder mailboxes you use, the
more distributed the workload is when transferring information from the on-premises servers to Exchange
Online. Although splitting up the workload into multiple streams makes more efficient use of available
resources, it will not allow you to transfer any more data than the network connection between the on-
premises servers and Exchange Online can handle. In other words, if you have a lot of public folder data to
migrate, it might take some time for the transfer to happen.
The batch process continues to copy data from the on-premises public folders to the target public folder
mailboxes. This work happens in the background. When the first synchronization is complete (a task that
could take many days for large public folder hierarchies), the migration job will auto-suspend and await
administrator permission to go ahead. When you are ready to complete the migration, you must lock the
public folder infrastructure to prevent further change. To do this, run this command on an on-premises
mailbox server:
[PS] C:\> Set-OrganizationConfig -PublicFoldersLockedForMigration:$True

Depending on the size of the on-premises organization and the health of the public folder replication
between the servers that host public folder databases, it might take some hours before the lock-down is
complete. Users must disconnect from public folder databases during this time. You can check the progress of
the individual migration threads (one for each public folder mailbox) through the migration section of the EAC
(found under recipients). After locking public folders for the organization, you can instruct the migration batch
job to complete. The time taken to complete depends on how many changes happened in the legacy public
folders since the first synchronization occurred.
[PS] C:\> Complete-MigrationBatch -Identity "PublicFolderMigrationJob"

You will know that the migration process has finished when the status of the migration job is "Completed".
Before switching on public folders for end users, you can test that everything is correct by configuring a
mailbox to point to one of the new public folder mailboxes and examining the Exchange Online public folders. Log on to the
mailbox with Outlook or OWA and compare the contents of several public folders to ensure that everything is
as it should be. It is also a good idea to post items to a few folders to make sure that write access works.
[PS] C:\> Set-Mailbox -Identity 'Kim Akers' -DefaultPublicFolderMailbox 'PF Mailbox 2'

If everything checks out, you can release the block and allow users access to the newly migrated public folders.
The following commands allow all the public folder mailboxes to serve the hierarchy to clients, mark the
organization configuration to show that the migration is over, and record that the public folders are now local to Exchange Online.
[PS] C:\> Get-Mailbox -PublicFolder | Set-Mailbox -PublicFolder
-IsExcludedFromServingHierarchy $False
[PS] C:\> Set-OrganizationConfig -PublicFolderMigrationComplete:$True
[PS] C:\> Set-OrganizationConfig -PublicFoldersEnabled Local

Public folders are not mail-enabled by the migration process. If you need some public folders to be mail-
enabled, you must do this after the migration is complete.
You cannot apply a filter or otherwise instruct Exchange Online to ignore some of the legacy public folders
because the migration process is designed to move everything found in the old infrastructure. It is usually true
that some debris has accumulated in a legacy public folder infrastructure due to unused folders or folders that
contain obsolete information.

Migration of modern public folders from on-premises Exchange
While the process to move old-style public folders from on-premises servers to Exchange Online has been
available for quite some time, the ability to migrate modern public folders from Exchange Server 2013 or 2016
to Exchange Online has only been generally available since March 2017. The delay in developing the
migration process was likely due to the development team working to improve the scalability and
manageability of modern public folders. In addition, most companies who have deployed Exchange 2013 or
Exchange 2016 and completed the migration to modern public folders do not have the appetite to then move
to Exchange Online.
Modern public folder migration can be performed from environments running Exchange Server 2013 CU15 or
later, and Exchange Server 2016 CU4 or later. It is always best to use the latest possible version of Exchange to
take advantage of any updates and improvements made by the development group in response to customer
feedback. Exchange 2013 is now in extended support, and CU23 is the final cumulative update released for it.
Up to 1,000 public folder mailboxes are supported in Exchange Online, and each public folder must be less
than 25 GB in size. The public folder mailboxes can be up to 50 GB in size for Business and Enterprise E1
tenants, or up to 100 GB for Enterprise E3 and E5 tenants. Modern public folder migration follows a similar
batch process to the legacy public folder migration, but uses a different set of migration scripts. The migration
steps vary for Exchange 2013 and Exchange 2016 scenarios. If you have both versions of Exchange in your on-
premises environment then you must use the Exchange 2016 steps.
Some planning is required to ensure a smooth migration. Orphaned or duplicate public folder objects need to
be checked and remediated, and the SMTP addresses on the Active Directory objects must match the email
addresses configured in Exchange for any mail-enabled public folders. Send-as and send-on-behalf
permissions are not migrated with the public folders, so you will need to audit those permissions and
manually re-apply them after the migration completes. Microsoft has documented other known issues on the
TechNet pages for migrating from each Exchange version, which were mentioned earlier in this section.
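A useful starting point for that audit is to export the explicit send-as and send-on-behalf assignments from the
on-premises organization before the migration. This sketch uses standard Exchange Management Shell
cmdlets; verify that its output covers everything you need before relying on it.
[PS] C:\> Get-MailPublicFolder -ResultSize Unlimited | Get-ADPermission |
          Where-Object {$_.ExtendedRights -like "*Send-As*" -and -not $_.IsInherited} |
          Select-Object Identity, User

[PS] C:\> Get-MailPublicFolder -ResultSize Unlimited | Where-Object {$_.GrantSendOnBehalfTo} |
          Select-Object Name, GrantSendOnBehalfTo
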
For mail-enabled public folders that need to receive email from external senders, Microsoft also requires you
to disable Directory Based Edge Blocking (DBEB) in Exchange Online Protection. DBEB rejects mail to non-
existent recipients, and unfortunately does not recognize mail-enabled public folders. Disabling DBEB has a
domain-wide impact, because it involves changing the Accepted Domain type from Authoritative to Internal
Relay. The implications are mostly load-related, which is more of a problem for Microsoft since they are
responsible for running the Exchange Online infrastructure. Still, it is not ideal and should be avoided if
possible. An alternative to mail-enabled public folders is Microsoft 365 Groups.
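For reference, the accepted domain change described above is a single Exchange Online PowerShell command
per domain; contoso.com is a placeholder.
[PS] C:\> Set-AcceptedDomain -Identity contoso.com -DomainType InternalRelay
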

No Dumpster
Items deleted from public folders go into the “dumpster” before Exchange permanently removes the data
upon the expiration of the retention period. Often, a substantial amount of data accumulates in the dumpster
that you probably don’t want to migrate to Exchange Online. Apart from anything else, moving the dumpster
data extends the timeframe for a migration. To check the size of dumpster folders, run the following
command:
[PS] C:\> Get-PublicFolder \NON_IPM_SUBTREE\DUMPSTER_ROOT -Recurse -ResultSize:unlimited |
?{$_.FolderClass -ne $null} | Format-Table Name, FolderSize

To exclude the dumpster from public folder migrations, you include the ExcludeDumpsters switch when you
create a new migration batch. For example:
[PS] C:\> New-MigrationBatch -Name PFMigrationBatch2 -CSVData $bytes -SourceEndpoint
$PfEndpoint.Identity -ExcludeDumpsters

Excluding dumpster content will speed up migrations at the expense of eliminating the ability of users to
recover deleted items. It’s a trade-off between speed and utility.

Public folder migration methodologies


Everyone has their own approach to migrations, and public folder migrations are no different. Organizations
that have deployed on-premises public folders usually avoid thinking about how to move them until the end of
the project, but several factors should be weighed when planning how to move user mailboxes. Failure to do
this might impact the ability of people to work after they have moved to Office 365, or result in a degraded
user experience. Consider these questions:
• Does everyone in the organization use public folders or is their use confined to certain groups? If
everyone uses public folders, planning needs to consider migration or replacement very early in the
move to Office 365. If not, migration can be left until the groups who use public folders are ready to
move.
• What versions of on-premises Exchange support public folders? Are they new-style or old-style
(Exchange 2010 and previous versions)? Does any data exist on old Exchange 2003 or Exchange 2007
servers?
• What is the overall size and structure of the public folder infrastructure? Can you align parts of the
public folder hierarchy with different business units or groups? As mentioned earlier, it is an excellent
idea to prune public folders to reduce the hierarchy and remove obsolete content as this will make
the migration easier and faster.
• Is the content stored in public folders dynamic or static in nature? Dynamic means that people access
and update public folder content daily while static content is more like archive storage where reading
is the predominant access mode.
Understanding the answers to these questions will help you arrive at the optimum migration strategy from a list of
three options:
1. Migrate public folders before user mailboxes.
2. Migrate user mailboxes before public folders.
3. Simultaneous migration of mailboxes and public folders.
Each of these approaches requires different effort and techniques. Ultimately, you want to ensure that access
to data needed for business purposes is maximized during the migration period.

Migrate user mailboxes before public folders


This is the approach recommended by Microsoft. Cloud-based mailboxes can access public folders stored on
older Exchange on-premises servers as long as Outlook clients are used. OWA does not support the redirect
mechanism used to access on-premises public folders. The attraction of moving user mailboxes first is that
people get faster access to the new features available in Office 365 and it might be possible to make cost
savings by eliminating many on-premises mailbox servers. On the other hand, moving mailboxes usually
represents the bulk of the migration effort and it will take longer to plan and execute.

Third party migration utilities
Do not assume that Microsoft is the only source of public folder migration tools. Whenever possible, you
should consider the tools Microsoft makes available, if only because they are free. But other tools are available
from third-party providers that might be more effective or can handle situations that the Microsoft tools cannot.
For example, the basic migration facilities available from Microsoft do not accommodate the parallel migration
of mailboxes and public folders, because this functionality needs bi-directional public folder coexistence, which
is not a feature that Microsoft
has engineered for Office 365. In this instance, you will need to use a third-party product that can perform the
bi-directional synchronization for both public folder hierarchy and contents. Some of the third-party tools can
prune and graft public folders as the migration is ongoing, which is an attractive choice when faced with the
need to migrate a massive public folder infrastructure.
Because third-party products often go well beyond the migration tools available from Microsoft, it is sensible
to review what is available in the market before making any decision about how you will execute a migration.
These tools might not use standard migration techniques (for example, they might transfer data using
Exchange Web Services rather than the Mailbox Replication Service), and they cost money, but they might be the only
choice. Examples of the factors that you should consider in assessing any third party public folder migration
tool include:
• Is it possible to perform a selective migration (for example, only the public folders used by a group of
mailboxes that are being moved to Office 365) or do you have to move the complete public folder
infrastructure at one time (the approach taken by the Microsoft tools)?
• Is bi-directional synchronization supported to allow a portion of the public folder infrastructure to be
moved to Office 365?
• If bi-directional synchronization is supported, can the public folder hierarchy be maintained on both
cloud and on-premises platforms? Is synchronization performed on a scheduled basis or on-demand? (This
could be important for public folders that change frequently.)
• What throughput is attainable for moving public folder content? This is seldom an issue for small
public folder deployments (less than a few thousand folders) but can be an issue when large amounts
of content (terabytes) are stored in public folders. Microsoft’s tools mitigate this issue by moving data
behind the scenes over an extended period before performing a final synchronization just before the
public folders are switched over to Office 365.
• What downtime is needed to migrate public folders?
• Is the tool proven to scale to deal with the numbers of public folders to be migrated?
• What planning tools or other assistance is available to help you decide upon the most effective
migration strategy for public folders? Are reporting tools available to analyze the public folder
infrastructure and the overall progression of the migration?
• Do the tools work against old-style and modern-style on-premises public folders?
• How much does the tool cost? Is it based on the number of public folders or the overall size of the
infrastructure (factors that could assist in the decision to prune public folders before starting a
migration)?
Another factor to consider is whether now is the time to move away from public folders. Like any migration
project, a move to drop public folders is not something that you should do without careful planning. Ideally,
the work should happen as part of an overall project to review the use of public folders within an
organization. The review should divide public folders into a set of buckets, such as:
• Public folders that are obsolete and you can discard without further action. For example, a folder that
no-one has accessed in the last five years is probably obsolete and unwanted.
• Obsolete folders whose content you need to keep for legal or regulatory reasons.

• Public folders in active use that are good candidates to move to a more modern collaboration
platform like Office 365 Groups.
• Public folders that you can transform to other objects within Office 365, such as shared mailboxes.
• Public folders that you cannot migrate and need to stay as public folders within Exchange Online.
Perhaps the review will conclude that there is no business need to move any public folders to Office 365 and
that you can leave the folders to wither away while all efforts concentrate on how best to exploit the newer
and more powerful collaboration tools offered by Office 365.
If you decide to keep some or all public folders, the result of the review will guide the choice of the tools to
achieve your goals. A tool selected and used successfully in one company is no guarantee of the same
outcome in another. Planning is all-important and that must include a review of the current infrastructure, an
assessment of what data should be moved and what can be dropped, and some testing to identify the right
migration strategy for your organization and the tools to be used. Experience demonstrates that public
folders are left without much thought until the last phase of the migration project, at which time they can
become a major pain point for both the users who have been moved to Exchange Online and the IT
department who now needs to figure out how to move public folders without causing extra disruption to
users. The more familiar you become with the advantages and disadvantages of the different approaches to
public folder migration, the more successful the outcome will be.

Migrating public folders to Office 365 Groups


Office 365 Groups are an interesting migration destination for some public folders. Folders that are suitable
migration candidates are:
• In active use.
• Mail-enabled to allow users to post to the folder via email.
• Hold mail and calendar items and not tasks or contacts.
The Microsoft tools available to migrate public folders to Office 365 Groups are based on the Exchange
Mailbox Replication Service (MRS), which moves content from public folders to selected Office 365 Groups. You
must create the groups before migration begins. MRS copies mail items to the Inbox folder in the group
mailbox to allow users to access these items as conversations. It copies calendar items to the group
calendar. After MRS completes copying the content to the target groups, you run scripts to switch the
properties of the public folders to the groups, including their email addresses to allow the groups to
accept new posts. Later, when you are confident that users can access and use the content in the groups,
you can remove the public folders from the hierarchy. Microsoft’s tools allow tenants to move public
folders from Exchange 2010, Exchange 2013, and Exchange 2016 on-premises servers, as well as from
Exchange Online.
Although you can move public folders to Office 365 Groups, you cannot move them to Teams.

Migrating legacy email archives


In the early days of Exchange, storage was expensive and databases could not grow past a 16 GB limit. The
result was constrained mailbox quotas and users often spent considerable effort keeping their mailbox under
quota by cleaning out old items so that they could continue to receive new messages. The Enterprise Vault,
originally engineered at Digital Equipment Corporation and first introduced as a product by Compaq in 1998,
was the first email archive server that offloaded email from Exchange by transferring messages and their
attachments to a specialized server. Small “stubs”, pointers to the moved items in their new archive location,
were left behind in user mailboxes. If a user wished to access an archived item, they could recall it to their
mailbox or view it directly from the archive.
Enterprise Vault, now sold and maintained by Veritas, is still in use in thousands of companies today. Other
email archive solutions in popular use include EMC SourceOne, ZANTAZ EAS, HP Autonomy, and Daegis AXS-
One. The archives on these platforms extend to petabytes of data, especially in large companies who are
subject to strict compliance regulations. Maintaining separate archive servers is expensive and a natural desire
therefore exists to migrate data to Office 365 and take advantage of large user mailboxes, automatically
expanding archive mailboxes, integrated search, and the range of other compliance features that exist across
Exchange Online, SharePoint Online, and OneDrive for Business. Because of the complexities and amount of
data involved, the migration away from specialized email archives is usually a task left until the last stages of
Office 365 deployments.
Microsoft does not have any tools to migrate data from a specialized email archive like Veritas Enterprise
Vault to Office 365. Table 2-3 lists some ISV products specializing in the migration of email archives to Office
365. Other companies, like BitTitan, white-label email archive migration technology to build out their
portfolio. Only the Quadrotech and TransVault products are certified by Veritas to ensure that the data
extracted from Enterprise Vault pass the acid test of being identical post-migration to what was archived,
something that is critical when proving a legal chain of custody for email. Not being certified by Veritas does
not mean that other products do not extract data in a way that preserves all its characteristics intact. It simply
means that they have not been through an independent certification procedure to verify that this is so.
Company      Product
Quadrotech   Archive Shuttle
DELL         Migration Manager for Email Archives
Archive360   Email Archive Migration for Enterprise Vault
Table 2-3: Some companies active in the email archive migration market
On the surface, the task of extracting data from an email archive and importing that same data into Office 365
seems straightforward. In practice, it is complicated by the need to preserve a legally defensible chain of
custody for the information; without it, the data cannot prove that a message was sent from someone to a set
of recipients at a particular time, and so cannot satisfy compliance or eDiscovery requirements.
When considering how to approach an email archive migration project, the following questions should be
asked.
• What data needs to be migrated? Two forms of archive data exist. Email archives are populated by
moving items out of user mailboxes and replacing the real items with “stubs”. Journal reports, which
represent a full copy of a message including all its recipients and any attachments, are gathered by
capturing copies of messages as they are sent and redirecting those copies to a journal recipient
(often identified with an SMTP address). The journal reports are later processed and stored in the
archive. All recent versions of Exchange including Exchange Online use a format called envelope
journaling. Microsoft does not allow an Exchange Online mailbox to be a journal recipient.
• Where will the migrated data go? The available destinations for email archives are user primary and
archive mailboxes. In most cases, archive mailboxes are the preferred option as these are designed to
store information that needs to be retained but not necessarily accessed on a regular basis. The
complication here is how to deal with stubs as they are no longer valid after the migration is
performed. Journal reports are more complicated because of their volume and because a report does
not “belong” to any specific mailbox. Two approaches are used:
o Journal splitting copies sets of reports associated with a group of users to a shared mailbox
(or the archive of a shared mailbox).
o Journal explosion reads the recipient information contained in a journal report and recreates
copies of the original message in the mailboxes of recipients. Fanning out a message in this
way is comparable to the way that Exchange creates copies of messages to deliver them to
users. This is not a fast process and it consumes a great deal of storage.

• How much data needs to be moved? It’s likely that the archive contains information that is quite old
and might not be relevant anymore for compliance purposes.
• How are “leavers” dealt with? Any archive that has existed for more than a few months will contain
messages originally sent or received by people who no longer work with the company. Where should
these messages go when they are extracted from the legacy archive? In addition, it’s possible that the
SMTP address for someone who has left the company has been reused by someone who now works
for the company. These and other edge conditions need to be considered and resolved before the
migration can proceed.
Automation and orchestration of processing help smooth the rough edges of legacy email archive migration
but there is no disguising the fact that these are costly and complicated projects that need buy-in from many
interested parties, including IT, business units, and legal. In some cases, it might be possible to migrate the
email archives that can be associated with current mailboxes and leave the remainder of the email archives
and the journal reports in the legacy archive, which can then be consolidated into a smaller set of servers to
reduce the cost of hardware and software licenses. Later on, when the usefulness of the information held in
the legacy email archive has expired, it can be decommissioned and removed from use. In the interim, if an
eDiscovery investigation needs to access information over an extended period, it might be necessary to
perform separate searches in the legacy archive and Office 365 and then combine the results.

Running a Cutover Migration


A cutover migration migrates all the existing mailboxes, contacts, and distribution groups (except for dynamic
distribution groups) from an on-premises Exchange organization into Exchange Online. Cutover migrations do
not include public folders.
An overview of the cutover migration process, along with the pros and cons of this migration method, is
described earlier. Before you begin a cutover migration, read through the overview and the complete
migration process so that you understand the steps involved and any limitations, and can identify any areas in
which a cutover migration might be unsuitable for your migration project.

Preparing for a Cutover Migration


Let’s take a closer look at the steps for a cutover migration. The scenario used in this example is an Exchange
2007 on-premises organization called “Not Real University”. We'll begin with a demonstration of how to
prepare the Office 365 tenant for a cutover migration.

Adding Domain Names to Office 365


Naturally you need to sign up for an Office 365 tenant before you can perform a cutover migration. Set up the
new tenant, and follow the steps to add the SMTP domains for your organization as accepted domains for the
Office 365 organization as well.
For example, here are the accepted domains for an Exchange 2007 on-premises organization retrieved by
running the Get-AcceptedDomain cmdlet.
[PS] C:\>Get-AcceptedDomain

Name                  DomainName            DomainType    Default
----                  ----------            ----------    -------
notrealuniversity.com notrealuniversity.com Authori...    True

Log in to the Office 365 administration portal and navigate to Setup and then Domains. Click the Add
domain button to add your domains, as shown in Figure 2-5.

Figure 2-5: Adding domains to Office 365
As you run the wizard to add a domain, the wizard provides you with a domain validation record to add to
your external DNS zone, which proves that you own the domain. As discussed earlier, I recommend using the
TXT record so that you do not cause any issues with your mail flow by adding an MX record at this stage.
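If you want to confirm that the verification record has propagated before asking Microsoft to verify the
domain, you can query for it. This is a minimal sketch: Resolve-DnsName is part of the Windows DnsClient
module, and the domain name used is the sample domain for this scenario.
[PS] C:\> Resolve-DnsName -Name notrealuniversity.com -Type TXT | Select-Object Name, Strings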

Figure 2-6: Choosing DNS hosting for your domain

There's also the question of whether to host your DNS in Office 365, or to host and control your own DNS
records separately (Figure 2-6). For most organizations that already have a public DNS zone and DNS hosting
provider, and who are comfortable making DNS changes themselves, the prospect of moving the zone to
Office 365 offers no advantages. For the remainder of this example scenario it will be assumed that we'll be
managing our own DNS records.
Microsoft then checks your domain for the DNS records that are needed for Office 365 services to work. For a
new domain, or a domain that you're already using with an on-premises Exchange organization, most if not all
of those DNS records won't exist in the zone. At this stage, do not make any changes to your DNS records.
Instead, select the option to skip this step (Figure 2-7).

Figure 2-7: Don't add any DNS records to your zone yet

Real World: Until the full list of required records is added to the DNS zone for your domain Office 365
will warn you that there are “Possible service issues” with your domain. You can ignore this warning until
you’ve completed adding all the DNS records later.

Enabling Outlook Anywhere


In this example, the on-premises Exchange 2007 server is already enabled for Outlook Anywhere using the
hostname of mail.notrealuniversity.com, and a valid third-party SSL certificate is already installed on the
server.
[PS] C:\> Get-ClientAccessServer | Format-List Name,OutlookAnywhereEnabled

Name : EX2007SRV
OutlookAnywhereEnabled : True

[PS] C:\>Get-OutlookAnywhere | Format-List ServerName,ClientAuthenticationMethod,ExternalHostname

ServerName : EX2007SRV
ClientAuthenticationMethod : Ntlm
ExternalHostname : mail.notrealuniversity.com

To test the Exchange configuration, we can use the Exchange Remote Connectivity Analyzer to perform an
Outlook Connectivity test (Figure 2-8).

Figure 2-8: Microsoft Remote Connectivity Analyzer
This test will validate that both Autodiscover and Outlook Anywhere are working correctly for the on-premises
organization, which is important for Office 365 to be able to detect and connect to your on-premises server
during the migration process. It will also validate that the expected firewall ports are open and that NAT rules
forward traffic correctly to the Exchange server.

Figure 2-9: A successful test with the Remote Connectivity Analyzer

Real World: It is common to see a “Test Successful with Warnings” result from the Remote Connectivity
Analyzer due to the use of an SSL certificate that will require clients to have downloaded the root CA
updates from Windows Update.

Configuring a Migration Service Account and Permissions


For cutover migrations, the Office 365 migration service needs a set of user credentials to connect to your on-
premises organization and access mailboxes. The steps for creating a migration service account are covered
earlier.

Preparing Recipients
Many preparation and review tasks are recommended for the recipients in your on-premises organization
before beginning the migration (a short PowerShell sketch for two of these tasks follows the list):
• Reduce the migration load by reviewing large mailboxes, and large mailbox items
• Review shared mailboxes
• Review the sender authentication setting for mail-enabled groups
• Pre-provision a security group in Exchange Online for each mail-enabled security group that exists in
the on-premises environment
• Clean up stale delegates and managers
• Disable Unified Messaging
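Two of these tasks can be scripted. Here is a minimal sketch; the group name, SMTP address, and mailbox
identity are placeholders, the New-DistributionGroup command runs against Exchange Online, and the
Disable-UMMailbox command runs in the on-premises Exchange Management Shell.
# Pre-provision a mail-enabled security group in Exchange Online to match an on-premises group
[PS] C:\> New-DistributionGroup -Name "Helpdesk Staff" -Type Security -PrimarySmtpAddress Helpdesk.Staff@notrealuniversity.com

# Disable Unified Messaging for an on-premises mailbox before it is migrated
[PS] C:\> Disable-UMMailbox -Identity Alan.Reid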

Starting the Cutover Migration


With all the preparation tasks complete you can create the migration batch and begin the initial
synchronization of mailbox contents.

Creating the Migration Batch


Log in to the Exchange Administration Center using your tenant administrator account and navigate to
Recipients, and then Migration. Click the icon to create a new migration batch, selecting “Migrate to
Exchange Online.”
Select Cutover migration from the list of migration types. Enter the email address of an on-premises mailbox
user, and then the username and password for the migration service account you created earlier. Autodiscover
will automatically detect the Outlook Anywhere settings to use. You can also click on More options and verify
that the mailbox permissions type is set to match the access type you granted to the service account earlier
(Figure 2-10).

Figure 2-10: Validating mailbox permissions


Give the migration batch a name. Since there is only one migration batch for a cutover migration you don’t
need to put much thought into a descriptive name. Finally, enter at least one recipient to receive reports for
the migration batch. This can be a recipient in the on-premises Exchange organization or an external email
address if you choose.
Now select whether to start the batch manually or automatically (i.e., immediately) and click New to complete
the wizard. The cutover migration batch has no impact on the end users while it is synchronizing mailbox
contents, but you might want to wait for an evening or weekend before you start it if you have specific timing
for the migration in mind.

Monitor the Migration Progress


You can monitor the progress of the migration by selecting the migration batch and clicking the link to View
details (Figure 2-11).

Figure 2-11: Monitoring the migration batch job
Be patient, as the initial provisioning can take several minutes, and the initial synchronization can take days or
weeks depending on the amount of mailbox data to be migrated. You may notice that not all mailboxes are
processed simultaneously. The limits for simultaneous processing vary, but you can expect 100 mailboxes to
be processed at a time.

Figure 2-12: Viewing details of the migration job


If you do experience failures you may need to review your preparation steps again, such as verifying that there
are no stale delegate or manager entries. At any stage, you can stop and then start the migration batch to
allow it to retry a previously failed item.
You can also check the progress of the migration batch in PowerShell. The first step is to connect a PowerShell
session to Exchange Online, for example by running the Connect-ExchangeOnline cmdlet from the Exchange
Online management module (or a convenient wrapper function such as Connect-EXOnline).

After connecting to Exchange Online with PowerShell, run the following command to see the status of each
mailbox being migrated.
[PS] C:\> Get-MigrationUser | Get-MigrationUserStatistics | Select identity,status,percentage* |
Format-Table –AutoSize

Alternatively, use the Get-MigrationBatch cmdlet to view the progress of the batch as a whole. This command
lists all migration batches and their status.
[PS] C:\> Get-MigrationBatch | Format-Table Identity,Status

Continue to monitor the progress of the migration at regular intervals, or simply wait for the notification email
to arrive to let you know when initial synchronization has completed.
If you look at the list of users in the Office 365 admin portal, or the list of mailboxes in the Exchange
Administration Center, you will notice that it is populated with all of the users you are migrating to Office 365.
At this stage the mailboxes are fully functional and users could log in to services such as Outlook Web App,
but new email is not delivered to those mailboxes until you change the MX records for the domain to
point to Office 365. This gives you total control over when the final cutover will occur. In the meantime, your
users should continue to connect to their on-premises mailboxes and use them as normal.
When the initial synchronization is complete, you’ll receive an email notification to let you know the results.
Now you can begin the final cutover tasks. These include:
• Changing MX records to point to Office 365.
• Configuring Autodiscover to point to Office 365.
• Running a final synchronization of mailboxes. One way to do this is to stop the migration batch in
EAC and then restart it to resume synchronization, as sketched below. Any outstanding transactions will
then be processed. You can then delete the migration batch. After the migration batch is removed, the on-
premises mailboxes are no longer synchronized with Exchange Online.
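Here is a minimal PowerShell sketch of the final synchronization and cleanup, assuming the batch was created
with the name "CutoverBatch" (substitute your own batch name):
# Stop and restart the batch to force a final incremental synchronization
[PS] C:\> Stop-MigrationBatch -Identity "CutoverBatch"
[PS] C:\> Start-MigrationBatch -Identity "CutoverBatch"

# When you are satisfied that the final synchronization has completed, remove the batch
[PS] C:\> Remove-MigrationBatch -Identity "CutoverBatch"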

You will need to communicate this change to your end users and advise them of what they need to do (e.g.,
log out of Outlook for the cutover period) as well as any user experiences that will change (e.g., the Office 365
OWA interface looks very different to Exchange 2010 and earlier). Finally, you need to provide users with their
login credentials for their new Office 365 accounts. Like anything to do with credentials, this should be done
in a secure manner.

Note: Providing users with their Office 365 credentials before the cutover time allows them to log in to the
Office 365 portal and set their password to one that they will remember. However, if you do this make sure
you clearly communicate to your users that they should not use their Office 365 mailbox to send or receive
email yet.
The scheduling of the cutover will really depend on the size and complexity of your business. Whether you
choose to do it during a business day, one evening during the week, or over the weekend, if planning and
communication are in good order the cutover should go well. Because multiple DNS changes are required to
enable the switchover to Office 365, it is strongly recommended to review your existing DNS records first
(Figure 2-13).

Figure 2-13: Reviewing DNS records
The value for TTL (time to live) is important here. This indicates to DNS servers and clients how long they
should cache the value of the DNS record. Until the TTL has expired, a DNS server will continue to answer
queries for your records with the old, cached value instead of the new value. In other words, the TTL is
approximately how long you can expect to wait before a change to your Autodiscover or MX records takes
effect, which means new email will still be delivered to your on-premises Exchange server for that period.

Real World: A common approach is to lower the TTL value to something very short, such as 5 minutes
(300 seconds). If this is done a day or two before the planned cutover time, then it improves the
likelihood that the change to the Autodiscover and MX records will take effect much quicker.
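If the Windows DnsClient module is available, a quick way to check the current TTL on the records that matter
is with the Resolve-DnsName cmdlet. This sketch uses the sample domain from this scenario and assumes that
Autodiscover is published as a CNAME record; adjust the record type if your zone uses an A record instead.
[PS] C:\> Resolve-DnsName -Name notrealuniversity.com -Type MX | Format-Table Name, Type, TTL, NameExchange
[PS] C:\> Resolve-DnsName -Name autodiscover.notrealuniversity.com -Type CNAME | Format-Table Name, Type, TTL, NameHost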
For details on the steps required to update MX and Autodiscover records in DNS, refer to the section earlier in
this chapter.

Configuring Client Software and Devices


It’s time to connect your end users to their new Office 365 mailboxes. Remember that the user accounts in
Office 365 are separate from their on-premises Active Directory user accounts, and will have different passwords.
You must distribute the passwords to your end users so that they can configure their software and devices to
connect to their Office 365 mailboxes. For Outlook users, when a new profile is created Autodiscover will
direct them to Exchange Online and they will be prompted to enter their Office 365 credentials. Ticking the
box to remember the credentials will improve their user experience by not constantly re-prompting them for
their credentials with every new connection (Figure 2-14).

Figure 2-14: Entering credentials for Outlook
You can read more about configuring and managing clients and devices in the Clients chapter of the main
book.

Removing the Migration Batch


When the cutover migration is completed, you can remove the migration batch job. Before you remove the
job, you should ensure that all your end users are able to connect to Office 365, or at least ensure that none
of them have connected to the on-premises Exchange server since the last incremental synchronization
occurred.
One way to verify this is to use the Get-MailboxStatistics cmdlet in the Exchange Management Shell. For
example:
[PS] C:\> Get-MailboxStatistics | select DisplayName,LastLogonTime,LastLoggedOnUserAccount | Sort
LastLogonTime

The output (Figure 2-15) will show the last logon time for each mailbox, sorted in order of the last logon
timestamp, and will also display the name of the user account that was responsible for the last logon.

Figure 2-15: Viewing the output of Get-MailboxStatistics
In the example above you can see that “Alan Reid” has logged on to his mailbox more recently than the
migration service account. This indicates that his Outlook or mobile device may still be configured to connect
to the on-premises server. It may also mean that there are unsynchronized email messages in his mailbox, for
example new sent items that have not yet been copied to Office 365. That is an example of something that
should be followed up before removing the migration batch.
Another method to check that the migration batch can be removed is to check the last sync time for the
mailboxes. You can check this in the Exchange Administration Center for Office 365 in the Recipients section
under Migration. Review the last synced time for the migration batch (Figure 2-16) to confirm that at least
one incremental synchronization has occurred since you changed the MX records to point to Office 365 and
reconfigured all the end users’ Outlook profiles and mobile devices.

Figure 2-16: Checking the status of a migration batch

If you need to manually force another incremental synchronization click Resume. When you’re ready to
remove the migration batch, click Delete.

Completing the Migration


After the migration batch job has moved all the user mailboxes to Office 365, there are some further steps to
complete the cutover migration project (a short PowerShell sketch for the first two follows the list):
• Convert shared mailboxes
• Remove restrictions from distribution groups
• Assign licenses to Office 365 users
• Decommission on-premises servers
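As a sketch of the first two tasks (the mailbox and group names are placeholders), commands like these can be
run against Exchange Online:
# Convert a migrated mailbox that was used as a shared mailbox on-premises back to a shared mailbox
[PS] C:\> Set-Mailbox -Identity "Payroll" -Type Shared

# Allow external senders to email a distribution group again after migration
[PS] C:\> Set-DistributionGroup -Identity "All Staff" -RequireSenderAuthenticationEnabled $false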

Running a Staged Migration


A staged migration migrates all of the existing user and resource mailboxes from an on-premises Exchange
2003 or 2007 organization into Exchange Online. Staged migrations can be performed for organizations of
any size, allowing them to migrate mailboxes to Office 365 over a period of several weeks or months, with the
eventual result of all mailboxes being hosted in Office 365.
An overview of the staged migration process, along with the pros and cons of this migration method, is
described earlier. Before you begin a staged migration, read through the overview and the complete
migration process so that you understand the steps involved and any limitations, and can identify any
areas in which a staged migration might be unsuitable for your migration project.

Preparing for a Staged Migration


Let’s take a closer look at the steps for a staged migration using the scenario of an Exchange 2007 on-
premises organization. We'll begin with a look at how to prepare an Office 365 tenant for a staged migration.

Adding Domain Names to Office 365


The process of adding domain names to Office 365 and testing that everything works with the DNS
configuration is explained earlier in the Cutover Migration section.

Configure a Migration Service Account and Permissions


For staged migrations, the Office 365 migration service needs a set of user credentials to connect to your on-
premises organization and access mailboxes. The steps for creating a migration service account are covered
earlier.

Preparing Recipients
Many preparation and review tasks are recommended for the recipients in your on-premises organization
before beginning the migration:
• Reduce the migration load by reviewing large mailboxes, and large mailbox items
• Review shared mailboxes
• Review the sender authentication setting for mail-enabled groups
• Disable unified messaging

Implementing Directory Synchronization


A key requirement of the staged migration approach is to implement directory synchronization. This will
populate the Office 365 tenant with users, groups and contacts based on the objects that exist in the on-
premises Active Directory.

Directory synchronization is an important part of almost all Office 365 migration scenarios, not just the staged
migration approach. After you implement directory synchronization for your organization you can continue
with the next steps in the staged migration process.

Starting the Staged Migration


With all the preparation tasks complete you can create the migration batch and begin migrating mailboxes to
Office 365.

Create a Migration Batch


Migration batches for a staged migration are created using a CSV file. The CSV file contains information about
the mailboxes to be migrated. The minimum detail required is the email address of the mailbox. The password
can also be included, and password change can be forced at first logon. However, if you have enabled
password synchronization when you deployed the directory synchronization tool then the password fields are
not required in the CSV.
Here is an example of a CSV file for a small migration batch. The CSV file for a migration batch can have up to
2000 rows. You can run your migration in very large batches like that if you like, as long as you can manage
the logistics in terms of reconfiguring desktops and mobile devices for that many users. A more sensible
approach is to use smaller batches, especially when you consider that Office 365 will perform up to 20
concurrent migrations at a time anyway.
EmailAddress
Alan.Reid@Office365bootcamp.net
Alannah.Shaw@Office365bootcamp.net
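Rather than building the CSV file by hand, you can generate it from the on-premises directory. This is a sketch
run in the on-premises Exchange Management Shell; the organizational unit is hypothetical, so adjust the filter
to match how you want to batch users.
# Export the primary SMTP address of a set of mailboxes to a migration CSV file
[PS] C:\> Get-Mailbox -OrganizationalUnit "office365bootcamp.net/Students" -ResultSize Unlimited |
   Select-Object @{Name="EmailAddress";Expression={$_.PrimarySmtpAddress}} |
   Export-Csv -Path .\MigrationBatch1.csv -NoTypeInformation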

Log in to the Office 365 Exchange admin center and navigate to Recipients → Migration. Click the + (plus)
button to create a new migration batch and choose Migrate to Exchange Online.
Select Staged migration from the list of migration types. Click Browse and choose the CSV file you created
earlier (Figure 2-17). The New Migration Batch wizard will calculate the number of mailboxes in the CSV file.

Figure 2-17: Specifying the CSV file


Enter the credentials for the service account that you set up earlier with permissions to access the on-
premises mailboxes. Office 365 will use Autodiscover to determine the correct Outlook Anywhere settings for
the migration endpoint that it will create for the migration. If this fails, you will be prompted to manually enter
the server name and RPC proxy server name. The best approach at this point is to use the Microsoft Remote
Connectivity Analyzer to test and troubleshoot Outlook Anywhere access to your Exchange server. If the
Autodiscover process is successful the Exchange server name and RPC proxy server will be automatically
displayed for you, and you can continue the New Migration Batch wizard.
Next, give the migration batch a name. The name has no real impact on the technical success or failure of the
migration batch, and is only useful for your own ease of running the various migration batches for your

organization. So simply choose a name that makes sense to you. Finally, choose a recipient for the migration
batch reports to be sent to, and choose whether to manually or automatically start the migration batch.
Staged migration batches cause new email for the mailboxes to be redirected to the Exchange Online
mailbox, which means users should begin using their Exchange Online mailbox shortly after the migration
batch starts. The steps to perform after a staged migration batch completes are explained a little later in this
section.

Real World: While the migration batch is running users should not access their on-premises mailbox to
send or receive email. However, because the mailbox hasn’t been converted to a mail-enabled user yet,
Autodiscover will still configure Outlook or a mobile device to connect to the on-premises mailbox. To
access the Exchange Online mailbox during the migration Outlook or the mobile device can be
manually configured to point to the server “outlook.office365.com”, or the user can simply use Outlook
Web App by browsing to https://outlook.office365.com/owa. While a staged migration batch is running
it is a good opportunity to visit the computers of those users in the batch and upgrade Office as well.

Monitoring the Migration Batch Progress


After the migration batch starts you can monitor the progress in the Exchange admin center. Select the
migration batch and click View Details. You will be able to view information such as the status of each
mailbox within the batch, the number of items synced, and the number of items skipped. A link to see the
Skipped item details is available so that you can get information for troubleshooting those items.

Figure 2-18: Viewing the progress of a migration batch


The completion time for the migration batch will depend on the amount of data that needs to be
synchronized. If the batch includes more than 100 mailboxes, you may notice that only 100 mailboxes are
processed simultaneously. This limit may vary though, depending on the current load experienced within the
service. When a staged migration batch completes it has a status of “Synced”, and an email notification is sent
to the recipient you specified earlier.
If the staged migration batch fails for some reason it puts the mailbox in an unfortunate state where new
email is being forwarded to the Exchange Online mailbox, but not all email was copied from the on-premises
mailbox. In this situation, it is not as simple as deleting the Exchange Online mailbox and trying again, nor can
another migration batch be run for the same mailbox. Manual remediation of the missing items is the only
available resolution. This situation is one of the reasons that a hybrid migration or a third-party migration tool
is often a better approach.

Converting Mailboxes to Mail-Enabled Users
Before a staged migration batch runs the mailboxes that exist on the on-premises Exchange server are just
regular mailbox users. Email sent to that mailbox’s email address is delivered to the on-premises mailbox, and
that is where the user connects with their Outlook client.
[PS] C:\> Get-Recipient alan.reid | fl name,recipienttype,*external*

Name : Alan.Reid
RecipientType : UserMailbox
ExternalEmailAddress :

When the staged migration batch is started the ExternalEmailAddress attribute of the mailbox is updated with
the email address of the Exchange Online mailbox. This happens generally within the first few minutes of the
migration batch running.
[PS] C:\> Get-Recipient alan.reid | fl name,recipienttype,*external*

Name : Alan.Reid
RecipientType : UserMailbox
ExternalEmailAddress : SMTP:Alan.Reid@office365bootcamp.onmicrosoft.com

This causes all email that is sent to the mailbox’s email address to be forwarded to the Exchange Online
mailbox. Although the end user can still connect to their on-premises mailbox they will not see any new email,
and any email they send may not be migrated to Office 365 if the migration batch has already finished
processing their Sent Items folder. A staged migration batch does not do multiple synchronization passes like
a cutover migration batch does, and doesn't handle new items that appear in folders that have already been
processed.
When all existing mailbox contents have been migrated to Office 365, and new email is being delivered to the
Exchange Online mailbox, the user also needs to update their Outlook profile to connect to Office 365 instead
of the on-premises Exchange server.
During a staged migration, the Autodiscover records in DNS still point to the on-premises Exchange server,
except when the on-premises environment is running Exchange 2003 which does not have an Autodiscover
service. If the user were to create a new Outlook profile, Autodiscover would still configure them to connect to
the on-premises Exchange server. To get Autodiscover to tell Outlook to connect to Office 365 instead the
mailbox user needs to be converted to a mail user.
Microsoft provides two sets of scripts to convert Exchange 2007 mailboxes to mail-enabled users and to
convert Exchange 2003 mailboxes to mail-enabled users. In each case the process is basically the same. First a
PowerShell script is run that exports some user information from Office 365 based on the list of users in the
CSV file you used to create the migration batch. Secondly, a script is run that modifies the on-premises
mailbox users.
For Exchange 2003 the second step uses a VBScript, as there was no PowerShell available for Exchange 2003.
For Exchange 2007 the second step uses a PowerShell script. Let’s look at the process for an Exchange 2007
scenario.
First, download the scripts to the Exchange server. You may need to unzip the files and rename them to a .ps1
file extension. You should also have the migration batch CSV file in the same folder (Figure 2-19). The file
must be named migration.csv for the script to run successfully.

Figure 2-19: Preparing files for the migration
Open a PowerShell console and run the ExportO365UserInfo.ps1 script.
[PS] C:\Admin> .\ExportO365UserInfo.ps1

You will be prompted to enter your Office 365 administrator credentials. After authenticating the script will
collect the required information and output it to a file named cloud.csv.
The next step is to run the Exchange2007MBtoMEU.ps1 script. This script takes the information in cloud.csv
and uses it to update attributes on the on-premises user objects, and then converts them from mailbox users
to mail users.
The script uses Exchange cmdlets, so it needs to be run from an Exchange Management Shell console. The
name of a domain controller must also be specified.

Warning: Do not run this script until the mailbox has been successfully migrated to Exchange Online.

[PS] C:\Admin> .\Exchange2007MBtoMEU.ps1 -DomainController OBCDC1

At this stage, the user can no longer connect to the on-premises mailbox with Outlook because it has been
removed. The Outlook profile must be recreated so that it is configured to connect to Office 365.
When the new Outlook profile is created the user will be prompted for their Office 365 credentials to connect
to Exchange Online. With password synchronization enabled on the directory synchronization server the
credentials the user enters will be the same as their on-premises credentials – their UPN/email address, and
their password.

Note: To make it easier for end users to remember their username for logging in to Office 365 services
it is recommended to match their UPN with their email address.
Based on the targetAddress attribute of the mail user, which refers to the service domain for the Office 365
tenant (e.g., “alan.reid@office365bootcamp.onmicrosoft.com”), Autodiscover will reconfigure the Outlook
profile to connect to Exchange Online. You can verify that this has occurred by looking at the server name for
the profile. Instead of your previous Exchange server name you will now see a string of characters that looks
like an email address (Figure 2-20).

Figure 2-20: Checking the user profile
One more restart of Outlook may be necessary to complete the reconfiguration. If the Outlook profile is not
recreated Outlook will display error messages when it launches. To fix the problem, the easiest solution is to
recreate the user profile as described earlier.

Removing the Migration Batch


After a staged migration batch reaches the status of “Synced”, and you no longer need it for reporting
purposes, you can remove the migration batch. Select the migration batch and click Delete.

Completing the Migration


After all the migration batches have been completed you can perform the post-migration tasks. Refer to
sections earlier in this chapter for more information on these tasks.
• Change the MX records in DNS to direct email to the Exchange Online mailboxes.
• Configure Autodiscover DNS records and the SCP.
• Configure other DNS records required for service such as Skype for Business.
• Convert shared mailboxes.
• Remove restrictions from distribution groups.
• Assign licenses to Office 365 users.
• Decommission on-premises servers.

Chapter 3: Managing Office 365 Addressing
All mail-enabled recipients must have a routable email address to allow them to receive email. But unless
users know the email address of an intended recipient, they cannot send messages to them. Inside cloud-only
deployments, Azure Active Directory obviously plays a huge role for Office 365 tenants as the source for
authentication and repository for information about objects, including mail-enabled recipients. The situation
is more complicated with hybrid deployments where the on-premises Active Directory is authoritative and the
source for authentication when Active Directory Federation Services is deployed.
Users access directory information through address books, otherwise known as address lists. You do not have
to do anything to provide address books to users as Exchange Online will generate all the necessary data in
the right format at the right time. But you can exert a certain amount of control over what is in the address
books through the features discussed in this section:
• Email Address Policies.
• Address Lists.
• Offline Address Book (OAB).
• Hierarchical Address Book (HAB).
• Address Book Policies.
These features are supported in both Office 365 and on-premises Exchange. If you are involved in a hybrid
deployment, to avoid user confusion, you must make sure that the features are used in the same way on both
sides.

Email Address Policies


On-premises versions of Exchange use email address policies to control the format of the email addresses
assigned to mail-enabled objects like mailboxes and distribution groups. Typically, these policies are used to
ensure that objects have consistent email addresses. A common example is when users are assigned SMTP
addresses in the “firstname.lastname@domain” format, such as Tony.Redmond@Office365ITPros.com.
Exchange Online does not allow tenants to create and apply email address policies to mail-enabled objects
created in the cloud. The New-EmailAddressPolicy cmdlet and its companion cmdlets, used to control email
addresses on-premises, are not generally available to Exchange Online administrators (the exception for
Office 365 Groups is covered below). This means that administrators can assign whatever email addresses they
like to new mailboxes and other mail-enabled objects created inside Office 365. The situation is different in
hybrid deployments, where the on-premises address policies in force are applied to new objects as they are
created and before the objects are synchronized to Office 365.
As with on-premises Exchange, a default email address policy exists within Exchange Online to act as a
backstop to ensure that mail-enabled objects receive at least one valid SMTP email address when they are
created. Normally, administrators add other addresses when they create mailboxes, groups, and other mail-
enabled objects.
Here is how to view the default email address policy using the Get-EmailAddressPolicy cmdlet together with
some of the more important items that you’ll see in the output.
[PS] C:\> Get-EmailAddressPolicy –Identity 'Default Policy' | Format-List

RecipientFilter : Alias -ne $null
LdapRecipientFilter : (mailNickname=*)
LastUpdatedRecipientFilter :
RecipientFilterApplied : False
IncludedRecipients : AllRecipients
RecipientContainer :
RecipientFilterType : Precanned
Priority : Lowest
EnabledPrimarySMTPAddressTemplate : @office365itpros.onmicrosoft.com
EnabledEmailAddressTemplates : {SMTP:@office365itpros.onmicrosoft.com}
DisabledEmailAddressTemplates : {}
HasEmailAddressSetting : True
HasMailboxManagerSetting : False
NonAuthoritativeDomains : {}
AdminDescription :
ManagedByFilter :
ManagedByLdapFilter :
AdminDisplayName :
ExchangeVersion : 0.1 (8.0.535.0)
Name : Default Policy

This policy makes sure that every mail-enabled object has a valid SMTP address. Its priority is “lowest”, which
indicates that it is applied last. The policy is valid for all object types (the IncludedRecipients setting is
AllRecipients) providing they have an email alias (RecipientFilter looks for any object with a non-blank alias).
Finally, the EnabledEmailAddressTemplates setting determines the email domain that will be combined with
the object alias to create a valid SMTP address. As you can see, the domain used is the tenant’s sub-domain of
“onmicrosoft.com”. All of this means that if the alias is “Test”, the resulting email address will be
Test@office365itpros.onmicrosoft.com.

Using Email Address Policies with Office 365 Groups


Exchange Online does not support the application of email address policies in the same way as on-premises.
The exception is for Office 365 Groups, where a support article describes how to deploy email address policies
to support the situation where you would like groups created by a certain set of users to have a specific form
of email address. The scenario described is a university or college where students and faculty are both allowed
to create groups, providing the right email address is assigned, which is where email address policies come in.
The same approach could be taken in other corporate structures. For example, you might want groups
created in a certain geography or by a specific department to have a particular form of email address. The
basic idea behind “multi-domain support for groups” is that you can create an email address policy that is
triggered when a user that comes within the scope of a recipient filter creates a new group. Here’s an
example:
[PS] C:\> New-EmailAddressPolicy –Name MarketingGroups –IncludeUnifiedGroupRecipients
–EnabledEmailAddressTemplates "SMTP:@Marketing.Office365ITPros.com", "smtp:@anotherdomain.com" -
ManagedByFilter {Department –eq "Marketing"} –Priority 1

This command creates a new email address policy to ensure that any group created by a user covered by the
filter specified (Department equals “Marketing”) gets two email addresses as defined by the
EnabledEmailAddressTemplates property. Assuming that the group name is Football, the addresses are:
• Football@marketing.Office365ITPros.com. This is the primary address (we know this because SMTP is
uppercase in the template address) and is stamped onto outbound messages from the group.
• Football@anotherdomain.com (secondary as indicated by the lowercase smtp prefix).
The domains defined in the EnabledEmailAddressTemplates setting must belong to the tenant and be
configured as an accepted domain. In addition to the addresses assigned by policy, Exchange Online also
gives the group an address belonging to the tenant’s service domain (tenant.onmicrosoft.com).

The email address policy is given priority 1, meaning that it is applied first before other policies. You can have
multiple policies, each with a different priority from 1 down, each applying to a separate set of users (for
example, different departments). If you want to have a catch-all policy to assign groups with an address from
a specific domain owned by the tenant, create a policy without a filter. For example:
[PS] C:\> New-EmailAddressPolicy -Name AllOtherGroups -IncludeUnifiedGroupRecipients
-EnabledPrimarySMTPAddressTemplate "SMTP:@Office365ITPros.com" -Priority 3

Make sure that the catch-all policy has a lower priority than any policy that contains a filter. In this case, we
only want groups to have a primary SMTP address, so we can use the EnabledPrimarySMTPAddressTemplate
parameter instead of EnabledEmailAddressTemplates as used above to assign multiple SMTP addresses to a
group.
If you introduce a new vanity domain to your tenant, you can update the email address policies with the Set-
EmailAddressPolicy cmdlet. For example:
[PS] C:\> Set-EmailAddressPolicy -Identity AllOtherGroups -EnabledPrimarySMTPAddressTemplate
"@Office365ITPros.com"

Creating new email address policies does not affect the SMTP addresses for groups that already exist. In other
words, there’s no way to retrospectively update Office 365 Groups with email addresses except by doing so
individually or by writing some PowerShell to find all the groups that you want to process and then add a new
address to them. In the same way, removing an email address policy with the Remove-EmailAddressPolicy
cmdlet leaves addresses that are stamped on groups in place.
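For example, here is a sketch (using the hypothetical marketing sub-domain from the earlier policy example)
that finds existing groups without an address in that domain and adds one based on each group's alias:
# Add a marketing sub-domain address to groups that don't already have one
[PS] C:\> Get-UnifiedGroup -ResultSize Unlimited |
   Where-Object { -not ($_.EmailAddresses -like "smtp:*@marketing.office365itpros.com") } |
   ForEach-Object {
      Set-UnifiedGroup -Identity $_.Alias -EmailAddresses @{Add="$($_.Alias)@marketing.office365itpros.com"}
   }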

SMTP Addresses and Mail-Enabled Objects


Like Exchange on-premises, a mail-enabled object owned by Exchange Online can have multiple email
addresses, which are held in the object’s EmailAddresses property. All of the email addresses are routable for
transport purposes and can be used to send messages to the object. However, one of the addresses is always
set as primary (also known as the reply address because this is the address transmitted in message headers
that recipients can use to reply to messages). The primary email address is usually the same as the User
Principal Name used by a person to sign into Office 365. The tenant's default domain is selected for new user
accounts created through the Office 365 Admin Center, but you can override the suggestion and use any of
the domains registered for the tenant.
When you create or edit a mail-enabled cloud object, you can add an email address for any of the domains
owned by the tenant in whatever format you choose. This means that some work to apply consistent address
policies to objects across the whole of the tenant might have to be done manually. PowerShell scripts are the
best approach if you believe that it’s important to have consistency in email addresses and want to update
addresses to follow a certain pattern.
For example, let's assume that you have a cloud-only tenant and you want to be sure that all mailboxes can
be addressed using the "firstname.lastname@domain" convention that is commonly used for SMTP-based
email systems. The steps involved are:
• Scan for mailboxes.
• Check whether the primary SMTP address is in the desired format.
• If not, build the required address and use the Set-Mailbox cmdlet to update the EmailAddresses
property with the new address (a sketch combining these steps appears at the end of this discussion).
Three approaches can be used for the last step. First, you can simply add a new address in the desired format
to the EmailAddresses property (it is a multi-valued attribute). For example, this snippet adds a new SMTP
address to the set:

[PS] C:\> Set-Mailbox –Identity TRedmond –EmailAddresses
@{Add="Tony.Redmond@Office365ITPros.com"}

However, you might want to ensure that the primary SMTP address is set to the address in the chosen format.
In this case, you might rewrite the entire set of addresses and indicate which address is primary. For example,
this snippet overwrites whatever addresses exist for a mailbox and adds three new addresses. The first one
(prefixed with SMTP – capitalization is important) becomes the primary email address.
[PS] C:\> Set-Mailbox –Identity TRedmond –EmailAddresses
SMTP:Tony.Redmond@Office365ITPros.com,tony@Office365ITPros.com,tredmond@Office365ITPros.com

The third method of attacking the problem is to update the WindowsEmailAddress attribute. This will update
the primary SMTP address for the object with the address passed to WindowsEmailAddress and retain the
previous primary SMTP address in the list of proxy addresses held in the EmailAddresses attribute. For
example, these PowerShell commands update the WindowsEmailAddress attribute for the mailbox belonging
to Ben Owens. We can see the value for PrimarySMTPAddress before and after the update. The old primary
address is now in the list of proxy addresses, which is what we want because mail can continue to be delivered
to the old address.
[PS] C:\> Get-Mailbox –Identity 'Ben Owens' |
Format-List PrimarySMTPAddress, WindowsEmailAddress

PrimarySmtpAddress  : Ben.Owens@Office365ITPros.com
WindowsEmailAddress : Ben.Owens@Office365ITPros.com

[PS] C:\> Set-Mailbox –Identity 'Ben Owens' –WindowsEmailAddress 'B.J.Owens@Office365ITPros.com'


[PS] C:\> Get-Mailbox –Identity 'Ben Owens' |
Format-List PrimarySMTPAddress, WindowsEmailAddress, EmailAddresses

PrimarySmtpAddress : B.J.Owens@Office365ITPros.com
WindowsEmailAddress : B.J.Owens@Office365ITPros.com
EmailAddresses : {SMTP:B.J.Owens@Office365ITPros.com, smtp:Ben.Owens@Office365ITPros.com,
SPO:SPO_4e030ae4-12bf-49d3-951b-dd4c9158ca8f@SPO_b662313f-14fc-43a2-9a7a-d2e27f4f3478,
SIP:Ben.Owens@Office365ITPros.com...}

You can’t set the WindowsEmailAddress property for an Office 365 Group because it is not supported for these
objects. Use the Set-UnifiedGroup -PrimarySmtpAddress command instead if you need to change the primary
SMTP address for an Office 365 Group.
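To pull the steps listed earlier together, here is a minimal sketch. It assumes a cloud-only tenant, that the
FirstName and LastName properties are populated for every user mailbox, and that the chosen domain is
registered as an accepted domain; names containing spaces or missing values would need extra handling.
# Stamp firstname.lastname@domain as the primary SMTP address on every user mailbox
$Domain = "Office365ITPros.com"
Get-User -RecipientTypeDetails UserMailbox -ResultSize Unlimited | ForEach-Object {
   $NewAddress = ("{0}.{1}@{2}" -f $_.FirstName, $_.LastName, $Domain)
   If ($_.WindowsEmailAddress -ne $NewAddress) {
      # Updating WindowsEmailAddress sets the new primary address and keeps the old one as a proxy
      Set-Mailbox -Identity $_.UserPrincipalName -WindowsEmailAddress $NewAddress
   }
}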
Proxy addresses exist for many reasons. Some are used by applications, some to route email, and some to
preserve old email addresses for previous email domains. For instance, if your company has been through a
merger or acquisition, it’s likely that a new email domain is created for the new company. To make sure that
the addresses for the old email domain can still be used to deliver messages to recipients, they are retained as
proxy addresses. The usual rule is that the older a mail-enabled object is, the more likely it is to have a large
number of proxy addresses. The documented maximum number of proxy addresses supported for a mail-
enabled object in Exchange Online is 400. Although you can exceed this number, it is unwise to go past 400 as
you will eventually hit a hard limit. The exact limit depends on the size of the proxy addresses. Because a
similar restriction does not exist for on-premises mail-enabled objects, this raises the issue of what happens
when on-premises mail-enabled objects are synchronized to Office 365. The answer is that the objects might
not synchronize properly. With this in mind, it is a good idea to check whether any on-premises mail-enabled
objects exist that might need to be adjusted. You can identify potentially problematic objects with the
following command:
[PS] C:\> Get-Recipient | Where {($_.EmailAddresses).count -gt 400} | Format-Table DisplayName,
Alias, RecipientType

It is quite common to find that mailboxes and other mail-enabled objects have multiple email addresses,
especially when the tenant owns multiple domains. An object might have twenty or more valid email

addresses, in which case it's probably easier to input a new address via the Office 365 Admin Center by
editing the email properties of an account as shown in Figure 3-1. All of the secondary addresses are equally
valid in terms of routing messages to the right object. The address on any message arriving into Exchange
Online is validated against Azure Active Directory to determine whether it can be delivered and the scan
includes all secondary addresses. The primary email address and username are the same, which is the best
practice for Office 365 deployments. The primary address is the one used as the reply-to address for
messages sent by the user.

Figure 3-1: Adding an extra SMTP address to a mailbox


You can also add new email addresses to a mailbox through the Exchange Administration Center or PowerShell.
To set a primary address with PowerShell, capitalize the SMTP prefix as shown in the example above. Chapter
5 explains some additional considerations that should be taken into account when you update addresses in a
hybrid environment.

The SPO Proxy Address


When you review the set of addresses present for a mailbox, you might see an address of the type “SPO” (as
in the case of the mailbox of Ben Owens described above). This is an address that is automatically created by
SharePoint Online when the mailbox participates in sharing operations involving SharePoint Online or
OneDrive for Business sites, including those used with Office 365 Groups. The SPO addresses are something
like this:
SPO_cc191cff-670a-4740-8458-e6067537c747@SPO_b662313f-14fc-43a2-9a7a-d2e27f4f3478
The SPO proxy address for a user is generated using the text string “SPO_”, the GUID of the user's SharePoint
Online MySite, the @ sign, and the Tenant GUID. Although an address like this means nothing to a human, it
is sufficient to allow SharePoint Online to use Exchange Web Services (EWS) to impersonate the user to send
messages on their behalf. For instance, users who share items from SharePoint Online or OneDrive for

Business sites generate sharing notifications that are sent to the people with whom the content is shared. The
notifications can be found in the Sent Items folder of the sharer’s mailbox.
Using EWS in this manner means that the notifications are less likely to be picked up as spam and suppressed
by junk mail filters. The reason why a special address of such an odd format is used is that it makes it less
likely that regular mailbox maintenance will ever attempt to interact or interfere with the addresses, which are,
after all, owned by SharePoint Online. For more information, see Microsoft KB3134824.
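To see whether a mailbox carries an SPO proxy address, you can filter its EmailAddresses property. A simple
check (using the mailbox from the example above) looks like this:
[PS] C:\> (Get-Mailbox -Identity 'Ben Owens').EmailAddresses | Where-Object {$_ -like "SPO:*"}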

Address Lists
The Global Address List (GAL) is the best-known example of an address list and is the one that receives most
use because it includes all mail-enabled recipients. Exchange Online provides other standard address lists:
• All rooms: All room mailboxes.
• All users: All mailboxes – including user, shared, and resource (room and equipment) mailboxes (but not
Office 365 group mailboxes).
• All distribution lists: All standard and dynamic distribution groups, including Office 365 Groups.
• All contacts: All mail-enabled contacts.
• All groups: All Office 365 groups. The recipient filter still refers to these objects as “group mailboxes”.
• Offline Global Address List: The GAL as downloaded and made available for offline use by Outlook
clients. Mail-enabled public folders are included in this address list.
• Public Folders (an address list generated on-premises when in a hybrid deployment).
The “Offline Global Address list” address list is not used in current Exchange on-premises deployments.
An address list is created by applying an OPATH query to find objects in Azure Active Directory. The OPATH
query specifies the properties that are used to locate the desired objects. You can see the query used for an
address list by running the Get-AddressList cmdlet. For example, here is the query used for the Offline Global
Address List. An account needs to be assigned the Address Lists RBAC role before you can run any of the
Address List cmdlets. We’ll cover how an account is assigned this role shortly.
[PS] C:\> Get-AddressList -Identity "Offline Global Address List"

RecipientFilter : ((Alias -ne $null) -and (((((((((((ObjectClass -eq 'user') -or (ObjectClass
-eq 'contact'))) -or (ObjectClass -eq 'msExchSystemMailbox'))) -or (ObjectClass
-eq 'msExchDynamicDistributionList'))) -or (ObjectClass -eq 'group'))) -or (ObjectClass
-eq 'publicFolder'))))

This query finds all users, contacts, system mailboxes, distribution groups, Office 365 groups, and public
folders.
Users access address lists through the address book option in Outlook or the Directory section of the OWA
People option. Mobile clients do not typically offer an option to browse the directory through address lists as
the nature of the interaction between a mobile client and the server is designed to minimize communication.
Instead, mobile clients are able to search the directory for a specific recipient.
Often the need arises to create a separate address list, perhaps to identify recipients in a specific business unit
or location. Exchange Online supports up to 1,000 address lists per tenant but does not include functionality in
the EAC to manage address lists, so the work to create a new address list must be done with PowerShell.

Gaining Permission to Work with Address Lists


Like everything else in Exchange Online, RBAC controls access to the cmdlets that control the ability to work
with address lists. The “Address Lists” management role is used to grant access. However, this role is not
assigned by default to any administrative role group, so the first task is to include the role in a suitable role
group. Follow these steps:
1. Open the EAC and click Permissions.
2. Click Admin roles.
3. Select the role group that you want to amend. “Organization Management” is a good choice as it is
the usual role group assigned to tenant administrators. Click the Pencil icon to edit the role group.
4. Add the “Address Lists” role to the set of roles included in the Organization Management role group
(Figure 3-2) and then save the change.
5. Alternatively, you can create a new management role group that includes the Address Lists role and
use the new group to assign permission to accounts to work with address lists.

Figure 3-2: Adding the Address Lists role to the Organization Management role group
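If you prefer to make the assignment with PowerShell instead of the EAC, a role assignment along these lines should produce the same result (shown here against the default Organization Management role group):
[PS] C:\> New-ManagementRoleAssignment -Role "Address Lists" -SecurityGroup "Organization Management"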
Now start PowerShell and connect your session to Office 365. When you connect, RBAC will load all the
cmdlets assigned to the role groups to which you belong into the PowerShell session, including the Address Lists
cmdlets (New-AddressList, Get-AddressList, and Remove-AddressList). If these cmdlets are not available in your
session, the change to add the Address Lists role to the role group that you updated did not work.
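A quick way to confirm that the cmdlets loaded is to look for them with Get-Command:
[PS] C:\> Get-Command -Name *-AddressList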

Creating Address Lists


The next task is to create the new address list by running the New-AddressList cmdlet. You need to provide
two pieces of data – the name of the address list as seen by users and the recipient filter to be used by
Exchange Online to extract items from the directory to display in the list. The example shown below is a very
simple filter that extracts user mailboxes whose “StateOrProvince” property is set to “Ireland”.
[PS] C:\> New-AddressList -Name 'Ireland Users' -RecipientFilter
{((RecipientType -eq 'UserMailbox') -and (StateOrProvince -eq 'Ireland'))}

Microsoft recommends that the fifteen custom attributes provided by Exchange Online for mail-enabled
objects are used as the basis for recipient filters because they are consistent across all recipient types, but you
can use any recipient filter you like as long as it works and finds the objects that you want to include in the
address list. Note that not all properties are filterable.
After a couple of minutes, you should be able to sign into OWA, access People, and see the new address list
under "Directory" – better still, if the recipient filter works and the right information has been populated
for the objects you want to display, you will see a populated list. The last point is important – address lists
can only work if Exchange Online can locate objects by using the filter criteria you specify to execute the
query against Azure Active Directory. If an object is missing some value, then it won’t be found. For instance, if
a user doesn’t have “Ireland” in their StateOrProvince property, then they won’t appear in the “Ireland Users”
view (queries executed by a RecipientFilter are not case sensitive).
Because Exchange Online does not support the Update-AddressList cmdlet, objects that already exist might
not show up in the new address list. The reason is that address list membership is evaluated when an object
changes, so any recipients that should be in the list but already exist in the directory will not appear until the next
time that their object is updated (by EAC, Office 365 Admin, or PowerShell). Microsoft says that this is “by
design” and it’s understandable in some respects because Microsoft clearly wants to avoid processing
operations that could be resource-intensive and impact multiple tenants. Address list updates, especially for
large tenants, fall into this category. If you want to be sure that a new address list is correctly populated, you
have to force this by updating all of the objects that should be in the list. Once again, this has to be done in
PowerShell.
For example, these steps use the recipient filter for the address list that we just created to locate all of the
users that come under the scope of the recipient filter and then update a custom attribute for each mailbox to
force Exchange Online to update the mailbox's address list membership. You don't have to write the word
"Updated" into the attribute as all we're doing here is forcing Exchange Online to re-evaluate the address lists
that the object belongs to. Some administrators like to write the date and time into the attribute as in
"Updated: 26-Apr-2015 10:00". Before selecting an attribute to use to hold update information, it’s a good
idea to validate that the attribute is not used for another purpose.
[PS] C:\> $Filter = (Get-AddressList -Identity "Ireland Users").RecipientFilter

[PS] C:\> Get-Recipient -RecipientPreviewFilter $Filter | Set-Mailbox -CustomAttribute6 "Updated"

These commands work because the address list only includes mailboxes. Other code would be required if
different recipient types were included in the list. This update is only necessary to include pre-existing
recipients into an address list. Recipients who are subsequently created will be evaluated for address list
membership at that time and their AddressListMembership property will include all of the address lists to
which they belong.
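As a sketch of what that other code might look like when a list spans several recipient types (the attribute and value are illustrative), you can branch on the recipient type and call the matching Set- cmdlet for each object:
[PS] C:\> Get-Recipient -RecipientPreviewFilter $Filter | ForEach-Object {
   Switch ($_.RecipientTypeDetails) {
      "UserMailbox"                    { Set-Mailbox -Identity $_.Identity -CustomAttribute6 "Updated" }
      "MailContact"                    { Set-MailContact -Identity $_.Identity -CustomAttribute6 "Updated" }
      "MailUniversalDistributionGroup" { Set-DistributionGroup -Identity $_.Identity -CustomAttribute6 "Updated" }
   }
}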
Because of the lack of a GUI and any assistance to navigate the OPATH syntax, it can be difficult to create a
satisfactory recipient filter. One workaround is to create a dynamic distribution group with a filter for the same
recipient set that you’d like to use in an address list and then reuse the filter. You can extract the filter for a
dynamic group to a variable with a command like:
[PS] C:\> $Filter = (Get-DynamicDistributionGroup -Identity "Dynamic User").RecipientFilter

Then, use the $Filter variable as input to either the New-AddressList or Set-AddressList cmdlets to create a new
address list or modify an existing address list.
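For instance, a command like this (the list name is illustrative) creates a new address list that reuses the dynamic group's filter:
[PS] C:\> New-AddressList -Name "Dynamic User List" -RecipientFilter $Filter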

Offline Address Book (OAB)
The OAB is a point-in-time version of the Global Address List made available for download by Outlook clients
for use offline. Outlook clients configured to use cached Exchange mode use the OAB for address validation
and directory lookup. By comparison, other clients use the online GAL when they need directory information.
All mail-enabled recipients in the directory, except recipients whose HiddenFromAddressListsEnabled property
is set to $True, are included in the OAB.
In an on-premises deployment, it is normal to generate the OAB daily and that's what happens in Exchange
Online too. However, unlike on-premises organizations, an Office 365 tenant cannot force the generation of
the OAB and must wait for Exchange Online to run the OAB generation process, which is sometimes delayed
due to server load. Depending on the age of the OAB files present on a client and the percentage of change
within the directory, a download is either a set of six files containing delta changes to bring the client’s copy
of the OAB up to date or a complete copy of the OAB. The latter is only used if the client’s OAB has not been
updated for more than 30 days or more than 12.5% of the objects in the directory have been updated. This
can happen if the organization is in the middle of a change like a corporate acquisition or merger or indeed
when mailboxes are being migrated to Office 365.
OAB updates occur using an Outlook background thread. Users are usually unaware that they have happened.
The most common complaint is that a new user or other mail-enabled recipient is not present in the OAB, a
fact that is easily accounted for because new objects do not appear in the OAB until after Exchange Online
next generates OAB updates. A further delay occurs until the client downloads and applies the OAB updates.
Users can force an OAB update by selecting Download Address Book from Outlook’s Send/Receive menu,
but the client can only download the available OAB updates, which might not include new recipients.
However, if the user knows the SMTP address of the new user, they can use that to address messages.
Alternatively, they can access the online Global Address List in Outlook’s address book or use the People
section of OWA to look up the online directory for recipient details.

Managing the OAB


Exchange Online takes care of setting up OAB generation for a tenant. You have no control over the
arbitration mailbox used to hold the OAB files or where or when the OAB is generated. It all just happens.
However, you can run the Get-OfflineAddressBook cmdlet to examine details of the OAB. The following
properties are of interest:
• Generating mailbox: Is blank – a tenant administrator cannot see the arbitration mailbox used for
OAB generation. An on-premises administrator can exert some control over arbitration
mailboxes in a way that is impossible within Exchange Online. However, the GUID for the mailbox is reported
in LastGeneratingData (you still can’t use it).
• AddressLists: Usually “\Offline Global Address List”, meaning that the Offline Global Address Book
address list is used to generate the OAB.
• LastTouchedTime: The date and time when Exchange Online last generated the OAB.
• LastNumberOfRecord: The number of data records (recipients) in the OAB. This should be the
number of mailboxes, groups, contacts, distribution groups, and mail-enabled public folders that exist
in the organization.
• ConfiguredAttributes: The properties of objects that are available in the OAB. Azure Active Directory
stores a lot of information about objects and only a selection of those properties is available offline. To
see the set of attributes available in the OAB, run this command:
[PS] C:\> (Get-OfflineAddressBook -Identity "\Default Offline Address Book").ConfiguredAttributes

You can change the attributes contained in the OAB by running the Set-OfflineAddressBook cmdlet
and specifying the set that you'd like to have. Each attribute has a qualifier to indicate its use. The
qualifiers are “ANR”, indicating that it can be used for ambiguous name resolution (when several
objects have the same value in the attribute), “Value”, indicating that the attribute contains whatever
is in the directory and is not indexed, and “Indicator”, meaning that a restriction might be in place that
prevents the delivery of a message to the recipient (such as they only accept messages from specified
users).
• DiffRetentionPeriod : The number of days that delta changes (differences) files are retained by
Exchange Online for clients to copy.
• Name: This will be “Default Offline Address Book” unless you have created additional OABs for use
with Address Book Policies.
Although you can create and modify new OABs, unlike Exchange 2013 or Exchange 2016, Exchange Online
does not allow you to assign a different OAB to a mailbox by running the Set-Mailbox cmdlet. In addition, you
cannot make a new OAB the default OAB. The only way that a customized OAB can be provided to a user is
through Address Book Policies.

User Photos and the OAB


Thumbnails are small photos that are uploaded into Azure Active Directory as data attributes for user
accounts. Users can upload photos through the General section of OWA options, which is the quickest and
simplest way for users with Exchange Online mailboxes to upload their own photos, if allowed by the
organization. In fact, when a user uploads a photo using OWA options, Exchange stores a high-fidelity version
of the photo in the mailbox and Azure Active Directory stores a smaller version. The same is true in hybrid
environments. See Chapter 5 for more information on this point. Tenants who do not use Exchange Online
can load user photos through the SharePoint profile.
If user photos are available, thumbnails derived from the high-fidelity version are used throughout Office 365
in message headers, menu bars, Delve, Teams, Planner, for people cards, and when browsing the directory.
Skype for Business Online and Teams use the high-fidelity version during video conversations. Thumbnails for
external contacts can also originate through Outlook's connections with Facebook and LinkedIn, where the
photos are extracted from those networks. LinkedIn contact information is, for instance, stored in a folder
called "LinkedIn" that is only exposed through Outlook's "Folders" view.
An administrator can add a photo for a mailbox by running the Set-UserPhoto cmdlet. For instance, this
example uploads a JPG file for the mailbox belonging to Ben Owens.
[PS] C:\> Set-UserPhoto "Ben Owens"
-PictureData ([System.IO.File]::ReadAllBytes("C:\Temp\BenOwens.jpg"))

Clients fetch photo information from the mailbox when working online. Because Outlook can work offline, it
needs a mechanism to store thumbnails locally, which is the OAB. The default configuration of the OAB marks
the ThumbnailPhoto attribute with an Indicator flag, meaning that it is only available online and is not
downloaded. If you want to create an OAB that includes the thumbnails, you can do this by changing the flag
to be "Value" instead as shown below. This is acceptable in small tenants where downloading photo data is
unlikely to make a lot of difference to the overall size of the OAB, but it is not a good idea for larger tenants
where it might create a severe case of "OAB bloat".
[PS] C:\> $Attributes = (Get-OfflineAddressBook -Identity "\Default Offline Address
Book").ConfiguredAttributes
[PS] C:\> $Attributes.Remove("ThumbnailPhoto, Indicator")
[PS] C:\> $Attributes.Add("ThumbnailPhoto, Value")
[PS] C:\> Set-OfflineAddressBook -Identity "\Default Offline Address Book"
-ConfiguredAttributes $Attributes

The changes made to the OAB configuration are effective the next time Exchange Online updates the OAB. In
addition to creating a larger OAB, making a change that might affect many OAB records creates the possibility
that Outlook clients will have to download a complete copy of the OAB the next time they connect.

Hierarchical Address Book (HAB)


By default, the OAB has no hierarchy and objects are presented in strict alphabetical order. Providing that
some planning is done to determine how best to identify users with common names in a large organization
(for example, by including their department in a user’s display name), the default order is sufficient in most
cases. However, some organizations like to present information in hierarchical (or seniority) order so that the
most important people in the company are listed first with the ability to navigate down through the various
layers of responsibility within the organization.
All versions of Exchange from Exchange 2010 SP1 or later support the ability to include hierarchical details in
OAB. All Outlook clients from Outlook 2007 SP2 are able to interpret the additional information and present
the hierarchical address book (HAB) alongside the normal “Name List”. Only a single HAB can exist inside a
tenant and the HAB only becomes visible after Exchange Online next generates an OAB and clients download
the OAB updates. Because the feature is based on the OAB, Outlook is the only supported client.
The HAB works by having a set of distribution groups, each of which represents a different level of the
hierarchy as indicated by a “seniority index” beginning from 1 (the most senior level) and extending to
whatever number of levels exists within the company. Each distribution group contains the users that belong
to the level and a link to the next level, represented by the distribution group for that level. As shown in Table
3-1, it is a good idea to lay out a plan for the distribution groups to show how they will make up the HAB. The
display name of each group is used for the level name inside the hierarchy.
Seniority Index   Group Display Name          Description
1                 Chief Executive Officer     Level 1 – the Chief Executive (plus HAB-VP)
2                 Vice Presidents             Level 2 – All Vice Presidents (plus HAB-Senior Managers)
3                 Senior Managers             Level 3 – All Senior Managers (plus HAB-Managers)
4                 Managers                    Level 4 – All Managers (plus HAB-IC)
5                 Individual Contributors     Level 5 – All Individual Contributors
Table 3-1: Defining the hierarchy to implement in the HAB
Some work on the back end is required to create the HAB. Apart from creating the distribution groups that
are used to form the HAB, you can’t do this work through the browser-based management interface. Here’s
what you need to do:
• Understand the hierarchy that should be presented and agree on it with the HR department (or
whoever determines these matters within the company). Identify the individual users who will be
listed at different levels of the hierarchy. It is easy to end up with duplications where users appear in
multiple levels of the hierarchy.
• Create a distribution group to serve as the root of the HAB. As described in Table 3-1, this is the most
senior level within the hierarchy and should therefore contain the entry for the CEO, Managing
Director, or whoever is deemed to be the top person within the company.
• Create the other distribution groups required by the HAB and populate them with the users at that
level. Remember to include the distribution group for the next level. You can include contacts in these
groups.
• Configure the organization (tenant) so that Exchange Online knows you want to generate the HAB:
[PS] C:\> Set-OrganizationConfig -HierarchicalAddressBookRoot HAB-CEO

• Assign the seniority index to all of the groups that make up the HAB and set the flag to tell Exchange
Online that the group forms part of a hierarchy. Using the information described in Table 3-1, this
means that the following commands are needed (the alias for each group is used to identify it rather
than its display name).
[PS] C:\> Set-Group -Identity HAB-CEO -SeniorityIndex 1 -IsHierarchicalGroup $True

[PS] C:\> Set-Group -Identity HAB-VP -SeniorityIndex 2 -IsHierarchicalGroup $True

[PS] C:\> Set-Group -Identity HAB-SeniorManagers -SeniorityIndex 3 -IsHierarchicalGroup $True

[PS] C:\> Set-Group -Identity HAB-Managers -SeniorityIndex 4 -IsHierarchicalGroup $True

[PS] C:\> Set-Group -Identity HAB-IC -SeniorityIndex 5 -IsHierarchicalGroup $True

After these tasks are complete, you have to wait for Exchange Online to update the OAB before you can
download the updates to Outlook and check whether the HAB works as planned. If everything is in order, you
should now find that two tabs are available in the Address Book. The traditional OAB is available through the
“Name List” tab and the HAB through the “Organization” tab. As you can see in Figure 3-3, the HAB is
organized in the seniority levels that were previously defined and the members of each level are displayed.

Figure 3-3: A hierarchical address book organized by job level


You might wonder whether you can hide the distribution groups used to build the HAB so as to prevent
people using them to address messages. The answer is that you can hide the groups by setting their
HiddenFromAddressListsEnabled property to $True. This will effectively stop people using them to send email,
but it will also stop the groups being used in the HAB. As you might observe from the information shown in
Figure 3-3, the “Individual Contributors” level doesn’t show up because it is hidden. A better solution is to
leave the distribution groups used by the HAB visible but set their properties so that only certain people (such
as the administrator) can use them to address email. You can also include a MailTip to advise users that the
distribution groups have restricted use. For example:
[PS] C:\> Set-DistributionGroup -Identity 'Chief Executive Officer'
-AcceptMessagesOnlyFromSendersOrMembers "Administrator", "Chief Executive Officer"
-MailTip "This distribution group is restricted and should not be used for email"

The individual members of the group used to form each level are ordered alphabetically. You can affect the
sort order in two ways:
1. Assign a seniority index value to individual objects. The seniority index takes precedence and the sort
order is descending from 100. The most important user in a group should therefore be assigned a
seniority index of 100 and those who are less important should be assigned smaller values.
[PS] C:\> Set-User -Identity "David Pelton" -SeniorityIndex 100

2. If seniority index values are not assigned, Outlook uses the phonetic display name if it exists. A
phonetic display name was originally intended for use in the Exchange Unified Messaging system as it
allowed administrators to provide a phonetic form of a user’s name to help the UM Attendant sound
the name properly in the greeting heard when someone calls their mailbox.
[PS] C:\> Set-User -Identity "MVanHorenbeeck" -PhoneticDisplayName "Michael Van Hybrid"

It’s clear that many interesting office political games could be played by a tenant administrator who
selectively alters the seniority index for users.
Organizing by job level is one way to create an HAB. Another way is to build it via organizational units with
the root of the organization provided by corporate HQ and then the various divisions and units underneath.
Table 3-2 illustrates an example organization. Note that in this instance the hierarchy is more complex
because multiple units exist with the same seniority level.
Organizational Unit     Seniority Index
Corporate HQ            1
Divisions               2
Manufacturing           3
VP Office               4
Production              4
Engineering             4
Human Resources         3
HR management           4
Corporate Finance       3
Accounting              4
Budgets                 4
Forecasts               4
Table 3-2: Charting an organizational layout for a HAB

Figure 3-4: A hierarchical address book organized by company departments
The same approach is taken to build this form of HAB. Establish the distribution group for the root and the
groups needed to define the levels underneath. Define the root for the HAB and the seniority level for each
group within the hierarchy and wait for Exchange Online to update the OAB. Figure 3-4 illustrates the kind of
HAB that results. Note the sort order in this example. David Pelton is assigned a seniority index to move him
to the top of the list and the other entries are sorted alphabetically.

Address Book Policies


Address book policies, or ABPs, are a mechanism to control what directory objects are shown to users. By
default, Exchange Online does not apply an ABP to mailboxes so their users are able to see the entire set of
objects in the tenant directory. You can see the mailboxes that have an ABP assigned by running the
command:
[PS] C:\> Get-Mailbox -Filter {AddressBookPolicy -ne $Null}

ABPs were originally introduced to handle scenarios where the on-premises version of Exchange is deployed
for hosting purposes and you don’t want the users from one customer to see the users belonging to another.
Of course, Office 365 is a hosted environment and the segmentation already exists between tenants insofar as
the users belonging to one tenant cannot see the users belonging to another. Even so, ABPs can still be a
useful method to provide a customized view of a tenant to a targeted group of users as in the example found
in schools or colleges where administrators want to provide one view of the directory to students and a
completely different view to teachers and other employees. Although Exchange Online supports ABPs, the
interface available in the version of the EAC used by on-premises Exchange is not provided. All of the work
done to implement and manage ABPs has to be done through PowerShell.

Using ABPs with Teams


Address book policies are obviously useful to Exchange, but they are also used by Teams for the same reason:
to segment Azure Active Directory into sections accessible to different groups of users. To enable address
book policies for Teams, go to the Org-wide settings section of the Teams and Skype for Business Online
Admin Center, select Teams settings, and then toggle the switch for Scope directory search in Teams using
an Exchange address book policy to On. After a short pause to allow cached data to clear, Teams will
respect the address book policy setting for user mailboxes and only allow users to see the organizational
information available to their assigned policy.
Just like Exchange users can communicate with people outside their ABP by entering the SMTP address of a
recipient, the use of ABPs in Teams is not a complete block. For instance, the membership of teams that
someone belongs to might include people inside and outside the scope of an ABP (including org-wide teams).
Any member of a team can select another member and chat with them without hindrance, perhaps after
browsing the full membership of the team with the Manage team menu option.
Remember that Teams is designed to foster communication, not stop it, and ABPs are not intended to be a
full block on interpersonal communication between different sections of an organization. Other methods like
transport rules exist for that purpose in email, and if you want a full block in Teams, you'll use Information
Barriers.

Creating an Address Book Policy


As an example of what’s possible with ABPs, let’s assume that we want to provide a customized directory view
to the members of the Engineering department. The only users, groups, rooms, and contacts that they should
see are those associated with the department. To accomplish the goal, we need to create:
• A filter that can be applied to directory objects. For this example, we will filter based on the value
“Engineering” stored in CustomAttribute10. As noted earlier, custom attributes are a good choice for
filters because they are available for all supported object types.
• Type-specific address lists to match the “All Users”, “All Rooms”, and “All Contacts” lists usually
provided to users.
• A new GAL containing all of the objects belonging to the Engineering Department.
• A new OAB based on the Engineering GAL.
• An Address Book Policy that combines the address lists, GAL, and OAB.
Once the constituent parts of the ABP are created, we can put the new ABP into practice by assigning the ABP
to mailboxes. First, we create the three type-specific address lists to filter mailboxes and groups, contacts, and
rooms based on a suitable filter for the recipient type and the custom attribute.
[PS] C:\> New-AddressList -Name "Engineering-Users" -DisplayName "Engineering Users"
-RecipientFilter {((RecipientType -eq "UserMailbox") -or (RecipientType -eq
"MailUniversalDistributionGroup") -or (RecipientType -eq "DynamicDistributionGroup")) -and
(CustomAttribute10 -eq "Engineering")}

[PS] C:\> New-AddressList -Name "Engineering-Contacts" -DisplayName "Engineering Contacts"
-RecipientFilter {(RecipientTypeDetails -eq "MailContact") -and (CustomAttribute10 -eq
"Engineering")}

[PS] C:\> New-AddressList -Name "Engineering-Rooms" -DisplayName "Engineering Rooms"
-RecipientFilter {(RecipientTypeDetails -eq "RoomMailbox") -and (CustomAttribute10 -eq
"Engineering")}

Testing an Address Book Policy


To test the effectiveness of a recipient filter, input it to the Get-Recipient cmdlet and check the set of objects
that are returned. For example:
[PS] C:\> $Filter = (Get-AddressList -Identity "Engineering-Rooms").RecipientFilter

[PS] C:\> Get-Recipient -RecipientPreviewFilter $Filter

If the expected set of recipients is not returned, you know that either:
• The recipient filter is incorrect, or
• The underlying data does not contain the values that are needed by the filter. In our case it means
that “Engineering” has not been input to CustomAttribute10 for some of the objects we expect to
find.
After creating our type-specific address lists, we now create a GAL and an OAB based on that GAL.
[PS] C:\> New-GlobalAddressList -Name "Engineering GAL" -RecipientFilter
{(CustomAttribute10 -eq "Engineering")}

[PS] C:\> New-OfflineAddressBook -Name "Engineering OAB" -AddressLists "Engineering GAL"

The Update-OfflineAddressBook cmdlet is unavailable in Exchange Online, so we will have to wait for the OAB
generation assistant to generate the new OAB during its regular workcycle processing. Finally, we can bring
everything together by creating an ABP for the engineering department.
[PS] C:\> New-AddressBookPolicy -Name "Engineering ABP" -AddressLists "Engineering-Users",
"Engineering-Rooms", "Engineering-Contacts" -OfflineAddressBook "\Engineering OAB"
-GlobalAddressList "Engineering GAL" -RoomList "\Engineering-Rooms"

Figure 3-5: OWA directory view when under the control of an ABP
We now have an ABP but we still need to make it effective by assigning the new policy to users. Pick one user
to test things on and assign it to them by running the Set-Mailbox command or by editing their record
through the EAC (the ABP assignment is under “Mailbox features”).
[PS] C:\> Set-Mailbox -Identity "Kim Akers" -AddressBookPolicy "Engineering ABP"

Figure 3-5 illustrates what we expect to see. A limited number of users are shown in the directory and the
three specific Engineering address lists are listed. Another example is shown in Figure 3-6 where an ABP is
used to restrict access for Teams users. In this instance, the user tries to chat with someone who is outside the
scope of the ABP. Teams fails to find the intended correspondent in the ABP and signals an error.

Figure 3-6: An ABP restricts access to the directory for a Teams user
It is important to note that ABPs do not erect barriers to prevent users communicating with one another.
Another mechanism is necessary if you need to stop specific user groups sending mail to each other, such as
the ethical firewalls that can be constructed using transport rules.
Note that you cannot remove an ABP if it is assigned to mailboxes. To find and remove an ABP from
mailboxes, use the Get-Mailbox cmdlet to find the mailboxes and Set-Mailbox to remove the property:
[PS] C:\> Get-Mailbox -RecipientTypeDetails UserMailbox -Filter {AddressBookPolicy -eq "Engineering
ABP"} | Set-Mailbox -AddressBookPolicy $Null

Clash with Hierarchical Address Book


Note that you cannot use address book policies in the same organization as a hierarchical address book
(HAB – see the section earlier in this chapter). The two mechanisms place different restrictions on the directory
that are mutually incompatible.

Chapter 4: Managing Hybrid
Connections
A hybrid connection is a deployment model where an existing on-premises environment runs seamlessly
alongside a cloud solution like Office 365 so that end users see an integrated environment. Microsoft first
delivered hybrid connectivity between on-premises Exchange and Exchange Online in Exchange 2010 Service
Pack 1. Since then, the ability to deliver unified address lists, mail flow, and other functionality across on-
premises and cloud services has proven to be a very important competitive and operational feature of Office
365 and has allowed customers to move millions of mailboxes between the two environments. More recently,
Microsoft added support for other workloads to function in a hybrid deployment, such as Skype for Business
and SharePoint.
The success of the hybrid model for Exchange is undoubted and many organizations have used hybrid
connectivity as a transit method to get to the cloud. At the Ignite conference 2017, Microsoft said that 70% of
hybrid customers have moved all mailboxes to the cloud. They also said that the average number of on-
premises mailboxes running in hybrid configurations is 105. The low average surprised many, but it probably
means that many small tenants have completed moving to Exchange Online and have just a few mailboxes
left on-premises. On the other hand, the hybrid model is here to stay as many large enterprises consider it
essential to preserve the choice of having mailboxes on-premises or in the cloud.

Hybrid Workloads
Exchange
Today, two types of hybrid Exchange deployments exist: a "full" hybrid connection and a "minimal
configuration." Unlike other migration methods, a full hybrid connection is not a short-term solution to
migrate mailboxes between Exchange on-premises and Exchange Online. Maintaining a hybrid connection
enables you to keep some of your mailboxes on-premises while hosting others in Office 365, and it is the only
migration type that allows you to move mailboxes back to an on-premises server. Hybrid connectivity is a
unique selling point for Microsoft because other cloud platforms do not offer companies the chance to keep
some workload on-premises.
The following features are available in a hybrid configuration:
• Move mailboxes in and out of Exchange Online using built-in tools. As discussed later, a huge benefit
of a hybrid mailbox move is that it does not need Outlook's offline cache to be resynchronized after
the mailbox is moved to or from Exchange Online.
• Almost seamless co-existence between Exchange Online and one or more on-premises Exchange
organizations (over a longer period). Among other benefits, mail can flow securely between the
different platforms and users can share Free/Busy information. Note that “full” hybrid connections
with multiple on-premises Exchange organizations are only supported from Exchange 2013 SP1
onwards.
• Cloud-based archiving for on-premises mailboxes, also known as Exchange Online Archiving.
• Simplified administration using a single Exchange admin console or PowerShell.
• Transfer of configuration settings from Exchange on-premises to Exchange Online.

Because of the advantages that it offers over other migration options, many organizations deploy a hybrid
connection solely for moving mailboxes to Office 365. Originally, the only choice was to deploy a full hybrid
connection, which meant that all the hybrid features were enabled as part of the configuration process,
whether a tenant needed them or not. To reduce the configuration effort for customers that only need to
(quickly) move mailboxes in/out of Exchange Online, without the need for elaborate coexistence capabilities,
Microsoft introduced the "minimal hybrid configuration" option. Unlike the full version, the minimal hybrid
configuration only configures the essentials to enable a customer to move mailboxes to and from Office 365.
Although most of the work to set up and run a hybrid connection is executed within the on-premises Exchange
organization, maintaining a successful connection to Office 365 involves other components that might be new
to the Exchange administrator. One of these is Directory Synchronization, which plays a significant role in
making sure that both sides of a hybrid deployment have the same information to work with. This chapter
discusses the various features and components of a Hybrid deployment and explains how everything fits
together. We also walk through the configuration of a Hybrid connection using the Hybrid Configuration
Wizard.

SharePoint
SharePoint Server 2013 and later allow you to configure a hybrid connection with Office 365. Like Exchange, a
hybrid SharePoint connection enables a customer to migrate to Office 365 at their own pace. Following in the
footsteps of Exchange, hybrid connectivity through SharePoint Server 2016 now also uses a configuration
wizard, which largely automates the configuration process.
In a hybrid configuration, some SharePoint workloads run on the on-premises SharePoint servers while other
workloads move to SharePoint online. For example, you can host users’ personal OneDrive for Business sites
in Office 365 while keeping other SharePoint sites hosted on-premises. The following features are available in
a hybrid configuration:
• Hybrid Search is one of the major benefits of a hybrid SharePoint connection. Instead of using
federated queries (forwarding search queries cross-premises) as in previous versions of SharePoint, the
search service (and crawler) now stores indexes and relevant information directly in Office 365. Not
only does this increase the relevance and speed of the searches, but also makes the discovered
information available to the Office Graph and Delve.
• When a user follows a site, a link is added to the user's Followed Sites list. Without a hybrid
connection, the lists for both locations might be different. With SharePoint hybrid, the information from
both locations is consolidated in SharePoint Online to make a single list available to the user
containing links to resources in both locations.
• The Self-Service Site Creation feature automatically redirects users to the online version of the
SharePoint Online Group Creation page, allowing users to create sites directly in SharePoint Online.
• Although profiles exist in both locations, SharePoint on-premises redirects user profile links to their
profile location in SharePoint Online.

Hybrid Exchange Architecture


The same core concepts underpin a full hybrid connection and a minimal configuration. For both options, the
following apply:
• Directory Synchronization ensures that objects (like user accounts) are synchronized between the
on-premises Active Directory and Azure Active Directory. As a result, users should see the same
Global Address List (assuming no filtering of objects has been configured and the same address lists are
used on both platforms) regardless of where their mailbox is hosted. Directory synchronization plays
an essential role in hybrid deployments. Over the course of this chapter, you will find several
references to how the process unfolds and its impact on operations as well as more information
about how directory synchronization works behind the scenes.
• Mailbox moves to/from Exchange Online use the same approach as a mailbox move between two
on-premises Exchange servers. These moves do not need the offline cache in Outlook (the offline
Outlook data file or OST) to be resynchronized after the mailbox is moved. These mailbox moves can
be started from the on-premises Exchange Admin Center or through PowerShell using the New-
MoveRequest or New-MigrationBatch cmdlets.
• Autodiscover ensures that the correct information about mailboxes and other resources is returned
to clients so that they can connect automatically after a mailbox has been moved to or from Office
365.
A full hybrid connection is different from the minimal configuration in that it configures added hybrid features
that enable a seamless end-user experience when collaborating across the on-premises organization and
Office 365:
• Exchange Online Protection (EOP) secures mail flow between Exchange Online and the on-premises
organization using a TLS connection.
• Exchange Federation supports rich interaction between mailboxes in Exchange Online and on-
premises servers. For example, Exchange federation allows cross-premises Free/Busy lookups by
establishing a trust with the Azure AD Authentication system (formerly known as Microsoft Federation
Gateway). In environments that only include Exchange 2013 or 2016 servers, OAUTH replaces the
Azure AD Authentication system as the authentication mechanism used for cross-premises Free/Busy
lookups.

Minimal Configuration and Express Migrations


A customer choosing to deploy a minimal hybrid configuration does not automatically get any of the
following features after running the Hybrid Configuration Wizard:
• Secure cross-premises mail flow.
• Cross-premises Free/Busy, eDiscovery or Archiving.
• Automatic OWA or EAS redirection for migrated users.
Although the primary goal of a minimal configuration is to enable customers to move mailboxes to/from
Exchange Online, a minimal hybrid configuration also creates basic coexistence between the on-premises
organization and Office 365. You will not be able to share Free/Busy information or benefit from any of the
above features, but you can still send messages to and receive messages from migrated mailboxes. The
biggest difference between the minimal configuration and the full option is that TLS is not explicitly
configured in the minimal configuration option. You can configure these features afterwards, but if you need
this kind of configuration, it is best to put a full hybrid configuration in place from the start.

Express Migrations
The minimal hybrid configuration also underpins the Express Migration option, which can be initiated from
the Office 365 Admin portal by navigating to Users > Data Migration, and from there choosing Exchange.
If you have not set up a hybrid connection before, you will be asked to download and run the Hybrid
Configuration Wizard. Once the wizard starts, you should choose the minimal hybrid configuration option.
While running the HCW, on the User Provisioning page, you have the choice to select Synchronize my users
and passwords one time. When you do so, Azure AD Connect will be downloaded and prompt you to install
using the Express Settings configuration option. Once successfully installed, your directory will synchronize to
Azure AD and provision user accounts. After that, directory synchronization is disabled, leaving you with
cloud-only accounts.

This option offers a great alternative to solutions like a Cutover migration, as mailboxes are now moved to
Office 365 using the Mailbox Replication Service. As such, you can preserve the Outlook profile (and cache) on
client computers, greatly improving network efficiency and the end-user experience.
The added benefits come at a small price though: you not only need to run the HCW, but you also must install
Azure AD Connect as part of the process. Although experience shows that most of the time this all goes
according to plan, the directory synchronization process itself can generate some added problems when you
have conflicting objects in your on-premises directory. In such cases, not all your users are (correctly)
synchronized to Azure AD, and mailbox moves for those users will not be possible.
The Express migration option also comes with some limitations/challenges:
• Only a single migration endpoint is supported.
• Although you can create multiple migration batches, they will only be executed one at a time. This
means that the time to complete mailbox moves can be considerably longer than with a regular hybrid
deployment (where multiple batches can be executed at the same time).
• You must assign a license to the user before trying to migrate the mailbox.
• During the initial synchronization, the user’s password is synchronized once to Office 365 as well. If
the user decides to change their password in the on-premises organization before the mailbox has
been moved to Office 365, that change will not be reflected in Azure AD and can therefore cause
some confusion; the password the user must use to log in to their mailbox in Office 365 is still the same
as their previous on-premises password.
Some of the limitations clearly show that the Express migration option is not targeted at large, complex
environments with a large number of users, or multiple datacenters, which only increase the risk of running
into these limitations. Instead, it is a valuable alternative for smaller environments with perhaps only a handful to a
few tens of users, where the on-premises Active Directory is very unlikely to yield conflicts.

Native Mailbox Moves


Many methods exist to move a mailbox into Office 365. A common drawback is the need for Outlook clients
to resynchronize mailbox contents after the mailboxes have been moved into Office 365. For every gigabyte
of data moved into Exchange Online, a client must download a similar amount to rebuild its OST. Even in a
staged migration approach, this can generate a significant amount of extra traffic on the network. However,
when a mailbox is moved using the “native mailbox move” approach, such as in a hybrid deployment, it is not
necessary to resynchronize mailbox contents after the mailbox has been moved.
The built-in mailbox move mechanism delivers the best experience for the end user and generally achieves
the highest migration throughput of all migration methods. For this reason, it is usually the preferred method
to on-board mailboxes into Office 365. However, this does not mean that a hybrid connection is always the
best migration solution! Third-party migration products can also move information into Office 365 without
the need for setting up a (complex) hybrid connection.
Other migration methods must create a new mailbox from scratch in Office 365, meaning that the new
mailbox has a different identifier, called the ExchangeGUID. This action breaks the link between the two
mailboxes and, in turn, means that Outlook must resynchronize. When a native mailbox move occurs,
Exchange stamps the same GUID on the target mailbox. When Outlook connects to the mailbox in its new
location, it will recognize the GUID and reconnect without the need to take further action. The
msExchMailboxGuid attribute is also used to encrypt the OST file, which explains why the same file can be re-used.
Figures 4-1 and 4-2 show the comparison of the ExchangeGUID of a mailbox before and after it is moved
using the Native Mailbox Move process to Office 365. Notice that the ExchangeGUID is still the same.
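You can check the value yourself on both sides of a move with a command like the following (the mailbox name is illustrative):
[PS] C:\> Get-Mailbox -Identity "Ben Owens" | Format-List Name, ExchangeGuid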

Figure 4-1: ExchangeGUID on-premises before mailbox move

Figure 4-2: ExchangeGUID in Exchange Online after mailbox move


Moving mailboxes between an on-premises organization and Office 365 is like a mailbox move between two
on-premises organizations and is also performed by the Mailbox Replication Service (MRS). These moves are
called Exchange online moves, which means that users can send and receive email while their mailbox is being
moved. When a mailbox move is initiated, Exchange Online makes an inbound connection to the on-premises
migration endpoint. When the HCW first runs, it will attempt to create a migration endpoint. However, if
something went wrong during the initial creation (for instance, the Autodiscover call failed), additional
endpoints can be created using the New-MigrationEndpoint cmdlet or through the Migration section of the
Exchange Administration Center in Office 365 (click the ellipsis menu to reveal the option to create a new
endpoint). Also, when you try to start a data migration and no endpoint already exists, you will be
automatically asked to create one.
The inbound connection is proxied through the MRS Proxy component on the Client Access Server(s). That is
why the MRS Proxy endpoint on the Internet-facing Client Access Servers must be enabled first. The Hybrid
Configuration Wizard takes care of this task and configures the MRS Proxy automatically on all Client Access
Servers. The wizard uses the ADPropertiesOnly parameter to query the existing WebServices Virtual Directories
to reduce the time it takes for the wizard to make configuration changes. However, if you have a lot of (Client
Access) servers in your environment, some of which could be in a remote location, you might find that the
wizard still needs a (lot) more time to complete. Instead of potentially timing out, the wizard will warn the
administrator if it couldn’t update a (remote) Client Access Server in a timely manner. As such, the
administrator can use that information to manually enable the MRS Proxy using the Set-
WebServicesVirtualDirectory -MRSProxyEnabled $true command.
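For instance, a command along these lines enables the MRS Proxy on the EWS virtual directories of a specific server (the server name is illustrative):
[PS] C:\> Get-WebServicesVirtualDirectory -Server EXCH01 -ADPropertiesOnly |
Set-WebServicesVirtualDirectory -MRSProxyEnabled $True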
Mailbox moves between Exchange Online and the on-premises organization can be initiated in various ways.
Even when you create a migration batch through the GUI, the underlying move request (the same object
created by the New-MoveRequest cmdlet in PowerShell) is created for you automatically.
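As an illustration, an onboarding move request created directly from an Exchange Online PowerShell session might look something like this; the mailbox, endpoint host name, and routing domain shown are examples:
[PS] C:\> $OnPremCred = Get-Credential
[PS] C:\> New-MoveRequest -Identity "Ben.Owens@office365itpros.com" -Remote
-RemoteHostName "webmail.office365itpros.com" -RemoteCredential $OnPremCred
-TargetDeliveryDomain "office365itpros.mail.onmicrosoft.com"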

Migration Endpoints
When creating a migration batch, the wizard verifies if it can connect successfully to the on-premises
migration endpoint. If a connection cannot be established, you'll see an error message: "The connection to
the server '<endpoint>' could not be completed." There are many reasons why the inbound connection
might fail and the error message provides little information as to what might be causing the problem. For
instance, the proper DNS records might not have been configured, there might be a networking issue, or
perhaps the MRS Proxy has not been enabled properly.

To figure out what might have happened, you can use the Test-MigrationServerAvailability cmdlet (part of
PowerShell for Exchange Online) to test the on-premises endpoint using the same code as the New Migration
Batch Wizard does. The output of the cmdlet might reveal more information about the cause of the failure.
PS C:\> Test-MigrationServerAvailability -RemoteServer webmail.exchangelab.be -ExchangeRemoteMove
-Credentials (Get-Credential)

RunspaceId : 67af4669-b1d9-4403-aef3-cd50c72f72da
Result : Failed
Message : The connection to the server 'webmail.exchangelab.be' could not be completed.
ConnectionSettings :
SupportsCutover : False
ErrorDetail : Microsoft.Exchange.Migration.MigrationServerConnectionFailedException: The
connection to the server 'webmail.exchangelab.be' could not be completed. --->
Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The Mailbox Replication
Service was unable to connect to the remote server using the credentials provided. Please check the
credentials and try again. The call to 'https://webmail.exchangelab.be/EWS/mrsproxy.svc' failed.
Error details: The HTTP request is unauthorized with client authentication scheme 'Negotiate'. The
authentication header received from the server was 'Negotiate,NTLM'. --> The remote server returned
an error: (401) Unauthorized.. --> The HTTP request is unauthorized with client authentication
scheme 'Negotiate'. The authentication header received from the server was 'Negotiate,NTLM'. --> The
remote server returned an error: (401) Unauthorized.
...

If needed, the administrator can make changes to the migration endpoint using the Set-MigrationEndpoint
cmdlet.
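For example, if the password of the account used by an endpoint changes (a common cause of failures, as noted below), you could refresh the stored credentials with something like this (the endpoint name is illustrative):
[PS] C:\> Set-MigrationEndpoint -Identity "Hybrid Endpoint" -Credentials (Get-Credential)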

Real World: A frequent problem with migration endpoints is that each migration endpoint is configured with
an account that is used to set up a connection with the on-premises environment. If the password for the
account changes, but the endpoint is not updated in Office 365, the above error message appears, and
there's nothing that indicates it is because of an expired password.
Sometimes it might be necessary to define additional migration endpoints, for instance, when you have multiple
offices in different regions that each have some Exchange servers with mailboxes. In such a case, the default
migration endpoint points to Client Access Servers in one region, but the mailboxes that must be moved
might be in a different region. If the default endpoint is used to start the mailbox move, the WAN link
between the regional offices carries the migration load. Often these WAN links (or the utilization thereof) are
costly and perhaps not capable of carrying such a heavy amount of data. In these circumstances, it would be
more sensible to connect directly to the Exchange servers in the regional office.
Creating a new migration endpoint, which defines a different endpoint URL, allows an administrator to select
this endpoint for a given migration batch and thus ensure that the inbound MRS connection is made to the
Exchange Servers closest to the mailboxes that must be moved. For example, the code below shows how to
use PowerShell to create a new migration endpoint using the New-MigrationEndpoint cmdlet and specifying
the different endpoint FQDN in the RemoteServer property. Alternatively, you can create a new endpoint
through the migration section of the EAC.
[PS] C:\> New-MigrationEndpoint -Name "Endpoint 2" -ExchangeRemoteMove
-RemoteServer target.domain.com -Credential (Get-Credential)

Communications with the migration endpoint must happen securely over SSL. This means that when defining
a new migration endpoint, you must make sure that a valid certificate is used on the Exchange server: the
hostname must be covered by the certificate and the certificate must be issued by a trusted third-party
certification authority. If already available, you can also use a wildcard certificate.

Real World: Although Microsoft officially supports only a handful of certification authorities, you will find
that certificates from many other certification authorities not on the list will also work. Keep in mind that,
when you purchase such a certificate, you have no guarantee it will work as expected. I’ve rarely come across
a certificate that did not work for securing HTTPS communications, regardless of the certification authority that
issued it. On the other hand, I’ve seen numerous cases where certificates from non-supported authorities
caused all sorts of problems related to secure mail flow.
It is common to find that a migration endpoint connects to a set of load-balanced Exchange servers. Although
this configuration is suitable for many Exchange workloads, mailbox moves to Office 365 can be negatively
impacted for the following reasons:
• In a set of load-balanced Exchange 2010 servers, session affinity must be maintained. This means that
once an incoming connection has been established with a specific Exchange server, all subsequent
communication should be routed to that same Exchange server. Thus, all connections coming from
Office 365 are likely to be routed to a single Exchange server. This is especially true when you use the
source IP as the identifier for an incoming connection; most mailbox moves initiated by Office 365 will
be coming from the same set of IP addresses. So, when all traffic is routed to a single Exchange
server, that server can become a bottleneck and significantly slow down mailbox moves. Additionally,
if the incoming connection is transferred to another Client Access server, for example because affinity
was set up incorrectly, mailbox moves can stall and even fail completely.
• Exchange 2013 and 2016 do not require the load balancer to maintain session affinity. This means that
incoming connections can switch from one Exchange server to another, a desirable situation to ensure
fault tolerance when mailboxes might move within a Database Availability Group. When a switch
occurs, the connection must be re-authenticated and any operation that was in progress must be
resumed or restarted. If a switchover affects the server that is handling the inbound connection for
Office 365, the mailbox move can stall which causes the move operation to slow down dramatically
and sometimes might cause the move to fail. If this happens, you will see the problem reported in the
error details logged for the move request.
• If you troubleshoot mailbox moves to/from Office 365, having a load balancer in place makes it
harder to figure out which server handles the connection. This means you must go through the log
files on each server in the array to determine which server was involved in the mailbox move you are
trying to troubleshoot.
Although using a load balancer is a supported scenario, you can avoid these problems by (temporarily)
defining a unique migration endpoint for each Exchange server from which you want to process move
requests. The benefit is that the connection path is now much more predictable (you can specify a different
migration endpoint per migration batch) and mailbox moves generally complete faster. Additionally, this
approach usually yields a higher number of concurrent mailbox moves as you can manually control how many
mailbox moves are directed to each Exchange server.
The downside of defining a separate endpoint for each Exchange server (or for as many as you need), is that
you need an external IP address, hostname, and SSL certificate for each endpoint before the Exchange servers
behind the endpoint can accept incoming connections and process mailbox moves.
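
To illustrate the idea, this sketch creates a dedicated endpoint for a single server and then references it in a migration batch from Exchange Online PowerShell. The server name, CSV path, and delivery domain are examples only and must be adapted to your environment:

[PS] C:\> New-MigrationEndpoint -ExchangeRemoteMove -Name "Endpoint-EX01" `
    -RemoteServer ex01.office365itpros.com -Credentials (Get-Credential)
[PS] C:\> New-MigrationBatch -Name "Batch-EX01" -SourceEndpoint "Endpoint-EX01" `
    -TargetDeliveryDomain office365itpros.mail.onmicrosoft.com `
    -CSVData ([System.IO.File]::ReadAllBytes("C:\Temp\Batch-EX01.csv")) -AutoStart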

Secure Mail Flow


Even though a hybrid deployment allows rich coexistence between mailboxes hosted on-premises or in Office
365, Exchange Online and the on-premises organization remain two completely different systems. Without
specific configuration, both systems would treat messages received from each other as if they were from an
unknown, untrusted source. A hybrid deployment offers a variety of options to configure mail flow between
the on-premises organization and Exchange Online, also referred to as hybrid mail flow. To prevent any
chance that messages sent between the two platforms can be tampered with or otherwise compromised,
hybrid mail flow is configured to use Transport Layer Security (TLS) by default. TLS uses SSL certificates to
identify both organizations to each other.

To allow for TLS mail flow in a hybrid deployment, the following requirements must be met:
• The Subject Name or Subject Alternative Name on the certificate used to negotiate TLS must match
the FQDN used to connect to the remote server. This allows the use of SAN or wildcard certificates.
• The certificate must be issued by a trusted public Certificate Authority.
• The certificate must be valid; it must not be expired or revoked.
• The Certificate Revocation List (CRL) must be available.
The certificate information needed to establish and enforce a TLS connection between both environments is
also stored as part of the configuration of the various mail flow connectors in the on-premises and Exchange
Online organizations. For more information on what Inbound and Outbound connectors are, have a look at the
mail flow chapter in the main book.
• The on-premises Send and Receive Connectors hold certificate details of the on-premises
organization, which are presented to Office 365 (EOP) whenever a message is sent to or received from
Office 365. This information is recorded in the TlsCertificateName attribute and can be viewed using
the Get-ReceiveConnector or Get-SendConnector cmdlets.
• In addition to the certificate information, the on-premises Receive Connector also includes an
attribute called TlsDomainCapabilities. This attribute specifies which added features are offered for a
specific domain. For example, the value mail.protection.outlook.com:AcceptCloudServicesMail,
indicates that messages coming from the hostname mail.protection.outlook.com (EOP) are, in fact,
part of the same organization, and should be treated as such.
• The inbound connector in Office 365 has certificate information of the on-premises organization. This
information is used to confirm (compare) the information presented by the on-premises organization
whenever messages are sent to Office 365.
• The outbound connector in Office 365 does not hold any details about the on-premises certificate,
but it specifies the domain name that the hostname presented by the on-premises organization must
match to establish a secure (TLS) connection when sending messages to the on-premises organization.
For instance, if the Outbound connector specifies "office365itpros.com" in the TLSDomain attribute, a
valid on-premises hostname would be mailserver.office365itpros.com. The certificate installed on the
on-premises server(s) must then also match that hostname. As mentioned earlier, it can also be a
wildcard certificate covering all hostnames for a specific domain.
If any of the information in the various connectors does not match, the secure connection between the on-
premises organization and Office 365 might fail, therefore halting (hybrid) mail flow. If that happens, emails
will queue on the sending server.
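
When hybrid mail flow stops, a quick sanity check is to compare the certificate and domain values stored on the connectors on both sides. A possible approach is sketched below; the connector names created by the Hybrid Configuration Wizard vary per environment, so the commands simply list all connectors:

# On-premises Exchange Management Shell
[PS] C:\> Get-SendConnector | Format-List Name,TlsCertificateName,TlsDomain
[PS] C:\> Get-ReceiveConnector | Format-List Name,TlsCertificateName,TlsDomainCapabilities

# Exchange Online PowerShell
[PS] C:\> Get-InboundConnector | Format-List Name,TlsSenderCertificateName
[PS] C:\> Get-OutboundConnector | Format-List Name,TlsDomain,TlsSettings
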
Messages sent between Exchange Online and the on-premises organization are treated differently from
messages that originate elsewhere. This allows Exchange Online to bypass anti-spam filtering or added
Advanced Threat Protection (ATP) processing for messages that are received from on-premises Exchange
servers. Every message in the hybrid mail flow is stamped with an additional X-MS-Exchange-Organization-AuthAs
SMTP header. The value for this header should always be Internal for messages flowing across a hybrid connection.
If Exchange Online Protection detects that the X-MS-Exchange-Organization-AuthAs header is set to Internal
for a message, the Spam Confidence Level header (SCL) for that message is automatically set to -1. This value
allows the message to be forwarded to the recipient without going through message hygiene processing and
potentially being marked as spam. The authentication header also allows Outlook clients to recognize the
sender as being part of the same organization, triggering Outlook to display a full contact card
instead of a summarized version.

Real World. The presence of the X-MS-Exchange-Organization-AuthAs header is one of the key differences
between a hybrid deployment, and a staged migration or a minimal hybrid configuration. Although the latter
two options also offer mail flow coexistence, they do so without configuring extra connectors (with TLS). As
such, the end user experience in Outlook is different. If needed, an administrator can manually update the
connector configuration, so that the user experience in a staged migration or minimal hybrid scenario mimics
the one in a hybrid scenario.
The presence of the header also has implications for the on-premises architecture. The Exchange header
firewall strips any X-headers from messages that come from non-Exchange servers, which would break this
functionality. For this reason, hybrid mail flow must not be configured to pass through a third-party mail
filtering solution. If your organization requires external connections to be terminated in the perimeter network
(DMZ), you must route messages through an on-premises Edge Transport Server as it is the only Exchange
Server Role supported to be installed in the DMZ.

Real World. Edge Transport servers aren’t really known for excellent message hygiene features. Other
solutions, including Microsoft’s own Exchange Online Protection, offer a much higher level of protection
when it comes to spam and malware. This doesn’t mean Edge Transport servers cannot be useful. First, they
help you fulfil the requirement to terminate incoming connections (for SMTP!) in the DMZ. They can also be
used to configure more transport rules (and thus separate those rules from the internal deployment). The
servers offer some basic address rewriting capabilities, something which can easily be extended using a
custom (or third-party) transport agent. Aside from the perceived security benefit in hybrid deployments, Edge
Transport servers can be a flexible mail-handling solution!
There are several ways in which outbound mail flow can be configured. Here, outbound is used to define
messages sent to recipients which are not part of the hybrid deployment:
• Centralized Mail Flow enables all outbound messages from both organizations to be routed through
the on-premises organization. This allows for added processing by on-premises message hygiene or
compliance solutions.
• Non-centralized Mail Flow is selected by default in the Hybrid Configuration Wizard; Office 365
sends messages directly to the Internet and the on-premises organization continues to send
messages as configured before. Alternatively, you can configure the on-premises organization to
route all outbound email through Exchange Online Protection.

Figure 4-3: (Non-)Centralized Mail Flow with Edge Transport

Figure 4-4: (Non-)Centralized Mail flow without Edge Transport
Similarly, there are two options for inbound mail flow:
• The MX record points to the on-premises organization. This scenario is illustrated in Figure 4-3
and Figure 4-4. Email to on-premises mailboxes is delivered locally. Email to Office 365 mailboxes is
forwarded to the destination recipient in Office 365. Exchange will always forward messages to the
email address specified in the targetAddress attribute if it is configured. The targetAddress attribute is
set on the on-premises Mail-Enabled User account that represents the mailbox in Exchange Online.
• Alternatively, as illustrated in Figure 4-5, the MX record points to Exchange Online Protection
where email is filtered and scanned. From there, messages are either sent directly to mailboxes in
Exchange Online or forwarded to the on-premises organization through the Outbound Connector
that is created automatically by the Hybrid Configuration Wizard. Just like the outbound mail flow
scenarios described earlier, the use of an on-premises Edge Transport service is optional.

Figure 4-5: MX records are pointing to Exchange Online Protection (with Edge Transport)

Real World. In the latter scenario, the MX record doesn’t necessarily have to point to EOP. Although
Microsoft recommends not using additional mail handling/filtering solutions in front of EOP (this can cause
certain features to become less effective), often third-party solutions that manipulate inbound/outbound
messages require MX records to point to their solution. One such example is when an organization uses a
third-party (hosted) solution to automatically add signatures to email.

Exchange Federation
One reason why organizations create a full hybrid deployment is to allow mailboxes in both environments to
collaborate as if they are hosted in the same organization. This is especially useful if a migration to Office 365
is spread over an extended period, or if your organization has decided to keep certain mailboxes on-premises
indefinitely. Regardless of the reason for the hybrid connectivity, two common scenarios are: looking up
calendar information for another user when booking a meeting, and delegating access to another user's
mailbox. These are user actions, but there are also other, system-driven, actions such as displaying MailTips
that are automatically executed. Federating the on-premises organization with Exchange Online allows many
types of cross-platform data sharing. Note that this kind of federation has nothing to do with federated
authentication.
Despite the sharing capabilities available through the federation process, it is important to understand that
not all sharing scenarios are supported. Although Microsoft now supports Full Access and Send-on-behalf-of
permissions, Send-As, folder-level permissions, and Automapping are not supported. More information on the
limitations of cross-premises permissions is in "Limitations of a Hybrid Deployment," later in this chapter, and in Chapter 5. Before
Exchange 2010, if an organization wanted to share Free/Busy calendar information with another organization,
a direct trust relationship had to be created in which a shared- or pre-defined service account is specified.
Exchange Federated Sharing introduced a third-party trust broker called the Azure AD Authentication System,
previously known as the Microsoft Federation Gateway (MFG). The Azure AD Authentication System removes the need to
create direct trust relationships between organizations. First, each organization sets up a trust with
Authentication System in Azure AD. This is done through the New-FederationTrust cmdlet and is executed
automatically by the Hybrid Configuration Wizard. In a hybrid deployment, this trust only needs to be created
for the on-premises organization as Exchange Online already has an existing trust with the Azure AD
Authentication System. The trust, also known as the Federation Trust, uses a custom X.509 certificate to
encrypt and sign delegation tokens issued by the Azure AD Authentication System. By default, a self-signed
certificate is generated when the Federation Trust is created. Even though you could use a third-party
certificate, it is recommended to use the self-signed certificate.
When the Federation Trust is created, an Organization Identifier is created for your organization. This
identifier defines which accepted domains in your organization are enabled for federation. The Azure AD
Authentication System will only allow users that have one of the configured domains in their email address to
use the federated sharing features. Like the Organization Identifier, the Federation trust creates an AppID for
your organization which is used to uniquely identify the organization. The AppID is used to verify whether
your organization owns the SMTP domains that are configured as part of the Federation Trust. Like how the
validation of a domain in Office 365 works, an administrator must configure a public DNS TXT record for each
domain in the Federation trust so that Microsoft can confirm the ownership of those domains by looking up
the record with the specific AppID value.
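
If you ever need to retrieve (or re-create) the proof value for that TXT record, the on-premises Get-FederatedDomainProof cmdlet generates it for a given domain. A minimal sketch, using an example domain name:

[PS] C:\> Get-FederatedDomainProof -DomainName office365itpros.com
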
The federation trust is represented by an object in the configuration partition of Active Directory. The object
contains many other items of information about the trust, also called the trust metadata. Running the Get-
FederationTrust cmdlet in the on-premises Exchange PowerShell instance will reveal various elements of the
federation trust. Note that some information is omitted from the output below for brevity:
[PS] C:\> Get-FederationTrust

RunspaceId : f11725de-1fd2-430d-a6d8-a9909a7b4413
ApplicationIdentifier : 0000000040027F13
ApplicationUri : FYDIBOHF25SPDLT.EXCHANGELAB.BE
OrgCertificate : [Subject] CN=Federation [Issuer] CN=Federation

Amongst other items, the federation trust object also holds information about the certificates that Microsoft
uses for the Azure AD Authentication System. Although the federation trust does not require much
maintenance, Microsoft recommends that the federation trust metadata is updated regularly to ensure that
the information is up-to-date, and that any feature relying on the Azure AD Authentication System is not
affected. The metadata is updated automatically if you run servers with Exchange 2013 SP1 or later, but if you
run an earlier version, you should regularly update the metadata by running the following command:
[PS] C:\> Get-FederationTrust | Set-FederationTrust -RefreshMetadata

After the federation trust is set up, organizations must configure an implicit one-to-one relationship with each
other, called an Organization Relationship. This Organization Relationship controls what information is shared
with the other organization. Both the trust with the Azure AD Authentication System, and the organization
relationships are configured automatically by the Hybrid Configuration Wizard. Figure 4-6 illustrates how the
Exchange Federation process works when an on-premises mailbox requests Free/Busy information for an
Office 365 mailbox.

Figure 4-6: Free/Busy information flow using the Azure AD Authentication System
The following steps occur:
1. An on-premises mailbox requests Free/Busy calendar information for an Office 365 mailbox.
2. The on-premises Client Access Server checks if it has an Organization Relationship for the remote
organization. It does so by verifying if one of domains configured in the Organization Relationship
matches the domain for which Free/Busy information is requested.
3. If an Organization Relationship is found, the Exchange server requests a delegation token from the
Azure AD Authentication System.
4. The delegation token is sent along with the request for Free/Busy information to the remote
organization.
5. The remote organization (Exchange Online, in this scenario) will first validate the delegation token
that was sent along with the request for Free/Busy information with the Azure AD Authentication
System. Once the token is verified, the remote organization knows the originating request is valid and
will honour the request by looking up the Free/Busy information returning it to the on-premises
organization.
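If Free/Busy lookups fail at any of these steps, the federation configuration can be inspected and exercised from the on-premises shell. A minimal sketch follows; the user identity is an example:

[PS] C:\> Get-OrganizationRelationship | Format-List Name,DomainNames,FreeBusyAccessEnabled,FreeBusyAccessLevel,TargetAutodiscoverEpr
[PS] C:\> Test-FederationTrust -UserIdentity kim.akers@office365itpros.com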

OAUTH Trusts
Some organizations are restricted by laws, or regulations which prohibit offloading authentication to an
external broker service, such as the Azure AD Authentication System. This introduces a problem because
organizations cannot create a direct trust with Office 365 using a shared or pre-defined service account.
Microsoft introduced a new feature called OAUTH, which is available from Exchange 2013 Cumulative Update 5
onward. This feature enables organizations to set up a direct trust relationship with one another without the
need of a shared account or credentials, thus keeping authentication local. The feature is based on the
industry standard OAUTH.

Like how Exchange Federation needs Organization Relationships to create a trust between both organizations,
OAUTH uses an Intra-Organization Connector (IOC) to explicitly define the trust. The difference is that in a
scenario based on Organization Relationships, Direct Authentication or DAUTH is used instead of OAUTH.
DAUTH is not a standard like OAUTH and relies on an external service, in Exchange's case the Azure AD
Authentication System, to handle authentication.
Just like an Organization Relationship, the IOC contains information about the domains and endpoints of the
remote organization. OAUTH relies on certificates to authenticate requests between both environments. The
on-premises authorization certificate is exported to Exchange Online as part of the OAUTH wizard in the
Hybrid Configuration Wizard. The ability to automatically configure OAUTH is only available if the on-
premises organization contains only Exchange 2013 (or later) servers. If older servers are used, you must
configure OAUTH manually to use it. If you have both OAUTH and traditional federation configured, the two
features work independently and are not used as a fallback in case either method fails. The order in which
Exchange will use OAUTH or the Azure AD Authentication System is listed below:
1. A user with a mailbox on an Exchange 2013 mailbox server tries to access Free/Busy calendar
information for a mailbox in Office 365.
2. The Exchange Availability Service checks if OAUTH is configured and whether it can find an Intra-
Organization Connector for the remote organization of the queried mailbox. The remote organization
is identified using the email address of the mailbox specified in the Free/Busy request.
3. If an IOC connector is found, Exchange will use OAUTH to authenticate the Free/Busy request. It is
important to understand that even when an IOC is misconfigured, no attempt is made to fall back to
using Organizational Relationships (DAUTH).
4. Only if no IOC connector is found, Exchange will verify if it can find an Organization Relationship for
the remote organization (Office 365). If it does, it will make a call to the Azure AD Authentication
System requesting an authentication token which it will send to Office 365 along with the Free/Busy
request. The remote organization, which also has a trust with the Azure AD Authentication System,
can validate the token and will process the request.
5. However, if no Intra-Organization Connector or Organization Relationship can be found, Exchange
falls back to using Availability Address Spaces to look up Free/Busy calendar information, if these are
configured.
As soon as OAUTH is configured, it becomes the preferred method for authenticating requests and Exchange
Online will only use OAUTH and not even attempt to use the Azure AD Authentication System. The on-
premises situation is a little different. For instance, a mailbox hosted on an Exchange 2010 server will never
attempt to use OAUTH because Exchange 2010 does not support OAUTH for federation. In a mixed
environment, this might cause various methods to be used interchangeably. Although implementing OAUTH
is optional, you must implement OAUTH if you want to leverage the following hybrid cross-premises features:
• In-place eDiscovery search and hold.
• In-place Archiving (automatically moving items from an on-premises mailbox to a cloud-based
archive).
• Messaging Records Management (MRM).
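To check whether OAUTH is in play and working, you can list the Intra-Organization Connectors and run the built-in connectivity test. A sketch, assuming the standard Exchange Online EWS endpoint and an example mailbox name:

[PS] C:\> Get-IntraOrganizationConnector | Format-List Name,TargetAddressDomains,DiscoveryEndpoint,Enabled
[PS] C:\> Test-OAuthConnectivity -Service EWS -TargetUri https://outlook.office365.com/ews/exchange.asmx `
    -Mailbox "Kim Akers" -Verbose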

Autodiscover
The Autodiscover process ensures that clients receive the necessary information to connect to their mailbox,
and any other resources that they use such as delegate mailboxes, and public folders. The process for a cloud
mailbox is like that for on-premises mailboxes. When an organization only uses Exchange Online, the
Autodiscover record must point to Office 365. This process uses a CNAME record which ultimately points to
autodiscover.outlook.com.
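You can verify the published record from any Windows machine; the domain below is an example:

[PS] C:\> Resolve-DnsName autodiscover.office365itpros.com -Type CNAME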

Because a hybrid deployment allows mailboxes to be on-premises as well as in Office 365, the Autodiscover
endpoint (for example, autodiscover.office365itpros.com) must point to the on-premises organization if the
information for on-premises mailboxes and mailboxes in Office 365 must be discoverable through the on-
premises Exchange servers.
Clients trying to connect to a mailbox in Office 365 will first connect to the on-premises Exchange servers
after which they are redirected to the Autodiscover endpoint in Office 365 if the targetAddress property is
configured. Figure 4-7 outlines the client-server communication flow for Autodiscover in a hybrid deployment:

Figure 4-7: Autodiscover in a hybrid deployment


Using the mailbox’s email address, the client goes through a number of steps to determine the Autodiscover
endpoint it needs to connect to. Depending on how Autodiscover is configured, the client might use an Active
Directory Service Connection Point (SCP), an SRV record, a hard-coded URL, or a URL pre-defined by the
administrator for the Autodiscover endpoint.
The client then sends an Autodiscover request to the discovered endpoint, asking for the connection
information for the mailbox it is trying to connect to. Using the email address that is specified in the
Autodiscover request, the on-premises organization will look up the recipient to check whether it can find a
mailbox. The lookup will find the on-premises object but show that the user object’s targetAddress attribute is
configured. Instead of returning connection information to the client, the Exchange server will respond with
the value from the targetAddress attribute. This attribute was stamped on the user object after the mailbox
was moved to Office 365 to show to the on-premises Exchange organization that the mailbox is now located
elsewhere. The targetAddress attribute is constructed using the service domain that is configured
automatically during the process of setting up directory synchronization. Unless you configure the service
domain manually, or your hybrid deployment dates from the Exchange 2010 SP1 timeframe, the service
domain will always be in the following format: tenantname.mail.onmicrosoft.com.
Because the client received a targetAddress instead of connection information, the Autodiscover process will
restart. This time the targetAddress is used to look up the Autodiscover endpoint. In this process, the new
Autodiscover endpoint will always be autodiscover.tenantname.mail.onmicrosoft.com which is a CNAME record
for autodiscover.outlook.com.
The client now sends a request to the Autodiscover endpoint in Office 365. This request, using SSL, will fail
and cause the client to attempt to connect to the same endpoint but without SSL (port 80). This connection
will succeed, but yield a redirect to autodiscover-s.outlook.com instead. That is the endpoint where ultimately
the Autodiscover request is sent to successfully.

Exchange Online will now look up the user’s mailbox with the targetAddress. Because the mailbox's
proxyAddresses attribute contains the same address, the connection information is returned to the client. As
part of running the Hybrid Configuration Wizard, the proxyAddresses attribute for each mailbox is populated
automatically with the value that will be used for the targetAddress. This step is explained in more detail later.
Using the connection information from the Autodiscover response, the client now tries to connect to the
endpoint in Office 365 and gets access to its mailbox.
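
If redirection does not happen as expected, it is worth checking the routing address stamped on the on-premises object that represents the moved mailbox. A sketch, run from the on-premises shell with an example identity:

[PS] C:\> Get-RemoteMailbox -Identity "Kim Akers" | Format-List RemoteRoutingAddress,EmailAddresses
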
The Autodiscover process is not only used to retrieve connecting information for the primary mailbox. If the
user has an archive mailbox or access to other mailboxes which might use automapping to show up in the
user’s Outlook client, additional Autodiscover processes are kicked off automatically to discover the
connection information for those additional resources. In the initial Autodiscover response, Exchange (or
Exchange Online) will send back the connection information to the primary mailbox along with the SMTP
addresses of the additional resources the user has access to. The example below shows the XML-format
information returned by Autodiscover about a shared mailbox to which the user has access:
<AlternativeMailbox>
<Type>Delegate</Type>
<DisplayName>Mark Spencer</DisplayName>
<SmtpAddress>mark.spencer@contoso.com</SmtpAddress>
<OwnerSmtpAddress>mark.spencer@contoso.com</OwnerSmtpAddress>
</AlternativeMailbox>

The type node in the XML output tells Outlook what kind of object the resource represents. For instance,
when an online archive is listed, the value is "Archive" instead of "Delegate".

Understanding the Hybrid Architecture

Figure 4-8: Hybrid Architecture


Now that we understand the fundamentals of a hybrid deployment, let’s step back and review the architecture
that underpins hybrid connections. Figure 4-8 shows how the various components of a Hybrid deployment
connect to each other. Each connection represents a different workload or feature and links the two endpoints
responsible for that workload. For example, the Send Connectors used to send messages from the on-
premises organization to Exchange Online only exist on-premises. In the image, the server role-specific
workloads are shown as separate servers. However, many deployments use multi-role Exchange servers which
run both the Client Access and Mailbox Server roles, which means that all Hybrid workloads exist on the same
Exchange server.
Some confusion exists around the use of the term “Hybrid Server.” This is not a separate or special Exchange
server role. Instead, the term is used to refer to an Exchange server or a set of Exchange servers assigned the
task of handling Hybrid workloads. The assignment of these workloads to servers is done as part of the Hybrid
Configuration Wizard.
In most Exchange deployments, there is no need to install more servers to perform Hybrid-related tasks. The
existing servers can easily accommodate the workload. This is also the reason why there is no specific design
or sizing information for Hybrid servers. The general recommendation is to treat these servers just like any
other Exchange server in the organization. Of course, some common sense is needed too. If your Hybrid
deployment adds more mailboxes into your environment (on-premises or in Exchange Online), the added
workload should be considered. Hybrid services can only run on a server that has Exchange Server 2010 SP3+
or Exchange Server 2013 (or later) installed. See this page for more information about the requirements for
hybrid deployments.

Note. Despite extensive information with regards to sizing an on-premises Exchange deployment, there is
little information about sizing for a hybrid deployment. In part this is because there is little difference in
the requirements for a purely on-premises vs. hybrid deployment. Unfortunately, the same is not true for
Edge Transport servers. It’s not uncommon for an organization to deploy Edge Transport servers for a
hybrid configuration to meet specific security requirements. To properly size Edge Transport servers, the
following two items are most important to consider: storage and CPU. To calculate how much storage you
need, you should follow the same guidance as for internal Transport Services: You must calculate the
overall transport storage requirements and then divide that across the number of Edge Transport servers
you will deploy. To calculate how much processing power you need, you should consider deploying one
Edge Transport server CPU core per five to eight Mailbox Server CPU cores. Because the latter is not an
exact science, deploying Edge Transport servers virtually will provide you with a bit more flexibility to scale
up (or down) depending on the observed load. More information on the topic of Edge Transport servers in
a hybrid deployment can be found in this article.

Limitations of a Hybrid Deployment


Even though the value proposition of a Hybrid connection is attractive, some limitations exist that you need to
be aware of before deciding to deploy a Hybrid configuration:
• Exchanging Free/Busy data between two Hybrid environments (see this page for full technical
details) or between a single hybrid and a regular on-premises organization is, by default, not configured. Normally,
when Exchange executes a Free/Busy request for a remote organization, it expects to receive the
results of the query from the remote organization so that it can return the information to the
requesting user. When information is requested for a user whose mailbox is hosted in a remote Office
365 tenant, the initiating Exchange server will receive the information from the remote Office 365
organization instead. Because it does not know how to handle that request and because it would
normally not have an Organization Relationship or Intra Organization Connector, the request fails.
• The release cadence of new features in Office 365 is rapid. To ensure full compatibility with the
latest code, Microsoft requires Hybrid customers to run either the latest available Cumulative Update
or the one immediately before it on their on-premises servers. For example, if Exchange 2016 CU6 is
the current version, the minimum version required for Hybrid customers is Exchange 2016 CU5.
• Permissions, especially cross-premises, need careful handling. Not all cross-premises permissions are
fully supported. While Full Access and Send-On-Behalf-Of are supported, automapping or Send-As
permissions require an (unsupported) workaround. Other permissions, such as Delegate Access, work
in most cases but are unsupported. The safest and most functional approach is to have all mailboxes
that share delegate permissions on the same platform (on-premises or cloud) to guarantee the best
end user experience. In other words, if you move someone’s mailbox, make sure that you also move
any other user who has delegate access to that mailbox. This is the best way to avoid permission
hiccups and ensure that users can continue to work as before the move. Cross-premises permissions
also do not work for all clients and can be influenced by how authentication is performed. It is
recommended that you use Modern Authentication and install the latest available client version (and
updates).
• Multi-forest scenarios need extra planning and effort. Although Exchange 2013 Service Pack 1 (CU4)
and later support multiple on-premises Exchange organizations in a Hybrid deployment with a single
Office 365 tenant, these scenarios are technically very challenging because of the conflicting
requirements that might arise when synchronizing objects between the different forests. In addition,
cross-premises permissions can behave differently compared to a single on-premises forest. This is
mainly because of which accounts are used to authenticate, what forest permissions are assigned in,
and how or what attributes are synchronized between various forests and Azure AD.

Real World. The inability to use all available permissions in a hybrid configuration in a predictable way can
be a real pain when moving mailboxes to the cloud. It is best practice to move mailboxes that share
delegate permissions, like Send-As, Send-On-Behalf-Of, or specific folder and calendar permissions,
to Exchange Online together. Larger organizations often find that, while they intend to move only a limited
set of mailboxes, they end up moving a much higher number (at the same time) because of the need to
keep delegates on the same platform to ensure that permissions work. Although Microsoft extended
support for a variety of cross-premises permissions, some delegate permission scenarios are still not
supported today. It’s still best to move interconnected mailboxes together.

The Exchange Hybrid Configuration Wizard


When Exchange first supported hybrid connections, an administrator had to go through a complicated
document describing no less than 50 steps that had to be flawlessly executed to create a Hybrid Connection.
The complexity of the steps and the risk associated with having to perform everything manually, discouraged
many administrators from trying to configure a hybrid connection. Microsoft realized the problem and
simplified matters by releasing the first version of the Hybrid Configuration Wizard (HCW) in Exchange 2010
SP2. The HCW handled most of the manual configuration steps needed to create a Hybrid environment by:
• Validating that both the on-premises environment and the Office 365 tenant meet the prerequisites for a
Hybrid deployment.
• Automatically provisioning a trust with the Azure AD Authentication System (previously known as
MFG).
• Creating the organization relationships in each environment.
• Configuring secure mail routing.
• Enabling cross-premises Free/Busy, message tracking and MailTips.

Note. Although it is technically possible to execute all the steps performed by the Hybrid Configuration
Wizard manually, running the wizard is now the only method supported by Microsoft to set up a Hybrid
environment.
The Hybrid Configuration Wizard greatly simplified connectivity between an on-premises Exchange
organization and an Office 365 tenant, but there were still manual steps an administrator had to go through
before the configuration was complete. For instance, the wizard did not support Edge Transport servers,
forcing the administrator to manually add Edge Transport servers to the hybrid configuration after running
the Hybrid Configuration Wizard.
Another problem a lot of organizations faced is the way secure mail flow worked between both
environments. The Hybrid Configuration Wizard would create a Receive Connector in the on-premises
organization and scope it down so that it would only accept connections from a handful of IP addresses that
belonged to the Forefront Online Protection system in Office 365 and were used to send mail from Office 365
to the on-premises organization. Because Microsoft would sometimes add, remove, or change IP addresses,
the administrator needed to regularly re-run the Hybrid Configuration Wizard to ensure the list of IP
addresses on the Receive Connectors was kept up to date. Additionally, every time an administrator ran the
Hybrid Configuration Wizard, it would re-configure everything that was already configured before, possibly
overwriting manual configuration changes an administrator might have made.
One of the side effects of shipping the Hybrid Configuration Wizard with Exchange is that the wizard code
depends on the version of Exchange running on the server. For example, a customer running the Hybrid
Configuration Wizard with Exchange 2013 CU4 does not have the same experience as a customer running the
wizard on Exchange 2013 CU5 or later. This created the problem that customers do not always run the latest
version of the wizard. With components in Office 365 moving forward at light speed, this can cause issues. For
instance, in 2014, Microsoft introduced several changes in Office 365 which broke hybrid functionality, or the
ability to run the Hybrid Configuration Wizard. In the latter scenario, customers must install a hotfix, if one is
available. If not, they must wait until the next Cumulative Update which has a fix for the issue. In the worst
case, the next update could be a quarter away.
To mitigate this problem, Microsoft issued a new Hybrid Configuration Wizard. Instead of shipping the wizard
as part of the on-premises server, customers now download the latest code from a Microsoft server. This
approach allows Microsoft to have full control over the wizard software, ensures that administrators always
use the latest version, and it decouples the experience from whatever Exchange version you are running.
By default, the Hybrid Configuration Wizard automatically uploads log files to Microsoft. The development
team analyzes these logs to help detect the emergence of problems. If Microsoft detects a problem or finds a
bug, they can update the wizard almost instantaneously. The uploaded log files give Microsoft a large amount
of data that they might otherwise have to gather from customers (for instance, through support cases) to
understand and then fix the problem. The agility Microsoft gains from this automated approach reduces the
time between the report of an error (or its automatic detection) and the generation of a fix. That time can be
as short as a few days, which is a lot better than having to wait for Microsoft support to release a hotfix or the
next Cumulative Update! In general, Microsoft publishes an updated version of the Hybrid Configuration
Wizard on a weekly basis.
If you do not want to automatically upload hybrid configuration log files to Microsoft, create a new DWORD
value called DisableUploadLogs under the registry key shown below on the server before running the Hybrid
Configuration Wizard. Depending on the version of Exchange you use, the registry key is in one of the following locations. Substitute
[XX] with 15 for Exchange 2016 and Exchange 2013, or 14 for Exchange 2010:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ExchangeServer\v[XX]\Update-HybridConfiguration]

Set the value of the key to 1 to block log uploads.
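
A minimal sketch of doing this with PowerShell on an Exchange 2013/2016 server (v15); the key is created first in case it does not already exist:

[PS] C:\> New-Item -Path "HKLM:\SOFTWARE\Microsoft\ExchangeServer\v15\Update-HybridConfiguration" -Force
[PS] C:\> New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\ExchangeServer\v15\Update-HybridConfiguration" `
    -Name DisableUploadLogs -PropertyType DWord -Value 1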


The newest version of the Hybrid Configuration Wizard also introduces several new features and improves the
overall experience for the administrator:
• The wizard is adaptive. Components of a hybrid configuration that were already configured would be
recognized and therefore not be reconfigured.
• Secure mail flow between Exchange Online and the on-premises organization no longer relies on the
list of IP Addresses on the Receive Connector. Hybrid deployments can now solely rely on the use of
certificates for determining if TLS encryption should be used or not.
• Built-in support for Edge Transport servers removes the need to manually add and configure them
after the wizard executes.
• Support for added mail routing scenarios removed some limitations that existed previously. For
example, prior to Exchange 2013 an organization could not point its MX records to Office 365 if the
hybrid deployment was configured for centralized mail flow.
• Mailbox moves between Office 365 and the on-premises environment now use an improved mailbox
move wizard. This allows for more flexible scheduling and a unified experience for the administrator.
• The steps executed by the Configuration Wizard are recorded with a lot more detail, giving a greatly
enhanced log file for troubleshooting purposes.
• The wizard supports the use of Multi-Factor Authentication during run-time. This allows an
administrator, whose account is enabled for MFA, to run the wizard instead of having to use either an
app password or a (temporary) service account without MFA.
Perhaps not related to the new wizard itself, but in Exchange 2013 CU5 the wizard already received several
under-the-hood improvements such as support for OAUTH Federation, and the automatic configuration of
the MRS Proxy for selected Client Access Servers.
Now that we have a broader understanding of what the Hybrid Configuration Wizard entails, let's have a
closer look at how it works, and how changes are made to the environment.

Preparing for HCW


Before you can run the Hybrid Configuration Wizard, you must enable the on-premises organization for a
hybrid deployment. You can do this in one of two ways: either by running the New-HybridConfiguration
cmdlet from the Exchange Management Shell or clicking Enable in the EAC under the Hybrid section.

Note: Throughout the rest of this chapter, we refer to the cmdlets used by the Hybrid Configuration
Wizard to illustrate how the wizard works. It is one of the few scenarios where using the GUI is an easier
and faster way to configure something.
The Hybrid Configuration Wizard creates a new Hybrid Configuration object in Active Directory in the
Exchange container of the configuration partition. The object holds details of the parameters selected during
the execution of the Hybrid Configuration wizard.
CN=Hybrid Configuration,CN=[OrgName],CN=Microsoft
Exchange,CN=Services,CN=Configuration,DC=Domain,DC=TLD

The Get-HybridConfiguration cmdlet retrieves and displays the values stored by the Hybrid Configuration
Wizard:
[PS] C:\Windows\system32>Get-HybridConfiguration

ClientAccessServers       : {}
EdgeTransportServers      : {}
ReceivingTransportServers : {E15-01, E15-04}
SendingTransportServers   : {E15-04, E15-02}
OnPremisesSmartHost       : smtp.exchangelab.be
Domains                   : {EXCHANGELAB.BE}
Features                  : {FreeBusy, MoveMailbox, Mailtips, MessageTracking, OwaRedirection, OnlineArchive, SecureMail, Photos}
ExternalIPAddresses       : {}
TlsCertificateName        : <I>CN=DigiCert Secure Server CA, O=DigiCert Inc, C=US<S>CN=outlook.exchangelab.be, OU=IT, O=VH Consulting & Training, L=Vichte, S=Vlaams-Brabant, C=BE
ServiceInstance           : 0
Name                      : Hybrid Configuration
DistinguishedName         : CN=Hybrid Configuration,CN=HybridConfiguration,CN=ExchangeLab,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=EXCHANGELAB,DC=BE

• EdgeTransportServers: a list of Edge Transport servers included in the Hybrid Configuration.


• ReceivingTransportServers: includes the Client Access Servers or Mailbox Servers chosen to receive
inbound email from the Office 365 tenant. The servers are not necessarily the same servers configured
to accept messages from the Internet.
• SendingTransportServers: the mailbox servers listed as “SourceServers” on the Send Connector
intended for Hybrid Mail Flow.
• OnPremisesSmartHost: the endpoint to which Office 365 will connect when delivering emails to the
on-premises organization. This endpoint should connect into the servers listed in the
ReceivingTransportServers or EdgeTransportServers settings in the hybrid configuration as only the
necessary Receive Connectors are created on those servers.
• Domains: a list of the accepted domains included in the hybrid configuration.
• Features: this list describes the hybrid co-existence features enabled for the hybrid deployment. If
you use the Hybrid Configuration Wizard, you enable all features by default. If you use the Set-
HybridConfiguration cmdlet, you can disable specific features if needed. However, I have yet to meet
an environment which had this requirement.
• TlsCertificateName holds the certificate information used to configure the Inbound Connectors in
the Office 365 tenant to ensure that TLS encryption is enforced on hybrid mail flow.
After walking through the steps in the Hybrid Configuration Wizard, the Update-HybridConfiguration cmdlet is
automatically executed to invoke the Hybrid Configuration Engine. The engine is the component responsible
for making configuration changes in both environments:
1. The engine reads the values held in the Hybrid Configuration object. The set of values is also referred
to as the Desired state.
2. The engine then discovers the current configuration (Current state) from both the on-premises
organization and Exchange Online and compares the results with the values from the Hybrid
Configuration object.
3. Based on the differences between the desired state and the current state, the Hybrid Configuration
Engine figures out the changes to apply (Delta-config) and then continues to make those changes to
the on-premises organization and Exchange Online.
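Normally the wizard invokes this step for you, but the call it makes looks roughly like the following sketch; the two credential prompts are for an on-premises organization administrator and an Office 365 tenant administrator:

[PS] C:\> $OnPremCred = Get-Credential    # on-premises organization administrator
[PS] C:\> $TenantCred = Get-Credential    # Office 365 tenant administrator
[PS] C:\> Update-HybridConfiguration -OnPremisesCredentials $OnPremCred -TenantCredentials $TenantCred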

Considerations for a Hybrid Exchange Deployment


Planning a Hybrid Exchange deployment is key. Although the process of configuring hybrid connectivity is
largely automated thanks to the Hybrid Configuration Wizard, there are several factors to keep in mind.
Besides the different architecture components, there are several other areas which deserve some extra
thought prior to making the jump into the Hybrid deployment.

Exchange Versions and Hybrid Connectivity


There are many elements that influence the decision to upgrade to a newer version of Exchange for your
hybrid deployment. But, before diving into why you would want to upgrade to a newer version of Exchange,
let’s first take a closer look at the upgrade process itself:
• If you have not yet created a hybrid connection, upgrading Exchange is no different from a pure on-
premises upgrade. Given that an in-place upgrade of the Exchange server(s) is not possible, you are
essentially migrating from one version to another. For the purposes of a hybrid configuration, you can, but
are not required to, complete a full migration. Mailboxes can still be on Exchange 2007 and be moved
from there through Exchange 2010 or Exchange 2013 to Office 365. If you have an existing Exchange
2007 environment, it suffices to introduce new Exchange 2010, or 2013 servers and switch over the
namespace(s) from Exchange 2007 to the newer version of Exchange. Note that switching over
namespaces can be somewhat disruptive as clients then connect to newest installed version of
Exchange prior to being proxied or redirected to Exchange 2007. The same is true for environments
that have an existing Exchange 2010 environment and want to upgrade to Exchange 2013 or 2016.
• The upgrade process while already in a hybrid connection, is largely the same. However, once you
have introduced the newer Exchange version, and you have switched over the namespace, you must
re-run the Hybrid Configuration Wizard and update the configuration to reflect the changes in the
environment. Just like before, you do not need to move mailboxes to the newest version of Exchange
to move them to Office 365.
Because of the added work involved with upgrading the on-premises environment to Exchange 2013 or 2016
in preparation for a hybrid connection, many organizations seek alternatives that are less work
intensive or intrusive. One approach which has popped up now and then is to introduce one or more
Exchange servers, running a newer version of Exchange, and assign them a new, "hybrid", namespace
which is different from the namespaces already in use in the environment. That hybrid namespace is then
configured to point to the newer Exchange servers, along with the existing Autodiscover service and EWS. The
other pre-existing namespaces continue to point to the legacy Exchange version in the environment (which
usually is Exchange 2010). For example, an organization might choose to introduce a new namespace called
hybrid.office365itpros.com, while the existing namespaces might be called mail.office365itpros.com and
autodiscover.office365itpros.com.
The idea behind this approach is that you 'trick' Autodiscover into handing out existing URLs for mailboxes that
are hosted on the legacy Exchange version, while at the same time you can configure mailbox moves to Office
365 to use the new, hybrid, endpoint, provided, of course, that you configure the URLs on all servers accordingly.
Although this approach can work, there are some severe drawbacks linked to it:
• It is unsupported. Microsoft does not test this scenario and cannot guarantee that it will continue to
work in the future.
• If you later decide to upgrade, you must go through the namespace switchover anyway. I have seen many
organizations make that decision after a few months. Contrary to what you may think now, do not
bank on the fact that you won’t change your mind!
• The approach adds a lot of complexity to your configuration. Because traffic flows can be vastly
different, depending on the client, and workload, troubleshooting can become very cumbersome and
difficult.
If you are 100% conscious of how to deal with such a scenario, and you are confident that you have the right
skill set and operational maturity, you might rule that this approach is worth the risk. While I understand the
reasoning for that decision, I have rarely seen a customer being able to pull this off without running into
issues at some point. Forewarned is forearmed!
Assuming you already have an Exchange 2010 hybrid deployment, ask yourself this: What is the purpose of
your hybrid configuration? Are you using your hybrid connection as a migration vehicle, or do you plan on
(long-term) coexistence? You should also not ignore the fact that Exchange 2010 is in extended support and is
rapidly approaching the end of its product lifecycle.
From a supportability and product lifecycle point of view, it makes sense to upgrade to Exchange 2013/2016.
This is even more so if the on-premises Exchange server(s) do not host mailboxes anymore. This would be the
case when all your users are moved to Office 365, and you have switched your Autodiscover record to point
directly to Exchange Online. Upgrading to Exchange 2013/2016 then becomes a trivial task as there is no end-
user impact, and no namespaces left to switch over.
From a hybrid connectivity point-of-view, you are not getting many new features by upgrading to Exchange
2013/2016. You are essentially just substituting your Exchange 2010 servers with a newer version. Of course,
the same supportability and product lifecycle arguments as before are valid in this scenario as well. The
question is whether this should really drive your decision to upgrade. Unless some functionality breaks, or
Microsoft changes its support statement and moves an Exchange 2010 hybrid deployment out of the scope of
supported topologies, there is no rush to upgrade.
If you are not yet in a hybrid state, but you are looking to configure a hybrid connection, there are a few
things to consider:
• If you have Exchange 2010 deployed on-premises, and you are planning to use a hybrid connection
solely for migration purposes, continue using Exchange 2010. After the migration, and because you
must keep an Exchange server around for management purposes, you should consider upgrading the
remaining Exchange server to either Exchange 2013 or 2016. The normal reasons for not upgrading
from Exchange 2010 to Exchange 2013 or 2016 are complexity and cost. If you want to upgrade to
Exchange 2013/2016 prior to migrating to Office 365, you essentially face a semi-full on-premises
migration because you must switch over namespaces to the most recent version of Exchange server.
This action typically introduces short periods of service interruption for the end user.
• If you plan on long-term coexistence, using Exchange 2013, or 2016 has its benefits. As both versions
are still fully supported, you can keep workloads on-premises without having to worry about the
supportability of the entire solution (also non-hybrid components).
In the end, if you need a long-term coexistence, in my opinion, it is better to go with a newer version of
Exchange for your hybrid deployment. If you only want to use hybrid for migration purposes, even if it is for
several weeks, it is acceptable to continue using Exchange 2010 – provided that your intention is to move all
mailboxes to Office 365 and keep the on-premises server(s) for management purposes only. In the latter case,
I recommend switching Autodiscover and your MX records to Exchange Online shortly after the migration.
That enables you to easily upgrade your remaining on-premises management server(s) to Exchange
2013/2016 with little to no hassle.
If you have made the decision to upgrade to Exchange 2016 for your hybrid deployment, I suggest that you
first perform the on-premises migration before setting up hybrid. This might save you some work having to
re-run the HCW afterwards, although running the wizard is typically not disruptive.

Autodiscover and Namespaces


There should be no reason to create a new namespace for a hybrid Exchange configuration. Provided you
already have an on-premises Exchange 2010, Exchange 2013, or Exchange 2016 environment that is
configured correctly, the existing namespace(s) will automatically be used by the Hybrid Configuration Wizard.
For instance, the Hybrid Configuration Wizard uses the value of the ExternalUrl parameter of the Exchange
Web Services Virtual Directory on the Client Access Servers that are added to the hybrid configuration.
If multiple domains are in the Hybrid Configuration, the Wizard allows you to select an Autodiscover Domain.
This is the domain the wizard uses when sending Autodiscover requests to the on-premises Exchange servers.
For instance, the wizard uses the Autodiscover process to figure out the endpoint it should configure in Office
365 for the on-premises Organizational Relationship. By default, the wizard arbitrarily picks one of the
domains from the configuration. However, you might not want that specific domain to be used for
Autodiscover; especially if it isn't your primary email domain and/or you haven't included the namespace on
the SSL certificate that is installed on the on-premises Exchange servers. After all, for certain hybrid features to
work, the SSL certificate installed on the on-premises servers must be issued by a third-party Certificate
Authority and it must be valid!
The case of why you might want to pick a different Autodiscover domain is best illustrated with an example.
Consider the following: you currently have three domains: contoso.com, office365itpros.com, and
fabrikam.com. You include all these domains in the Hybrid Configuration Wizard and the wizard picked
contoso.com to be the Autodiscover domain. Because autodiscover.contoso.com is not included in the on-
premises SSL certificate, you select fabrikam.com to be the Autodiscover domain instead. By doing so, you
instruct the wizard, and therefore also Exchange Online, not to send Autodiscover requests to
autodiscover.contoso.com but to use autodiscover.fabrikam.com instead. In this scenario, if you had retained
the domain picked by the Wizard, all Autodiscover requests performed by the Wizard or by Exchange Online
would fail, resulting in a variety of hybrid features, like Free/Busy, not working properly! Of course, if the
wizard selects the right domain, there is no need to change it.

Note: If you use multiple primary email domains in the on-premises organization, it is very likely that you
have correctly configured Autodiscover for each of those domains. However, many organizations own
several domains, some of which they never use to send emails with. As such, they might not have
configured Autodiscover for those domains. Also note that specifying a specific Autodiscover domain in
the Wizard does not change the need for Autodiscover to be configured correctly from an on-premises
perspective. You should always make sure to properly configure Autodiscover for all email domains that
are actively used (stamped as the user's primary email domain).
Although you rarely need to configure the settings for the Hybrid Configuration Wizard without using its GUI,
you can define the Autodiscover domain in PowerShell using the Set-HybridConfiguration cmdlet:
[PS] C:\> Set-HybridConfiguration -Domains "contoso.com","office365itpros.com","autod:fabrikam.com"

Third-party Applications and Devices


If your organization uses third-party applications that integrate with Exchange, it is worth investigating if
those applications support a hybrid configuration. Most incompatibility issues arise from specific permission
requirements which may not be supported in a hybrid environment. Sometimes applications are compatible
with Office 365 but cannot operate in a hybrid deployment because of the need to operate across on-
premises Exchange, and Exchange Online at the same time.
Another problem lots of organizations are faced with is that they do not exactly know what applications
interface with Exchange. A good start is to ask application owners if they know whether the application
interacts with Exchange. Unfortunately, the answer very often is that they don't know.
Mail flow is usually not a concern, at least not from a compatibility point-of-view. Often, devices like multi-
functional printers and scanners use SMTP to send messages through the Exchange Servers. If you keep those
servers available, there will be no issue. But, if you are planning to move entirely to Office 365 it would be
handy to know which devices and applications still rely on Exchange. Identifying those applications or devices
is not a trivial task, and there is no easy way to quickly identify them either. Your best bet is to leverage
Exchange's logging capabilities and keep track of user names and IP addresses that show up in the various
logs that Exchange provides.
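For example, a hedged sketch of this approach is shown below. It summarizes the remote IP addresses that
submit mail through the front-end SMTP receive connectors by parsing the protocol logs. The log path shown
is the Exchange 2016 default, and protocol logging must be enabled on the connectors (for example, with
Set-ReceiveConnector -ProtocolLoggingLevel Verbose):
# A minimal sketch, not a complete inventory tool: count SMTP submissions per
# remote IP address based on the front-end SMTP receive protocol logs.
$LogPath = "C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\FrontEnd\ProtocolLog\SmtpReceive"
$Header  = 'date-time','connector-id','session-id','sequence-number','local-endpoint','remote-endpoint','event','data','context'

Get-ChildItem $LogPath -Filter *.log | ForEach-Object {
    # Import each log as CSV and drop the comment header lines that start with '#'
    Import-Csv -Path $_.FullName -Header $Header | Where-Object { $_.'date-time' -notlike '#*' }
} |
    Group-Object { ($_.'remote-endpoint' -split ':')[0] } |
    Sort-Object Count -Descending |
    Select-Object Count, Name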

Rights Management
Although Office 365 supports rights management through Azure Information Protection, ensuring coexistence
between the on-premises organization and Office 365 needs some extra planning. This is especially true if the
on-premises organization uses RMS. Without extra configuration, messages previously encrypted by the on-
premises organization would potentially become inaccessible in Office 365. For example, Outlook Web App
would not be able to display those messages as it has no way to decrypt the contents. However, if you
configure Azure Rights Management Services and import the Trusted Publishing Domain (TPD) from the on-
premises organization, decryption is possible. Of course, the configuration must be in place before you move
any mailboxes into Office 365. See Chapter 20 (main book) for more information about using Microsoft
Information Protection in Office 365.
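As an illustration, here is a hedged sketch of importing a TPD with the AIPService PowerShell module. The
file path and password are placeholders, and you should check the current module documentation because
cmdlet names have changed over time (older tenants used the AADRM module):
# Connect to the Azure Information Protection service for the tenant
Connect-AipService

# The password that protected the TPD when it was exported from the on-premises AD RMS cluster
$TpdPassword = Read-Host "TPD password" -AsSecureString

# Import the on-premises Trusted Publishing Domain so that Exchange Online can decrypt
# content previously protected by the on-premises AD RMS cluster
Import-AipServiceTpd -TpdFile "C:\Temp\ContosoTPD.xml" -ProtectionPassword $TpdPassword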

Organization Configuration
The HCW includes the ability to transfer organization configuration from on-premises to Exchange Online. In
effect, this means that the HCW copies and applies settings to ensure that the same configuration applies for
both sides of the hybrid organization. The settings are:
• Mailbox retention policies and retention tags.
• OWA mailbox policy.
• ActiveSync mailbox policy.
• Mobile device mailbox policy.
Transfer is a one-time, one-way event that does not overwrite existing settings within Exchange Online. After
the transfer completes, you still have the task of keeping settings on both sides synchronized, if that’s what
you decide to do.
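Because the transfer is a one-time event, it can be useful to check periodically for drift between the two sides.
A hedged sketch is shown below; it assumes two prefixed remote PowerShell sessions as described in the
Using PowerShell in a Hybrid Deployment section later in this chapter, with Exchange Online cmdlets
prefixed "Online":
# Compare retention tag names between on-premises Exchange and Exchange Online
$OnPremTags = Get-RetentionPolicyTag | Select-Object -ExpandProperty Name
$CloudTags  = Get-OnlineRetentionPolicyTag | Select-Object -ExpandProperty Name

# Tags that exist on-premises but are missing from Exchange Online
Compare-Object -ReferenceObject $OnPremTags -DifferenceObject $CloudTags |
    Where-Object { $_.SideIndicator -eq '<=' } |
    Select-Object -ExpandProperty InputObject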

Unified Messaging
UM-enabled mailboxes must also be treated with care. If you have not configured Unified Messaging in Office
365, you cannot move a UM-enabled mailbox from the on-premises organization, unless you disable the
functionality first. If you want to keep the UM functionality, you must first set up Unified Messaging and
coexistence with the on-premises organization before trying to move UM-enabled mailboxes; mailboxes that
are not UM-enabled can be moved just fine. More information on this topic is in Chapter 5.
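A simple way to find the mailboxes that need attention before a move is to list the UM-enabled mailboxes
from the on-premises Exchange Management Shell, for example:
[PS] C:\> Get-UMMailbox -ResultSize Unlimited | Select-Object DisplayName, UMMailboxPolicy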

Mobile Devices
Moving mailboxes from an on-premises organization to Office 365 is normally transparent to end-users.
However, this is not always the case for mobile devices. Most mobile devices use the Exchange ActiveSync
protocol to communicate with Office 365. Although the protocol is well documented and stable, every mobile
device vendor that licenses ActiveSync is free to choose what features they implement, and how. As a result,
the experience of moving mailboxes might vary from device to device, and sometimes mobile devices do not
reconnect to the mailbox after it has been moved to Office 365. Even though the workaround is simply to
recreate the account settings on the device, the organizational impact can be substantial, especially when you
must cover several hundred if not thousands of devices. Since Exchange 2013 CU8 and Exchange 2010 SP3
RU9, this limitation is addressed by changing how the on-premises Exchange server responds to an ActiveSync
connection after a mailbox has been moved to Office 365. Instead of sending a mailbox not found status to
the device, Exchange returns an HTTP 451 redirect. This redirect holds the endpoint information in Office 365
and is generated automatically based on the value of the TargetOwaUrl property of the Organization
Relationship with Exchange Online (you can check this value with the command shown after the list below).
Once the device successfully connects to Exchange Online, the profile on the device is normally updated
automatically so that all future connections flow to Exchange Online. For the redirect to work though, the
following must be true:
• The device itself must be able to handle the incoming redirect. Not all devices support this ability. It is
paramount to test this functionality so that you can set your user's expectations accordingly.
• The username that is used to connect to Exchange must be based on the User Principal Name (UPN)
instead of the traditional domain\username format.
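To verify the value that drives the redirect, check the organization relationship from the on-premises
Exchange Management Shell:
[PS] C:\> Get-OrganizationRelationship | Format-List Name, TargetOwaUrl, Enabled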

Real World: The mobile device redirect only happens if the source mailbox (which is moved to Office 365)
resides on Exchange 2010, 2013, or 2016. If the mailbox is still hosted on Exchange 2007, and you are
moving through an Exchange 2010, or 2013 server, the redirect is never generated, and you must manually
update, or recreate the profile on the device.

Client Versions
Running a hybrid deployment creates several implications in terms of client compatibility. One example is the
Outlook version that is needed to support cross-premises permissions, or the ability to display Office 365
groups in Outlook. Just like the on-premises Exchange servers must meet specific version requirements, the
same is true for client operating systems and applications. Despite the support for older versions, the best
end-user experience is obtained by always installing the latest version of a given client. This is true not only
from a functionality perspective, but also because updates include bug fixes. Some organizations have difficulties
keeping up with the release cadence of updates. Let's take Outlook as an example: unless you deploy
Microsoft's Click-to-Run version to update Outlook on an ongoing basis, updates must be manually
distributed to, and installed on every client computer that uses Outlook. Depending on the size of your
organization, this might be quite an undertaking and will require some additional planning.

Network Connectivity
Running a hybrid deployment changes the demand on network connections. Every mailbox that moves to
Office 365 will no longer connect to the on-premises Exchange servers, but instead connect to their mailbox
over the Internet. For many organizations, this means a substantial increase in bandwidth utilization for their
Internet connection. Microsoft has a set of articles and recommendations to help organizations plan
appropriate networks for Office 365 deployments.
In terms of Exchange Online, it’s important to understand the traffic characteristics for email activity. For
instance, the number and average size of messages sent and received each day help to estimate what the
added load will be. Next to regular mail flow, mailbox moves and offline cache synchronization (.OST files)
play an important role in understanding the network traffic. While mail flow typically consumes a stable
amount of bandwidth, the latter two scenarios cause bursts (spikes) of network traffic and can potentially
congest the network unless network traffic rules exist to prevent that from happening. Another
change in behavior that is easily underestimated is the effect of mailing to a (large) group of people.
Consider the following example: a user sends a message to ten colleagues with a 10 MB attachment. In a
traditional on-premises deployment, that message would be sent to Exchange which then distributes the
message to the mailboxes (if they are all located on the same server) or sends a single copy of the message to
another Exchange server which will then deliver the message to the mailboxes held there, and so on. In Office
365, the message is sent to Office 365 across the Internet. Luckily Exchange will only send a single copy of a
message to Exchange Online, even if there are multiple recipients addressed in the message. Exchange Online
then delivers the message to mailboxes in Office 365. If one of the recipients is still an on-premises Exchange
user, a copy of the message will go to the on-premises Exchange server. Next, Outlook clients will download
the message from their mailbox. If the mailbox is on-premises, the message is downloaded from a local
server. However, if the mailbox is hosted in Office 365, the message will be downloaded through the Internet,
increasing the load on the Internet connection. Potentially this means that a single message might be
downloaded many times through the same Internet connection. For smaller messages that might not be a
problem, but it might be for larger messages. If you know that the maximum attachment limit in Office 365 is
currently 150 MB, it is obvious where the issue might lie. All-in-all, there will be a significant shift in how the
network is utilized. Typically, internal network connections are low-latency, high bandwidth while Internet
connections usually have a higher latency, lower bandwidth but above all a higher cost to consider.

Real World: Bandwidth is not the only measure; connection latency is equally important as it relates 1:1 to
the end-user experience in clients like Outlook. Continuously increasing bandwidth isn't always the
solution to connectivity-related issues. Although it is possible to implement a dedicated network
connection to Office 365, such as Microsoft’s Azure ExpressRoute for Office 365 service, this option is not
recommended nor actively promoted by Microsoft.

Exchange Online Protection


If you are already using Exchange Online Protection without directory synchronization, it is likely that you
have previously set up connectors to and from Office 365, or perhaps you configured some domains to be
relay domains. Running the Hybrid Configuration Wizard might override or interfere with the existing
configuration and thus potentially halt or alter mail flow. If you find yourself in such a situation, it is wise to
take a backup of the existing connector configuration first. That way, you can track changes made by the
HCW and, if needed, manually override the configuration afterwards. There are various ways to capture
configuration data through PowerShell, but the easiest way to dump all the information into a text file is the
following:
[PS] C:\> Get-ReceiveConnector | Format-List * | Out-File ReceiveConnectorConfig.txt
[PS] C:\> Get-SendConnector | Format-List * | Out-File SendConnectorConfig.txt

Real World: EOP is not the only scenario that might be affected by running the Hybrid Configuration
Wizard. Basically, every deployment where an external (third-party) mail filtering or mail handling solution
is used must carefully consider the implications of running the wizard and how it affects their specific
setup.

Preparing for a Hybrid Exchange Configuration


Before you can configure a Hybrid Exchange connection, the on-premises environment must meet specific
requirements. If these requirements are not met, you might not be able to complete the Hybrid Configuration
Wizard or certain features will not work correctly.
• Certificates are critical in a hybrid deployment. First, there's the SSL certificate which is used at the IIS
layer in Exchange. This certificate ensures that Office 365 can create a secure connection to the on-
premises environment to execute a variety of tasks including Free/Busy lookups and mailbox moves.
The certificate must be issued by a trusted public Certification Authority and include all the
hostnames used to connect into the on-premises Exchange environment. Secondly, there's the
certificate used for mail flow. The requirements for this certificate are explained in the Secure Mail
Flow section.
• Directory Synchronization must be enabled, and the hybrid coexistence option must be enabled.
Enabling Directory Synchronization will trigger the creation of the service domain in Office 365. This
domain, sometimes also referred to as the coexistence domain, is needed to enable cross-
organizational features such as Free/Busy lookups and hybrid mail flow.
• Autodiscover must resolve to the on-premises organization and must work correctly for at least one
Accepted Domain. In a mixed Exchange server environment, it must point to the highest Exchange
server version in the environment. For instance, in a mixed Exchange 2007/2013 environment,
Autodiscover must be configured to point to Exchange 2013. A good way to test Autodiscover
functionality is to use the Exchange Remote Connectivity Analyzer. The analyzer is a Microsoft tool
that allows you to test a variety of features of Exchange on-premises or online.

• Ensuring that Office 365 and the on-premises organization and clients can communicate correctly is
key. As such, several outbound and inbound ports must be opened on the firewall to allow for these
communications to happen. Microsoft keeps an up-to-date list which describes what ports, protocols,
URLs and IP addresses are used by the different services in Office 365. An important note here is that
the list is maintained on a best-effort basis; the recommendation is to allow communications based
on domain names rather than IP addresses. If the organization is using proxy servers for internet
traffic, you might need to add the necessary exclusions to allow traffic to pass through
unauthenticated. Exchange itself can be reconfigured to use a proxy server using the Set-
ExchangeServer cmdlet. Microsoft’s Office 365 Support and Recovery Assistant is a great way to verify
connectivity and performance to Microsoft’s datacenters. Amongst other things, the tool checks
network performance by measuring routes and calculating bandwidth. In addition, it verifies DNS
configuration and checks that the necessary network ports are open for Office 365.
• If your organization plans on using Edge Transport servers to secure hybrid mail flow, the Edge Sync
connection between the on-premises organization and the pool of Edge servers must be created
before running the Hybrid Configuration Wizard.
Not all these prerequisites apply to a minimal hybrid configuration. For example, because no coexistence is
configured by the Hybrid Configuration Wizard, you do not necessarily need to configure a certificate for the
transport services. On the other hand, the requirements for the SSL certificate used for securing
communications, directory synchronization, AutoDiscover, and network connectivity stay unchanged.
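Before running the wizard, a couple of quick checks can save time. The sketch below (the server and proxy
names are examples) shows how to see which certificate is bound to IIS and how to point Exchange at a proxy
server if one is required for outbound traffic:
# Which certificate is assigned to IIS (the one Office 365 connects to for
# Free/Busy lookups and mailbox moves)? Check the included names and the expiry date.
Get-ExchangeCertificate | Where-Object { $_.Services -match "IIS" } |
    Format-List Subject, CertificateDomains, Issuer, NotAfter

# If outbound traffic must pass through a proxy server, tell Exchange about it
Set-ExchangeServer -Identity EX2016-01 -InternetWebProxy "http://proxy.contoso.com:8080"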

Real World: For many strictly-secured organizations, opening firewall ports can be a real nightmare. In
complex environments, it can take months of planning to get the right firewall rules in place. During the
Ignite 2017 conference, Microsoft announced a new hybrid architecture using a new connector to negate
the need for opening inbound firewall ports. The principle that underpins the connector architecture is
similar to an Azure AD App Proxy where requests to and from Exchange Online travel through a mutually-
authenticated connection via well-known ports. Only traffic from known sources, in this case, Exchange
Online or the on-premises servers, can travel through the connector. The expectation is that the new
connector will be in limited testing in early 2018. While a lot can happen between now and then, the
promise of a simpler hybrid architecture is something to look forward to!

Configuring a Hybrid Exchange Connection


After carefully planning your hybrid configuration, you are now ready to move to the next phase which
includes setting up the hybrid connection using the Hybrid Configuration Wizard and, if needed, configuring
additional hybrid features such as cross-premises access to Public Folders.

Running the Hybrid Configuration Wizard


Once all prerequisites are met, you are ready to run the Hybrid Configuration Wizard and finalize your hybrid
connection to Office 365. The link to the wizard is available through the on-premises EAC.

Troubleshooting the Hybrid Configuration Wizard


Prior to the new wizard, troubleshooting the Hybrid Configuration Wizard was a real challenge. The Hybrid
Configuration Wizard log file did not include much information at all. Since then, Microsoft has upgraded the
log file output considerably, and they continue making improvements. In the old wizard, the log file was in the
Log Files folder of the Exchange installation folder. The location of the log files for the new Hybrid
Configuration Wizard is the AppData folder of the user who ran the wizard.
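A hedged sketch to locate the most recent log files follows; the folder name shown is the one used by current
versions of the wizard, so adjust it if your version differs:
$HcwLogFolder = Join-Path $env:APPDATA "Microsoft\Exchange Hybrid Configuration"
Get-ChildItem $HcwLogFolder | Sort-Object LastWriteTime -Descending |
    Select-Object -First 2 Name, LastWriteTime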
Each time the wizard runs, it creates two log files. Each file holds the same set of information, albeit stored in a
slightly different format. The XML-based file allows the Hybrid Configuration Troubleshooting tool to

consume the content. This tool is a web-based utility run from an Exchange server in the on-premises
organization to check a variety of parameters such as certificates, network connectivity, Exchange federation
trust properties and DNS records.
The tool reads the Hybrid Configuration Log File and parses the information to verify other areas of interest.
The result is an interactive web page which you can click through to quickly display what issues were found
and how to potentially solve them.
The text-based log files hold a lot of useful information on the configuration process as illustrated in the
following example:
2015.12.10 13:14:47.607 [Activity=OnPremises Connection Validation] START
2015.12.10 13:14:47.607 [Activity=OnPremises Connection Validation] Resolving DNS for EX2016-
01.OFFICE365ITPROS.AD...
2015.12.10 13:14:47.607 [Activity=OnPremises Connection Validation] EX2016-01.OFFICE365ITPROS.AD
resolved as: fe80::2066:883e:23d4:b73a%12, 192.168.10.65
2015.12.10 13:14:47.607 [Activity=OnPremises Connection Validation] Checking port 80 on host
EX2016-01.OFFICE365ITPROS.AD ...

Each line represents an action of the Hybrid Configuration engine and also shows in which environment a
certain command is executed by mentioning in what session (Activity=OnPremises or Activity=Tenant) the
command was issued:
2015.12.10 13:14:47.748 [Activity=OnPremises Connection Validation] Connecting to http:// EX2016-
01.OFFICE365ITPROS.AD/powershell...
2015.12.10 13:14:47.748 [Activity=OnPremises Connection Validation, Provider=OnPremises] Opening
Runspace.
2015.12.10 13:14:48.076 [Activity=Tenant Connection Validation] outlook.office365.com resolved
as: 132.245.73.18, 132.245.229.146, 132.245.226.82, 132.245.229.162, 132.245.36.114, 132.245.55.18,
132.245.57.34, 132.245.56.98, 132.245.196.34, 132.245.193.130
2015.12.10 13:14:48.076 [Activity=Tenant Connection Validation] Checking port 443 on host
outlook.office365.com...

The information contained in the log files is very useful to troubleshoot any errors you might run into when
running the Hybrid Configuration Wizard. It is also useful because it tracks the changes made by the wizard. In
addition to the commands and changes that the wizard executes, the logs also contain a summary of the
previous configuration.

Configuring Public Folders in a Hybrid Deployment


Although Public Folders are supported in a hybrid configuration, they can only be accessed in either on-
premises Exchange or Office 365. It is technically possible to have public folders enabled in both
environments at the same time, but mailboxes will only be able to connect to public folders in the environment
they are hosted in, unless you configure cross-premises access to Public Folders (also referred to as hybrid
Public Folder access). Table 4-1 describes the version and location of mailboxes and public folders that are
supported in a hybrid deployment:
Scenario                                     Exchange Online Mailbox   On-Premises Exchange 2013 Mailbox   On-Premises Legacy Exchange Mailbox
Exchange Online Public Folders               Hybrid N/A                Supported                           Not Supported
On-Premises Exchange 2013 Public Folders     Supported                 Hybrid N/A                          Hybrid N/A
On-Premises Legacy Exchange Public Folders   Supported                 Hybrid N/A                          Hybrid N/A
Table 4-1: Supported Hybrid Public Folder scenarios

Allowing cross-premises access to public folders needs additional configuration and is not done automatically
by the Hybrid Configuration Wizard. The process for legacy Public Folders (Exchange 2007, 2010) and modern
Public Folders (Exchange 2013, 2016) is very similar. However, for legacy Public Folders, you first need to
complete the following steps:
1. For the Public Folders on the legacy Exchange versions to be accessible, the RpcClientAccessService
must be available on those servers. This means that if you are running the Mailbox Server Role
separately, you must add the Client Access Server role to those servers that are hosting public folder
databases. This is only true for Exchange 2010 Public Folder deployments.
2. An empty mailbox database must be created on each server that hosts a public folder database. To
avoid mailboxes being created automatically by Exchange's built-in provisioning load balancing
mechanism, make sure to set the IsExcludedFromProvisioning switch to $true. As in the previous step,
this is only necessary for Exchange 2010 Public Folder deployments.
[PS] C:\> New-MailboxDatabase -Server <ServerName> -Name <NewDbName>
-IsExcludedFromProvisioning $True

3. Create a mailbox for each Mailbox Database that you created. The SMTP address of this mailbox will
be returned in the Autodiscover response for the Public Folder information:
[PS] C:\> New-Mailbox -Name "PublicFolder1" -Database "DBName"

4. If working with Exchange 2010, it is also necessary to reconfigure the RPCClientAccessServer attribute
on each of the Mailbox Databases you created earlier. This enables Autodiscover to return the proxy
addresses of the mailboxes that have been created:
[PS] C:\> Set-MailboxDatabase "name" -RPCClientAccessServer "PFServerName"

Note: For an Exchange 2013 or Exchange 2016 environment, it suffices to create a single proxy mailbox. If
you have a large, distributed, environment, it might make sense to add multiple proxy mailboxes.
The following steps are identical, regardless of whether you are using legacy or modern Public Folders:
1. You must synchronize the on-premises Public Folder hierarchy to Office 365. The hierarchy
information is needed for mailboxes in Office 365 to be able to connect to legacy public folders.
Although newer versions of Azure AD Connect synchronize Mail-Enabled Public Folders to Azure
AD, they do not synchronize with the Exchange Online Directory Store. You must still use a series of
downloadable scripts to provision and automatically synchronize public folders to Office 365. There
are two scripts: an export script and an import script. The first script exports information about
public folders to a file, while the second script imports that information into Office 365. By
running both scripts as a scheduled task, a rudimentary synchronization of public folder information is
achieved.
2. After the import and synchronization of the public folders has completed, you must enable the
Exchange Online organization to access the on-premises public folders. This is done by adding the
names of the on-premises mailboxes created in step 3 to the Organization configuration:
[PS] C:\> Set-OrganizationConfig -PublicFoldersEnabled Remote
-RemotePublicFolderMailboxes PublicFolder1, PublicFolder2, PublicFolder3

Marking as Remote: In the previous example, the PublicFoldersEnabled parameter indicates that the
Public Folders for this organization exist in a remote forest (the on-premises Exchange organization). Once
all the public folders have been migrated to Exchange Online, you must update this parameter to Local to
reflect the new situation.

When a mailbox in Office 365 performs an Autodiscover request, that request will include the SMTP address
of one of the Public Folder mailboxes specified in the Organization configuration. As described earlier,
Outlook performs an added Autodiscover request for the SMTP address of each resource, such as public
folders in this case. This Autodiscover request will connect to the on-premises organization which will return
the connection information for the Public Folder Mailbox. The returned information holds the Outlook
Anywhere endpoint of the on-premises organization, allowing the client to connect to the Public Folders.
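To confirm the configuration from the Exchange Online side, you can check the organization configuration:
[PS] C:\> Get-OrganizationConfig | Format-List PublicFoldersEnabled, RemotePublicFolderMailboxes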

Configuring Unified Messaging


Unified Messaging enables users to take advantage of features such as Hosted Voice Mail and Outlook Voice
Access. Being able to use these UM capabilities in Office 365 provides administrators with the flexibility to
move mailboxes 'freely' between the on-premises organization and Office 365. Chapter 5 has more detail
about the implications of using Unified Messaging in a hybrid deployment. For now, the most important thing
to understand is that mailboxes use the UM capabilities offered by the environment in which they are hosted.
For on-premises mailboxes this means they use whatever UM capabilities are enabled on-premises and
Exchange Online mailboxes whatever is configured in Office 365.
There are several ways in which UM in Exchange Online can be integrated into the on-premises environment.
For instance, if you are using Skype for Business on-premises, there is no need to install additional equipment.

Deploying a Minimal Configuration


Configuring a minimal hybrid configuration is much easier and the process completes a lot faster as the
Hybrid Configuration Wizard has less work to do. Once the wizard has verified its connectivity to both the on-
premises Exchange server(s) and Office 365, you select either a full hybrid configuration or the Minimal
Configuration. Given that no Exchange federation or specific mail routing options are created in a Minimal
Configuration, you will not be asked to give more information after selecting it on the Hybrid Features page.
Once the wizard starts, it will configure the following elements in the on-premises environment:
• Two remote domains: one for the hybrid coexistence domain (tenant.mail.onmicrosoft.com) and one
for the tenant domain name (tenant.onmicrosoft.com).
• A new accepted domain which matches the hybrid coexistence domain (tenant.mail.onmicrosoft.com);
this is needed to allow emails to continue to be delivered before/after a mailbox move.
• The default email address policy is updated to add a proxyAddress for recipients based on the
hybrid coexistence domain. The email addresses for a recipient are only updated if email address
policies are enabled for the recipient. If you have disabled the option, you must manually add a
coexistence address to those recipients (if you ever want to move them to Office 365, that is).
• The MRS proxy is enabled on all Client Access Servers in the environment. If the account running the
wizard has sufficient privileges, the wizard will enable the MRS Proxy through Active Directory. If not,
it runs the Set-WebServicesVirtualDirectory cmdlet. The downside of using the latter is that it might
take (a lot) longer for the wizard to complete in larger environments.

In Office 365, the following objects are created:


• An object to identify the on-premises organization.
• A migration endpoint (if none exists). This is on a best-effort basis and will not cause the HCW to fail
if an error occurs during creation.

When the wizard runs, it makes some other changes. For instance, to create the new on-premises organization
object in Office 365, the wizard creates two (temporary) disabled mail flow connectors. This is because the
New-OnPremisesOrganization cmdlet needs the InboundConnector and OutboundConnector parameters to be
specified. However, once the on-premises organization object is created, both connectors are removed.
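After the wizard completes, a quick check from the on-premises Exchange Management Shell confirms that
the MRS proxy is enabled on the Exchange Web Services virtual directories, which is required for mailbox
moves:
[PS] C:\> Get-WebServicesVirtualDirectory | Format-Table Server, MRSProxyEnabled -AutoSize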

Managing a Hybrid Exchange Deployment
In most environments, the hybrid connection is created and then stays untouched for a long time because
most of the configuration changes are only needed once. Nonetheless, there are tasks which might require an
administrator to make changes to the hybrid connection. For instance, the certificates used in a hybrid
configuration will inevitably expire at some point. Some organizations choose to buy a certificate which is
valid for only one year; others might have a certificate that lasts five years. Next to certificate information,
there are other parameters that might trigger an update of the hybrid connection, like adding and removing
domains, or adding new servers to the environment. The point is that, sooner or later, the information in the
Hybrid Configuration must be updated.
Updating the hybrid configuration is a relatively straightforward and easy task. One can either re-run the
Hybrid Configuration Wizard or use the Set-HybridConfiguration and Update-HybridConfiguration cmdlets to
reflect necessary changes.
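Before making changes, you can review the current settings stored in the hybrid configuration object:
[PS] C:\> Get-HybridConfiguration | Format-List Domains, TlsCertificateName, Features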

Using PowerShell in a Hybrid Deployment


Exchange and Exchange Online have excellent dashboards which allow you to perform the most common
administration tasks, but many administrators use PowerShell as the go-to tool for managing their Exchange
environment. Although some limitations apply to which cmdlets one can execute in an Exchange Online
environment (mostly due to the multi-tenant nature of the environment), administrators can also use
PowerShell to manage objects belonging to Exchange Online.
A challenge exists in managing both the on-premises organization and Exchange Online from a single
PowerShell window. Because the cmdlets in both environments are the same, PowerShell cannot differentiate
which session it should use to execute a cmdlet. This becomes clear when an administrator tries to set up a
remote PowerShell connection to Exchange Online using an Exchange Management Shell window that is
already connected to an on-premises Exchange server. Note that in the output below, some content has
been omitted for brevity:
WARNING: Proxy creation has been skipped for the following command: 'Add-AvailabilityAddressSpace,
Add-DistributionGroupMember, Add-MailboxFolderPermission, Add-MailboxPermission, Add-
ManagementRoleEntry, Update-DistributionGroupMember, Update-HybridConfiguration, Update-SiteMailbox,
Write-AdminAuditLog', because it would shadow an existing local command. Use the AllowClobber
parameter if you want to shadow existing local commands.

The message tells us that the cmdlets already exist in the PowerShell session and hence the proxy creation
was abandoned. As you will notice, most of the Exchange cmdlets are included in the list. If an administrator
now tries to execute the Get-Mailbox cmdlet, results from the on-premises organization are still returned.
Given that the Get-Mailbox cmdlet has no parameter to tell it where to fetch information, another workaround
must be applied.
When creating a Remote PowerShell connection, you can specify a prefix parameter. For instance:
[PS] C:\> Import-PSSession (New-PSSession -ConfigurationName Microsoft.Exchange
-ConnectionUri https://ps.outlook.com/powershell -Authentication basic
-Credential (Get-Credential) -AllowRedirection) -Prefix Online

This time, the remote connection is successfully created without a warning. Instead of using the default
cmdlet names, PowerShell has now created proxy names including the Prefix that was specified in the
command. As such, Get-Mailbox effectively becomes Get-OnlineMailbox and so on.
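For example, with the Online prefix in place, the following command returns a mailbox from Exchange Online,
while Get-Mailbox without a prefix continues to return on-premises results:
[PS] C:\> Get-OnlineMailbox -Identity TRedmond | Select-Object DisplayName, RecipientTypeDetails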
While this technique allows an administrator to manage both environments from a single PowerShell window,
it does introduce some added challenges in terms of scripting. Any scripts you have created must now
consider that a prefix might exist, and you must ensure that you always use the same prefix if you want to
re-use scripts. Any scripts that you find on the Internet and want to reuse will need to be updated before they
can be run in your environment. Given these issues, you might conclude that although it is possible to use a
single window, it is much easier to use separate PowerShell windows instead.

Monitoring a Hybrid Deployment


Running two separate Exchange organizations, on-premises and online, presents some unique
challenges. Monitoring and reporting in a hybrid deployment are often underestimated tasks. Many solutions
that exist today have not (yet) been updated to reflect the architectural changes, or the added components
and workloads that are included in a Hybrid Exchange deployment.
You might wonder why a need exists to monitor cloud solutions. To some degree, it's fair to assume that
monitoring by the customer is not needed. After all, you are no longer responsible for the operation of the
infrastructure that provides the service, and you don't need to care about how individual components such as
servers, storage, and networks are performing. More to the point, Microsoft does not expose information
about their internal operations to support an individual tenant to the outside world. However, you do care
about the services that you consume. For example, are messages being delivered promptly, can users access
free/busy information, are objects being synchronized between on-premises and the cloud, and so on.
Outages do happen and will happen in the future, and it can be very difficult to figure out whether the root
cause of an issue lies with the service provider (Microsoft), or inside the on-premises organization. This is
especially true in a hybrid deployment where a lot of functionality depends on constant interaction between
components in both environments.
Regardless of the cause of an outage, those responsible for providing the service to end users need to
understand that the problem exists and have an indication where the problem occurs. Apart from anything
else, having an early and accurate diagnosis of the issue allows on-premises administrators to have a much
more productive conversation with Office 365 support. For this reason, modern Exchange Application
Monitoring solutions should include the ability to check on-premises Exchange (including the hybrid
components), the authentication mechanism and its dependencies (for example, AD FS or Directory
Synchronization with Pass-Through Authentication) if they want to be able to highlight (and preferably
diagnose) any issues that might affect hybrid connectivity.
If you currently do not have such a solution, you can do basic monitoring yourself. Microsoft provides some
cmdlets which an administrator can use to proactively test a feature or troubleshoot in case of an issue. The
following cmdlets can be useful in a hybrid deployment (a usage sketch follows the list):
• Test-MigrationServerAvailability tests the on-premises (or remote) migration endpoint and provides
additional (verbose) information about any failure.
• Test-OrganizationRelationship performs an end-to-end test of the Organization Relationship, which
includes testing the federation trust with Microsoft's Federation Gateway.
• Test-OAuthConnectivity tests whether OAuth authentication can successfully be used with the
remote organization.
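A hedged sketch of how these cmdlets might be used follows; the endpoint, organization relationship name,
and mailbox shown are examples that you should replace with your own values:
# Verify the migration endpoint published by the on-premises organization
Test-MigrationServerAvailability -ExchangeRemoteMove -RemoteServer hybrid.office365itpros.com `
    -Credentials (Get-Credential)

# End-to-end test of the organization relationship for a specific user
Test-OrganizationRelationship -Identity "On-premises to O365 - <guid>" `
    -UserIdentity tredmond@office365itpros.com

# Check OAuth connectivity to Exchange Online (run from the on-premises shell)
Test-OAuthConnectivity -Service EWS -TargetUri https://outlook.office365.com/ews/exchange.asmx `
    -Mailbox tredmond@office365itpros.com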
Microsoft also gives some useful information through its tools and services. For instance, Azure AD Connect
Health has information on directory synchronization or AD FS that can be helpful to detect potential identity
and authentication problems.

Life After Hybrid


Hybrid configurations are usually deployed when companies want to run a long-lasting virtual Exchange
organization composed of on-premises servers, and an Office 365 tenant. In this instance, the connections
stay in place for extended periods. On the other hand, it is also possible to use a hybrid connection as a

migration vehicle. In this context, the hybrid connection exists to enable the transfer of mailboxes from on-
premises Exchange to Office 365, and you can shut it down after moving all the mailboxes. The question then
arises as to how to remove the hybrid connection and/or the remaining on-premises Exchange servers.
After all mailboxes are in Office 365, Exchange servers become glorified management tools. As explained in
Chapter 5, because some mailboxes start as on-premises objects, the on-premises directory is the source of
authority for those objects when synchronized to Office 365, which then means that you must manage the
synchronized objects through on-premises Exchange tools.

Real World: Various third-party tools are available to carry out basic recipient management tasks that you
normally perform in the Exchange management tools. Although these tools usually work just fine,
Microsoft does not support them. This does not mean you cannot use the tools: it just means that
Microsoft has not tested the software and therefore cannot guarantee that they work as expected. If you
run into problems with a third-party management tool, you must contact the vendor to get support.
In scenarios where you need to keep Directory Synchronization to support other applications, you can follow
the process to remove the hybrid configuration but keep one on-premises Exchange server. You only need a
second server if you want to spread the processing load, as in the case of a geographically dispersed
organization. The pre-requisites for being able to completely remove on-premises Exchange servers are that
all mailboxes are in Office 365, you no longer use the on-premises Exchange servers to host workloads such
as journaling, public folders, or mail flow, and you disable directory synchronization. In my experience, few
organizations ever disable Directory Synchronization.

Real World: Companies who implement Directory Synchronization and never used Exchange in the past
must still install Exchange to manage recipients in Office 365. This is common when an organization
migrates from foreign email systems like Lotus Notes to Office 365.
Before explaining how to go about removing the hybrid connection, it is important to understand which tasks
the on-premises Exchange servers perform in a hybrid configuration. Understanding the role of a hybrid
server makes it easier to understand the various steps you must run through before decommissioning your
hybrid connection and removing Exchange servers from your environment. The hybrid server performs the
following functions:
• Managing the attributes of both on-premises and Office 365-based recipients through the Exchange
management tools.
• Enabling the use of Email Address Policies to automatically generate email addresses for recipients.
• Handling mail flow between on-premises and Office 365 recipients. If you enable centralized mail
flow, the servers are also responsible for handling inbound/outbound mail flow between Office 365
mailboxes and the internet.
• Handling free/busy requests between on-premises and Office 365 recipients.
• Handling Autodiscover requests for on-premises and Office 365 mailboxes.
• Handling mobile device redirects after mailboxes move to Office 365.
• Redirecting Office 365 users who log in via OWA to the Exchange Online OWA URL.
Although you only need one on-premises Exchange server, leaving the hybrid configuration untouched and
removing all but one server might be undesirable because that server is then a single point of failure. This is
especially true in a scenario where you enable centralized mail flow.

Removing the Hybrid Exchange Connection


Before removing any Exchange servers from your hybrid configuration, it is best to verify the following
elements and, if applicable, make the necessary configuration changes.

1. Run the following command from the Exchange Online PowerShell and verify that the value for
PublicFoldersEnabled is set to Local:
[PS] C:\> Get-OrganizationConfig | Select PublicFoldersEnabled

If the value of PublicFoldersEnabled is set to Remote, set it to Local, assuming you have migrated all
public folders or you do not need access to public folders anymore:
[PS] C:\> Set-OrganizationConfig -PublicFoldersEnabled Local

It might take a while for this change to take effect. If you previously have migrated all public folders
to Exchange Online, this value should already have been set. If you have not yet migrated Public
Folders to Office 365 and still need access to them, you should not continue until the public folders
are moved.
2. Ensure no part of mail flow is handled by the on-premises organization. If not done already, start by
pointing the MX records for your email domains directly to Office 365. Remove any connectors that
may have been created by the Hybrid Configuration Wizard. In the on-premises organization, log on
to the Exchange Admin Center, navigate to mail flow > send connectors, and remove the
appropriate connector. The connector created by the HCW is typically called Outbound to <uid>.
Similarly, log on to the Exchange Online EAC, navigate to mail flow > connectors, and remove
both the Inbound and Outbound connectors called Inbound from <uid> and Outbound to <uid>.
3. Point the public Autodiscover DNS record directly to Office 365 by creating a CNAME record for
autodiscover.yourdomain.com which points to autodiscover.outlook.com.
4. The official documentation suggests setting the internal Autodiscover SCP to $null. However, I have
seen cases where this caused some issues. Instead, it is safer to point the SCP also directly to Office
365. Either set it to a value of https://autodiscover.outlook.com/Autodiscover/autodiscover.xml, or make
sure your internal DNS record for the Autodiscover SCP resolves to autodiscover.outlook.com. To
update the SCP, run the following command from the on-premises Exchange Management Shell:
[PS] C:\> Get-ClientAccessServer | Set-ClientAccessServer -AutoDiscoverServiceInternalUri
https://autodiscover.outlook.com/Autodiscover/autodiscover.xml

5. Remove the Organization Relationships that were created by the Hybrid Configuration Wizard in both
the on-premises organization and in Exchange Online. In the on-premises organization, open the
Exchange Admin Center and navigate to organization > sharing. From there, remove the
Organization Relationship called On-premises to O365 - <guid>. Similarly, logon to the Exchange
Online EAC and remove the Organization Relationship named O365 to On-premises - <guid>.
6. Next, remove the hybrid configuration object from the organization by running the following
command from the on-premises Exchange Management Shell:
[PS] C:\> Remove-HybridConfiguration

Note that the Remove-HybridConfiguration cmdlet only exists in Exchange 2013 or later. The only way
to remove the object in Exchange 2010 is to use ADSIEdit.
If you are running an Exchange 2013 or Exchange 2016 hybrid configuration and have configured OAuth,
disable the configuration by doing the following:
7. Logon to the on-premises Exchange Management Shell and run the following command:
[PS] C:\> Get-IntraOrganizationConnector <name> |
Set-IntraOrganizationConnector -Enabled $False

8. Repeat step 7, but this time run the command from the Exchange Online PowerShell.
Depending on how you have configured OAuth, the name of the IntraOrganizationConnector might vary. If it
was setup through the HCW, the connectors are typically named HybridIOC - <uid>.
Once you have completed all the steps above, you can uninstall all Exchange servers, with the exception of the
server(s) you want to keep for managing hybrid recipients.

Real World: Keeping a single Exchange on-premises server just for recipient management comes as a
surprise for many organizations. As explained in Chapter 5, the requirement exists because the on-
premises Active Directory is the source of authority for all Exchange-related object attributes. Therefore,
you cannot edit Exchange attributes in Office 365 if Azure AD Connect is still synchronizing objects from
on-premises. The inability to manage mail recipients using native online tools is frustrating for many
administrators. Although this limitation has been there since day one, Microsoft is actively working on a
solution to allow organizations to remove the last on-premises Exchange server gracefully. We don’t yet
know what the final architecture will be (Microsoft gave some hints at Ignite 2018), but there is no doubt
that something is coming to make the problem more manageable.

Chapter 5: Managing Hybrid
Recipients
An email system that cannot deliver messages to recipients is all but useless. When deployed as a hybrid
organization, the Exchange on-premises and Exchange Online components behave as a unified environment.
From an end user perspective, slight differences exist in how they interact with Exchange. Unfortunately, an
administrator does not have the same experience. Under the hood, an on-premises Exchange 2013/2016
organization and Exchange Online are fundamentally different systems that need a different approach to
recipient management at various levels. Although the management tools used to manage hybrid recipients
are the same as those used to manage on-premises recipients, the structure needed to ensure consistent
coexistence of both environments and the resulting feature gap create many management challenges. In
addition, the dependency on Directory Synchronization adds a layer of complexity to hybrid deployments
which ultimately creates differences in how certain elements of recipient management are performed in
Exchange on-premises or Exchange Online.
The term "hybrid recipient" refers to recipients that are part of a hybrid deployment. More specifically, these
recipients exist in the on-premises organization and are synchronized with Office 365. A hybrid recipient could
have a mailbox on-premises or in Office 365, it could be an on-premises mailbox with an archive in Office 365,
but it could just as well be a mail-enabled user or distribution group in either environment. As explained in
Chapter 4, the directory synchronization process synchronizes objects from the on-premises organization into
Azure Active Directory. Because this is largely a one-way synchronization, the on-premises Active Directory is
considered the "source of authority". This is where adds, deletes and changes to objects should be made.
In this chapter, we review the challenges introduced by Directory Synchronization and consider the
differences between managing pure online and on-premises recipients when compared to hybrid recipients.
We also discuss the most common actions an administrator performs for each recipient type, including
managing individual recipients and the various features available to those recipients. Despite some of the
differences between traditional recipients and hybrid recipients, many actions occur in the same way, meaning
that you can follow the advice contained in the chapters dealing with those objects.

User Mailboxes
Creating a New Hybrid Mailbox
To create a mailbox for a hybrid recipient, you must use the on-premises EAC or PowerShell. If a mailbox
already exists, it can be moved from the on-premises organization to Office 365. Although it is possible to
create a new on-premises mailbox and then move it to Office 365, it is easier and faster to just go ahead and
create the mailbox in Exchange Online.
The on-premises version of EAC has built-in support for Office 365 which allows you to create a new user
object in the on-premises Active Directory and a new mailbox in Office 365. The functionality is like that in the
Office 365 version of EAC. However, the difference is in what happens behind the scenes, as the on-premises
EAC runs the New-RemoteMailbox command to create an on-premises mail-enabled user account instead of
connecting to Office 365 to create a new mailbox there. This is the command an administrator runs in a
PowerShell session connected to an on-premises Exchange server if they want to create a new cloud mailbox.

However, in both cases, the Office 365 mailbox is only provisioned after the object is synchronized from the
on-premises directory to Azure AD and then back to the Exchange Online Directory Store.
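A hedged sketch of the equivalent PowerShell, run from the on-premises Exchange Management Shell, is
shown below; the user details and organizational unit are hypothetical examples:
# Create an on-premises mail-enabled user whose mailbox will be provisioned in
# Exchange Online once directory synchronization completes
$Password = Read-Host "Initial password" -AsSecureString
New-RemoteMailbox -Name "Jane Smith" -FirstName Jane -LastName Smith `
    -UserPrincipalName jane.smith@office365itpros.com `
    -OnPremisesOrganizationalUnit "office365itpros.ad/Users" `
    -Password $Password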

Note: To create a new Office 365 mailbox from the on-premises environment, the administrator must at
least be a member of the Recipient Management role group. However, being a member of this group
alone only allows the administrator to create a mailbox using PowerShell. To light up the ability to create
an Office 365 Mailbox directly from the Recipients tab in the EAC, the user must also be a member of
the View-Only Organization Management role group.
Various recipient types exist within Exchange. For instance, a user with a mailbox is a UserMailbox, and a
regular Mail-Enabled User is a MailUser. Recipients are differentiated by the value of the RecipientTypeDetails
attribute of the object in Active Directory. When a new mailbox in Office 365 is created for an on-premises
user, the RecipientTypeDetails property for the on-premises mail-enabled user account (or MEU) is set to a
decimal value of 2147483648, which translates into the value assigned to RemoteUserMailbox.
In a typical on-premises Exchange deployment, the RecipientTypeDetails property can have many different
values. However, in a hybrid configuration, only the following values are used to identify hybrid recipients:
Value Meaning Mailbox Location
2147483648 RemoteUserMailbox Office 365
1 UserMailbox On-premises
4 SharedMailbox On-premises
16 RoomMailbox On-premises
128 MailUser On-premises
Table 5-1: RecipientTypeDetails values and meaning

Note: In the case of a remote mailbox, the msExchRemoteRecipientTypeDetails attribute denotes the type
of mailbox in Office 365. The values used for that property are those listed in Table 5-1. A full list of
values and their corresponding recipient types is available here.
Every mail-enabled user account should automatically get a proxyAddress based on the tenant's routing
domain, sometimes also referred to as the coexistence or service domain. If you create your hybrid
configuration using the Hybrid Configuration Wizard, the remote routing domain is added to the on-premises
Exchange Organization, where it is configured as a Remote and Accepted Domain. The routing domain is also
added to all email address policies. This ensures that every recipient in the organization (such as a remote
mailbox) gets stamped automatically with an address that matches the remote routing domain. As a result,
when an object is synchronized with Office 365, it is addressable using that proxyAddress. For instance, an on-
premises user might receive the proxyAddress: tredmond@office365itpros.mail.onmicrosoft.com.
The remote routing domain is automatically created in Office 365 as soon as directory synchronization is
enabled. As you can see in the example below, the routing domain is in the format of
<tenantname>.mail.onmicrosoft.com. You can examine the domain by looking at Settings > Domains in the
Office 365 portal or through the PowerShell Module for Azure Active Directory with the Get-AzureADDomain
cmdlet:
PS C:\> Get-AzureADDomain

Name AvailabilityStatus AuthenticationType
---- ------------------ ------------------
Office365itpros.mail.onmicrosoft.com Managed
Office365itpros.onmicrosoft.com Managed
Office365itpros.com Federated

As mentioned earlier, the creation of an Exchange Online mailbox is triggered after the Mail-Enabled user
account (which you created using the on-premises management tools) is synchronized from the on-premises
directory. Once the mailbox is created, you can query Exchange Online for the mailbox and detect its presence

using a remote PowerShell session to Exchange Online using the Get-Mailbox command as illustrated in the
example below:
[PS] C:\> Get-Mailbox TRedmond

Name Alias ServerName ProhibitSendQuota
---- ----- ---------- -----------------
Tony Redmond TRedmond am3pr06mb0693 49.5 GB (53,150,220,288 bytes)

Although it usually happens faster, the creation of the mailbox in Exchange Online might take several minutes
after the account is synchronized to Azure AD. If you do not see a mailbox right away, wait a short while and
then try again. The user can only access their mailbox after it is fully created in Exchange Online and they are
assigned a valid license.

Licensing: Moving a mailbox from the on-premises organization or creating one directly in Exchange
Online will not automatically assign an Office 365 license to the account. This is a task that has to be
executed manually. Also, unlike a cloud-only identity for which a mailbox is automatically created after a
license is assigned (if a plan including Exchange Online is chosen), assigning a license to a synchronized
('hybrid') user object in Office 365 will not automatically create a mailbox. A good way to avoid forgetting
to assign a license is to create a PowerShell script that runs periodically and automatically assigns a license
to unlicensed accounts.
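A hedged sketch of such a script using the MSOnline module is shown below; the usage location and SKU
name are examples (use Get-MsolAccountSku to list the SKUs available in your tenant), and the same logic can
be built with the Azure AD or Microsoft Graph PowerShell modules:
Connect-MsolService

# Find unlicensed, directory-synchronized users and assign them a license
Get-MsolUser -UnlicensedUsersOnly -All | Where-Object { $_.ImmutableId } | ForEach-Object {
    # A usage location must be set before a license can be assigned
    Set-MsolUser -UserPrincipalName $_.UserPrincipalName -UsageLocation "IE"
    Set-MsolUserLicense -UserPrincipalName $_.UserPrincipalName -AddLicenses "office365itpros:ENTERPRISEPACK"
}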

Creating a New Mailbox for an Existing User


The on-premises EAC does not allow you to enable a mailbox in Office 365 for a pre-existing user account.
This can only be done through PowerShell with the Enable-RemoteMailbox cmdlet. What happens after
executing the command is almost identical to the process described earlier. However, a small difference exists
in that Email Address Policies do not apply to user accounts that do not have Exchange mailboxes. As such,
the existing user object has no targetAddress based on the tenant's routing domain. Without it, the mailbox is
not addressable in Office 365.
To overcome this limitation, the Enable-RemoteMailbox cmdlet includes a parameter called
RemoteRoutingAddress that can be used to manually stamp the targetAddress attribute of the user’s mailbox:
[PS] C:\> Enable-RemoteMailbox swalker -RemoteRoutingAddress
swalker@office365itpros.mail.onmicrosoft.com

Name RecipientTypeDetails RemoteRecipientType
---- -------------------- -------------------
Steve Walker RemoteUserMailbox ProvisionMailbox

In the process of enabling the mailbox, the local user account is converted to a RemoteMailbox account. As
soon as this happens, one of the Email Address Policies is applied and the user's proxyAddresses attribute is
updated automatically to include additional addresses, including the routing address, as you can see from the
following command:
[PS] C:\>Get-RemoteMailbox swalker | Select -ExpandProperty EmailAddresses |
Select SMTPAddress

SmtpAddress
-----------
Steve.Walker@office365itpros.com
swalker@office365itpros.com
swalker@office365itpros.mail.onmicrosoft.com

As we've seen, there are two ways to create a hybrid mailbox: either enabled directly in Office 365 or the
mailbox is moved to Office 365 from the on-premises organization. As an administrator, it is useful to
understand the distinction between the two methods. It might also be useful to know exactly how each
mailbox in the organization was created. In the on-premises Exchange Management Shell, you can execute

the Get-RemoteMailbox cmdlet to list all mailboxes in Office 365. As part of the returned information, there is
a property called RemoteRecipientType. This property indicates whether the mailbox was moved (Migrated) or
created directly in Office 365 (ProvisionMailbox):
[PS] C:\> Get-RemoteMailbox

Name RecipientTypeDetails RemoteRecipientType
---- -------------------- -------------------
Andrew Dunn RemoteUserMailbox Migrated
Mark Spencer RemoteUserMailbox Migrated
Tony Redmond RemoteUserMailbox ProvisionMailbox
Steve Walker RemoteUserMailbox ProvisionMailbox

The same information can also be retrieved directly from Exchange Online, using the Get-Mailbox command.
The difference here is that mailboxes in Office 365 are shown as a regular UserMailbox instead of a
RemoteUserMailbox:
[PS] C:\> Get-Mailbox | Select Name, RecipientTypeDetails, RemoteRecipientType

Name RecipientTypeDetails RemoteRecipientType
---- -------------------- -------------------
Andrew Dunn UserMailbox Migrated
Mark Spencer UserMailbox Migrated
Tony Redmond UserMailbox ProvisionMailbox
Steve Walker UserMailbox ProvisionMailbox

New-RemoteMailbox and the ExchangeGuid


One of the major challenges in hybrid recipient management is keeping objects synchronized
between the on-premises Exchange organization and Exchange Online. It is for this reason that you should
create mail-enabled objects on-premises and then synchronize them to Azure Active Directory. At least, that is
the general rule. As you will discover, some exceptions exist to that rule; for example, when creating a new
cloud mailbox using the New-RemoteMailbox or Enable-RemoteMailbox cmdlets.
Earlier, we explained that these cmdlets allow you to provision a mailbox directly in Exchange Online without
having to first create the mailbox on-premises and then move it to Office 365. Behind the scenes, the on-
premises user object is stamped with a series of attributes to signal Exchange Online that it needs to provision
a mailbox after the object (and its attributes) has been synchronized successfully with Office 365.
So far, so good. In fact, when all you need to do is provision a cloud mailbox from on-premises Exchange,
using either cmdlet is probably the easiest way to do so. Everything will be fine until you try to move a
mailbox back to the on-premises organization. A hybrid mailbox move preserves the ExchangeGuid when a
mailbox is moved from the on-premises organization to Office 365 or vice versa.
If you examine a mailbox after it moves to Office 365, you will see a similar set of mailbox properties in both
the on-premises Exchange organization and Exchange Online. In the following PowerShell examples, we query
the mailbox of a user named Joseph Baker and look at the value of the ExchangeGuid in the on-premises
organization and then Exchange Online. In the on-premises organization, the (remote) mailbox has the
following properties:
[PS] C:\> Get-RemoteMailbox JBaker | Select ExchangeGuid

Name ExchangeGuid
---- ------------
Joseph Baker 4a38eded-77b5-43f6-8e2d-bf356d8497e0

These values should match what you see in Exchange Online:


[PS] C:\> Get-Mailbox JBaker | Select Name, ExchangeGuid

Name ExchangeGuid
---- ------------
Joseph Baker 4a38eded-77b5-43f6-8e2d-bf356d8497e0

When you use the New-RemoteMailbox or Enable-RemoteMailbox cmdlet, the on-premises organization does
not generate the ExchangeGuid, possibly because the on-premises organization knows that the mailbox is to
be created in the cloud and therefore no need exists for Exchange to stamp an identifying GUID on the new
mailbox. Instead, Exchange Online takes over and stamps the mailbox in Exchange Online with its GUID.
Because the ExchangeGuid is not an attribute that is written back to the on-premises organization as part of
any of the write-back features, both environments are now out of sync. When querying the remote mailbox
from on-premises Exchange, you will now see that the ExchangeGuid is empty:
[PS] C:\> Get-RemoteMailbox JReese | Select Name, ExchangeGuid

Name ExchangeGuid
---- ------------
John Reese 00000000-0000-0000-0000-000000000000

Because the ExchangeGuid value is missing, some problems can be encountered. For instance, if you attempt
to move a mailbox created by the New-RemoteMailbox cmdlet to the on-premises organization, the mailbox
move fails. Unfortunately, this behavior is by design and directly related to how the New-RemoteMailbox or
Enable-RemoteMailbox cmdlets work.
Two things can be done if you run into the problem of a missing ExchangeGuid. The simplest solution is to avoid
using the New-RemoteMailbox and Enable-RemoteMailbox cmdlets entirely. This means that if you ever want
to create a cloud mailbox from the on-premises organization, you must first create an on-premises mailbox
and then move it to Office 365. This way, you ensure that all properties are stamped on the mailbox in the on-
premises organization and then maintained through the directory synchronization process. For many
organizations, this means an additional step in their provisioning process, but it is not a huge issue because
creating a new mailbox and moving it to Office 365 is quickly done.
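As a minimal sketch of that two-step approach (the mailbox name, the migration endpoint, and the credentials shown here are illustrative assumptions, not values taken from this book), you would first create the mailbox in the on-premises Exchange Management Shell:
[PS] C:\> New-Mailbox -Name "Kim Akers" -UserPrincipalName kakers@office365itpros.com
-Password (Read-Host "Password" -AsSecureString)

and then onboard it to Office 365 with a remote move request run from an Exchange Online PowerShell session:
[PS] C:\> New-MoveRequest -Identity kakers@office365itpros.com -Remote
-RemoteHostName "hybrid.office365itpros.com" -RemoteCredential (Get-Credential)
-TargetDeliveryDomain "office365itpros.mail.onmicrosoft.com"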
If you absolutely want to use the New-RemoteMailbox or Enable-RemoteMailbox cmdlets, you can update the
on-premises object with the ExchangeGuid of the Office 365 mailbox. Obviously, you only need to do this if
you want to move the mailbox to the on-premises organization. First, look up the value of the ExchangeGuid
for the mailbox in Exchange Online with the Get-Mailbox cmdlet:
[PS] C:\> Get-Mailbox JBaker | Select Name, ExchangeGuid

Name ExchangeGuid
---- ------------
Joseph Baker 4a38eded-77b5-43f6-8e2d-bf356d8497e0

Next, connect to the on-premises Exchange server, and stamp the ExchangeGuid onto the on-premises
recipient (remote mailbox) using the Set-RemoteMailbox cmdlet:
[PS] C:\> Set-RemoteMailbox JBaker –ExchangeGuid 4a38eded-77b5-43f6-8e2d-bf356d8497e0

Once you've manually stamped the object in the on-premises Exchange organization, you will be able to
move the mailbox.

Real world: In addition to mailbox moves failing, there have been some reports of cross-premises
permissions not working properly when the ExchangeGuid property is blank for an on-premises object.
Your mileage may vary as some organizations have reported that cross-premises permissions work just
fine, even without the ExchangeGuid.

Exchange Admin Center Limitations


When you create a new mailbox in Exchange Online from the on-premises EAC, only a limited set of
configuration options are available. Normally, you can manage all mailbox properties through the EAC. After
all, hybrid recipients are supposed to be managed using on-premises tools. However, certain options such as
Mailbox Usage, Mailbox Delegation and Mailbox features are absent. This is because the on-premises
environment cannot access the information with regards to mailbox usage etc. The option pages missing in
the on-premises EAC are available through the EAC in Office 365. But remember, synchronized objects cannot
be managed using the online tools, which means the EAC in Office 365 can only be used to view those properties and access
information like mailbox usage. Attempts to update a property will result in a warning (Figure 5-1).

Figure 5-1: A warning that you cannot update a synchronized object in Office 365

Real world: A new mailbox sometimes takes a few moments to synchronize across directories. This is
because of how identities are synced to and inside Office 365. The synchronization from on-premises is
done to Microsoft Azure Active Directory. From there, a "back sync" process ensures objects are
synchronized to the directory used to support Exchange Online (EXODS). Once the synchronization
happens, the mailbox is created. Usually, this entire process only takes a few seconds, but it's common to
have to wait a few minutes after the initial synchronization from the on-premises organization.

Permissions and Sharing Scenarios


Users can collaborate with one another in many ways. One common scenario is to give someone access to
specific resources within your own mailbox. Depending on the scenario (sharing type) and how you give
access to another user, specific permissions are granted. Permissions in a hybrid deployment can be
particularly challenging to deal with, not least because of the inconsistent behavior and varying support
statements for different sharing scenarios.

Full Mailbox Access


Administrators grant and configure Full Access to a mailbox using PowerShell or the EAC. To make
permissions work cross-premises some client-side updates are required; you need at least the November
2015 update for Outlook 2013 (or later) to ensure reliable cross-premises access to a mailbox.
Full Access permission works both ways: a mailbox in Office 365 can access an on-premises mailbox, and vice
versa. To grant an on-premises mailbox access to a mailbox in Office 365, follow these steps:
1. Log in to the EAC.
2. Navigate to recipients > mailboxes and then open the properties of the mailbox to which you want to
assign Full Access permissions.
3. In the properties window, navigate to mailbox delegation.
4. Scroll down to the Full Access section. From there, use the recipient picker (plus-sign) to add the on-
premises mailbox to which you wish to grant permissions.

Granting an Office 365 user (mailbox) access to a mailbox in the on-premises Exchange environment can be
done through the EAC or with PowerShell:
[PS] C:\> Add-MailboxPermission –Identity <OnPrem-Mailbox> -User <Office365Mailbox>
-AccessRights FullAccess –AutoMapping $False

By default, Automapping will work when the Full Access permission was granted prior to moving one of the
mailboxes to Exchange Online. If you have assigned the permission while one of the mailboxes was on-
premises and the other in Exchange Online, you must apply a workaround as described here.

Real World. As with many things, the end-user experience in sharing scenarios can vary, depending on
various factors like the client application and the authentication type. If Modern Authentication is enabled,
the experience is more likely to be seamless, but with basic authentication the user might see additional
credential prompts. This is because once Outlook has established a connection to the on-premises
organization, it then needs to connect to Office 365 which triggers the additional authentication.

Enabling objects to be ACLable


Before you can assign certain permissions like Send-on-behalf-of, Delegate Access or even folder permissions
to mailboxes that have been moved to Exchange Online, you must make some changes to the on-premises
recipient objects. Once you have updated the recipients as described below, you can assign permissions to
them. Assigning permissions means updating an Access Control List (ACL); afterwards, we say that these
objects have become "ACLable." Understanding this term makes the following steps easier to follow.
Depending on what version of Exchange you are running and when you moved the mailbox, you need to
(manually) update the Remote Mailbox object to reflect the new capability. You do not need to do anything in
Office 365, as Microsoft took care of that as part of their service update mentioned earlier.

Real World. Note that you only need to make this update if you wish to support cross-premises
permissions such as Send-on-behalf-of, Delegate Access, etc. If you only need Full Access, no changes are
necessary.
When a mailbox is moved to Exchange Online, the on-premises recipient is updated from a "Mailbox User" to
"Remote Mailbox, Migrated" after the move is completed. Without an update to your configuration, the
object has a value of "-2147483642" in the msExchRecipientDisplayType attribute of that recipient. However,
that value does not allow permissions to be assigned to the object. Instead, a value of "-1073741818" is required. To
ensure that the attribute is correctly updated in the future, you must first update the on-premises
organization configuration:
[PS] C:\> Set-OrganizationConfig -ACLableSyncedObjectEnabled $True

Note: Microsoft's documentation states that you do not need to perform this step when running Exchange
2016 on-premises. However, personal testing contradicts this. Until this ambiguity has been resolved, it's
better to perform this step anyhow. If not, you will have to continue to update recipients manually, as
explained below.
Any mailbox that is moved to Exchange Online after you make the change will automatically be updated with
the correct value. For mailboxes that have been moved prior to the update, you must manually update the
msExchRecipientDisplayType attribute. The easiest way is to use the on-premises Exchange Management Shell:
[PS] C:\> Get-RemoteMailbox | ForEach { Get-AdUser -Identity $_.Guid -Properties
msExchRecipientDisplayType} | Set-ADObject -Replace @{msexchRecipientDisplayType=-1073741818}

To verify that the command completed successfully, run the following command:
[PS] C:\> Get-RemoteMailbox | ForEach { Get-AdUser -Identity $_.Guid -Properties
msExchRecipientDisplayType} | Select UserPrincipalName,MSExchRecipientDisplayType

UserPrincipalName MSExchRecipientDisplayType
----------------- --------------------------
bweaver@office365itprobook.com -1073741818
bchang@office365itprobook.com -1073741818
blane@office365itprobook.com -1073741818
bcampbell@office365itprobook.com -1073741818
bwilson@office365itprobook.com -1073741818
bkelly@office365itprobook.com -1073741818
bjones@office365itprobook.com -1073741818
tredmond@office365itprobook.com -1073741818

After you update all the migrated mailboxes, you can assign permissions to them.

Send-on-Behalf-of
In addition to the Q1 2018 service update, you must install the April 2018 update of Azure AD Connect
because it adds the two-way synchronization of the PublicDelegates attribute between the on-premises
organization and Exchange Online. This attribute contains information on what recipients have been granted
the Send-on-behalf-of rights and is required to ensure that permissions that are granted on-premises show
up in Exchange Online and vice versa. When you assign a permission to a recipient cross-premises, there
might be a delay before the permission starts to work; this is because Azure AD Connect first needs to
synchronize the updated attribute.
The process of granting Send-on-behalf-of permissions in a hybrid deployment is no different from on-
premises; you can use either PowerShell or the Exchange Admin Center in Exchange Online to do so (a PowerShell
sketch follows the steps below). To assign the permission using the EAC, follow these steps:
1. Log in to the EAC.
2. Navigate to recipients > mailboxes and then open the properties of the mailbox for which you want to
assign Send on Behalf permissions.
3. In the properties window, navigate to mailbox delegation.
4. Scroll down to the Send on Behalf section. From there, use the recipient picker (plus-sign) to add the
on-premises mailbox to which you wish to grant permissions.
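As a minimal PowerShell sketch (the names are the book's example users and purely illustrative), run Set-Mailbox against the mailbox that others should send on behalf of, in whichever environment hosts that mailbox (Exchange Online PowerShell for a cloud mailbox, the on-premises shell for an on-premises mailbox):
[PS] C:\> Set-Mailbox "Joseph Baker" -GrantSendOnBehalfTo @{Add="Tony Redmond"}

The @{Add=...} syntax adds the delegate to the existing GrantSendOnBehalfTo list instead of replacing it.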

Note: Once you have enabled the ACLableSyncedObjectEnabled setting in your Organization Configuration,
you will be able to do the same in Exchange on-premises for a mailbox in Exchange Online.

Delegate Access
Delegate Access is typically configured by users without administrator help using a client like Outlook
desktop. Depending on what level of access is given, the delegate can perform various tasks such as accessing
specific mailbox folders, sending messages on behalf of the user, or managing the user's calendar. When a
user grants other users delegate access to their mailbox, specific attributes of the user object and the mailbox
are updated to reflect the newly granted permissions. If both the source mailbox and the assignee are in the
same environment (on-premises or Office 365), delegate access will work just fine.
The same is not true when the involved mailboxes are in different environments. Although cross-premises
Delegate Access is not supported, most of the functionality (like calendar access, meeting request forwarding
rules, etc.) works most of the time. In that statement also lies the danger: testing shows that the end-user
experience can be unpredictable and depends greatly on the (type of) client, authentication mechanism, etc.
Furthermore, more complex on-premises organizations (like multi-forest deployments) might have additional
challenges of their own.
As Microsoft is working to provide a more coherent story around cross-premises Delegate Access, you should
follow the advice given in this article.

Send-As
As with Delegate Access, both the mailbox and the assignee's mailbox must be hosted in the same environment for
these permissions to work without added effort. However, you can apply a workaround to make these
permissions work cross-premises. Paradoxically, although Microsoft provides details of the workaround, they
do not officially support this permission cross-premises yet. However, testing shows that Send-As seems to work,
even in some complex scenarios.
Unlike other Exchange permissions, Send-As permissions are granted at the Active Directory level. This is why
you need to run the Add-ADPermission cmdlet, and not Add-MailboxPermission. Because there is currently no
mechanism to synchronize these permissions cross-premises, Send-As permissions will only work without
added effort if you assign them prior to moving a mailbox to Exchange Online. If you wish to assign Send-As
permissions cross-premises, you must manually keep the permissions in sync between Exchange Online and
on-premises. If an on-premises recipient adds an Exchange Online mailbox as assignee, you must manually
run the Add-RecipientPermission cmdlet in Exchange Online. In the reverse scenario, you must run the Add-
ADPermission cmdlet within the on-premises organization.
In the following example, an Exchange Online mailbox (TRedmond) is granted Send As permissions to an on-
premises mailbox (PGreen). To ensure that TRedmond can exert its rights, the administrator manually runs the
following command within the on-premises organization:
[PS] C:\> Add-ADPermission "PGreen" -User "TRedmond" -ExtendedRights "Send As"

In the reverse scenario, where PGreen would have been granted Send As permission for TRedmond's mailbox, the administrator
would have to execute the following command in Exchange Online:
[PS] C:\> Add-RecipientPermission "TRedmond" -AccessRights SendAs -Trustee "PGreen"

Preserving Mailboxes for ex-Employees


The Managing Users chapter in the main book discusses how an organization can secure the information in
the Office 365 locations used by an employee leaving the company. For instance, the mailbox can be
converted into a shared mailbox, or it can be turned into an Inactive mailbox. While both actions are also valid
in a hybrid deployment, some limitations apply. There is, of course, always the option to export the data to a
.PST file before removing the mailbox, but that's a less attractive option than the other two.

Converting a Regular Mailbox into a Shared Mailbox


Converting a regular mailbox into a shared mailbox is normally a trivial task. However, as has already been
pointed out several times, the existence of directory synchronization sometimes causes a process to be
slightly different. One would expect that because of directory synchronization, converting a mailbox in Office
365 to a shared mailbox is started using one of the on-premises tools like Exchange PowerShell or the EAC.
More specifically, it seems like the Set-RemoteMailbox cmdlet would be the perfect candidate for making this
change. Unfortunately, this is not the case. In fact, once a mailbox is created in or moved to Exchange Online,
there is no on-premises PowerShell cmdlet or interface in the on-premises EAC that can convert a regular
mailbox into a shared mailbox, or the other way around for that matter. The reason is that the Set-
RemoteMailbox cmdlet does not have a –Shared switch or anything else to help complete the task. Hence the
question remains: how can you convert an online mailbox to a shared mailbox in a hybrid deployment?
This scenario is one of the many exceptions to the rule where you can manage a hybrid recipient using one of
the online tools. Microsoft includes a single-click-conversion in the cloud version of EAC to change a regular
mailbox into a shared mailbox. In the background, EAC does nothing more than call the Set-Mailbox cmdlet as
follows:
[PS] C:\> Set-Mailbox MSpencer –Type Shared

After executing the command, you can call the Get-Mailbox cmdlet and observe that the RecipientTypeDetails
for the mailbox have successfully changed from UserMailbox into SharedMailbox:
[PS] C:\> Get-Mailbox MSpencer | Format-List *type*

ResourceType :
RemoteRecipientType : Migrated, UserMailbox
RecipientType : UserMailbox
RecipientTypeDetails: SharedMailbox

In a cloud-only deployment, this would be the end of it. However, in a hybrid deployment we still have the
on-premises counterpart where the "Remote Mailbox" is still shown as a regular mailbox instead of a Shared
Mailbox. If you are only using your on-premises Exchange servers for management purposes, you might not
care a great deal about what the value of the RecipientTypeDetails property is for a Remote Mailbox. However,
if your hybrid deployment contains mailboxes in both Exchange on-premises and Exchange Online, you want
both environments to show the mailbox for what it really is: a shared mailbox.
We took care of Exchange Online by executing the Set-Mailbox command to convert the mailbox to be a
shared mailbox. Officially, the 'correct' way of converting a regular user mailbox into a shared mailbox is to
convert it on-premises and then move it to Office 365. While this approach will work, it means that you would
have to off-board the mailbox to the on-premises environment first, convert it into a shared mailbox and then
move it back to Office 365. This seems like an awful lot of work to just convert a mailbox.
Another way exists to accomplish the same task. Unfortunately, it involves getting your hands dirty with a low-
level attribute editor such as ADSIEdit. Alternatively, you can also use the Attribute Editor in the Active
Directory Users and Computers console or edit the attribute in PowerShell. Note that none of these
approaches is supported by Microsoft.
As discussed earlier, in a hybrid deployment the on-premises user object is synchronized to Azure Active
Directory so that the object is identical in both the on-premises organization and Office 365. In the previous
PowerShell example, we examined the RecipientTypeDetails attribute. The value of this attribute tells Exchange
what the type of a mailbox is. When viewing the RecipientTypeDetails through PowerShell, the value you see is
a string. For example, 'SharedMailbox' or 'UserMailbox'. However, the actual decimal value is what is stored in
Active Directory in the msExchRecipientTypeDetails attribute. In case of a hybrid deployment, a mailbox in
Office 365 isn't represented as a regular mailbox in the on-premises environment, but as a remote mailbox. In
fact, the attribute that controls the type of a mailbox in Office 365 is not the msExchRecipientTypeDetails but
msExchRemoteRecipientType (Figure 5-2).

Figure 5-2: The msExchRemoteRecipientType attribute of a remote mailbox, viewed through ADSIEdit

The decimal value '6' (six) means a mailbox, migrated from on-premises, with a provisioned archive in
Exchange Online. Provisioned means that the archive was created in Exchange Online rather than being
moved from the on-premises environment. Using PowerShell connected to Exchange Online, you will see the
corresponding values when running the Get-Mailbox command and looking at the RemoteRecipientType
property:
[PS] C:\> Get-Mailbox Adunn | Select RemoteRecipientType

RemoteRecipientType
-------------------
ProvisionArchive, Migrated

Now that we know what attribute is used to control the type of the mailbox, we can examine the different
values that it can have. Table 5-2 gives the necessary guidance.

msExchRemoteRecipientType  Explanation
3    ProvisionedMailbox, ProvisionedArchive (both the mailbox and the archive were created directly in Office 365, for example using the New-RemoteMailbox command)
4    Migrated mailbox (the mailbox was moved from the on-premises organization to Office 365)
6    Migrated mailbox, ProvisionedArchive (the mailbox was moved from the on-premises organization to Office 365, but the archive was created directly in Exchange Online)
20   DeprovisionArchive, Migrated mailbox (this value is used when an archive for a migrated mailbox is disabled)
97   Provisioned Mailbox, Shared
100  Migrated Mailbox, Shared
Table 5-2: Remote Recipient Type values
If we take the above information and put it together, converting a regular mailbox to a shared mailbox in a
hybrid deployment consists of the following two steps:
1. Convert the mailbox in Exchange Online to a shared mailbox using the Set-Mailbox –Shared
command. This will change the RecipientTypeDetails of the mailbox in Office 365 to 'Shared'.
2. Match the on-premises user object with the object in Office 365 by manually changing the value of
the msExchRemoteRecipientType attribute into 97 or 100 and then wait for directory synchronization
to happen. This change will ensure that the RemoteRecipientType property of the mailbox in Office
365 is changed into either ProvisionMailbox, SharedMailbox or Migrated, SharedMailbox. There is no
effective difference between the value of '97' and '100' for the msExchRemoteRecipientType and the
value does not affect how a shared mailbox works. A PowerShell sketch of both steps follows.
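As a minimal PowerShell sketch of these two steps (assuming a migrated mailbox whose alias and Active Directory account name are both MSpencer), run the first command from an Exchange Online PowerShell session and the second on-premises:
[PS] C:\> Set-Mailbox MSpencer -Type Shared

[PS] C:\> Set-ADUser MSpencer -Replace @{msExchRemoteRecipientType=100}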
Looking back at the earlier PowerShell example for the Adunn mailbox, we can see that the value of the RemoteRecipientType in
Exchange Online was 'ProvisionArchive, Migrated'. After executing both steps, the properties of the mailbox
will look different, as you can see from the following output:
[PS] C:\> Get-Mailbox Adunn | Format-List *type*

ResourceType :
RemoteRecipientType : Migrated, SharedMailbox
RecipientType : UserMailbox
RecipientTypeDetails: SharedMailbox

The process of converting a hybrid mailbox to a shared mailbox is not straightforward. It would be much
easier to just be able to run the Set-RemoteMailbox command with a –Shared parameter. However, until
Microsoft decides to create such a switch, this is just what you will have to do!

Converting a Regular Mailbox into an Inactive Mailbox
Luckily, the process for converting a mailbox into an Inactive Mailbox is much simpler than the one to convert
into a Shared Mailbox. To remind you, an Inactive Mailbox is a mailbox in Office 365 for which litigation hold
or an in-place hold is activated before the user account is removed. Instead of purging the mailbox after the
deleted mailbox retention period, Office 365 keeps the mailbox in the soft-deleted state until the hold is
removed.
Following the general rule in a hybrid deployment, activating an in-place hold for a hybrid mailbox is done
through the on-premises management tools. For instance, using the EAC, you can create a new in-Place hold
policy for the mailbox you want to preserve. In-place holds are managed in the compliance section of EAC.
After the in-place hold is enabled, you must wait for the next directory synchronization cycle for the change to
be applied to the mailbox in Office 365. To verify that the changes were successfully synchronized and applied
to the online mailbox, you can use the Get-Mailbox command in PowerShell for Exchange Online:
[PS] C:\> Get-Mailbox Adunn | Select InPlaceHolds

InPlaceHolds : {f26df6b770e841f8a0b78b0f7c75d020}

Alternatively, you can also enable an In-Place Hold directly in Exchange Online and wait for the directory
synchronization process to synchronize the change back into the on-premises environment. This too is one of
the exceptions where you can manage a hybrid recipient through online tools because the attribute which
contains information about the hold policies (msExchUserHoldPolicies) is one of the few write-back attributes
that exist today.
If you perform this action for the sole purpose of being able to delete the user account, there is no real
purpose in making sure that the settings are consistent across both environments. After all, you are going to
delete the user object shortly anyway. Exactly for this reason, you can also choose to enable Litigation Hold
for the mailbox in Office 365 without waiting for any changes to be written back into the on-premises
environment. To do this, run the Set-Mailbox command with the LitigationHoldEnabled switch set to $True in
an Exchange Online PowerShell session:
[PS] C:\> Set-Mailbox Adunn –LitigationHoldEnabled $True
WARNING: The hold setting may take up to 60 minutes to take effect.

Note that this change will not be synchronized back into the on-premises Active Directory. But for the sole
purpose of converting a mailbox into an Inactive Mailbox, this is OK. After the mailbox has been successfully
configured for Litigation Hold or placed under an in-place hold, you can remove the user account. This will
ensure that the mailbox is 'disconnected' from the account and placed in a soft-deleted state. It will remain in
such a state until the hold is removed, or until you decide to recover or restore the Inactive Mailbox.
In a hybrid deployment, several options exist to delete a user account:
• Delete the on-premises account and wait for the directory synchronization process to sync the
deletion to Azure Active Directory after which the account is also removed from Office 365
automatically.
• Forcefully remove the user account from Azure Active Directory and exclude the on-premises user
object from being synchronized again.
The latter approach is a little tricky and should only be tried if you cannot remove the on-premises user
object. If you go down this route, it is critical that you make sure the user object is not (accidentally)
synchronized back to Office 365. If it is synchronized back, the soft-deleted mailbox will automatically be
reconnected to the user account. The recommendation to change the account password before converting a
mailbox is very relevant as this ensures that no one can access the mailbox if it is accidentally revived.

More About Inactive Mailboxes
To see which inactive mailboxes exist in your tenant environment, you can run the following command from
the Exchange Online PowerShell session. Note that this command is not available in on-premises Exchange
PowerShell:
[PS] C:\> Get-Mailbox -InactiveMailboxOnly

Name Alias ServerName ProhibitSendQuota
---- ----- ---------- -----------------
Tony Redmond TRedmond am3pr06mb0693 49.5 GB (53,150,220,288 bytes)

If you need to regain access to the information held in an inactive mailbox, there are two different actions we
can perform:
• Restoring an Inactive Mailbox.
• Recovering an Inactive Mailbox.
The difference between both is more than just semantics. Restoring an inactive mailbox means that the
contents of the Inactive Mailbox are transferred into another – already existing – mailbox. Recovering, on the
other hand, is the process of converting the inactive mailbox into a mailbox which can be used by a different
person. Both processes are explained in the Exchange Online chapter (main book) and apply to hybrid
recipients. However, there are some particularities to keep in mind.

Restoring an Inactive Mailbox


There is no difference between restoring an Inactive Mailbox to a hybrid mailbox and restoring one to a cloud mailbox. The New-
MailboxRestoreRequest cmdlet, which requests the restore of the contents of the inactive mailbox into another
mailbox, includes the AllowLegacyDNMismatch switch to enable it to do just that:
[PS] C:\> $Inactive = (Get-Mailbox –InactiveMailboxOnly –Identity "Jill Smith").DistinguishedName

[PS] C:\> New-MailboxRestoreRequest –SourceMailbox $Inactive –TargetMailbox Abrus@Office365ITPros.com
–TargetRootFolder "Jill Smith Old Mailbox" –AllowLegacyDNMismatch

Name TargetMailbox Status
---- ------------- ------
MailboxRestore Abrus Queued

Of course, the mailbox into which you want to restore the contents of the Inactive Mailbox must already exist.

Recovering an Inactive Mailbox


The alternative approach is to recover an Inactive Mailbox. This means that instead of copying contents from
the Inactive Mailbox to a different one, the inactive mailbox is attached to a user account without a mailbox in
Office 365 so that it becomes the user's mailbox. The problem with this approach in a hybrid deployment is
that you cannot create a mailbox for a hybrid recipient using the online tools. Because of this, you can't simply
run the New-Mailbox -InactiveMailbox command in an Exchange Online PowerShell session to recover an
Inactive Mailbox to a synchronized hybrid recipient. Furthermore, the on-premises Exchange Management
tools do not include the necessary cmdlets to perform the recovery of the mailbox. Luckily, none of this
means that you cannot recover an inactive mailbox for a hybrid recipient. However, it requires some additional
steps, including soft-matching objects through the Directory Synchronization process.
First, create a new online mailbox and attach the selected Inactive Mailbox to it using the Exchange Online
PowerShell session. This will also create a cloud-only user account in Azure Active Directory:
[PS] C:\> $Inactive = (Get-Mailbox –InactiveMailboxOnly –Identity "Jill Smith").DistinguishedName

[PS] C:\> New-Mailbox –InactiveMailbox $Inactive –Name "Joe Healy" –FirstName Joe
–LastName Healy –DisplayName "Joe Healy" –PrimarySMTPAddress "Joe.Healy@Office365ITPros.com"
–Password (ConvertTo-SecureString –String "Testing123!" –AsPlainText –Force)
–ResetPasswordOnNextLogon $True

Next, you must create a matching object (RemoteMailbox) in the on-premises organization using the same
email address as the cloud-based mailbox that was created to attach the Inactive Mailbox to
(Joe.Healy@Office365ITPros.com). Additionally, you must ensure that the ExchangeGuid of the on-premises
object matches the one of the mailbox in Office 365. The next step is to fetch the ExchangeGuid of the
recovered mailbox with the Get-Mailbox cmdlet (Exchange Online) and to stamp that value on the
corresponding on-premises object with the Set-RemoteMailbox cmdlet (on-premises Exchange):
[PS] C:\> $mailbox = Get-Mailbox "Joe Healy"
[PS] C:\> Set-RemoteMailbox –Identity RecoveredMailbox –ExchangeGuid $mailbox.ExchangeGuid

If you do not already have an on-premises mail-enabled user object of the RemoteMailbox type, you can
create one with the New-RemoteMailbox cmdlet first as described earlier.
The next time that the Directory Synchronization process runs, the on-premises object will be "soft-matched"
to the cloud-based object so that in subsequent synchronizations the object is updated automatically. If you
have deployed SSO, at this point the on-premises user account can access the cloud-based mailbox using its
on-premises credentials instead of the credentials managed and stored in Office 365.

Shared Mailboxes
Shared mailboxes have a lot in common with regular mailboxes. Moving a shared mailbox to Office 365 is a
simple and straightforward process. Things become a little more complicated when attempting to create a
new hybrid shared mailbox directly in Office 365. If a cloud-only shared mailbox is created, only Office 365
users will be able to see the mailbox in the GAL. Therefore, just like hybrid recipients, a shared mailbox should
be created on-premises.
Only the cloud version of EAC supports the option to convert a shared mailbox to a regular mailbox and vice
versa. An administrator can convert a regular mailbox to a shared mailbox in Office 365, but that creates an
inconsistency between the on-premises object and Office 365 in the process. In the on-premises Exchange
organization the mailbox is still known as a regular mailbox, while it is a shared mailbox in Office 365. Because
users in both environments can access shared mailboxes cross-premises, this can cause confusion. The
inconsistency can be seen by looking at recipient details in both the on-premises organization and Exchange
Online. In the on-premises organization, the remote mailbox is of the type RemoteUserMailbox:
[PS] C:\> Get-RemoteMailbox <user> | Format-List *recip*

RemoteRecipientType : Migrated
RecipientLimits : Unlimited
RecipientType : MailUser
RecipientTypeDetails : RemoteUserMailbox

In Exchange Online, however, the mailbox is shown as a SharedMailbox instead:


[PS] C:\> Get-Mailbox <user> | Format-List *recip*

RecipientLimits : 500
RemoteRecipientType : Migrated
RecipientType : UserMailbox
RecipientTypeDetails : SharedMailbox

Today, the only supported way to create a hybrid shared mailbox is to create a shared
mailbox on-premises and then move it to Office 365. Another effective workaround – albeit unsupported - is
to first create a shared cloud mailbox in Office 365 and then use the 'soft-matching' process to connect the
on-premises object and cloud-object with one another. This approach is very similar to the one used to
recover an Inactive Mailbox. When running the New-Mailbox cmdlet, make sure to specify an email address
using the PrimarySMTPAddress parameter as this is what will be used to match the on-premises object to the
cloud object later on. Run the following command from an Exchange Online PowerShell session to create a
new shared mailbox (and cloud-only user account):
[PS] C:\> New-Mailbox –Name SharedMBX –Shared
–PrimarySMTPAddress SharedMBX@office365itpros.com

Name Alias ServerName ProhibitSendQuota
---- ----- ---------- -----------------
SharedMBX SharedMBX db3pr06mb0619 49.5 GB (53,150,220,288 bytes)

Next, you create an on-premises remote mailbox using the New-RemoteMailbox cmdlet. This should be done
from the on-premises server. As you can see from the output below, the RemoteRecipientType is set to
ProvisionMailbox, which means this is a regular mailbox, not a shared mailbox. (If your version of Exchange
supports a -Shared switch for New-RemoteMailbox, you can specify it here; otherwise, convert the mailbox
manually as described below.)
[PS] C:\> New-RemoteMailbox –Name SharedMBX
–RemoteRoutingAddress sharedmbx@office365itpros.mail.onmicrosoft.com
–Password (Get-Credential).Password –UserPrincipalName "sharedmbx@office365itpros.com"
–PrimarySMTPAddress SharedMBX@office365itpros.com

Name RecipientTypeDetails RemoteRecipientType
---- -------------------- -------------------
SharedMBX RemoteUserMailbox ProvisionMailbox

To convert the mailbox to a shared mailbox, you must manually change the msExchRemoteRecipientType
attribute of the on-premises object to a value of '97' using a tool like ADSIEdit. To be effective, this must be
done before the next directory synchronization cycle. Alternatively, you can also edit the attribute using
PowerShell, as illustrated in the following example:
[PS] C:\> Set-ADUser SharedMBX –Replace @{msExchRemoteRecipientType=97}

Note: To avoid any problems due to the directory synchronization process starting while you are still
performing these actions, it is a good idea to temporarily disable the synchronization schedule on the
directory synchronization server (AAD Connect).
As soon as you make the change, you can verify that the RemoteRecipientType of the mailbox now includes
"SharedMailbox" in the on-premises environment:
[PS] C:\> Get-RemoteMailbox SharedMBX | Select Name,RecipientTypeDetails,RemoteRecipientType

Name RecipientTypeDetails RemoteRecipientType
---- -------------------- -------------------
SharedMBX RemoteUserMailbox ProvisionMailbox, SharedMailbox

At the next synchronization cycle, the on-premises object will be linked to the mailbox in the cloud thanks to
the soft-matching process. The mailbox in Office 365 remains a shared mailbox, as you can see from
the output below after running the Get-Mailbox cmdlet in the Exchange Online PowerShell session:
[PS] C:\> Get-Mailbox SharedMBX | Select Name, RecipientTypeDetails

Name RecipientTypeDetails
---- --------------------
SharedMBX SharedMailbox

Resource mailboxes: Unlike shared mailboxes, an administrator can easily change the type of a resource
mailbox using the Set-RemoteMailbox cmdlet in the on-premises organization. By adding the –Type
parameter and specifying the desired value (Regular, Room, Equipment), the mailbox's type will be changed
at the next synchronization cycle.
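For example, assuming a migrated mailbox with the (hypothetical) alias ConfRoom1 that should become a room mailbox, you would run the following in the on-premises Exchange Management Shell:
[PS] C:\> Set-RemoteMailbox ConfRoom1 -Type Room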

Archive Mailboxes
Enabling Exchange Online Archives for On-premises Mailboxes
One of the features in a hybrid deployment is the ability to create a cloud-based archive for an on-premises
mailbox. Like most other tasks, an administrator can enable a cloud-based archive either using the on-
premises EAC or PowerShell. To enable the archive from the EAC, select the mailbox, click Enable under In-
Place Archiving in the action pane and then select Cloud-based archive: <tenant>.mail.onmicrosoft.com.
You can also enable a cloud-based archive for a mailbox using PowerShell. An example of doing so is shown
below. The command must be executed from an on-premises server.
[PS] C:\> Enable-Mailbox TRedmond –RemoteArchive
–ArchiveDomain office365itpros.mail.onmicrosoft.com

Name Alias ServerName ProhibitSendQuota
---- ----- ---------- -----------------
Tony Redmond Tredmond e15-02 Unlimited

Now, let's discuss what happens in the background when a cloud-based archive is enabled.

Archive Provisioning Process


The process of enabling an Online Archive for an on-premises mailbox can take anywhere from an hour to
several hours, depending on the version of Azure AD Connect you have deployed. The main reason is
how the provisioning of an archive is done. Figure 5-3 illustrates the provisioning process, step-by-
step:

Figure 5-3: Exchange Online archive provisioning process


1. The administrator enables the online archive for one or more on-premises mailboxes by either issuing
the Enable-Mailbox -RemoteArchive cmdlet in the on-premises Management Shell or using the EAC.
2. Exchange "stamps" the on-premises object with archive details such as an Archive Name, the Archive
Guid, quota settings, and a temporary ArchiveState of HostedPending as illustrated below:
[PS] C:\> Get-Mailbox bjones | Format-List Archive*

ArchiveDatabase :
ArchiveGuid : 5cddc878-69f9-4eef-a4c1-f5677d4b51f0
ArchiveName : {In-Place Archive - Brian Jones}
ArchiveQuota : Unlimited
ArchiveWarningQuota : Unlimited
ArchiveDomain : office365itpros.mail.onmicrosoft.com
ArchiveStatus : None
ArchiveState : HostedPending
ArchiveRelease :

3. The Directory Synchronization process updates the cloud-based object with the new attribute values
written to the on-premises object when the archive was enabled.
4. The updated values trigger a process in Exchange Online to create the archive mailbox. The archive
mailbox will be created using the Archive GUID from the on-premises mailbox. Once the archive
mailbox is created, the cloud-based mail-enabled user account is updated by changing the value of
the msExchArchiveStatus attribute to 1.
5. The next time the directory synchronization process runs, the updated attribute is written back into
the on-premises Active Directory. When this happens, the ArchiveState attribute of the user's mailbox
is changed from HostedPending into HostedProvisioned (a quick check is sketched after these steps).
6. The next time the user opens their mailbox in Outlook or Outlook Web App, the Autodiscover process
includes information about the cloud-based archive, which triggers the client to connect to the added
resource as described in the Clients chapter (main book).
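To check how far a mailbox has progressed, you can query the on-premises mailbox again (a sketch that reuses the bjones mailbox from step 2); once step 5 completes, ArchiveState should read HostedProvisioned rather than HostedPending:
[PS] C:\> Get-Mailbox bjones | Format-List ArchiveState, ArchiveGuid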

Note: The first time a user tries to connect to their archive mailbox, a credential prompt might appear,
depending on how authentication is configured. When you use the default authentication, Outlook
connects to Exchange Online using basic authentication which triggers the prompt for credentials. When
this happens, the user can select to save their password and they are not asked for credentials the next
time they connect to the online archive. However, if you have configured Single Sign On and have enabled
Modern Authentication, no more password prompts will appear.

Issues with the Combination of On-Premises Mailboxes and Online Archives


Enabling Exchange Online Archives for on-premises mailboxes is a straightforward way for an organization to
extend functionality into the cloud. However, there are some points to keep in mind. For instance, Outlook
might become temporarily unresponsive when dragging/dropping messages to/from the archive. Although
this is not specifically an Exchange Online issue (the same challenges exist in an on-premises deployment), it
occurs because Outlook processes these actions as foreground operations. In other words, Outlook waits for the
action to complete before allowing a user to perform another action. The challenge is that the time it takes for
the item to move into or from the archive depends on several factors such as the latency and bandwidth of
the Internet connection and the size of the item. It is more efficient to let Exchange handle the offloading of
messages to the archive with Exchange retention policies: When suitable retention policies are in place, the
Managed Folder Assistant moves messages that reach their retention age into the archive using a background
process that does not affect clients. Note that Office 365 retention policies, which you can apply to mailboxes,
do not support an action to move items to archive mailboxes.
Another issue that sometimes occurs is where an organization decides to move archives back from the cloud
to an on-premises server. There is no supported method to do this, largely because Exchange Online supports
auto-expanding archives while the on-premises version of Exchange does not. If you need to move content back
to an on-premises server, you must use the Outlook desktop client to drag and drop the content from the
archive to a PST and later import from the PST to the target destination. Outlook desktop is the only client
that can access an on-premises mailbox and a cloud archive.
Finally, many of the PowerShell cmdlets available for Exchange do not support execution against an online
archive when the primary mailbox is on-premises. The problem here is that if you run cmdlets from a session
connected to the on-premises server, you can access the primary mailbox but not the archive (Get-
MailboxStatistics is a notable exception). On the other hand, if you connect to a cloud mailbox, you cannot
access archives that belong to on-premises mailboxes.
It is possible that these issues exist because of a view that on-premises mailboxes with cloud archives will
eventually move everything to the cloud and so cause the problems to disappear. It is unclear whether
Microsoft will dedicate any resources to fix the problems and they might then be something that you just
have to live with if you want to use this on-premises/cloud mailbox configuration.

Enabling Archives through the Security and Compliance Center


Administrators can also enable an archive for a hybrid (synchronized) user with an Office 365 mailbox from
the Security and Compliance Center. Considering that the general rule is that you should only manage hybrid
recipients using the on-premises tools, this is a little surprising, although not the only exception to the rule.
The benefit of provisioning an archive through the Security and Compliance Center is that only a single
synchronization cycle is needed before the on-premises mailbox knows that an archive is available. Although this
appears to be an advantage, some drawbacks exist. These issues exist because of how the directory
synchronization process works.
As described earlier, when you enable an archive through the on-premises management tools, the on-
premises mail object is stamped with a variety of attributes, including ArchiveState, ArchiveName and
ArchiveGuid. Subsequently, those attributes synchronize to Office 365 to trigger the creation of the archive.
When Exchange Online provisions the archive, all details are available and both environments are in a
consistent state.
However, when you enable an Office 365 archive for a synchronized user through the Security and
Compliance Center, the ArchiveState, ArchiveName and ArchiveGUID attributes are not written back to the on-
premises object and only the msExchArchiveStatus is updated. To illustrate the issue, here is what the attribute
values look like for a hybrid recipient when an archive is created using on-premises tools:
[PS] C:\> Get-RemoteMailbox jpearson | Format-List Archive*

ArchiveState : HostedProvisioned
ArchiveGuid : f5ab68ba-f026-4e8f-b931-e95d08187bed
ArchiveName : {In-Place Archive – Joe Pearson}
ArchiveQuota : 100 GB (107,374,182,400 bytes)
ArchiveWarningQuota : 90 GB (96,636,764,160 bytes)
ArchiveDatabase :
ArchiveStatus : Active

Now compare the values created when an archive is created for an on-premises hybrid mailbox from the
Security and Compliance Center. Notice the lack of values for ArchiveGuid, ArchiveState, or ArchiveName:
[PS] C:\> Get-RemoteMailbox jbaker | Format-List Archive*

ArchiveState : None
ArchiveGuid : 00000000-0000-0000-0000-000000000000
ArchiveName : {}
ArchiveQuota : 100 GB (107,374,182,400 bytes)
ArchiveWarningQuota : 90 GB (96,636,764,160 bytes)
ArchiveDatabase :
ArchiveStatus : Active

Despite the advantage of faster provisioning, the inconsistency that is generated between the on-premises
object and its Office 365 counterpart can create some management challenges. Through the normal
provisioning process, the on-premises objects have all the archive information and can be queried using the
on-premises management tools like PowerShell. In this approach, however, the on-premises objects have no
values for archive-specific attributes. As such, if you use archives both on-premises and in Office 365, you
must target reporting scripts and tools to fetch information from both Exchange environments to get the full
picture of your environment, rather than querying just the on-premises Exchange servers.
The lack of an ArchiveGuid stamped on the on-premises object also prevents you from moving the archive
back to the on-premises organization! Before you can move the archive, you must manually copy the value
from the mailbox in Office 365 to the on-premises object.
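As a sketch of that manual step (reusing the jbaker mailbox from the previous example), first read the value from Exchange Online and then stamp it on the on-premises object with Set-RemoteMailbox, which supports an -ArchiveGuid parameter:
[PS] C:\> Get-Mailbox jbaker | Select ExchangeGuid, ArchiveGuid

[PS] C:\> Set-RemoteMailbox jbaker -ArchiveGuid <ArchiveGuid value copied from Exchange Online>

The first command runs in an Exchange Online PowerShell session; the second runs in the on-premises Exchange Management Shell.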
If you do not plan to off-board archives and you can live with the idea that you must focus on Exchange
Online any time information is needed about archives, you might not care a great deal about this issue. But if
you want to keep maximum flexibility, you might want to ensure that archives are created consistently using
only the on-premises EAC or PowerShell.

Hybrid Public Folders


Microsoft supports hybrid interconnectivity for public folders. This means that an on-premises user can access
public folders running inside Exchange Online, but only if their mailbox is located on an Exchange 2013 or
Exchange 2016 server. Exchange 2010 and Exchange 2007 mailbox servers have no knowledge of the public
folder implementation inside Exchange Online. The reverse is also true in that users whose mailboxes are on
Exchange Online can access on-premises public folders, but only if the public folders are on a mailbox server
running a supported version (at least Exchange 2010 SP3 or Exchange 2007 SP3 RU10) that includes the
necessary code to support public folder interoperability with Exchange Online.
Public folders can only exist in one place inside a hybrid deployment. You cannot split the hierarchy across
on-premises and cloud because no facilities exist to synchronize the hierarchy across the two platforms. Some
other steps are necessary to ensure that mail-enabled public folders work properly in a hybrid configuration.
Mail-enabled public folders are Active Directory objects, but these objects are not synchronized in the same
way as mailboxes and other mail-enabled objects. Instead, a manual synchronization process based
on running PowerShell scripts to export (from the on-premises organization) and import (into Office 365)
information about mail-enabled public folders is needed. Ideally, this synchronization should be performed
daily to ensure that all mail-enabled public folders function properly from both sides of the hybrid
environment. The steps necessary to obtain the scripts and perform the synchronization are available online
along with the steps required to configure Exchange Online to redirect public folder requests to the on-
premises organization.

Note: The May 2017 release of Azure AD Connect introduced the ability to synchronize mail-enabled
Public Folders to Office 365. This functionality, however, cannot be used to make Public Folders available
cross-premises. The migration and hybrid configuration process for Public Folders still relies on the use of
scripts provided by Microsoft.

Accessing Public Folders Managed by On-premises Exchange Mailbox Servers


Detailed steps needed to configure the environment to support access from Exchange Online mailboxes to
legacy public folders hosted in on-premises Exchange 2010 or Exchange 2007 servers are documented in
TechNet. An assumption is made that the on-premises and cloud organizations share a hybrid configuration
that is working smoothly. In outline, the process follows these steps:
• Make the legacy public folders available to Exchange Online mailboxes. This is done by creating an
empty mailbox in a database on every on-premises mailbox server that hosts public folder databases.
The mailbox does nothing except to act as a proxy for incoming connections from Exchange Online
users.
• Export details of the on-premises mail-enabled public folders to an XML file. Microsoft provides a
script for this purpose.
• Import the XML file to create mail-enabled objects so that they are recognized as such by
Exchange Online. Again, Microsoft has a script to do the work. Note that exporting and importing
public folder data needs to be done periodically to pick up changes made in the public folder
hierarchy on the on-premises servers. Synchronization using the scripts provided by Microsoft is not
automatic. However, you can deploy DirSync or Azure Active Directory Sync Services to achieve
automatic cross-premises synchronization of public folder objects. This is the preferred solution if you
want to maintain legacy public folders on-premises.
• Configure the Exchange Online tenant so that it knows to redirect attempts to access public folders to
the on-premises servers. This is done by running the Set-OrganizationConfig cmdlet to provide the
names of the public folder proxy mailboxes for the on-premises servers (a sketch follows this list).
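A sketch of that configuration, run from an Exchange Online PowerShell session (the proxy mailbox names are illustrative), might look like this:
[PS] C:\> Set-OrganizationConfig -PublicFoldersEnabled Remote
-RemotePublicFolderMailboxes PFProxyMailbox1,PFProxyMailbox2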
It is also possible to have the reverse configuration and have Exchange Online provide public folders to on-
premises mailboxes. In this scenario, the public folder mailboxes pointed to by the Set-OrganizationConfig
cmdlet are hosted by Exchange Online. In either case, it is recommended that the set of public folder
mailboxes used to redirect connections from one environment to the other does not contain the primary
public folder mailbox as this mailbox should be reserved (as far as possible) for hierarchy maintenance and
propagation.
Cross-premises access is only supported by Outlook clients as these are the only clients that consume the
information provided by Autodiscover which tells the client how to find the on-premises public folders.
Outlook 2016 for Mac (with the April 2016 update) supports certain cross-premises scenarios for access to
public folders. OWA doesn't use Autodiscover so Exchange Online users can't use this client to access public
folders.

Groups
Although some types of groups are only available in Office 365, it does not mean that the other group types
aren't relevant in a hybrid deployment. As you will notice in the topics below, there are some particularities
which you need to look out for!

Managing Distribution Groups


Distribution Groups and group memberships are automatically synchronized to Office 365 by the directory
synchronization process. The cloud-based object and its on-premises counterpart are therefore kept in an
identical state. If distribution groups are managed only by IT staff, you should be safe. However, if you allow
users to manage their own Distribution Groups, there are some limitations that apply within a hybrid
deployment.
Let's have a look at how users can manage a distribution group on-premises. If permitted by the assigned
user role assignment policy, users can manage the membership of the distribution groups that they own
through Outlook Web App and/or Outlook. If we fast-forward to a hybrid deployment where the owner's
mailbox is moved to Office 365, we are presented with the challenge that hybrid distribution groups cannot
be managed by online tools. As such, no changes can be made because the distribution groups are
synchronized from the on-premises Active Directory to Azure Active Directory, not the other way around. The
lack of a write-back capability or support for cross-premises permissions leaves a migrated user with a loss in
functionality. If this is a feature which you have come to rely upon, there are a few things you can do. None
of them is a complete answer, but they provide a good start.
• Let IT handle all requests to add or remove members from Distribution Groups, which is not a very
scalable solution.
• Use a third-party Identity Management solution to deal with group memberships. This would mean
loss of functionality through OWA as users will have to use the third-party tool to make changes to
distribution groups.
• Only use cloud-based groups for cloud mailboxes. However, this means that on-premises mailboxes
cannot see the cloud-based Distribution Groups as these are not synchronized back to Active
Directory. You can work around this problem by creating Mail Contacts for each cloud-based
Distribution Group, but that can be quite time consuming and is not very efficient. Not to mention
that it can be complex to manage and maintain if you have a sizeable number of groups.
• Give users access to tools such as the Active Directory Users and Computers console or Dsquery.exe.
The problem with this solution is that the computer used to access these tools must be connected to
the internal network. Unless you publish these tools through a tool such as RemoteApp or Citrix, users
cannot manage distribution group membership when not connected to the corporate network.

Mail-Enabled Public Folders and Distribution Group Memberships


Another effect of using directory synchronization is the occasional oddity observed in group memberships.
Group memberships are normally synchronized to Azure Active Directory. However, when an on-premises
mail-enabled public folder is a member of a distribution group, that group membership is not fully honored in
Office 365 because the directory synchronization process does not sync the mail-enabled public folder or its
membership. The result is that if a message is sent to the distribution group, the message will not be
delivered to the mail-enabled public folder. Until Microsoft lifts the limitation in the directory synchronization
process, there is only one way to work around this problem:
1. Create a mail contact for the mail-enabled on-premises public-folder in Office 365.
2. Then, create a cloud-only distribution group and add the contact as a member of the distribution
group. Don't forget to manually add other members of the group too.
3. Exclude the on-premises distribution group from syncing to Office 365. This is to ensure that the cloud-
only distribution group is not overwritten at the next synchronization interval.
The problem with this workaround is that you now have created a distribution group for which group
membership is not automatically maintained cross-premises. As a result, each change to the group's
membership must be made by an administrator or by the group owner.
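A hedged sketch of the first two steps, run from Exchange Online PowerShell (the names and addresses are
illustrative; the contact's external address is the SMTP address of the on-premises mail-enabled public folder):
[PS] C:\> New-MailContact -Name "Sales PF (On-Premises)" -ExternalEmailAddress "SalesPF@office365itpros.com"
[PS] C:\> New-DistributionGroup -Name "Sales Announcements" -Type Distribution
[PS] C:\> Add-DistributionGroupMember -Identity "Sales Announcements" -Member "Sales PF (On-Premises)"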

Dynamic Distribution Groups


Dynamic Distribution Groups are also not synchronized to Office 365. This means that there is no object
representing the group in Azure Active Directory and thus mailboxes in Office 365 cannot send messages to
the group. One way to overcome missing objects for dynamic distribution groups is to manually create a mail
contact in Exchange Online to represent each on-premises dynamic distribution group.

Note: When you create an OPATH query for an on-premises dynamic distribution group, you must pay
attention to the directory scope for the query, as defined in the RecipientContainer property for the group.
The scope is established by an Active Directory OU such as “/contoso.com/Exchange users”, including all
its child OUs. You can continue to do this in a hybrid environment because the on-premises Active
Directory is used to resolve the query. However, things are a little different in a pure cloud environment
because Office 365 creates all user accounts in the top level organizational unit for the tenant (as in
tenant.onmicrosoft.com). All dynamic distribution groups created in this scenario will have their
RecipientContainer property set to the top-level organizational unit. You cannot change this value, so the
queries for dynamic distribution groups created in a pure cloud deployment of Exchange Online are
scoped to execute against the entire directory.
For example, assume a Dynamic Distribution Group called "DDG_Sales" exists in the on-premises organization
and the email address of the group is Sales@Office365ITPros.com. The administrator can then create a
contact in Exchange Online which preferably uses the same name (to avoid confusion amongst users) and
matching email address:
[PS] C:\> New-MailContact -Name "DDG_Sales" –DisplayName "DDG_Sales" –Alias "DDG_Sales"
-ExternalEmailAddress "sales@office365itpros.com"

The membership of a dynamic distribution group is calculated through its recipient query, which is resolved
against Active Directory to calculate the group members each time the group is used by someone to address
a message. The query can include only Exchange mailboxes. Exchange Online mailboxes are represented as
mail-enabled Users in the on-premises Active Directory which means that a mailbox, after it is migrated to
Exchange Online, will no longer be found within the scope of these queries.
To ensure that dynamic distribution groups also include Exchange Online mailboxes, their scope must be
modified to also include mail-enabled users. The easiest way to do so is through the on-premises EAC (a
PowerShell equivalent is sketched after these steps):
1. Click recipients and then groups.
2. Double-click the dynamic distribution group you want to modify.
3. On the membership tab, under Members, make sure to select Mail users with external email
addresses.
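For reference, a hedged sketch of the equivalent change in the on-premises Exchange Management Shell (the
group name is illustrative; this approach applies to groups that use the precanned IncludedRecipients filter
rather than a custom recipient filter):
[PS] C:\> Set-DynamicDistributionGroup -Identity "DDG_Sales" -IncludedRecipients "MailboxUsers,MailUsers"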

Note: You can apply the same workaround for dynamic distribution groups that exist in Office 365, but
not on-premises. By changing the scope of the cloud-based distribution group and adding a contact to
the on-premises organization, the distribution group can be used by people whose mailbox is hosted in
the on-premises organization.

Distribution List Naming Policy


The Groups chapter in the main book discusses how a Distribution Group Naming policy can be used to exert
control over the names given to Distribution Lists. Although the feature works as expected, it applies to both
groups created directly within Exchange Online and those that are synchronized from the on-premises
organization. In this example, we use a simple naming policy.
[PS] C:\> Set-OrganizationConfig -DistributionGroupNamingPolicy "DG <Department> <GroupName>"

Next, you create an on-premises Distribution List named "Human Resources". In the on-premises
organization, the details of the DL are as follows:
[PS] C:\> Get-DistributionGroup "Human Resources" | Select Name,DisplayName,PrimarySMTPAddress

Name : Human Resources


DisplayName : Human Resources
PrimarySmtpAddress : HumanResources@office365itprobook.com

Once the DL is synchronized to Office 365, the names assigned to the DL in EXODS (and therefore, in the
address lists visible to end users) are updated according to the policy. The email addresses remain untouched.
[PS] C:\> Get-DistributionGroup "Human*" | Select Name,DisplayName,PrimarySMTPAddress

Name : DG Human Resources


DisplayName : DG Human Resources
PrimarySmtpAddress : HumanResources@office365itprobook.com

The naming policy has no retrospective effect over DLs that already exist in the directory. It is only applied
when a DL is created. Having different names for the same DL in the on-premises and cloud directories is
obviously not a good situation. The best approach is therefore to ensure that the same naming policy is
applied in both environments.

Office 365 Groups


Office 365 Groups pose a challenge in hybrid environments because they are an object type that only exists
within Office 365. There is no matching on-premises counterpart that offers a similar feature set. The issues
that arise when synchronizing Office 365 Groups with an on-premises organization appear in different ways.
An Office 365 Group does not necessarily need to be synchronized to the on-premises directory for on-
premises mailboxes to participate in conversations hosted by the group. Provided the Let people outside
the organization email the group setting is selected for a group, on-premises users can simply send email
to the group’s SMTP email address and participate in conversations in that manner. Given that the SMTP
address of the Group might use the same domain name as the on-premises Exchange organization, additional
configuration might be necessary. Without the proper changes, the on-premises organization will not find a
valid recipient in its directory and delivery of messages will fail.
One way to deal with this scenario is to convert the shared domain name from an Authoritative domain to an
Internal Relay domain in Exchange. The downside is that this approach opens the environment to all sorts of
problems, such as the potential for mail loops. A better solution is to create an on-premises Mail Contact to
represent the Office 365 Group. To avoid seeing two entries in the GAL, you should exclude these Mail
Contacts from synchronizing to Azure AD. If not, you might face synchronization errors because the same
email address would exist on two different objects.
To ensure that the contact object can be used to send email to the group, its targetAddress must be updated
to use the routing address for the group. The routing address is based on the tenant’s service domain, such as
O365book@office365itpros.onmicrosoft.com. Using the routing address avoids any need to change the type of
the shared domain. This PowerShell command shows how to retrieve the routing address for a group:
[PS] C:\> Get-UnifiedGroup -Identity "Test Group" | Select –ExpandProperty EmailAddresses

smtp:o365book@office365labonline.onmicrosoft.com
SMTP:o365book@office365lab.be
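With the routing address in hand, a minimal sketch of the matching on-premises Mail Contact, created from
the on-premises Exchange Management Shell (the contact name is illustrative; remember to exclude the
contact from synchronizing back to Azure AD):
[PS] C:\> New-MailContact -Name "O365 Book Group" -ExternalEmailAddress "o365book@office365labonline.onmicrosoft.com"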

Real world: An on-premises Exchange organization does not bifurcate messages sent to an Office 365
Group. Instead, on-premises Exchange routes messages to Exchange Online, which can determine the full
and up-to-date membership for the groups. Exchange Online applies the necessary logic to route
messages to the members of the group – even if that means sending a copy of the message back to the
on-premises Exchange servers.
The experience is slightly better when you implement directory synchronization and enable the group write-
back feature. The group write-back feature is a part of Azure AD Connect and automatically takes care of
creating an on-premises object, and stamping it with a targetAddress. Unfortunately, the value that is written
back as the targetAddress is the Group's primary SMTP address. By default, the Group's primary SMTP address
is based on the tenant's routing domain. As such, if the domain of the Group's primary SMTP address is not used
in the on-premises organization, if it is based on the tenant's routing domain, or if the domain is set to an
Internal Relay domain on-premises, you should be fine.
However, problems might appear if the domain used for the primary SMTP address for a group is the same as
any of the SMTP domains used by the on-premises organization. For example, if a group’s primary email
address is O365Group@Office365ITPros.com, then the value for its targetAddress attribute will also be
O365Group@Office365ITPros.com. When an on-premises mail user sends a message to that address, the
email will not be delivered as the on-premises organization will not forward the message to Office 365 if the
on-premises organization is Authoritative for that domain. As explained earlier, changing the domain to an
Internal Relay domain could fix the problem, but that is not a change you want to make without careful
consideration. Not even the Hybrid Configuration Wizard updates the domain type when it runs!
Two workarounds exist. Either use the tenant's routing domain as the primary email address domain for all
Office 365 groups or implement a specific "Groups Domain" as outlined in the hybrid groups guidance by
Microsoft. Although neither of these options is ideal, they are the only options if you intend to synchronize
Office 365 Groups with an on-premises organization until Microsoft changes Azure AD Connect to always use
the tenant routing domain – as is the case with regular mailboxes.
Information on how to enable the various write-back capabilities in Azure AD Connect is in Chapter 3 (main
book). After enabling the group write-back feature, existing Office 365 Groups will automatically synchronize
with the on-premises Active Directory at the next synchronization cycle. Synchronized groups go into the
Organization Unit that was selected during the configuration of the write-back feature. To prevent duplicate
objects and resulting synchronization conflicts, the name used for synchronized Office 365 Groups in Active
Directory is not the actual display name of the Group. Instead, the objects receive names composed of the
word “Group” followed by a unique value (a GUID), which originates from the ObjectID property for the group
in AAD.
For instance, a group called “Test Group” in Office 365 might have a name of “Group_69e08789-46cd-49ec-
b2e8-8297636c788c” in the on-premises directory. However, the display name of the Office 365 Group stays
the same as it is in AAD. As is obvious in Figure 5-4, the use of identifiers for the name of synchronized groups
might make it difficult to understand which group objects are synchronized when viewed through tools like
the Active Directory Users and Computers console or the Active Directory Administrative Center:

Figure 5-4: Synchronized Office 365 Groups in the Active Directory Users and Computers console
You can retrieve the ObjectID for an Office 365 Group in two ways. First, with the Get-AzureADGroup cmdlet:
[PS] C:\> Get-AzureADGroup –SearchString "Test Group" | Select DisplayName, ObjectId

DisplayName ObjectId
----------- --------
Test Group 69e08789-46cd-49ec-b2e8-8297636c788c

Alternatively, you can retrieve the identifier using the Get-UnifiedGroup cmdlet. In this case, the
ExternalDirectoryObjectId property has the value.
[PS] C:\>Get-UnifiedGroup | Select DisplayName, ExternalDirectoryObjectId

DisplayName ExternalDirectoryObjectId
----------- -------------------------
Test Group 69e08789-46cd-49ec-b2e8-8297636c788c
Test Group2 af905347-5322-4183-a1aa-9522a85bfeb9

Note: Group writeback only works one way: from Office 365 to Active Directory. Changes made to
synchronized groups in the on-premises Active Directory are not synchronized back to Azure AD. On top
of that, any change you make on-premises is lost at the next synchronization cycle; the synchronization
process automatically overwrites any values that do not match the source object in Azure AD. Servers
need to run the latest cumulative update for Exchange 2013 or Exchange 2016 to ensure that groups can
be written back correctly into the on-premises Active Directory.
By default, synchronized Office 365 Groups appear as distribution groups in the on-premises Active Directory.
To get around the requirement for Azure Active Directory premium licenses, some have created scripts to
write back Office 365 Groups as other objects (here’s an example where they are written back as mail
contacts). As with any code downloaded from the internet, be careful to test this script before you put it into
production.
Normally, synchronized groups automatically appear in the on-premises Global Address List (GAL).
Unfortunately, this is not the case in the version of Azure AD Connect used for this book (May 2017), as you
must update the synchronized group objects manually to populate all the required Exchange properties. It is
expected that Microsoft will fix this problem in the future. Until then, you need to run the Update-Recipient
cmdlet to update a synchronized group object. It is easy enough to find and update new group objects after
each synchronization cycle. Each group object is updated like this:
[PS] C:\> Update-Recipient Group_b5f87098-3f41-470b-ae8a-70e69a51f982
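To process every newly written-back group in one pass, a hedged sketch using the ActiveDirectory module
(the organizational unit is illustrative):
[PS] C:\> Get-ADGroup -Filter 'Name -like "Group_*"' -SearchBase "OU=Office365Groups,DC=o365exch,DC=com" |
ForEach-Object { Update-Recipient -Identity $_.DistinguishedName }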

After they have the necessary properties, the group objects appear in the GAL the next time it is generated.
You can force an update of the GAL by running the Update-GlobalAddressList cmdlet to update the Default
Global Address List. The update might not be instantaneous as the on-premises Exchange servers must crawl
through all objects to rebuild the Global Address List. Outlook users have another step to take as they must
then download the Offline Address Book (OAB) after Exchange regenerates the GAL. However, the Office 365
Group objects will be visible in the online directory. Synchronized Office 365 Groups will never be displayed in
the on-premises EAC; they can only be viewed using PowerShell.
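A minimal sketch of the address list commands involved, assuming the default address list and offline address
book names:
[PS] C:\> Update-GlobalAddressList -Identity "Default Global Address List"
[PS] C:\> Update-OfflineAddressBook -Identity "Default Offline Address Book"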
You can add on-premises users as members of an Office 365 Group. Unlike true hybrid recipients, the
membership or other properties of Office 365 Groups cannot be managed through the on-premises directory.
Instead, you must add new members using the methods described in the Groups chapter (main book). If you
have enabled group write-back, the group membership is automatically updated in the on-premises directory
at the next synchronization interval.

Real-world: Group membership in the on-premises Exchange organization is only reflected for valid
recipients that also exist in the on-premises directory. Valid recipients include Mail Users, Mailbox Users, or
Remote Mailbox Users. If your Office 365 Group consists of both hybrid and cloud-only recipients, the
latter will not show up in the group membership list in the on-premises organization. This is not surprising
because to update the group membership, the on-premises directory must have objects representing all
the members. Because cloud-only recipients do not exist in the on-premises organization, they cannot be
added to the list of members.
When you add on-premises users to an Office 365 Group, they might not be able to access the full breadth of
resources that are available to group members, like plans managed by Microsoft Planner or the chats
managed by Microsoft Teams. However, if their identity is synchronized with Office 365 and they can
authenticate with the service, they can access files in the group document library. In addition, on-premises
users are auto-subscribed to the group, which means that they receive all the contributions made to group
conversations and any group meetings through email. The users can then reply to messages they receive from
the group and participate in group conversations.

Limitations for on-premises users: Several limitations exist in terms of how on-premises users can interact
with an Office 365 Group.
• On-premises users can send email to an Office 365 Group, but only if they use the primary SMTP
address of the group. They cannot use a secondary address because that address is probably
unknown to the on-premises Active Directory.
• The links to resources like Files contained in the notifications sent by an Office 365 Group might be
invalid unless on-premises users can authenticate and connect to the group document library.
• On-premises users cannot be administrators of a group.
• On-premises users cannot use the SendAs or Send On Behalf features for a group because the
necessary permissions are not available to them.
For these and other reasons, if on-premises users begin to use Office 365 Groups extensively, it is best to
move them to Office 365.

Email Addresses
In a non-hybrid deployment, managing email addresses for an Exchange Online mailbox is a labor-intensive
manual process because Office 365 does not allow tenants to manage Email Address Policies except for Office
365 Groups. One of the benefits of a hybrid deployment is that the on-premises Email Address Policies also,
indirectly, apply to hybrid recipients. This is because when an on-premises policy is applied, the address
information that is written to the on-premises object is also synchronized to Office 365.
However, as mentioned before, the limitations imposed by the synchronization process must be considered: a
policy might be enabled and applied in the on-premises environment, but that does not mean the changes
have already been synchronized to Office 365.

Real-world: Although email address policies also apply to hybrid recipients, Office 365 mailboxes created
through the on-premises EAC as described in "Creating a new hybrid mailbox" earlier in this chapter, do
not have the Automatically update email addresses based on the email address policy flag set. As
such, email address policies will by default not apply to those hybrid recipients, unless you configure the
recipients manually to use an email address policy. For more information, you can check this KB article.
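A hedged sketch of enabling the flag for a remote (hybrid) mailbox, run from the on-premises Exchange
Management Shell (the identity is illustrative):
[PS] C:\> Set-RemoteMailbox -Identity JoePearson -EmailAddressPolicyEnabled $true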

The Curious Case of the WindowsEmailAddress attribute


Throughout this book, we repeatedly make the point that when directory synchronization is enabled, you
cannot modify on-premises objects from within Office 365, and that if you need to make a change to one of
those objects, you must do so using the on-premises tools. We have also explained that some (limited)
exceptions exist for that rule, mostly because those exceptions pertain to features that are only available in
Office 365.
However, the WindowsEmailAddress attribute can be managed in Office 365 regardless of whether the object
is synchronized or not. WindowsEmailAddress controls the primary email address and can thus have a
significant impact when not used properly. On the other hand, sometimes the need exists to change this
attribute to make things work as they should.
For instance, normally when you want to change the primary SMTP address for a hybrid recipient, you make
the change on the on-premises object and wait for the directory synchronization process to do its job.
However, what happens if you do not want to change the email address for an on-premises object, or you
can't because the domain only exists in Office 365?
In these cases, you can set the WindowsEmailAddress attribute to any value, provided the domain portion matches
any of the tenant's registered domains. For instance, to configure a user named "Joe Pearson" with a new
email address, you could run this PowerShell command from Exchange Online:

[PS] C:\> Set-Mailbox JoePearson –WindowsEmailAddress NewAddressforJoe@office365itpros.com

This command causes two things to happen:


1. The value you specify for the WindowsEmailAddress attribute is set as the user's primary email
address.
2. The previous value for the primary email address is added as a secondary address (proxy address) to
the user's mailbox.
The downside of this approach is that the email address you configure through Office 365 is not synchronized
back into the on-premises environment. As such, on-premises recipients or systems cannot use the new email
address to send email to the recipient. The simple workaround is to continue to use the previous email
address, which continues to work because it is a known proxy address for the mailbox.

Real world: There seems little to no documentation about the use of this attribute. During my testing, I
found that the values I had configured persisted through subsequent Delta or Full synchronizations.
However, other MVPs reported that the values disappeared after a full synchronization. Proceed with
caution until more information is publicly available. Forewarned is forearmed!

User Thumbnail Pictures


When an on-premises Active Directory synchronizes with Azure Active Directory, the thumbnailPhoto attribute
of the user object, used to store a low-resolution version of the user’s profile picture, synchronizes with the
user object in Azure Active Directory. Before exploring the limitations of the synchronization process, you
should know that several ways exist to upload a user photo:
1. Upload a picture directly to Active Directory using PowerShell or third-party tools. Active Directory
stores the image in binary form in the thumbnailPhoto attribute. The image cannot
be larger than 100 KB.
2. Upload a picture to the user’s mailbox using the Set-UserPhoto cmdlet. This process uploads a high-
resolution version of the photo (648 x 648 pixels) into the root of the user’s mailbox and a low-
resolution version (96 x 96 pixels) into the thumbnailPhoto attribute in Active Directory. The thumbnail
generated through this process is always smaller than 10 KB.
3. A user can upload a picture through various clients such as Outlook Web App, Skype for Business
Online, Teams, SharePoint Online etc. What happens is essentially the same as described in step 2.
The size of the image in the thumbnailPhoto attribute plays a key role in the Directory Synchronization
process. Although the limit in the on-premises Active Directory is set to 100 KB, Exchange Online Directory
Services (EXODS) enforces a maximum size of 10 KB for the thumbnailPhoto attribute. Images from your on-
premises directory that are larger than 10 KB will not synchronize properly and will therefore not display
within clients. A script is available in Microsoft's Script Gallery that can be run against Active Directory to
identify user accounts that have a thumbnail picture larger than the allowed size. That information can then
be used to update those accounts with pictures of a lower size.
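If you prefer not to use the script, a minimal sketch using the ActiveDirectory module produces a similar
report (the 10 KB threshold matches the limit enforced by EXODS):
[PS] C:\> Get-ADUser -Filter 'thumbnailPhoto -like "*"' -Properties thumbnailPhoto |
Where-Object { $_.thumbnailPhoto.Length -gt 10KB } |
Select-Object Name, @{n='PhotoSizeKB';e={[math]::Round($_.thumbnailPhoto.Length/1KB,1)}}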
The 10 KB limit in Azure Active Directory and therefore in Exchange Online does not mean that you are limited
to using low-quality pictures only. As mentioned above, Exchange 2013 (and later) can store high-quality
pictures inside a user's mailbox, rather than having to solely rely on the thumbnailPhoto attribute in Active
Directory. If a picture is uploaded to the user’s mailbox, services like Skype for Business will automatically use
it. When a mailbox is moved to Office 365, the picture is automatically included.
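A minimal sketch of uploading a photo to an on-premises mailbox with the Set-UserPhoto cmdlet (the identity
and file path are illustrative):
[PS] C:\> Set-UserPhoto -Identity TRedmond -PictureData ([System.IO.File]::ReadAllBytes("C:\Photos\TRedmond.jpg")) -Confirm:$false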
Things are slightly different for mailboxes that are already in Office 365 because you cannot call the Set-
UserPhoto cmdlet from the on-premises Exchange Management Shell to process a mailbox which resides in
Exchange Online. However, you can do so from the Exchange Online Management Shell. By default, the
remote PowerShell connection only allows you to upload pictures up to approximately 10 KB in size. If you
want to upload pictures of a larger size, you must establish your remote PowerShell connection to Exchange
Online in a different manner. Note the addition of ?proxyMethod=RPS to the PowerShell endpoint:
[PS] C:\> $UserCredential = Get-Credential

[PS] C:\> $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri


https://outlook.office365.com/powershell-liveid/?proxyMethod=RPS -Credential $UserCredential
-Authentication Basic –AllowRedirection

[PS] C:\> Import-PSSession $Session

Once the session is established, you can import larger pictures. Similar to how the command operates in an
on-premises environment, a 648 x 648-pixel version of the picture is uploaded into the mailbox in Exchange
Online and a 96 x 96-pixel version is uploaded to Azure Active Directory. Pictures that are uploaded directly
into Office 365 do not show up in the on-premises environment because the directory synchronization
process does not write-back the thumbnailPhoto attribute, nor will the on-premises Exchange servers access
the pictures stored in the mailbox in Exchange Online. If you need profile pictures for Exchange Online
mailboxes to be available both on-premises and in Office 365, you have two options:
1. Upload the profile pictures using the Set-UserPhoto cmdlet before moving the mailbox to Office 365.
This ensures that a correct version is stored in the on-premises Active Directory which remains
available even after the mailbox is moved to Office 365.
2. If you upload pictures to Office 365 accounts, you can manually upload a thumbnail version of the
picture into the on-premises Active Directory. Be aware that if you upload a different version into the
on-premises directory from what has been generated in Office 365, the change will overwrite the
version stored in Azure Active Directory at the next synchronization interval.

Real world: If you thought the above was confusing, then wait until you read the whole story! The
workloads running inside Office 365 do not solely rely on Azure Active Directory. As discussed in more
detail in Chapter 3 (main book), each workload has its own instance of a directory service and is kept up-
to-date with Azure AD through a back-sync process which happens entirely transparently and is managed
by Microsoft. The sync process ensures that user data is kept consistent across all the Office 365
workloads. This is not always the case. Pictures uploaded through the directory synchronization process or
through the Set-UserPhoto cmdlet are almost instantaneously available in Exchange Online or Skype for
Business. The same seems untrue for SharePoint Online. There does not seem to be a logical explanation
as to why, nor does the experience seem to be consistent across all tenants. I recommend reading this
article which describes how SharePoint Online deals with synchronized profile pictures.

Directory-Based Edge Blocking


Directory-Based Edge Blocking (DBEB) allows you to block incoming messages for recipients that do not exist
in the directory: if a recipient cannot be found in the tenant, the message is blocked before it reaches the
organization. By default, DBEB is enabled for all domains in Exchange Online.
If you use Exchange Online Protection (EOP) to protect recipients in both Exchange Online and on-premises,
EOP must know about the recipients that exist on-premises or it will block any messages addressed to them.
As explained earlier, this is where directory synchronization comes into play as it will ensure that on-premises
recipients are represented by a matching object in Office 365.
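Because DBEB applies to domains configured as Authoritative in Exchange Online, a quick check of how the
tenant's domains are configured is a sensible starting point:
[PS] C:\> Get-AcceptedDomain | Format-Table DomainName, DomainType, Default -AutoSize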

DBEB and Public Folders


One of the limitations that exists in older versions of the directory synchronization tool is that it does not
synchronize mail-enabled public folders. As of May 2017, the option exists but you must explicitly enable it.

Microsoft has a script that can be run (on a scheduled basis) to mimic the synchronization of mail-enabled
public folders to Office 365 to cater for situations when tenants do not enable the feature in Azure AD
Connect or run an older version. The script works differently from the Azure AD Connect tool. The script
creates a “Sync Mail Public Folder” in Office 365 for each on-premises mail-enabled public folder, while the
Azure AD Connect tool will only create an object in Azure AD. These objects are not synchronized back to
Exchange Online. As such, you cannot use the feature in Azure AD Connect to configure hybrid public folder
access, or if you are planning to move Public Folders to Exchange Online.
When using the scripts, you might sometimes run into some additional problems. There is a known issue with
Exchange Online Protection and on-premises mail-enabled public folders where sometimes Exchange Online
Protection does not recognize the objects created by the script. This is most likely because of a limitation or
issues in the synchronization between the various service-specific directories in Office 365. The result is that
incoming messages are still blocked by DBEB.
Until Microsoft solves this problem, there are a few workarounds:
1. Use the synchronization in Azure AD Connect in addition to the scripts. The former will create objects
in Azure AD while the latter takes care of relevant objects in Exchange Online.
2. Disable DBEB by converting the domain in Exchange Online from Authoritative to Internal Relay. The
problem with this approach is that you open your environment for a potentially high number of
emails addressed to non-existing recipients. In addition, these messages might cause mail loops as
Exchange Online will blindly forward the email to your on-premises organization which, in turn, will
do the same. Although Exchange has ways to detect mail loops of this nature, it does not prevent the
mail from traveling back-and-forth a few times first.
3. Manually create MailContacts (or MailUsers) instead of Sync Mail Public Folders (through the sync
scripts). Those objects are picked up by EOP correctly and allow mail to pass DBEB. Note that it can
take up to one hour to replicate the objects to all servers in Exchange Online Protection, and thus for
mail to be delivered correctly.

Note: Microsoft added the ability to optionally synchronize Mail-Enabled Public Folders in version
1.1.524.0 of Azure AD Connect (released in May 2017). However, this feature is currently in preview which
means it may change between now and when it’s released.

DBEB and Office 365 Groups


Like the problem with Public Folders, secondary email addresses for Office 365 Groups are not recognized by
Exchange Online Protection and therefore also not by DBEB. Thus, Office 365 Groups are only externally
addressable by their primary email address. Luckily, you can change the primary email address.
Unfortunately, there is no simple workaround to this problem. You can, however, “cycle” the SMTP address of
the Group (update the SMTP address to reflect the various aliases, one-by-one); once an alias is synchronized
to Exchange Online Protection, it stays recognized. Unlike with Public folders that exist in the on-premises
organization, you cannot add a Mail Contact in Exchange Online for which the email address already exists on
another object (like an Office 365 Group), even when it concerns only a secondary email address.
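A hedged sketch of "cycling" an alias by temporarily promoting it to the primary address and then restoring
the original primary (the addresses are illustrative; allow time for the change to reach Exchange Online
Protection between the two commands):
[PS] C:\> Set-UnifiedGroup -Identity "Test Group" -PrimarySmtpAddress "o365book-alias@office365itpros.com"
[PS] C:\> Set-UnifiedGroup -Identity "Test Group" -PrimarySmtpAddress "o365book@office365itpros.com"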
DBEB does not use the Exchange Online directory to lookup recipients. Instead, it uses its own directory which
is kept up-to-date with the Exchange Online directory through a back-sync process. This process is very
similar to how the various workload-specific directories synchronize with one another. It appears that the
synchronization process for EOP does not pick up on all the object types, or attributes (such as secondary
email addresses for Groups). This causes DBEB not to know about those objects or object attributes, and
ultimately leads to blocked messages for those recipients.

Until Microsoft rectifies the problem, you can remove the secondary email address on the Office 365 Group,
create a Mail Contact with that address, and then forward all messages sent to the address to the Office 365
Group using a transport rule.
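A minimal sketch of the transport rule for that workaround, assuming the Mail Contact holding the old alias
already exists (the names and addresses are illustrative):
[PS] C:\> New-TransportRule -Name "Redirect alias to Test Group" -SentTo "o365book-alias@office365itpros.com"
-RedirectMessageTo "o365book@office365itpros.com"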

Moving Mailboxes
Dealing with the Feature Gap between Exchange On-premises and
Exchange Online
One of the challenges an administrator faces is the growing feature gap between Exchange on-premises and
Office 365. In the early days of hybrid deployments Exchange on-premises used to have a feature advantage
over Exchange Online: many features that were available in the on-premises Exchange organization were
missing in Office 365. Today it's quite the opposite. Microsoft has released numerous new features such as
Office 365 Groups which are not available to on-premises customers. We know that it is unlikely that some of
these features will ever make it into the on-premises product. The net result is the creation of a feature gap
that forces an administrator to think twice when moving mailboxes back and forth between Exchange On-
Premises and Exchange Online as moving a mailbox might lead to a loss in functionality.
There are not only technical challenges involved, but over time other problems are likely to arise. For instance,
Microsoft's decision to leave items undisturbed in the Deleted Items folder might create an interesting side-
effect: mailboxes are now likely to grow larger, faster. Imagine the situation where an organization decides to
move back from Office 365 in a year or two from now: chances are that the amount of data that must be
moved back will have grown significantly. Of course, one can decide to implement a policy in Office 365 to
delete the deleted items anyway or you can purge old data from a mailbox before moving it back to on-
premises servers. But regardless of what you choose to do at that point, it illustrates that the feature disparity
between both environments is something to consider.

Mailbox Move Considerations


Directory Synchronization
Previously we discussed the impact of the Directory Synchronization delay and how that affects the creation
and management of mailboxes. The same restrictions apply to mailbox moves: if you want to move a mailbox
to Office 365, a corresponding and matching object must exist in Office 365 before you can even start the
move.
This means that if you have just created a new user account in the on-premises organization, you must wait
until the next synchronization cycle before moving the mailbox to Office 365. Similarly, if you have recently made
a change to an on-premises mailbox, it is worth checking whether these changes are already synchronized to
Office 365 or not. Otherwise, you might be faced with a failed mailbox move. Given that Exchange will retry
mailbox moves, this isn't a huge problem as you can attempt to move the mailbox again later. However, from
a good practice point-of-view, it is better to avoid running into this problem entirely.
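A quick way to confirm that an on-premises mailbox already has a matching object in Exchange Online before
starting a move is to look for its mail user (the identity is illustrative):
[PS] C:\> Get-MailUser -Identity JoePearson | Select-Object DisplayName, ExternalEmailAddress, WhenChanged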

Permissions
When moving mailboxes using hybrid mailbox moves, only permissions that are applied explicitly to the
mailbox are migrated. Any non-explicit (inherited) permissions and permissions applied to non-mailbox
objects like regular mail-enabled users or distribution groups are not migrated.
Although explicit permissions are migrated, it does not mean that all permission types work or are supported.
A good example is delegate permissions. Although these permissions are migrated across, functionality
breaks if the mailboxes of the owner and assignee are not migrated at the same time. For example, if a CEO's
mailbox is moved to Office 365, his or her assistant's mailbox should be moved too to maintain delegate
access. It is also sensible to use the company’s organizational structure as the basis for grouping users when
the time comes to move their mailboxes as this will ensure that connected mailboxes are moved at the same
time and permissions will continue to work as before.
As discussed in the Permissions and Sharing scenarios earlier, Full Access permissions are migrated, and
supported. However, those permissions aren't always the only ones used. Hence, it is still better to try to
schedule mailbox moves of mailboxes that are connected at approximately the same time. That way, a lot of
hassle with non-supported permissions is avoided. Using the cross-premises permissions is a good thing
whenever you cannot keep the migration batch to a manageable size, for example because there are too
many mailboxes interweaved with one another.

Real world: When designing the migration batches (determining what mailboxes are best moved at the
same time), you often start with a handful of mailboxes, only to find out that each of these mailboxes is
interweaved with other mailboxes, and that those other mailboxes in turn are connected to yet another
batch of mailboxes. The process can sometimes feel like a duck hunt where the handful of mailboxes ends
up being several hundred. Because permissions tend to fan out like a spider web, it is sometimes better to
stop at a certain level and just inform users that they will (temporarily) lose the ability to interact with
another mailbox or set of mailboxes. Although it is never pleasant to give bad news, it is sometimes the
only way to keep migration batches manageable and the project moving forward.
Enumerating mailbox permissions therefore becomes an important pre-migration task and one that can
quickly become very time consuming. Especially in larger environments, it can take a while to figure out the
dependencies between mailboxes. The examples below are a great starting point for anyone wanting to manually
enumerate permissions. In very large organizations, it is sometimes just easier to invest in a tool that offers
these kinds of reports. Although a certain cost is associated with purchasing such a tool, it often saves you a
lot of time and thus money further down the road.

Enumerating Mailbox Permissions in the on-premises organization


Several PowerShell cmdlets exist to allow an administrator to enumerate the permissions that exist on a
mailbox. First and foremost, there's the Get-MailboxPermission cmdlet which displays any permission that
exists at the mailbox level:
[PS] C:\>Get-MailboxPermission wlopez | Select User,AccessRights,IsInherited | Format-Table –AutoSize

User AccessRights IsInherited


---- ------------ -----------
NT AUTHORITY\SELF {FullAccess, ReadPermission} False
O365EXCH\tredmond {FullAccess} False
O365EXCH\Administrator {FullAccess} True
O365EXCH\Domain Admins {FullAccess} True
O365EXCH\Enterprise Admins {FullAccess} True
O365EXCH\Organization Management {FullAccess} True
NT AUTHORITY\SYSTEM {FullAccess} True

The default output of the command also contains the permissions that are granted by default such as to the
object itself, administrators and the Exchange subsystem. When moving a mailbox, these permissions are not
of interest. Explicit permissions assigned to other users (mailboxes) are the ones that matter. You can filter the
output to exclude results that you don't care about, as shown in the following example:
[PS] C:\>Get-MailboxPermission wlopez | ?{$_.User –notlike "O365EXCH\Administrator"
–and $_.User –notlike "O365EXCH\Domain Admins" –and $_.User –notlike "O365EXCH\Enterprise Admins"} |
Select User,AccessRights,IsInherited | Format-Table –AutoSize

User AccessRights IsInherited


---- ------------ -----------
NT AUTHORITY\SELF {FullAccess, ReadPermission} False
O365EXCH\tredmond {FullAccess} False
O365EXCH\Organization Management {FullAccess} True
NT AUTHORITY\SYSTEM {FullAccess} True

Although this approach cleans up the output of the cmdlet significantly, there are a lot of user accounts which
you want to exclude. Additionally, the way that PowerShell handles this approach to filtering is that it still
enumerates all permissions, but only displays the ones that are not excluded through filtering. An alternative
approach is to export the results to a CSV file and use, for instance, Excel's built-in filtering capabilities to filter
out the unwanted user accounts. No matter what approach you take, enumerating permissions is a resource-
intensive task. If you have several hundred or thousands of mailboxes, it is not a good idea to run the
above command for all mailboxes at the same time. The chances are you will hit the imposed throttling limit,
or unwittingly consume a lot of resources on the server from where you run the command.
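A hedged sketch of the CSV approach for a modest batch of mailboxes (the result size and output path are
illustrative):
[PS] C:\> Get-Mailbox -ResultSize 500 | Get-MailboxPermission |
Where-Object { $_.IsInherited -eq $false -and $_.User -notlike "NT AUTHORITY\*" } |
Select-Object Identity, User, @{n='AccessRights';e={$_.AccessRights -join ","}} |
Export-Csv C:\Temp\MailboxPermissions.csv -NoTypeInformation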

Real world. There is no fixed number which tells you for how many mailboxes you can safely enumerate
permissions without consuming “too much” resources. Various elements are at play here. It is not only the
number of mailboxes that you query, but also the number of entries for each mailbox. So, it is very hard to
“predict” what a safe number of mailboxes to query is. It is better to start with a smaller number of
mailboxes and then gradually increase the number until you see performance drop or, worse, you are
throttled. However, that experience is still very subjective. For instance, an organization might choose to
temporarily lift the throttling policy to allow the script to run. In such a case, you might not experience any
issues at all.
The Get-MailboxPermission cmdlet only retrieves a specific set of permissions, such as the FullAccess
permission. If you want to retrieve the Send-As, Receive-As, or Send-on-Behalf permissions, you will have to
execute several other commands as illustrated below.

Enumerating Send-As Permissions in the on-premises Organization


Send-As permissions are not assigned at the mailbox level, but rather at the user object level in Active Directory.
Hence, we need to use the Get-ADPermission cmdlet to retrieve the appropriate permissions:
[PS] C:\> Get-Mailbox wlopez | Get-ADPermission | ?{$_.ExtendedRights –like "*Send-As*"} |
Select User, Deny, Inherited | Format-Table –AutoSize

User Deny Inherited


---- ---- ---------
EXCHANGELAB\tredmond False False

Enumerating Receive-As Permissions in the on-premises Organization


Similar to the Send-As permission, this permission is assigned at the user object level and can also be
retrieved using the Get-ADPermission cmdlet:
[PS] C:\> Get-Mailbox wlopez | Get-ADPermission | ?{$_.ExtendedRights –like "*Receive-As*"} |
Select User, Deny, Inherited | Format-Table -AutoSize

User Deny Inherited


---- ---- ---------
O365EXCH\tredmond False False

Enumerate Send-on-Behalf permissions in the on-premises Organization


Send-on-Behalf permissions are assigned directly to the mailbox object but do not show up as a permission.
Instead, the output of the Get-Mailbox cmdlet has a property called GrantSendOnBehalfTo:
[PS] C:\> Get-Mailbox wlopez | Select GrantSendOnBehalfTo

GrantSendOnBehalfTo
-------------------
{O365EXCH.COM/Accounts/Users/Tony Redmond}

A few years ago, I wrote a script to automatically enumerate various permissions on mailboxes in an
organization. The script includes logic to loop through group memberships and display the actual user
accounts that are part of those groups. This reveals a list of indirect permissions, which makes re-assigning
those permissions afterwards a little easier. Alternatively, you can ask users to request specific access again
after their mailboxes have been moved to Office 365. This is a less user-friendly approach, but might
sometimes be the better option.

Unified Messaging (UM)


The Exchange Online Unified Messaging system is almost on-par in terms of functionality with that found in
Exchange on-premises. This means that features like Call Answering, Outlook Voice Access and Auto-
Attendant are also available in Exchange Online. Within the available features, there are some differences too.
For example, the Auto-Attendant in Office 365 only supports DTMF, just like Outlook Voice Access. In a hybrid
deployment, you can use both UM in Exchange Online and On-premises at the same time; mailboxes will
leverage the functionality from the system in which they reside. Similar to how messages are routed in a
hybrid configuration, voice mails and calls will be re-routed to Exchange Online using the targetAddress
attribute on the user's on-premises account.
Before you can move a UM-enabled mailbox to Office 365, you must have configured Unified Messaging in
Exchange Online, at least if you want to keep the functionality. If that is not the case, you must disable Unified
Messaging before trying to move a mailbox to Office 365:
[PS] C:\> Disable-UMMailbox TRedmond

If you want to keep functionality and be able to move a mailbox without having to disable UM prior to the
move, it's not only important that you make sure that Unified Messaging in Office 365 is configured correctly,
but the UM policies and dial plan names that you create in Office 365 must match the policies and dial plans
that exist on-premises. The goal is to have policies in Exchange Online that have the same settings. After you
have created the UM Mailbox Policies in Office 365, use the Set-UMMailboxPolicy cmdlet to link the Exchange
Online policy to an on-premises policy:
[PS] C:\> Set-UMMailboxPolicy "Exchange Online Policy" –SourceForestPolicyNames "On-Prem Policy"

Similarly, if you envision off-boarding mailboxes in the future, link the on-premises UM Mailbox Policy to
its corresponding policy in Exchange Online, by running a similar command in the on-premises Exchange
Management Shell:
[PS] C:\> Set-UMMailboxPolicy "On-Prem Policy" –SourceForestPolicyNames "Exchange Online Policy"

Retention Policies
Whether you configure Retention Policies to move items into an Exchange Archive or you use them to delete
messages after a specified period, you want items that have been tagged to remain so after a mailbox is
moved to Office 365. The components of the retention system such as tags and policies and how the age of
items is calculated is identical across Exchange Online and On-premises. However, without adjusting the
configuration, when a mailbox is moved to Office 365, items that are tagged by the on-premises Exchange
server might lose their tag if a corresponding retention tag or policy is not found in Exchange Online. For this
reason, you must ensure that the tags and policies that are available in the on-premises Exchange
organization are also available in Exchange Online; very similar to how you must make sure that UM Mailbox
Policies are also present and matching if you want to move a UM-enabled mailbox.
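A minimal sketch of recreating a tag and a policy in Exchange Online with the same names as their on-premises
counterparts (the names and retention period are illustrative):
[PS] C:\> New-RetentionPolicyTag "DPT-Delete-5Years" -Type All -RetentionAction DeleteAndAllowRecovery
-AgeLimitForRetention 1825
[PS] C:\> New-RetentionPolicy "Corporate MRM Policy" -RetentionPolicyTagLinks "DPT-Delete-5Years"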

Real world: Despite carrying out the export and import of the retention tags and policies, mailboxes that
are moved to Office 365 are automatically stamped with the Default MRM Policy after the move. If you
have specific requirements regarding the retention of data, it is important that you apply the correct policy
to the mailbox afterwards.
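A one-line sketch of re-applying the intended policy after the move completes (the identity and policy name
are illustrative):
[PS] C:\> Set-Mailbox -Identity JoePearson -RetentionPolicy "Corporate MRM Policy"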

Other Considerations
The above considerations are the most intrusive in terms of functionality loss or the inability to move
mailboxes. There are also some other, smaller, differences which are worth noting too:

• Custom Details Template (Outlook): it is not uncommon to see organizations implement a Custom
Details Template for Outlook. These custom templates modify what the full properties card for a
contact looks like in Outlook, often to include additional information about the contact. Exchange
Online does not allow you to create custom details templates, which means a potential loss of functionality after
a mailbox is moved to Office 365.
• Offline access for Outlook Web App: users of Exchange 2013 and Exchange 2016 mailboxes can
maintain an offline copy of their mailbox in OWA. Taking a similar approach to Outlook, offline access
allows users to use OWA when a network connection is unavailable. However, when a mailbox is
moved to Office 365, the offline access cache is reset. Users must plan to manually re-enable offline
access through the options in Outlook Web App.

How Hybrid (remote) Moves Work


Hybrid Mailbox Moves, also known as Remote Moves or native mailbox moves, work similarly to how a mailbox
move is performed between two on-premises Exchange organizations. In an on-premises environment, there
are two move types: push and pull. Mailbox moves to Office 365 are always initiated by Exchange Online, also
known as the pull type: Exchange Online connects to the on-premises Exchange Web Services endpoint over
the Internet.
In an on-premises Exchange 2013 or Exchange 2016 environment the Mailbox Server is responsible for
handling mailbox moves. This is also the case for a hybrid mailbox move, but the Client Access Server acts as a
proxy for incoming requests. The MRS Proxy endpoint on a Client Access Server is not enabled by default.
When you create a hybrid connection through the Hybrid Configuration Wizard, the MRS Proxy component
on each Client Access Server is automatically enabled. As shown in Figure 5-5, several steps occur during a
mailbox move:

Figure 5-5: Hybrid mailbox move process


1. The administrator creates a "request" to move one or more mailboxes from the on-premises
organization to Exchange Online. This request can for instance be a migration batch or a simple move
request in Exchange Online.
2. Exchange Online makes an inbound connection to the MRS Proxy endpoint of the on-premises
organization and initiates the mailbox move process. Mailbox moves happen 'online', which means
that a user can continue to work with their mailbox while contents are being moved (copied) over to
Office 365. Only at the end of the process is the mailbox locked out for a very brief period to copy
over the last remnants or changes that were made while the other contents were copied over. At this
point, the status of the move – which can be queried using the Get-MoveRequest cmdlet, as shown in the
example after these steps – is changed from InProgress to CompletionInProgress.
3. Once the final synchronization is completed, the MRS Service in the source environment (on-
premises) converts the mailbox to a mail-enabled user account. Similarly, the migration platform in
Exchange Online will change the mail-enabled user account to a mailbox user in Office 365.
4. If Outlook is still running, the user will be asked to close and re-open Outlook. This will trigger the
Autodiscover process which then points the user to Office 365 instead of to the on-premises
Exchange Organization. The update of the Outlook profile might take a few moments and require
Outlook to restart once or twice depending on the user's patience.
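As mentioned in step 2, the progress of each move can be tracked from Exchange Online; a minimal sketch:
[PS] C:\> Get-MoveRequest | Get-MoveRequestStatistics | Format-Table DisplayName, Status, PercentComplete
-AutoSize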

Real world: An administrator can decide not to complete a mailbox move right away, but have it suspend
when it is ready to complete. When this option is selected, the switchover of the mailbox (pointing the on-
premises user to Office 365) is held off until the administrator decides to complete the move later. This
option is often used to prevent the "The Microsoft Exchange administrator has made a change that
requires you quit and restart Outlook" popup from showing at an uncontrolled time. This is because
when, after the switchover, the user's Outlook client connects to the on-premises server, the latter will
issue a redirect to Office 365 in the form of forcing the client to perform an Autodiscover lookup. Once the
Autodiscover process is completed, the Outlook profile is updated. At that point, the pop-up will show.
Delaying the completion of a move allows you to decide when to complete the move and thus somewhat
control when the message box appears -- for instance after hours.
Hybrid mailbox moves are one of the main benefits of a hybrid deployment. However, in order to execute a
remote mailbox move, you must configure a hybrid connection first. Depending on the type of hybrid
connection (full or minimal), other features such as cross-premises Free/Busy and mail flow, amongst other
things, are automatically configured.
The downside of creating a hybrid connection is that it can be complex and time-consuming. Sometimes, all
an organization wants to do is to move mailboxes from an on-premises environment to Exchange Online,
without all the bells and whistles associated with a hybrid configuration. In such case, other migration options
such as a staged, cutover or third-party migration are generally used; if technically feasible. Unfortunately,
none of these options give you the benefits of a hybrid mailbox move. This is the reason why Microsoft
introduced the minimal hybrid configuration (Chapter 4). Unlike the full hybrid configuration, the minimal
hybrid option does not configure any of the enhanced coexistence features, but it does allow you to move
mailboxes cross-premises in the same way.
To execute an MRS or hybrid mailbox move, there must be a mail-enabled user object in Exchange Online
which matches the details of an on-premises mailbox. This is why you need Directory Synchronization. The
required object details include a variety of Exchange attributes and the proxyAddresses attribute. All email
addresses that exist for the on-premises object, must also be present for the online mail-enabled account.
Additionally, the on-premises mailbox must have a proxyAddress which matches the coexistence domain of
the tenant to which the mailbox is moved. For instance, if your tenant's name is
office365itpros.onmicrosoft.com, the coexistence domain will be office365itpros.mail.onmicrosoft.com. When
you run the Hybrid Configuration Wizard, it automatically takes care of creating an additional accepted
domain and modifying the default email address policy in the on-premises organization to ensure that every
mailbox is stamped with such an email address. However, you can also do this manually and circumvent the
need to run the Hybrid Configuration Wizard.
Secondly, the MRS proxy component must be enabled on the internet-facing Client Access Server. This allows
Exchange Online to make an inbound connection into the on-premises environment and initiate the mailbox
move. Once these requirements are met, the mailbox move can be initiated from pretty much any on-
premises Exchange organization; even if the Exchange organization is in a remote Active Directory forest! All
you need to do is point the migration batch or move request to a new migration endpoint which is linked to
the on-premises organization which has the mailboxes that need to be moved.
The minimal hybrid configuration is very popular for the growing number of cross-forest migrations to Office
365 which for example need to happen after a merger or acquisition where the target organization is already
in Office 365 or a hybrid configuration.

Real world: Executing a cross-forest mailbox move to Exchange Online is no different from an on-
premises cross-forest mailbox move. You can use the prepare-moverequest.ps1 script to create accounts in
the target Active Directory forest and those accounts will then synchronize with Azure AD. Once the
objects are available to Exchange Online, you can configure the migration batch to use the migration
endpoint pointing to the source Active Directory and Exchange Organization. Exchange Online will then
move the contents of a source mailbox belonging to a different Active Directory forest than the one used
for the hybrid part of the Office 365 tenant.
By using the outlined approach, you avoid having to go through a double-hop migration where mailboxes are
first moved from one Exchange organization to the other before being moved to Office 365. Alternatively, it
also prevents you from having to deploy a complex hybrid configuration with two on-premises Exchange
organizations. Careful planning is needed to ensure that these moves execute successfully.
If you are not familiar with executing the actions manually and you are unsure about what objects or object
attributes to synchronize between forests and to Azure AD, it is sometimes better to pick a third-party tool
which can automate a lot of the steps for you.

Managing Migration Batches


Migration Batches were first introduced in Exchange 2013, but the concept already existed in the way that
mailboxes were moved to Office 365. A Migration Batch is an administrative layer over traditional mailbox
moves. After a Migration Batch is created, Exchange Online will automatically create the underlying move
requests. Although you can still create move requests manually, migration batches provide several benefits,
including:
• Automatic Reporting and Notification which allows you to configure one or more email addresses
that will automatically receive status reports about the migration batch.
• Endpoint Validation ensures that the endpoint from which mailbox data will be pulled is available.
The on-premises endpoint is tested while the migration batch is created. If the remote endpoint is
unavailable or otherwise not functioning properly, the migration batch cannot be created.
• Incremental Synchronizations allow the pre-staging of mailboxes before finalizing the move to
Office 365. When incremental synchronizations are enabled, contents between the source and target
mailbox are synchronized once every 24 hours. As such, the target mailbox is asynchronously kept up
to date with the latest updates from the source mailbox. This is particularly useful if you are moving
larger mailboxes: you can copy most of the data ahead of time, sometimes even weeks before the
actual switch-over to Office 365 happens. In the meantime, MRS copies new mailbox content to Office
365 daily to ensure that the final synchronization requires less time to complete.
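Before creating a batch, you can verify that an endpoint responds from Exchange Online PowerShell. This is a minimal sketch: the endpoint name matches the example used later in this chapter and the remote server FQDN is a placeholder for your own hybrid namespace.
# Test an existing migration endpoint
[PS] C:\> Test-MigrationServerAvailability -Endpoint "O365ExchBook Europe"
# Or test a remote move endpoint before creating one
[PS] C:\> Test-MigrationServerAvailability -ExchangeRemoteMove -RemoteServer hybrid.office365itpros.com -Credentials (Get-Credential)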
Mailbox moves to Office 365 can be managed from various locations within the Office 365 admin tools. For
instance, you can access the migration dashboard in the Exchange Online EAC by clicking recipients and then
migration. Alternatively, you can also initiate mailbox moves from the Office 365 admin portal, under Users >
Data migration.

Creating Migration Batches using the EAC


To create a new Migration Batch, click the plus-sign from the toolbar and select Migrate to Exchange Online.
The new migration batch wizard will then guide you through the following steps:
1. Select migration type where you should select Remote move migration
2. Select the users to migrate to Exchange Online. Select them through the recipient picker or specify a
CSV file which includes the users you want to include in the batch.
3. Select the migration endpoint which allows you to select the migration endpoint through which
mailbox content will be migrated. More information on migration endpoints is in Chapter 4.
4. Mailbox Move configuration where you can specify options such as:
a. Bad item limit which denotes how many corrupted (non-movable) items the underlying
move request can encounter before aborting the move. The default limit is 10 items.
b. Large item limit which specifies how many items over 150 MB can be skipped before the
move request is aborted. The default limit is zero items.
c. An option to specify whether to move both the mailbox and archive, or the archive only.
5. Migration batch options such as who receives the notifications and whether the migration batch should be started and completed automatically. If you choose to automatically start a batch, it begins moving mailbox contents almost immediately; there might be a small delay while the underlying move requests are generated. Selecting to automatically complete the batch ensures that the final synchronization and switch-over to Office 365 happen immediately after the initial synchronization finishes. If you do not select to automatically complete the batch, the migration batch stays in a pending state and performs incremental synchronizations every 24 hours until an administrator manually starts the completion process.

Creating Migration Batches in PowerShell


Unfortunately, the EAC does not let you configure all the options available for a Migration Batch. For instance, you cannot specify the AllowIncrementalSyncs parameter to turn off incremental synchronizations, nor can you specify that only the primary mailbox should move back on-premises while keeping
the archive in Exchange Online. The following example creates a migration batch based on a CSV file that
contains the mailboxes to be moved to Office 365. This command is executed from Exchange Online.
[PS] C:\> $MigrationBatch = New-MigrationBatch -Name "Migration Batch A"
-SourceEndpoint (Get-MigrationEndPoint "O365ExchBook Europe").Identity
-TargetDeliveryDomain office365itpros.mail.onmicrosoft.com
-CSVData([System.IO.File]::ReadAllBytes("C:\Mailboxes-to-move.csv"))
-AutoComplete

Because we did not specify the AutoStart parameter, the migration batch will not be started. To start the batch
job, run the following command:
[PS] C:\> Start-MigrationBatch -Identity $MigrationBatch.Identity
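Because the batch was created with the AutoComplete switch, the final switch-over happens as soon as the initial synchronization finishes. If you omit AutoComplete, the batch keeps performing incremental synchronizations until you finalize it yourself. A minimal sketch of completing the batch manually (using the batch name from the example above) is:
# Perform the final synchronization and switch the mailboxes over to Exchange Online
[PS] C:\> Complete-MigrationBatch -Identity "Migration Batch A"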

Understanding Migration Batches


The migration batch status page only displays high-level information. More detailed information about the
objects within a Migration Batch is available by clicking the View details link. Each individual mailbox in a
Migration Batch is also referred to as a migration user. The migration user statistics show the status for the
selected migration user. Like a migration batch, a migration user - a synonym for a mailbox - can be in one of several states, of which the following are probably the ones you will see most:
• Queued the mailbox is part of an active Migration Batch, but the migration of the mailbox hasn't started yet. A Migration Batch maintains a certain number of connections to the on-premises migration endpoint. Depending on how many connections are available, not all migration users in a batch are processed simultaneously. If a migration user is waiting for a connection to become available, it is in the queued state.
• Synced which indicates that the initial synchronization of the mailboxes has completed successfully.
• Failed indicates there was an issue with the move request and that the migration (initial
synchronization) failed.
Other states, like Syncing or Completing, also exist. However, these states are self-explanatory and only
appear when a migration batch is active. You can retrieve the status for current migration batches with the
Get-MigrationBatch cmdlet:
[PS] C:\> Get-MigrationBatch

Identity              Status    Type               TotalCount
--------              ------    ----               ----------
AndrewDunn            Completed ExchangeRemoteMove 1
MSpencer to EXO       Completed ExchangeRemoteMove 1
First-Migration-Batch Completed ExchangeRemoteMove 2
MR1                   Completed ExchangeRemoteMove 1

Unlike move requests, there is no Get-MigrationBatchStatistics cmdlet. Instead, you can get information about
each migration user through the Get-MigrationUserStatistics cmdlet. The output contains a variety of
information such as overall performance statistics as well as item counts and, if the batch is incomplete, a
progress report:
[PS] C:\> Get-MigrationBatch First-Migration-Batch | ForEach {Get-MigrationUser
-BatchId $_.BatchGuid} | Get-MigrationUserStatistics | Select EmailAddress, BatchId,
SkippedItemCount, TotalItemsInSourceMailboxCount, SyncedItemCount, Status, Error, SkippedItems

BatchId : First-Migration-Batch
EmailAddress : bcampbell@office365itpros.com
SkippedItemCount : 0
TotalItemsInSourceMailboxCount: 24
SyncedItemCount : 22
Status : Completed
Error :
SkippedItems : {}

If you want a more detailed overview of what happened during the migration, specify the IncludeReport
parameter to generate a detailed report of the underlying mailbox move. This example is formatted for
readability:
[PS] C:\> Get-MigrationBatch First-Migration-Batch | ForEach {Get-MigrationUser
-BatchId $_.BatchGuid} | Get-MigrationUserStatistics -IncludeReport |
Select EmailAddress, BatchId, SkippedItemCount, TotalItemsInSourceMailboxCount, SyncedItemCount,
Status, Error, SkippedItems

BatchId : First-Migration-Batch
EmailAddress : bcampbell@office365itpros.com
SkippedItemCount : 0
TotalItemsInSourceMailboxCount : 24
SyncedItemCount : 22
Status : Completed
Error :
SkippedItems : {}
Report : 3/13/2015 2:44:36 PM [AMSPR06MB133] '' created move request.
3/13/2015 2:44:58 PM [AMSPR06MB037] The Microsoft Exchange Mailbox Replication service
'AMSPR06MB037.eurprd06.prod.outlook.com' (15.1.106.16 caps:1FFF) is examining the request.
3/13/2015 2:44:58 PM [AMSPR06MB037] Connected to target mailbox
'exchangelabonline.onmicrosoft.com\63f45db4-a95e-4f09-b956-eec7c62dd044 (Primary)', database
'EURPR06DG003-db122', Mailbox server 'AMSPR06MB037.eurprd06.prod.outlook.com' Version 15.1 (Build
106.0)
3/13/2015 2:44:59 PM [AMSPR06MB037] Connected to source mailbox
‘office365itpros.onmicrosoft.com\63f45db4-a95e-4f09-b956-eec7c62dd044 (Primary)', database 'DB15-
02', Mailbox server 'E15-02.O365EXCH.COM' Version 15.0 (Build 1076.0), proxy server 'e15-
02.EXCHANGELAB.BE' 15.0.1076.0 caps:1F7FFFFFCB07FFFF.
3/13/2015 2:44:59 PM [AMSPR06MB037] Request processing started.
3/13/2015 2:44:59 PM [AMSPR06MB037] Source mailbox information:
Regular Items: 7, 103.9 KB (106,344 bytes) Regular Deleted Items: 0, 0 B (0 bytes) FAI Items: 12,
14.9 KB (15,254 bytes) FAI Deleted Items: 0, 0 B (0 bytes)

Note: The information returned by Get-MigrationUserStatistics also pulls information from the underlying move request. If you want to view just the information about the move request itself, you can do so using the Get-MoveRequest and Get-MoveRequestStatistics cmdlets. This might be the case if you created a move request manually instead of creating a migration batch.
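As a minimal sketch of examining the underlying move request directly (the mailbox identity is a placeholder):
# Check the progress of the move request behind a migration user
[PS] C:\> Get-MoveRequest -Identity bcampbell@office365itpros.com | Get-MoveRequestStatistics |
   Select DisplayName, StatusDetail, PercentComplete, TotalMailboxSize, BytesTransferred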

Removing a Single User from a Migration Batch


Sometimes, you might find that one or more mailbox moves in a Migration Batch have stalled or failed. At that point, it is worth removing the problematic mailboxes from the batch so that you can troubleshoot them independently from the others. The quickest way to do so is through Exchange Online PowerShell.
While a Migration Batch is running, you cannot remove any of the mailboxes (Migration Users) from the
batch. Hence, the first action is to (temporarily) stop the Migration Batch (if it is still running):
[PS] C:\> Stop-MigrationBatch "Migration Batch 1"

Next, you can remove the mailbox(es) from the batch, using the Remove-MigrationUser cmdlet. Don't forget
to either include the Force parameter, or confirm when prompted for confirmation as in the example below:
[PS] C:\> Get-MigrationUser Tony.Redmond@office365itpros.com | Remove-MigrationUser

Confirm
Are you sure you want to perform this action?
Remove the migration user Tony.Redmond@office365itpros.com?
[Y] Yes [A] Yes to All [N] No [L] No to All [?] Help (default is "Y"): y

Once the migration request for the user has been removed, start the Migration Batch again:
[PS] C:\> Start-MigrationBatch "Migration Batch 1"

Removing Migration Batches


Once a Migration Batch has completed, and you do not need the migration statistics anymore, you should
remove the migration batch. It is not a requirement, but you will not be able to move a mailbox back as long
as there is still a move request or migration batch for the user – even if it is in the completed state.
You can use the Office 365 EAC to remove the Migration Batch, or run the following command from the
Exchange Online PowerShell session:
[PS] C:\> Remove-MigrationBatch "Batch 1"

Confirm
Are you sure you want to perform this action?
Remove the migration batch "Batch 1"?
[Y] Yes [A] Yes to All [N] No [L] No to All [?] Help (default is "Y"): Y

Recovering Soft-deleted Mailboxes in a Hybrid Environment

When a mailbox is deleted, it moves into a soft-deleted state, which means that the mailbox is held for a
period of 30 days during which it can be recovered. After the retention period elapses, the mailbox is
permanently removed from Exchange Online and can no longer be recovered. The process of recovering a
mailbox belonging to a cloud-only account is described in Chapter 6 (main book). Some extra work is
necessary to ensure correct recovery of a hybrid mailbox. Most of the work relates to the recovery or
recreation of the deleted user account. Once the account is restored or a new account is created, restoring the
mailbox data can either be done automatically or by executing some PowerShell command. Unfortunately,
recovering the user account is usually the hard part.
Before starting, it is important to understand what caused the user account to be removed in the first place.
Basically, two major scenarios exist that result in a user account being removed from Azure Active Directory
(and thus pushing a mailbox into a soft deleted state):
1. The on-premises user account is (accidentally) filtered from directory synchronization resulting in the
removal of the account from Azure Active Directory.
2. The user account is removed from the on-premises Active Directory and the deletion is then
synchronized to Azure Active Directory.
Regardless of how the account is deleted, restoring the original account is always the easiest resolution. If the
object was filtered from directory synchronization (for instance because it was moved to an OU which is out of
scope for synchronization), all you need to do is include the object back into the synchronization scope to
automatically revive the corresponding account in Azure Active Directory. Once the user account is restored in
Azure Active Directory, the mailbox is automatically re-attached to it along with other resources such as those
managed by SharePoint and OneDrive for Business.
If the account was deleted from Active Directory, your on-premises remediation steps will dictate what
approach you can take. If you have enabled the Active Directory recycle bin, the solution is to restore the
deleted user object and wait for the directory synchronization process to pick it up. The same is true when you
use a third-party backup solution which can restore the same object.
However, if the Active Directory recycle bin is not enabled, or you do not have a backup, you are forced to
take a different approach. The best solution is to use a process which I like to call a “reverse restore”. Instead
of restoring the object in Active Directory and syncing it to Azure Active Directory, you restore the object in
Azure Active Directory, create a new on-premises user object and then (manually) recreate the link between
the new on-premises account and the restored object in Azure Active Directory. Here is how it works:
1. Before starting the recovery process, it is a good idea to pause directory synchronization until you complete the steps below. This prevents directory synchronization from running while the recovery steps are in progress, which might otherwise cause issues.
2. Restore the deleted user account in Azure Active Directory from the Office 365 Admin portal. Within
the portal, go to Users and then select Deleted Users. Select the user object you would like to restore
and then click Restore. This step will restore the user account and the data that was previously
associated with the user account. At this point you have a cloud-only account with no link to an on-
premises user object.
3. Create a new remote mailbox using the on-premises management tools. The easiest way to do this is
to either run the New-RemoteMailbox or Enable-RemoteMailbox cmdlet. This will ensure that a new
on-premises user account is updated with the required attributes so that the cloud-based mailbox
can be attached to it after it is synchronized.
4. Link the on-premises user account to the restored object in Azure Active Directory. There are two
ways to do this. Both rely on the ability of the directory synchronization to match two objects
together. The first option is to “hard match” the on-premises user account to the account in Office
365. The second option is to “soft match” both accounts. With the latter approach, you must modify either the primary SMTP address or UPN of the account in Azure Active Directory and ensure that the on-premises account (remote mailbox) has the same SMTP address as the user in Office 365 before running directory synchronization with AAD Connect (a sketch of these steps appears after this list). More detail on the approaches to link an existing user account is in Chapter 3 (main book).
After the next directory synchronization cycle, both accounts will be linked again.
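Here is a minimal sketch of the reverse restore, assuming the deleted account has already been restored in Azure AD through the admin portal. The account names and routing domain are placeholders, the ADSync commands run on the AAD Connect server, and the hard match uses the (older, but still widely used) MSOnline module, so validate the steps in a test environment before relying on them.
# 1. Pause directory synchronization (run on the AAD Connect server)
[PS] C:\> Set-ADSyncScheduler -SyncCycleEnabled $False
# 2. Create a new on-premises remote mailbox for the recreated user account
[PS] C:\> Enable-RemoteMailbox -Identity "Kim Akers" -RemoteRoutingAddress Kim.Akers@office365itpros.mail.onmicrosoft.com
# 3. Hard match: copy the on-premises ObjectGUID to the restored cloud account as its ImmutableId
[PS] C:\> $Guid = (Get-ADUser -Identity Kim.Akers).ObjectGUID
[PS] C:\> Set-MsolUser -UserPrincipalName Kim.Akers@office365itpros.com -ImmutableId ([Convert]::ToBase64String($Guid.ToByteArray()))
# 4. Resume directory synchronization and let the next cycle link the accounts
[PS] C:\> Set-ADSyncScheduler -SyncCycleEnabled $True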
By now you might be wondering: why go through all the trouble of restoring the user account? Why not just create a new user account and attach the mailbox to it? Although this approach would work, all the permissions granted to/on the user and mailbox would be lost and it could potentially result in data loss outside of Exchange Online. The reason is that when you restore the original user account, the connected object in Azure Active Directory is also restored, ensuring that the object keeps its ObjectGUID (which is used to tie everything together). A new user account has a new ObjectGUID and therefore starts with a blank slate in Exchange Online, SharePoint, OneDrive for Business, and so on. It should come as no surprise that this approach is therefore not supported.
The most important thing to keep in mind is that you can only take the above approach when the mailbox is
in a soft deleted state. Once the mailbox is hard deleted from Exchange Online, there is absolutely no way it
can be recovered.

Chapter 6: Office 365 Analytics
Fitbit for the Office
At the Dreamforce 2015 conference, Microsoft CEO Satya Nadella described Delve Analytics (since renamed to
MyAnalytics) as “a new information discovery and knowledge tool to track your time, because that’s the valuable
resource you have.” In other words, MyAnalytics (now Viva Insights) analyzes information extracted from Office
365 (mostly email) to help people work smarter. Another description often trotted out is to say that Insights is
the equivalent of “Fitbit for the office” because like a health tracker, your habits should improve if you pay
attention to the information Insights delivers about your activities within Office 365.
I have been down the path before of trying to use data collected from Office products to predict and
influence human behavior. An HP Labs project from the 2002-2003 period interpreted results extracted from
data captured from Outlook to understand how people worked within an organization. Microsoft was also
investigating the possibilities of analyzing data gathered from user activities in a project called “Knowledge
Navigator.” Despite a lot of co-operation to develop these ideas on the part of both companies, the idea
foundered on the rock of privacy because no one could quite figure out how to protect user data in the way
that it needed to be. Privacy is still a huge concern within Office 365, but we now have a better understanding of how to protect user data. It is also true that the advent of the Microsoft Graph means that the data set available to describe user activities and the interactions between individuals is richer and deeper than ever before.
Technology moves on a lot in a decade, and privacy and security are at the heart of the Microsoft Graph, which
never makes information available to anyone if they do not have access to that data. Apart from helping users
to find information, the signals gathered into the Microsoft Graph offer a rich vein of data to analyze to
understand how people interact with Office 365 applications, and that is where analytics and specifically the
Insights features come into the picture.
This chapter covers two applications that use the data in the Microsoft Graph to analyze how people use
Office 365. The intention is that the analysis will help individuals and businesses make changes to become
more effective and efficient. For the individual, the increase in effectiveness might result in them working
fewer hours by doing things smarter. For the business, it might help managers be better leaders. Insights is
the personal side of Office 365 analysis. Workplace Analytics is its business-centric counterpart.

MyAnalytics (Viva Insights)


Viva Insights is part of Microsoft’s Intelligent Cloud initiative designed to make users more productive
through better understanding of how they use time during the business week. The changes that someone
might make because of understanding where they spend time might be small, but the hope is that any
change will help that person become more effective. For instance, if you realize that attending a certain
weekly meeting is basically a waste of time because the meeting never generates results, then perhaps you
can save some hours by dropping the meeting from your calendar and reading the meeting minutes when
they arrive by email instead. On an organizational level, value is gained through the cumulative effect of
everyone becoming more effective.
As an example of what might be possible, consider the situation where a weekly two-hour staff meeting for a
company involves twelve department heads. Let us say that the meeting occurs 50 times annually, meaning
that the company dedicates a total of 1,200 hours of executive time for just this meeting. Naturally, those
attending the meeting will probably spend some time preparing for it and that preparation might involve
effort from other team members. Let’s assume that each department spends 8 hours preparing for the weekly
meeting, so a further 4,800 hours of effort is consumed to support the weekly staff meeting, giving us a total
of 6,000 hours. To put a cost on what that means, we can assume that the fully-loaded hourly cost of each
executive is $125 and the cost for a team member is $60. The cost is therefore calculated as (1,200 * $125) +
(4,800 * $60), or $438,000. Some value is unquestionably gained from coordinating activities through the weekly
staff meeting, but you would wonder whether the value gained surpasses the cost. Changing the schedule to
a biweekly meeting or reducing the time set aside to an hour are two examples of how the organization can
gain benefit by simply understanding how people use time. The key here is that making data available helps
people to make better decisions.
In January 2019, Microsoft announced that MyAnalytics is now “available to everyone using Office 365 and
Microsoft 365 Enterprise and Business suites that include Exchange Online.” In other words, anyone with Office
365 Business Essentials or the E1, E3, or E5 plans can use MyAnalytics. The app is an interesting attempt to
bring big data analysis to the world of the office to understand how people interact with each other and use
tools such as email. Let’s see what that means in practice.
In September 2021, Microsoft announced the rebranding of MyAnalytics as Microsoft Viva Insights as part of
an exercise to consolidate intelligence derived from user activity within Microsoft 365. The rebranding means
that:
• The daily briefing message from Cortana now comes from Viva. Microsoft says that the content will
be expanded with recommendations to help users prepare for the day and week ahead.
• The user digest comes from Microsoft Viva and is delivered monthly rather than weekly. The first
edition of the digest appeared in targeted release tenants in early September 2021. Microsoft says
that the new digest will “aggregate insights across these four outcomes: focus, wellbeing, network, and
collaboration.” Like the weekly digests, the monthly digests are injected directly into user mailboxes
and don’t pass through the Exchange Online transport system, which means that they’re not subject
to inbox or transport rules.
• A new Viva Insights home page is available to Microsoft 365 users.
• The Outlook Insights add-in is rebranded as Viva Insights.
• The MyAnalytics settings available in the Microsoft 365 admin center to control the defaults for new
accounts will receive the Microsoft Viva branding. The same will happen for the MyAnalytics user
dashboard where individual users can see insights derived from their activity and control if they can
access the dashboard, receive the monthly digest, and use the Outlook insights add-on.

Until the rebranding is complete in November 2021, you’ll see a mixture of MyAnalytics and Microsoft Viva
Insights in administrative and client interfaces.
Although the Insights dashboard and email digest have served as the primary delivery method for Insights in
the past, the Viva Insights app for Teams is now the preferred interface.

Microsoft Graph is the Foundation


The information exposed in Viva Insights is drawn from the signals gathered in the Microsoft Graph about user activities such as sharing documents and creating meetings. Because the Graph is the
collection point for user activities, it does not matter what client someone uses to interact with Office 365. All
signals are processed within the Office 365 infrastructure.
The concept underpinning Viva Insights is that by observing and understanding how someone interacts with
the components of Office 365, you can build up a picture of their day-to-day activities. You can also compare
how they spend their time against an anonymized set of data drawn from other tenant users. Based on data
drawn from Office 365, Microsoft knows that the average office worker spends up to 20 hours per week
working with email while senior managers will be glued to their keyboards for between 40 and 70 hours. We
can therefore conclude that the form and focus of user activity differs considerably depending on an
individual’s position in the company and their responsibilities.
When Microsoft introduced a prototype of MyAnalytics at the Ignite 2015 conference, it was obvious that this
might be an interesting project. The demonstration showed how data could be analyzed to help someone
understand how to work smarter and achieve a better work-life balance. It was an effective demo, even if it
immediately set off some alarm bells from those who are concerned about personal privacy.
The code shown at Ignite was just a prototype. Reality kicked in before Microsoft made MyAnalytics available
to customers. No obvious trace of any work-life analysis exists in the current implementation of the
MyAnalytics personal dashboard. Although it made for a compelling demo, removing an arbitrary assessment
of how anyone should balance their life between work and non-work activities is a wise move. The problem
here is that the analysis needed to come to such a conclusion needs far more insight and data than is
available to MyAnalytics. And anyway, figuring out how to spend your time is an intensely personal choice
that cannot really be held up against some artificial standard.
Instead of trying to be prescriptive about what constitutes a good work-life balance, the current
implementation of Viva Insights focuses on giving insights based on data for an individual to interpret. You
might think that working for twenty hours per week outside the normal working day is a good thing because
you need to interact with people in many parts of the world. I might disagree and avoid any notion of working
outside a 9-5 window. We are both right because it all depends on the individual. If you get your work done
one way and I get mine done another, we are both effective. The question then is whether you can be more
effective and achieve the same results in a different way.
Removing the attempt to figure out a user’s work-life balance makes Insights much more acceptable to those
who raised questions about privacy. It also addresses the concern that some management might be tempted
to analyze work practices drawn from departments or workgroups to highlight inefficient or ineffective
workers. Hearing assertions that Insights “uses big data [gathered from Office 365] to reveal how [user]
behavior drives business outcome” adds fuel to user concerns, but it is not what it seems.
With the Microsoft Graph delivering the data and algorithms applied to interpret and refine that data, the
Insights personal dashboard helps people understand where they spend their time and how they might do
better. For example, salespeople who are burdened with internal meetings can see where time is being soaked
up and take steps to spend more time with customers. People who work at the weekend might better
understand the factors that are driving this activity and be better able to manage it and so avoid the potential
for burn-out.
Overall, I have a more positive view of the value that analysis of working habits can bring. The real value from
Insights is on helping people to work smarter by making better use of their time. We all have limited time and
it makes a heap of sense to look at how time is consumed in the office to see whether we have lapsed into
working habits that consume too much of our most precious resource.

The Volometrix Influence


The original promise of what could be delivered by MyAnalytics was taken to a new level when Microsoft
acquired Volometrix in September 2015. Led by ex-Bain and Company manager Ryan Fuller (who left
Microsoft in 2020), Volometrix specialized in developing and applying behavioral analytics to corporate
behavior to optimize business processes. This work cannot be done without access to data that describes
corporate activities such as sales and marketing. Volometrix had several years of experience in refining
algorithms to query and analyze large volumes of data to understand how companies really function. The
Microsoft Graph gathers information as users work with the Office applications – as people schedule
meetings, set up online meetings with Teams, create and send email, and so on. Office 365 is an obvious rich
source of the kind of anonymized data that Volometrix specialized in interpreting. As an example of
Microsoft’s thinking about how judicious use of technology can help organizations achieve productivity gains
without being bogged down in bureaucracy, this Harvard Business Review article on “The Paradox of
Workplace Productivity” is a good read.

Preserving User Privacy


Insights addresses the privacy issue by providing user-specific dashboards where the information revealed is
personal to them and cannot be interrogated at a workgroup or organizational level. The analysis done for
Workplace Analytics uses anonymized data and it is impossible to compare the activities of an individual
against their peers.
It is also important to understand that the data used by Insights is available to users if they care to look. For
example, it is possible (but tedious) to count the number of messages you create and send. You can also
figure out how much time you take to respond to other people within your company by looking at the
timestamps recorded for messages and calculating the difference between when a message was sent and
when a response was generated (using the conversation identifier to track messages that belong to a topic).
Alternatively, you could measure your processing of email with a stopwatch and pencil. The same is true when
it comes to analyzing the number of meetings you attend, who also attends, the topics discussed, and the
outcomes. You can assess whether meetings were effective, ineffective, or just so-so. None of the data
exposed by Insights is invisible to users; it is just that computer code and algorithms are so much more
precise and consistent about analyzing raw data to arrive at conclusions, especially when presenting the data
in semi-obscured form. And unlike humans, who might be interested in analyzing email for a day or even a
week before they become bored and move on to other more interesting tasks, Insights is persistent in
continuing to analyze the stream of data gathered through user activities.
Comparisons with the company at large are available within the personal dashboard, but again it is based on
anonymized data and is intended to allow the dashboard’s owner to see how their work patterns vary from
others in the company. It is also fair to say that the dashboards for no two users will be identical. Everyone
works differently and interacts with different people, so the signals that feed into the analysis will vary and the
outcome reported in the dashboard will reflect information for the owner rather than anyone else.
Apart from checking timestamps and working out whether messages go to or come from external or internal
correspondents, Insights accesses no content that might reveal confidential information such as message
subjects or bodies. All the information used is available for all messages, even those encrypted with
Information Protection. In fact, it is the same information used for other administrative purposes such as
message tracking.
It should be obvious that if people do not use Office 365, the value of Insights degrades. For instance, if you
use Exchange Online but persist in using old-fashioned network file servers instead of SharePoint Online or
OneDrive for Business as the repository for shared documents, the data collected in the Microsoft Graph are
incomplete and do not give an accurate picture of workflow within the organization. For this reason, any company
considering using Insights must first assure itself that enough data is available in the Microsoft Graph and
Azure Active Directory (to understand organizational relationships) to make analysis possible and useful.
Hopefully, the presence of applications such as Insights will help companies decide to move work off network
file servers.

Enabling or Disabling User Access


Some licensed users might not want to have Microsoft process the signals generated by their Office 365
activity. For example, a department might be active in a country where bodies such as a works council or union must agree that it is OK to use software like Viva Insights. In such cases, users can set the privacy mode for
Viva Insights to opt-out from processing and disable the feature. An individual user can do this through the
options for the MyAnalytics application. Click the cogwheel, select Settings, and then toggle the On/Off
slider. Separate sliders are available to control access to the MyAnalytics dashboard and weekly email digest.
Administrative control for Insights access by individual users (mailboxes) is available by:
Turning Viva Insights on or off for individual mailboxes by running the PowerShell code in this article.
Users can re-enable Analytics afterwards if they wish. As explained in the article, administrators can use the
Set-MyAnalyticsFeatureConfig cmdlet (in the Exchange Online management PowerShell module) to remove
access to individual features. For instance, many users don’t like the monthly email digest message. You can
block the mail digest while allowing users access to the Analytics dashboard and Outlook add-in by running a
command like:
[PS] C:\> Set-MyAnalyticsFeatureConfig -Identity Vasil.Michev@office365itpros.com -PrivacyMode "opt-in" -Feature digest-mail -IsEnabled $False

Removing the Insights by MyAnalytics service plan from individual user licenses using the PowerShell
script described in this article. It’s likely that Microsoft will rename the service plan in the future to reflect the
Viva brand. Users cannot reenable Analytics after the service plan is removed from a license.

Using the Insights Dashboard


Viva Insights uses the information about user activities to analyze the time the user spent working in different
activities and presents the information in their own dashboard. The dashboard refreshes weekly with updates
available early on Sunday morning (Seattle time). Personal is an important word for Insights because when
you peel everything back, the idea behind this application is to support data-driven conversations between users
and their peers or supervisors. You might have a feeling today that you’re spending too much time in
meetings or that you must work too many hours outside the normal working day. It’s possible to extract data
from your diary or notes to construct a case for debate, but it’s so much easier when software does the heavy
lifting to sort data and figure things out. Here’s what’s shown in the dashboard:
• Your time summary: A snapshot showing how much time you gave to meetings, dealing with email,
focused time, and outside normal working hours (Figure 6-1).
• Your insights: Observations based on the analysis of the data recorded about your activities and
suggestions for how you might change your working habits to be more effective.
• Your network: Who did you spend time communicating with? Are they external or internal people or
groups? Are those people important to you? Are you losing touch with anyone important because
you have not spent any time communicating with them recently?
• Your meetings: How much time did you spend in meetings during the week? Insights shows the kind
of meetings you attended (long, recurring, conflicting, and after hours) and whether you multitasked
during meetings.
• Your email: How much time did you spend composing and reading email? An “email etiquette
section” tells you how effective your email is in terms of how responsive others are to your messages
and how you respond to inbound messages.
• Your focus: How much of your time was available to concentrate on important tasks.
• Your after hours: How much of your work took place outside normal working hours. The dashboard
shows whether you used the time working with email or at meetings.

Figure 6-1: A snapshot of a working week
The idea behind the Insights is that they might help you change the way that you work and become more
effective. For example:
• Reduce the amount of time spent working after hours to create a better work-life balance for yourself
and to avoid giving co-workers tasks or other things to take care of when they are relaxing.
• Improve the effectiveness of meetings by reducing the amount of time scheduled to focus the minds
of attendees on what they need to do.
• Improve your responsiveness to email that you receive.
• Understand the impact of different technologies. For instance, does a new mobile email client
increase the hours spent working with email? Does the introduction of Teams within an organization
reduce the hours spent working on email?
The insights suggested in the dashboard are based on common sense, but common sense can be overtaken
by bad work habits developed over years. Some will appreciate the insight and advice, and some will think
that the recommendations will never work for them. The inferences and suggestions picked up by Insights are
based on observation of the working habits of millions in conjunction with the data available through Office
365. You know yourself best, and just like when you examine the output from a health monitor and can put it
into context with your current aches and pains, you should look at the dashboard as an opportunity to gain
insight into how you work rather than a definitive statement of what you do. People are highly individual in
the ways that they approach and deal with tasks and a technique that is effective for one is not necessarily
effective for another. In other words, don’t disengage your brain and accept everything shown in the
dashboard as absolute fact.

Anonymized Data
When Insights compares the time spent by a user against the company average, it uses anonymized
information gathered for those who opt in for Insights (see the earlier discussion of how to opt out). Because
the company data is anonymized, you cannot compare your weekly results against any other specific
individual. To avoid the possibility that information can be associated with individual users, Insights only
displays company averages if more than five users within the tenant are active within the analysis period and
make their data available for analysis.

Working Week
In today’s world, the notion of a working day is pretty fluid. Some people have a fixed schedule and never
contemplate doing anything work-related after that period ends. It is more common to find that “knowledge workers” are active at various times during the day, depending on the makeup of their team and the level of responsibility they have. For instance, someone who works in an international company where colleagues work across multiple continents invariably finds that they need to schedule or attend conference calls at
times outside the traditional working day. It is just a fact that adds some context when you consider the out-
of-hours workload reported in the dashboard. The default working day is usually 9AM to 5PM, but you can
customize this to suit through the Calendar section of OWA Options or by clicking the cogwheel symbol to
change your time settings.
Insights normally displays information for each category based on a sliding six-week window of data,
including comparisons to show how time spent on different activities varies over time. The Trends view (Figure
6-2) gives a long-term overview of how a user devotes their time to different areas of work. You can use the
slider to move the date range back through up to two years of data. The work to preserve activity data began
in March 2017, so it will be March 2019 before users can access the full two years of activity data.

Figure 6-2: Activity trends over time

The Tyranny of Email


Like many knowledge workers, I spend too much time working with email. The data reported by Insights
focuses on two aspects of email. First, how much time is spent reading and writing messages exchanged
with other people in your Office 365 tenant. Second, “email etiquette”, which really means how quickly you
respond to messages from other people and how quickly they respond to you. Insights does not try to assess
the quality of the responses as time is the sole measurement. It is possible that you might be highly satisfied
by responding to every email you receive within five minutes while your correspondents might be totally
unhappy with the quality of your responses. Solving that issue is a totally different challenge to understanding
the flow of email. After all, a one-word “Yes” or “No” response is proper and useful in one context and could
offend in another.
Unsurprisingly, it is usual to discover that we spend more time dealing with inbound messages than writing
new messages and responses. A couple of factors are at play here. Although the Focused Inbox feature is
reasonably good at filtering low-priority messages from immediate view, we all deal with an ever-increasing
volume of email. Perhaps we belong to too many distribution groups and receive email that we really do not
need to see. Perhaps it is just a function of our job and we receive notifications of tasks that need attention or
updates on developments that we should know about. Or perhaps we are just chatty and use email for that
purpose instead of moving all social interaction to Twitter, Facebook, or even Yammer. For whatever reason,
Insights usually reports that people spend more time reading messages than they do writing and sending new email.
The way that Insights calculates the time spent dealing with email also contributes to the expected outcome.
A relatively simple algorithm calculates how much time you spend working with email. Instead of trying to
track exactly what happens with a message when it arrives in your Inbox or how much effort you put into
composing a message, data gathered by Volometrix based on observations of multiple large organizations
shows that people spend an average of 2.5 minutes dealing with an inbound message and five minutes to
compose and send a message. As we will see later, Insights uses these measurements as benchmarks for
reporting email activity. A message counts as read when its read status changes from “unread” to “read.” This
applies no matter what folder the message is in or whichever client processes the message. In other words,
unread mail moved by rules to a non-Inbox folder also count. However, a message that is deleted without
being opened is not included in the calculations. A new message counts when the user sends it, including
messages to people outside the tenant. Finally, messages read or sent outside the defined working day end
up in the After-Hours calculation.
To take account of the diverse ways that people process email, Insights does the following:
• For created (and sent) messages: Measure the difference between the time when someone first
creates a message and starts to write text and the time when they send it. To make things simple,
Insights calculates that this operation takes five minutes (the benchmark time). However, if the user
sends multiple messages within a five-minute period, Insights assigns the time between each send
operation to the relevant email.
• For read messages: Measure the difference between the time when someone first opens a message
and when the read status changes because the user closes the message. Insights does not count
messages that the user deletes without opening, including when they take a quick peek at the
message content in the preview pane. Insights assigns 2.5 minutes for a user to read a message.
However, if you open an email and then open or send another message within the 2.5-minute period,
Insights assigns the time difference between the two operations to the first email.
• A message must have the user’s email address as a TO: or CC: recipient to count, including when the
user is a member of a group that is a TO: or CC: recipient.
• Because any client can create, send, or read messages, it follows that the analysis includes messages
accessed using any client.
• Messages accessed after the defined work hours go into the After-Hours category.
This sounds complicated, but the ruleset is logical. Messages have timestamps that Insights can analyze to
measure activity.
With these assertions in mind, it is possible to reckon how much time someone spends working with email by
counting the number of messages that they send and receive and then calculating the minutes spent on this
activity. Because the calculation happens on the server, it does not matter what client you use. Figure 6-3
shows the data for a typical week of my email activity. To understand the total of 12.1 hours reported for
email hours in the screen shot, we must understand how Insights computes the time working with email.
Every message is different, and every person differs in how they read messages. Applying the benchmark
measurements is one way to produce a result. If we use the 5-minute benchmark to break down the 5.8 hours
reported for writing messages, the calculation is (348 minutes/5) = 69.6 messages. The calculation for the 6.2
hours reading email is (372 minutes/2.5) = 148.8 messages.
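As a minimal sketch of that back-of-the-envelope arithmetic, using the 5-minute and 2.5-minute benchmarks described above:
# Convert reported email hours into notional message counts at the benchmark rates
[PS] C:\> $SentMessages = (5.8 * 60) / 5     # 348 minutes of writing at 5 minutes per message
[PS] C:\> $ReadMessages = (6.2 * 60) / 2.5   # 372 minutes of reading at 2.5 minutes per message
[PS] C:\> "{0:N1} sent, {1:N1} read" -f $SentMessages, $ReadMessages
69.6 sent, 148.8 read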

Figure 6-3: A week’s statistics for email


However, the notional calculation using benchmark figures differs quite considerably from the real
measurement of email activity, where I sent 140 and read 358 messages during the week. We might therefore
hazard a guess that the user data in the personal dashboard reflects the activity of someone who processes
email faster than the average person. Based on these results, which could differ from week to week depending
on the kind of messages that arrive into the inbox, we could also theorize that the user sends a lot of very
short messages – or at least, messages that take considerably less than five minutes to compose and send.
The graph on the right-hand side of the email card tells us when the user processes messages during the day.
Broadly speaking, you can divide people into “nibblers”, or those who process messages as they arrive, and
“bulk processors”, more structured individuals who set aside blocks of time during which they process their
Inbox. Historically, users have peaks in terms of sending and receiving email first thing in the morning, which appears to be the case for the data shown here. When mobility was not as pervasive as it is today and users could only process email when in the office, you would also expect to see peaks at lunchtime (sending messages before eating) or before leaving the office in the evening (clearing the desk). Given today’s working
habits, it is more common to see constant activity during the day and that work continues outside business
hours.
Attempting to come up with an algorithm to track how people process email and use that data to estimate
how much time they consume in this activity is very complex. Factors such as clients (many people use mobile
email clients to triage an Inbox by deleting or refiling items), personal work habits, and even client settings
(how long Outlook will display an item in the preview pane before updating its read status) can all contribute
to the mix. Microsoft understands the current algorithm is basic and can be inaccurate in some circumstances.
However, it does work in general and Microsoft has refined it over time to become smarter and more precise.
Multiple iterations of code tweaking and testing might be needed before the algorithm works well for
everyone, or as close as you can get to this goal in practical terms.

Mastering Meetings
We all spend too much time in meetings and possibly not always to good effect. The calendar is a rich source
of data for analytics because it holds blocks of formally scheduled time set aside for specific topics. But
meetings vary from a 1:1 discussion with a direct report or your manager to much larger group meetings
whose only benefit is in an opportunity to grab some free coffee and donuts. Unfortunately, many of the
meetings that occur inside corporate environments are not particularly useful or productive and some can be
a huge soak of valuable time (another good article on the topic of avoiding unproductive meetings is
available here).
The time reported for meetings is based on calendar events that include at least one other participant
(external or internal). The logic here is that events that have no other participant are personal. Only meetings
marked as “busy” are counted, so meetings that are tentatively scheduled but not accepted are excluded
because you probably never devoted any time to these events. Meetings marked as private and meetings that
have more than 500 attendees are excluded from the calculations, as are all-day meetings. Research shows
that all-day meetings are seldom for collaborative purposes and are often used to block out large amounts of
time. Private meetings are excluded because of their nature as are meetings with more than 500 attendees,
which are at the other end of the privacy scale. Neither are particularly interesting from the perspective of
collaboration. And obviously, if you do not add items to your calendar, those time slices are never analyzed.
The ability for a user to tell Insights what type of meetings to include in its calculation does not exist today.
The only data used to review meetings are the start time, duration, number of attendees, and who attends
(used to figure out working relationships). Again, all this information is available in the header of calendar
notification messages. The data reported in Figure 6-4 reveals that I spent quite a lot of time in meetings
during a week. Most of the meetings were scheduled by me, meaning that I avoided being pulled into a lot of
meetings arranged by other people. It is also obvious that I multitask quite a lot during the meetings I
attended (probably by processing email when listening to conference calls) and that a lot of after-hours time
was consumed by meetings. The contradiction in some of these results (for example, 45 hours spent
multitasking in a total of 28.4 hours in meetings) is explained by multiple meetings being scheduled at the
same time (42.3 hours of conflicting meetings). Although these data are a little extreme, this kind of
scheduling happens. It is common that corporate executives have multiple competing requests for their time,
many of which appear in their calendar to allow a last-minute decision about which meeting is more
important.

Figure 6-4: Clocking up meetings in a week
The requirement for a meeting to have at least one other participant in addition to the organizer before the
analysis includes it in its calculations results in the exclusion of many calendar events. For example, I
commonly create events for airline bookings so that the information is available in the calendar on my mobile
device. Those events do not count unless I add someone to the event, such as when my wife joins me on a
trip, and I add her to the event so that she knows when we are flying. In the eyes of Insights, it is a meeting
that counts. Again, you can criticize the algorithm for measuring meeting hours as being overly simplistic. It is
correct that the calculations are simple, but it is only a start and over time this is another area that Microsoft
will probably tweak based on the experience gained at Volometrix. To find out what meetings are counted
and what are excluded in a week, click the View Details link in the Meeting Hours tile.
The Meeting Quality assessment is an attempt to estimate just how effective the time spent at meetings really
is. Again, you must put the data presented in this section into context because the raw information might not
really tell the whole story. For instance, if attendees at a meeting send email, Insights considers this to be
evidence of “multitasking” on the basis that people who come to a meeting should give their undivided
attention to the topic at hand to drive to a resolution. Unfortunately, given the way we use mobile devices
today, getting the undivided attention of a complete room is difficult. However, an email exchange during a
meeting is not necessarily bad. Messages sent to request information or start an action might instead be an
indicator that the meeting was effective in terms of getting things done.
Double-booking is an easier problem to understand as people who have good control over their calendar
should not end up with multiple events at the same time. If you are constantly double- or triple-booked
during the work day, you might need some help to understand why this happens so that you can use your
time more effectively (for example, by sending a deputy to a meeting). In some large companies, people are
invited to meetings just to make sure that they know about the gathering rather than in the expectation that
they will show up. If you have many double-booked timeslots, it is a sign that it might be time to review which
meetings are important to you, accept the invitation for those meetings, and decline the others.
Other problem signs for ineffective meetings include situations where more than 20 people are invited (which
often makes it more difficult to make decisions) or when managers from more than two levels attend (raising the question of whether every one of the managers needs to be there). Of course, there will always be important
meetings that involve more than 20 people and really do get things done or those that require the
involvement of many levels of management. But when you think about it, these are out-of-the-norm meetings
and too many of them might be a sign that the organization is becoming obsessed with meetings, a sickness
that I have observed many times.
Assessing the quality and effectiveness of meetings is another complex task. For now, we have a simple
calculation of the hours consumed in meetings. I suspect a lot more data and a lot better understanding of
how to gather that data and preserve user privacy is needed to go much further.

Focus Time
Insights defines focus time as a two-hour block in the calendar during working hours that is free of meetings.
The idea is that you use focus time to concentrate on specific activities that need to get done, such as creating
a budget or working on a presentation. If you realize how successful (or not) you are at finding focus time to
attack complex issues, the theory is that you will be able to rearrange your schedule to become more
productive.
Of course, theory is often negated by real life. Structured individuals whose working day is not ruled by the
interruptions caused by arriving email often reserve blocks to focus on certain tasks while others whose days
tend to be more fluid and crisis-driven might struggle to find the time to be able to dedicate to tasks. You
might argue that your practice is to set aside one-hour blocks to tackle tasks and find that this is enough time
to get the job done. On the other hand, it can be argued that while some can keep focus and drive to a conclusion about a topic quickly, on average it takes humans longer. Studies have identified 90 minutes as a
good average and the developers decided to use a more aspirational two-hour block. A tenant administrator
cannot change this definition.
Insights does not try to track anything that you do during focus time. You could, for instance, spend the entire
period playing Solitaire or looking at cat videos on YouTube. It does not matter because as measured, you are
“focused.” In my case (shown in Figure 6-5), Insights reported that I had 40.5 focus hours during the week. I
also spent 7 hours in meetings, meaning a total of 47.5 hours, so it seems like I had a full week. However, it is possible that the two figures add up to more than the 40 hours of a standard working week. When this happens, it is usually because some meetings started before the working day began or ended after the working day finished.

Figure 6-5: Analysis of focus hours

Like any measurement, focus hours need to be put in personal context. For instance, does a two-hour period
spent over a long lunch count as focus time? On the surface, it might because you are not in meetings or
working on email, but on the other hand you might not be too focused during the meal.

After Hours
The notion of working from 9AM to 5PM every day sounds archaic to many who work in IT who are
accustomed to the need to perform systems maintenance and other tasks once other workers have left for the
day. As noted earlier, it is common for those who work on international teams to need to stretch the working
day at either end to accommodate conference calls with other team members. Insights measures how much time a user spends on late-night or early-morning activity, based on meetings and work on email. You can even process email during an after-hours meeting and clock up time against both measurements, making it seem like you are really putting in the time.
As before, the data used to figure out when you are working outside normal business hours (Figure 6-6)
comes from the calendar and email headers and divides between time spent processing email and time spent
in meetings. The time reported for email activity is a subset of the overall time reported in the email section.
We can see here that the sum of after-hours email activity is 6.9 hours (3 hours for meetings).

Figure 6-6: When work happens out of hours


It is important to realize that the after-hours report is not an attempt to measure overtime or individual
productivity. You might focus on processing personal email or connect to a boring conference call with one
eye on reading news from different web sites. In short, just because Insights detects some after-hours activity,
there is no way to tell how effective or productive that time was.
It is good to know how much extra time we spend on after-hours work because this is time taken away from family and
personal pursuits. Clearly, we can all have a week when the calendar is crazy because of customer demands,
project timelines, or a crisis at work, but the goal should be to understand and recognize why we use that
time outside the normal working day so as not to let it become the norm. Unless of course you like working a
sixty-hour week.

Teams Signals
Signals for Teams come in two forms: chats with other people and meetings. Scheduled online meetings have
always been counted by Insights because these events are in user calendars. Teams ad-hoc meetings, which are not in the calendar, are now counted based on the actual length of the meeting. This data shows up in the meetings
and after-hours charts and a total time spent in online meetings is reported in the user’s Your Time overview.
Chats are recorded based on 30 seconds per sent chat, with no time taken for read chats. Based on
observation of user data, the Insights development group believes that only counting sent chats delivers an
accurate estimate of how much time someone spends on IM. Time spent on chat is reported in the user
overview and the out-of-hours chart. Chats also affect the multitasking report but chats during a meeting
don’t count because they are an inherent part of the meeting experience.

Working Relationships
The “Network” section of the dashboard (Figure 6-7) displays information about the other people with whom you collaborate most often. In effect, this is an attempt to measure with whom you spend your working time. The data used are:
• Time spent together with individuals in 1:1 or small meetings (less than 25 attendees).
• Time spent in email exchanges.
The calculation of how many hours you spent with key contacts is then adjusted based on the number of
people who attended meetings that you participated in with key contacts. Clearly, a 1:1 meeting provides far
more “face time” with someone than when you gather around a conference table with 20 others. In the same
way, email exchanged between you and someone else is more personal and important in terms of
maintaining contact than if you both receive a message sent to a distribution list where you might delete the
message because you consider it unimportant while your contact thinks it is critical. Further adjustments are
made based on the frequency of meetings and email communication.
Microsoft has been down this path before with the (now deprecated) People View feature in OWA, which tried
to find the most important correspondents for a user and offer fast access to messages that they had sent.
The view presented here might be thought of as quite the reverse, as Insights helps you understand when you
are losing touch with key contacts who are important to you. In this context, losing touch means that you
have not emailed someone in a while or scheduled a meeting with them. Of course, you might meet them for
coffee every day, but that kind of human interaction is not logged by the Microsoft Graph. “All caught up”
means that I manage to keep in touch with all my important contacts, which seems like a good thing.
Insights misses some ways people communicate. Even email might be underreported because interactions
with people could flow through email conversations in a Microsoft 365 group that do not appear in user
mailboxes because people turn off email notifications for the group. This underlines the importance of
treating this data as a baseline for discussion rather than a statement of fact.

Time Investments
The Time Investments view shows how the user spends their time with different people (internal and external)
and groups. In Figure 6-7, we see the breakdown of a week’s activity based on total time. The set of people in
the list is determined by Insights based on the email and calendar interaction between you and those
individuals and varies from week to week. People whom the user deems important to them are marked with a
star. To add someone to your important list, click the star opposite their name. To remove someone from the
important list, do the reverse. External people are listed with their email address while internal people (those
who have accounts in the tenant) do not. You can view the list by All (show everyone), Important (just starred
users), and Work Groups (distribution groups, Microsoft 365 Groups, and Teams).

Figure 6-7: Time spent communicating with different people and groups
If Insights finds that you are losing touch with people that you used to communicate with often, it lists them
in a section called “Losing Touch.”
Management relationships are the glue that keep organizations together. You have a different relationship
with your direct manager than you have with a co-worker, so it is important for Insights to understand the
corporate structure. To do this, Insights depends on the management relationships recorded in Azure AD to
know for whom someone works as well as their direct reports. If this information is missing or inaccurate then
some parts of the personal dashboard are not going to be as exact as they would otherwise be.

The Map View


You can view the information as a list (as shown) or as a map, where the people who you spend most time on
are shown closest to you (Figure 6-8). The idea of the map is that it helps users to visualize their network and
prioritize how they spend their time. Using the map with the Important filter is especially valuable because it
clearly shows how much time the user spends with people that they mark as important. Those noted as
important have a star to denote their status.

Figure 6-8: Viewing the people map

The Importance of Data


Any attempt at analysis can only measure the data that are available to it. Insights does not yet use some of
the user activities recorded in the Microsoft Graph, which might lead to an analysis that does not accurately
reflect the full gamut of how someone uses Office 365. For instance, although Insights notes details of
meetings scheduled using Teams and ad-hoc video or audio calls made by Teams when you’re signed into
your home tenant, it does not note any details of work done as a guest in other tenants.
Working on Office documents is another category where Insights has a blind spot. I spend a lot of my working
day writing. I do everything through Word and store documents in OneDrive for Business or SharePoint
Online sites. None of this activity shows up in Insights but the signals noting my interactions with documents
exist in the Microsoft Graph. Finally, data for some of the newer Office 365 applications is still unavailable in
Insights. For instance, a two-hour session to create the tasks for a new plan with Microsoft Planner is an
activity that goes unnoticed by Insights.

Moving Away from Raw Statistics


In 2019, Microsoft began the process of refocusing Insights away from the presentation of raw statistics about
the number of messages read and sent, meetings attended, and so on, towards a more interpretative and
reflective approach. The aim is to give users insights into how they can rebalance work activity to be more
effective and to have a better work-life balance.
The new dashboard reports observations based on activity recorded over the last four weeks, divided into four
areas:
• Focus: How much focus time is available to dedicate to specific tasks? Someone might have enough,
or they might be interrupt-driven and never have the time necessary to get deep into a topic.
• Wellbeing: In order to recharge your batteries, some quiet days are needed (days when you don’t
have an overload of work, including after-hours activity). Insights measures how many quiet days
someone has over the last four weeks (including weekend days, which should be quiet) and describes
how they work out-of-hours. For instance, Insights might note that out-of-hours email is read and
responded to on a mobile device, which raises the question of whether the device can be powered off from time to time, like on a weekend.
• Network: Who are the important people in someone’s network? How many people are they actively connected to by email or chat, and how many people in total have they contacted in the last four weeks?
• Collaboration: How much time does someone spend working with other people on collaborative activities?

Figure 6-9: The new MyAnalytics dashboard


For now, the two dashboards are available. Users can access either the raw statistics or the insightful
commentary, or both. Microsoft has steadily moved Insights away from the reporting of raw data to a more
thoughtful approach since 2016, so it would come as no surprise if the raw data was increasingly hidden in
the future.

The Insights App for Outlook


The personal dashboard gives access to a lot of interesting information about personal work habits, but some
people will never go near it. To help these users understand their email and calendaring habits, Insights has
an Outlook add-in app available for OWA and Outlook desktop. The app is not available for Outlook for Mac
because that client does not support the same add-in model. Exchange Online installs the Insights app
automatically for licensed users.
Previous versions of the Insights add-in focused on giving insights into the progress of a sent message. The latest version moves away from that design goal and instead shows a feed of personalized insights generated from observation of how you interact with Office 365. When you open the app, it reveals a panel with a set of cards. The cards include:
• Reminders to book focus time to help you achieve goals.
• Update your important people list to ensure that Insights tracks their communication with you.
• Information about conflicts that might be in your calendar.
• View outstanding tasks that you might have committed to delivering in an email.
• Add people to your important list.
You can dismiss cards from the list shown to request Insights to show you other insights in the same category.
Figure 6-9 shows a typical set of cards shown by the Insights app (left-hand screen). At the end of the list, we
see that Insights thinks we might have left some important people off the list that Insights tracks, so it
suggests that we add these people.

Figure 6-9: The Insights app highlights some important people

Task Reminders
Insights uses natural language processing and artificial intelligence to scan messages in user mailboxes to
identify email that might contain commitments to do something and then highlights those commitments as
potential tasks in the Insights app. For instance, in Figure 6-10, we see that I made a comment in a message to
someone that Insights has found and now surfaces as something I might have forgotten to do.

Figure 6-10: Highlighting a task
To generate these reminders, Insights relies on a mailbox assistant that uses machine learning techniques to
analyze messages as Exchange delivers them to mailboxes. The code looks for text strings that might show
that you have made a commitment to the recipient to do something. For instance, the first and second
examples both have “I shall” in the text, showing that the sender intends to do something in the future.
Because the message is going to someone else, you can reasonably conclude that the intention is to do
something for that person. If the assistant considers that a message holds a commitment, it creates an item in
the MyAnalytics-ActionLog folder in the user’s mailbox. This folder is hidden from clients and can only be
seen using a utility like MFCMAPI when working online rather than in cached Exchange mode. The items
include information to find the source email containing the commitment. The next time that the user consults
Insights, the app displays a list of the items in the folder for the user to process.
Like all machine learning programs, Microsoft constantly tweaks the code to make it more precise and better
at detecting commitments. This insight is only available in English.

Booking Focus Time


The Insights app offers users the chance to schedule focus time in their calendar. The app suggests two-hour
blocks that are free in your calendar, preferably in the morning (and avoiding lunch time) when you are
freshest. If you select any of the two-hour blocks, Outlook creates a calendar event to block the time out and
stop others taking away your focus time. There is nothing special about these events. They exist purely to set
some time aside to work on a dedicated task.

Insights Digest
The Viva Insights settings include a choice to receive a monthly email digest summarizing the user’s activities
from the last month using the same methodology as in the personal dashboard. The digest aggregates
insights across four outcomes: focus, wellbeing, network, and collaboration. In addition, the digest includes
some personalized tips that the user might like to consider. These tips are like those displayed by Outlook
Insights and include information such as the person with whom you collaborated most during the week, whether
you send email during meetings, and what day of the week was most open in terms of focus hours.

Workplace Analytics
On July 5, 2017, Microsoft announced that Workplace Analytics was available as an add-on for any Office 365 enterprise plan. The post says: “Workplace Analytics provides unprecedented behavioral insights that can
be used to improve productivity, workforce effectiveness and employee engagement.” Workplace Analytics does
not combine the collective personal dashboards within a tenant to tell management who is productive and
who needs some help sorting out their overpacked calendar. Instead, Workplace Analytics is a tool to help
understand how an organization functions. For instance, among the examples of how organizations use
Workplace Analytics included in Microsoft’s post are:
“…the behaviors of managers were pivotal in determining employee engagement and retention”
“…analyzed the metadata attached to employee calendar items to calculate the travel time associated with
meetings.”
Anyone who has ever worked inside a large organization likely understands that how direct-line managers
deal with employees directly affects how people work and whether they stay with the company. The same is
true for travel time. If you force people to come into central offices to attend meetings, you expect a
productivity hit when attendees are traveling to the meetings.
The critical item to understand is that Workplace Analytics generates a set of collaboration metrics that
analysts can use to detect inefficiencies and problems within a company. Like anything dealing with people,
this is an imperfect science, and context is all-important in understanding why people work the way that they
do.
Workplace Analytics is only available to Office 365 enterprise tenants with over 5,000 seats. The cost is $6/user
per month unless you have E5 licenses, in which case the cost is $2/user per month. The price of Workplace
Analytics is enough to pause for thought. In addition, before you deploy, consider that you probably need to
do some up-front work such as working with HR to ensure employee privacy is respected, establishing the
core population for analysis, setting the goals for the exercise, and so on.
Outside the U.S. or in multi-national companies where worker unions or councils are more common, it is
probable that approval is necessary from these bodies before any analysis can go ahead. For these and other
reasons, you will likely need help from outside consultants with proven ability in similar exercises to help run
the project and gain usable results.
Another thing to consider is that Viva Insights and Workplace Analytics both focus on email and calendar data and do not look at other areas of activity within Office 365. The analysis includes time spent on Teams calls if the call is in your calendar, but not otherwise, and it ignores the time spent composing an article like this in Word, even if
you store the document in OneDrive for Business or SharePoint. Chatting in Teams or Yammer is also outside
the current boundary for analysis.

Company Culture is Critical


Both Viva Insights and Workplace Analytics depend on access to data. If insufficient data is available, the
analytics cannot and do not work. In effect, this means that these tools are unlikely to be very useful in small
companies where some users opt-in for data analysis. In any case, those who work in small organizations tend
to know what is going on without having to consult an analysis. The true opportunity for analytics lies in large,
distributed enterprises who depend on email as a primary method for communications. Come to think of it,
Microsoft is exactly the type of company that can use this kind of insight.
The big question is how end users will accept the kind of insight into working habits that analytics delivers.
Anyone who is interested in interpreting and understanding what data means will find value in an analysis of
how they work, but when it comes to the “average user”, I think the answer lies in the culture of the company and
the personal attitude of individual users.
If you work in a typical American company that emphasizes personal growth and ongoing improvement, you
will probably regard the information exposed by Viva Insights as just another tool to help people achieve their
very best. The YouTube video posted by Microsoft to explain MyAnalytics reflects this attitude while an
infomercial PDF produced by Microsoft is a good starting point for companies who want to introduce the
technology internally. Microsoft has also described several examples of how their own employees used
Insights to influence their work habits.

Those charged with introducing Insights to an organization will have to be able to clearly outline why the
company wants to deploy Insights and the personal advantages that its users can gain. At the Ignite 2016
conference, speakers from companies that had deployed MyAnalytics reported how they use the data to
change user behavior. One company took the simple step of changing the default length for an internal
meeting from one hour to 30 minutes and said that this had returned four hours per week to knowledge
workers. Your mileage might vary.
Outside the U.S., Viva Insights and Workplace Analytics might find a more measured welcome in many
international companies, especially in Europe, where personal space is more heavily protected and valued than it is in the U.S. Microsoft clearly has a tightrope to walk here when it comes to respecting privacy and communicating the value of its analysis tools.

The Value of Analysis


Viva Insights and Workplace Analytics are first generation Office 365 analytics tools. It is likely to take several
years before we see the full worth of the analytics calculated from the massive data set generated by Office
365. Companies who base their collaboration on email will be able to derive value from the current analysis
while those who focus on other activities must wait.
Microsoft acknowledges that some of the current algorithms use simple calculations that will need refinement
and enhancement over time. In addition, the analysis of the work people do is too email-centric. Over time,
the algorithms will improve, and Microsoft will incorporate data for other Office 365 workloads to more
accurately reflect the working day of more users. Understanding the interaction with people who do not have
accounts in the Office 365 tenant is a bigger challenge, but it is one that is necessary, not least because of the
almost mandatory presence of hybrid deployments (spanning both on-premises Exchange and SharePoint)
found in the large enterprises that appear to be the natural target for analysis.
To some, analytics seems like the old-style time and motion studies where overseers check the productivity of
factory workers with the aim of weeding out the unproductive. That concern might hold water if Workplace
Analytics generated dashboards for departments or the organization that allowed supervisors to drill down to
the individual, but that is not the case. And anyway, the current data set is so incomplete that anyone could
plead a case that the analysis does not reflect their workload. The current implementations are pointers to the
future rather than the ultimate analysis of everything that happens within a tenant.

Chapter 7: Exchange Online
This chapter covers Exchange Online topics that we couldn’t fit into the main book.

Public Folders
If you are a new Office 365 tenant who has never used Exchange before, you might never use public folders
and can ignore this text. After all, better collaboration functionality exists in Microsoft 365 Groups or Teams,
and it is obvious that Microsoft is investing here rather than in public folders. On the other hand, if you want
to create a platform for large-scale company-wide discussions, Yammer is probably a better choice. For all
that, public folders are known as the "cockroaches" of Exchange for a reason: this is a feature that has existed since the earliest days of the product, one that Microsoft tried to kill off over the years only to run into
enormous customer resistance at each attempt because public folders are an easy-to-use shared repository
suitable for many purposes. Public folders are still in widespread use to manage large amounts of corporate
data to this day.
Originally introduced in 1996 with Exchange 4.0, public folders received their first major architectural
makeover in Exchange 2013. The upgrade was necessary for two major reasons. First, public folders used
separate databases for their data and synchronized the databases with an old-and-creaking replication model
that had been in place since 1996. Replication usually worked, but it was a black box when the need arose to
understand what was happening to synchronize databases. The second requirement was to create a form of
public folders that could run in Exchange Online, preferably using an architecture based on existing
investments in database and high availability technologies. Although it might seem strange to want to bring
an antiquated part of Exchange into Office 365, the fact is that public folders have been in use within many
large organizations for years, they hold a lot of corporate knowledge and are embedded in working practices,
and a migration to Office 365 would be impossible for many customers if public folders could not be moved.
To address these issues, Exchange 2013 introduced a new public folder architecture based on using public
folder mailboxes to store the hierarchy, the public folders, and the data held in the folders. The approach
offered many advantages including that public folders no longer used a separate repository and could take
advantage of Exchange’s high availability features. But most importantly, Exchange Online could now support
public folders. The set of supported clients for public folders hosted by Exchange Online is Outlook 2013,
Outlook 2016, Outlook 2016 for Mac, and OWA. No mobile clients support access to public folders. Although
migrations are limited to moving up to 250,000 public folders from an on-premises organization to the cloud,
thereafter Exchange Online can expand the hierarchy to accommodate 500,000 public folders.
The options included in the Exchange Online version of the EAC to manage public folders are like those for
on-premises Exchange and the same cmdlet set is available to help scripted management. The same
recommendation exists to exert tight control over access to the root folder to prevent users from creating root-level public
folders.
Figure 7-1 shows the public folder management interface in EAC. Among the items shown in the details pane,
we can see that the mail settings for the selected folder are enabled and that it holds 2,981 items amounting
to 259 MB.

Figure 7-1: Public folder management in the Exchange Admin Center

Cmdlet Inconsistencies
The same inconsistencies that exist in the PowerShell cmdlets on-premises also exist in Exchange Online. For
instance, you can use the name of a public folder with the Get-MailPublicFolder cmdlet, but not with Get-
PublicFolder. For example, this works:
[PS] C:\> Get-MailPublicFolder –Identity "Exchange MVP Private List"

But even though the same folder is targeted, this command does not work:
[PS] C:\> Get-PublicFolder –Identity "Exchange MVP Private List"

Instead, you must provide the full path to the public folder through the hierarchy, like this:
[PS] C:\> Get-PublicFolder –Identity "\Email Lists\Exchange MVP Private List (PF)"

It is all to do with the way that Exchange Online considers that two kinds of public folders exist: those that are
mail-enabled and those that are not (the default). Mail-enabled means that the public folder can receive email
and that email can be sent on behalf of the folder.
Apart from mail-enablement, one technical reason why some public folders are different to others is where
Exchange Online must fetch information from when you work with a public folder. Details of all public folders
are registered in the public folder hierarchy and the *-PublicFolder cmdlet set reads data about folders from
the hierarchy. However, some of the information about mail-enabled public folders must be retrieved from
EXODS (see Chapter 4) as these objects do not exist in Azure Active Directory, which then creates a need for
the –MailPublicFolder cmdlet set. It would have been better if the software masked these differences and
resolved matters internally to allow administrators to work with public folders using a single method but now
we must deal with the legacy of the past.

On the upside, because the same quirks exist on both platforms, you can be sure that on-premises scripts
used to manage public folders should work within Exchange Online. The downside is that this is an area that
often confuses people who are new to working with public folders.
Oddly, the method of finding public folders using the full path from the root also works with Get-
MailPublicFolder! Another example that might confuse is that Get-MailPublicFolder returns all mail-enabled
public folders while Get-PublicFolder only returns the root public folder. You must run Get-PublicFolder –Identity “\” –Recurse to list all public folders.
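For example, this command lists every folder in the hierarchy together with its parent path (ParentPath is a standard property returned by Get-PublicFolder):
[PS] C:\> Get-PublicFolder -Identity "\" -Recurse | Format-Table Name, ParentPath -AutoSize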

Differences with On-Premises Public Folders


Both on-premises and cloud versions of Exchange use the same basic public folder architecture with some
tweaks to management for Exchange Online.

Storage
Public folders are stored in public folder mailboxes in mailbox databases. As with user mailboxes, tenant
administrators do not have the choice of the databases where public folder mailboxes are placed. Each public
folder mailbox holds a copy of the public folder hierarchy. The first public folder mailbox created in Exchange
Online holds the only writable copy of the hierarchy. As with on-premises, you cannot change the public
folder mailbox that holds the primary writeable copy of the hierarchy. However, you do not have to worry
about protecting the primary public folder mailbox because Exchange Online automatically protects all
mailboxes with Native Data Protection. All other public folder mailboxes have secondary read-only copies. The
easiest way to discover which mailbox holds the primary copy of the hierarchy is to check with EAC. However,
you can also look at the organization configuration. For example:
[PS] C:\> Get-OrganizationConfig | Select RootPublicFolderMailbox

RootPublicFolderMailbox
-----------------------
7297de53-15d8-4bbc-9b98-1dde1022ff9b

The output from this command is a GUID, a unique identifier that Exchange Online can use to look up its
configuration data to locate the primary public folder mailbox by matching the GUID against the
ExchangeGUID stored for each of the public folder mailboxes that exist in the tenant. You can do the same
thing by using the Get-Mailbox cmdlet to retrieve a list of public folder mailboxes and then checking the GUID
returned from the organization configuration against the value of the ExchangeGUID for each mailbox. As you
can see from the results shown below, we know that the "PF Mailbox" is the primary public folder mailbox
because the (bolded) GUID reported for "PF Mailbox" matches the value reported for RootPublicFolderMailbox
by the Get-OrganizationConfig cmdlet. Simple!
[PS] C:\> Get-Mailbox –PublicFolder | Select Name, ExchangeGUID

Name ExchangeGuid
---- ------------
PF Mailbox 7297de53-15d8-4bbc-9b98-1dde1022ff9b
PF Mailbox 1 1a120b51-efff-4ef2-805a-8f7e90d0342d
PF Mailbox2 1c60d391-da63-4bc1-ba66-46fa0c4afb42
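Rather than comparing GUIDs by eye, you can let PowerShell do the match. This is a quick sketch, assuming the properties return the GUID values shown above:
[PS] C:\> $Root = (Get-OrganizationConfig).RootPublicFolderMailbox
[PS] C:\> Get-Mailbox -PublicFolder | Where-Object {$_.ExchangeGuid.ToString() -eq $Root.ToString()} | Select-Object Name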

Hierarchy Synchronization
A synchronization process ensures that all the updates made to the primary copy of the public folder
hierarchy are replicated to the secondary copies. This process happens automatically, and a tenant
administrator doesn't have to do anything to ensure that hierarchy updates are performed. The
synchronization process used to update copies of the public folder hierarchy is unique to public folders and
only applies to the hierarchy. Public folders and the data contained in public folders differ across public folder
mailboxes. The regular replication mechanism used by the Database Availability Group is used to protect the
contents of public folder mailboxes.
Although you cannot control the synchronization process, you can force Exchange Online to synchronize a
specific public folder mailbox by running the Update-PublicFolderMailbox cmdlet. As we can see here, the final
output tells us that synchronization with the primary hierarchy is complete.
[PS] C:\> Update-PublicFolderMailbox –Identity PFMailbox2 –FullSync –InvokeSynchronizer

RunspaceId Message
---------- -------
dfdb12ee-dfe5-4018-b452-3fc54558b79d Synchronizing folder hierarchy of mailbox "PFMailbox2" w...
dfdb12ee-dfe5-4018-b452-3fc54558b79d Sync in progress - job state is: "Queued".
dfdb12ee-dfe5-4018-b452-3fc54558b79d Sync with mailbox that contains primary hierarchy is com...

User Connections to Public Folders


Users connect to a public folder mailbox to access a copy of the public folder hierarchy. The public folder
mailbox for a user mailbox (or any other type of mailbox, including shared and group mailboxes) is selected
at random from the set of public folder mailboxes that exist in the tenant when the mailbox is created. Over
time, it is possible that the distribution of users across available public folder mailboxes becomes skewed and
you might want to rebalance the load. And in large public folder deployments (more than 10,000 folders), you
should move user connections away from the public folder mailbox that holds the primary copy of the
hierarchy so that this mailbox is dedicated to hierarchy updates.
You can control the assignment of a public folder mailbox by running the Set-Mailbox cmdlet to update the
DefaultPublicFolderMailbox property for a user mailbox. For example:
[PS] C:\> Set-Mailbox –Identity JSmith –DefaultPublicFolderMailbox "PF Mailbox 2"

This technique is commonly used on-premises where administrators have much better awareness of where
public folder data is held and how close that data is (in terms of network paths) to its users. However, the
same conditions don't exist inside Exchange Online and a case can be made that you should not attempt to
assign a default public folder mailbox to a user mailbox as this might affect the way that Microsoft
automatically balances public folder load within Exchange Online by splitting large public folder mailboxes up
after they reach a certain size. When this happens, a user might be left in a position where all their favorite
folders are in a different public folder mailbox, yet they are still configured to use the same default public
folder mailbox. All in all, it's best to leave automatic management to do its work and not attempt to second-
guess load balancing. The only time when you might need to explicitly define a default public folder mailbox
for a user mailbox is when you test the results of a migration, as explained later.
To search for mailboxes that connect to the public folder mailbox holding the primary mailbox and move
them to another public folder mailbox, you can use a command like the one shown below. Of course, this
command moves all mailboxes to another public folder mailbox. If you have multiple public folder mailboxes
in use (excluding the one holding the primary hierarchy), it would be best to distribute the users across the set
of available public folder mailboxes:
[PS] C:\> Get-Mailbox | Where-Object {$_.DefaultPublicFolderMailbox –eq "PF Mailbox" } |
Set-Mailbox –DefaultPublicFolderMailbox "PF Mailbox 1"
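If several secondary public folder mailboxes exist, a sketch like this spreads the affected user mailboxes across them in round-robin fashion (the mailbox names are the example names used above):
[PS] C:\> $Targets = @(Get-Mailbox -PublicFolder | Where-Object {$_.Name -ne "PF Mailbox"})
[PS] C:\> $Users = Get-Mailbox | Where-Object {$_.DefaultPublicFolderMailbox -eq "PF Mailbox"}
[PS] C:\> $i = 0; ForEach ($User in $Users) {
   Set-Mailbox -Identity $User.Alias -DefaultPublicFolderMailbox $Targets[$i % $Targets.Count].Name; $i++ }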

You can stop Exchange Online from selecting a public folder mailbox as the default connection point for user
mailboxes by excluding it from serving the hierarchy. In this case the mailbox becomes a simple repository for
public folder data. A copy of the hierarchy still exists in the mailbox, but it is not used by clients. An example
of how to stop a public folder mailbox serving the hierarchy is shown below:
[PS] C:\> Set-Mailbox –Identity 'PF Mailbox 1' –IsExcludedFromServingHierarchy $True
-PublicFolder
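To review which public folder mailboxes currently serve the hierarchy and which are excluded, check the property set by the command above:
[PS] C:\> Get-Mailbox -PublicFolder | Format-Table Name, IsExcludedFromServingHierarchy -AutoSize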

Public Folder Scalability


An Office 365 tenant can support up to 1,000 public folder mailboxes, but only 100 of the mailboxes can serve
the public folder hierarchy to clients. Each of those mailboxes can support connections from up to 2,000
active users, meaning that the maximum supported connected population is 200,000 users. This is enough for
most organizations but might become an issue in the largest Office 365 tenants.
When Outlook desktop connects to Exchange, AutoDiscover returns a list of resources to which the mailbox
can connect. Among these resources is a public folder mailbox. If you want to reduce the overall load on
public folders, you can do so by removing public folders from the view of mailboxes that don’t need to access
them. This can be controlled at an organizational or mailbox level.
By default, every mailbox in a tenant can access public folders, so the PublicFolderShowClientControl setting in
the organization configuration is $False. In other words, you’re not interested in controlling public folder
access on a per-mailbox level. If you update the configuration and change the setting to $True, AutoDiscover
checks the PublicFolderClientAccess setting for the mailbox to know if it should include public folder
information in the data returned to Outlook. If the setting is $True, AutoDiscover returns public folder
information. If $False, it does not.
To change the setting for a mailbox, run the Set-CASMailbox cmdlet. For example:
[PS] C:\> Set-CASMailbox -Identity James.Ryan -PublicFolderClientAccess $True

To set the organization configuration, run the Set-OrganizationConfig cmdlet. For example:
[PS] C:\> Set-OrganizationConfig -PublicFolderShowClientControl $True

After changing the setting, it takes a little time for cached information to clear and the new setting to be
effective. You should expect to see public folders appear or disappear (depending on the setting) from
Outlook within an hour of an update. Because these settings control how AutoDiscover behaves, it is not
dependent on any specific version of Outlook. However, the settings do not affect OWA as it does not use
AutoDiscover to retrieve resource information.
The default value for PublicFolderClientAccess is $False, so it is not a good idea to update the organization
configuration and set PublicFolderShowClientControl to $True unless you first update the mailboxes that you
want to continue accessing public folders. For instance, if you decide that only users in the U.K. should use public folders, you can run some PowerShell to find those mailboxes and update the setting:
[PS] C:\> Get-Mailbox -RecipientTypeDetails UserMailbox -Filter {UsageLocation -eq "United Kingdom"
} | Set-CASMailbox -PublicFolderClientAccess $True
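Later, to check which mailboxes have public folder access enabled, you can filter on the same setting. The filter here is applied client-side, which can be slow in large tenants:
[PS] C:\> Get-CASMailbox -ResultSize Unlimited | Where-Object {$_.PublicFolderClientAccess -eq $True} | Select-Object Name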

Public Folder Mailbox Size


Exchange Online proactively manages the size of public folder mailboxes. An auto-split feature, which is not
available on-premises, is invoked automatically whenever a public folder mailbox is roughly half-way to its 50
GB mailbox quota. Auto-split means that a new public folder mailbox is created, and the data held in the
original mailbox is divided across the two. This is a useful way of keeping public folder mailboxes under
control, but it shouldn’t happen too often if enough public folder mailboxes are created by administrators.
After all, a 50 GB quota is more than adequate to hold over one million 50 KB items – and that’s a lot of user
posts to public folders.
Overall, Exchange Online restricts a tenant to one thousand public folder mailboxes. A public folder mailbox
can be a maximum of 50 GB, so the total storage assigned to public folders can extend to 50 TB. Given that
the largest known on-premises implementations are in the order of 10 TB, more than enough headroom
exists for growth.
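To keep an eye on how close public folder mailboxes are getting to the point where auto-split kicks in, check their current sizes. For example:
[PS] C:\> Get-Mailbox -PublicFolder | Get-MailboxStatistics | Format-Table DisplayName, TotalItemSize, ItemCount -AutoSize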

Public Folder Storage


Along with copies of the public folder hierarchy, public folder mailboxes also store public folders and their
content. The EAC does not give you any control over which mailbox is used to store a new public folder.
However, except in cases when relatively few public folders are in use (less than a few hundred), it is usually a
good thing to keep the primary public folder mailbox clear of public folders and leave it to the task of
maintaining the hierarchy. For this reason, it is a good idea to create new public folders using PowerShell and
assign them to mailboxes that hold secondary copies of the hierarchy. For instance:
[PS] C:\> New-PublicFolder –Name "Calais" –Path "\France" –Mailbox PFMailbox2

Unlike on-premises Exchange 2013/2016, you cannot use the New-PublicFolderMoveRequest cmdlet to move
folders from one public folder mailbox to another. Likewise, you cannot redirect the creation of new content
for a folder to a different mailbox as you can on-premises by running the Set-PublicFolder –
OverrideContentMailbox command. These restrictions are in place because Microsoft takes care of the
management of public folder data. For these reasons it is absolutely critical that you ensure that public folders
are created in the right mailbox.
To discover what public folders are stored in a mailbox, use the Get-PublicFolder cmdlet as shown below:
[PS] C:\> Get-PublicFolder –Mailbox PFMailbox1 –Recurse –ResidentFolders

Name                           Parent Path
----                           -----------
IPM_SUBTREE                    \
Exchange MVP Private List (PF) \Email Lists
Yammer                         \Email Lists
Exchange 2013                  \
Flayosc                        \
France                         \
Toulon                         \France

The Get-PublicFolderStatistics cmdlet provides a different view of what’s in a public folder mailbox.
[PS] C:\> Get-PublicFolderStatistics –Mailbox PfMailbox2

Name                      ItemCount LastModificationTime
----                      --------- --------------------
IPM_SUBTREE                       0 03/01/2015 20:29:29
Email Lists                       0 21/07/2014 13:44:11
Exchange MVP Private List      3185 21/07/2014 13:46:04
Yammer                            0 04/01/2015 17:34:32
Exchange 2013                     2 28/11/2014 15:29:41
Flayosc                           5 03/01/2015 20:50:37
France                            1 04/07/2014 15:59:12
Calais                            0 04/01/2015 18:03:15
Paris                             0 04/01/2015 17:44:04
Toulon                            0 04/01/2015 18:05:41

To see statistics for all public folder mailboxes, you cannot pipe the results of Get-Mailbox –PublicFolder to
Get-PublicFolderStatistics because the identity returned by Get-Mailbox isn't the input value needed by Get-
PublicFolderStatistics (remember the discussion about software disguising issues like this?). In any case, you
can do the same thing by using Get-PublicFolder. Note the use of the –Recurse switch to force Exchange
Online to traverse the entire public folder hierarchy.
[PS] C:\> Get-PublicFolder –Recurse | Get-PublicFolderStatistics

This is interesting, but it's even better when we know what mailboxes are storing the data. We can tweak our
command as follows:
[PS] C:\> Get-PublicFolder –Recurse | Get-PublicFolderStatistics | Sort MailboxOwnerID |
Format-Table Name, ItemCount, MailboxOwnerID –AutoSize

Name                      ItemCount MailboxOwnerId
----                      --------- --------------
France                            1 PF Mailbox
Flayosc                           5 PF Mailbox
French Contacts                   1 PF Mailbox
Toulon                            0 PF Mailbox
French Public Holidays            1 PF Mailbox
Exchange MVP Private List      3949 PF Mailbox
Email Lists                       0 PF Mailbox
IPM_SUBTREE                       0 PF Mailbox
Exchange 2013                     2 PF Mailbox
Yammer                            0 PF Mailbox
Paris                             0 PF Mailbox 1
Calais                            0 PF Mailbox 2

Warning! Although this command returns some interesting data and tells us that the public folder mailbox "PF
Mailbox" is the one that is currently overloaded (it’s also the primary public folder mailbox), commands like
this are great examples of ones that work so nicely in small environments (like demos) and can be horrible
when faced with the scaled-up realities of production systems. In other words, you might need to call for a
beverage of choice when waiting for this command to complete if more than a few thousand folders need to
be processed.

Mail-enabling Public Folders


As in all previous versions of Exchange, new public folders are not automatically mail-enabled. If you want
users to be able to email contributions to a public folder, you must first mail-enable the folder through the
option available in the EAC or by running the Enable-MailPublicFolder cmdlet. You can also set the
HiddenFromAddressListsEnabled property for the folder to $True if you don’t want the newly mail-enabled
folder to be included in the GAL. For example:
[PS] C:\> Enable-MailPublicFolder –Identity "\France\Toulon"
–HiddenFromAddressListsEnabled $True

Mail-enabling a public folder assigns it a number of properties to allow Exchange Online to route email to the
public folder. This command reveals the set of mail-enabled public folders in a tenant. Only one of the public
folders is hidden from the GAL.
[PS] C:\> Get-MailPublicFolder | Format-Table Identity, PrimarySmtpAddress,
HiddenFromAddressListsEnabled –AutoSize

Identity                  PrimarySmtpAddress                  HiddenFromAddressListsEnabled
--------                  ------------------                  -----------------------------
Flayosc 18922635          Flayosc1@office365itpros.com        False
Exchange MVP Private List ExchangeMVPList@office365itpros.com False
Exchange 2013             Exchange2013@office365itpros.com    False
Calais                    Calais@office365itpros.com          False
Toulon                    Toulon@office365itpros.com          True

One of the slightly irritating features of Exchange Online is the control exerted by email address policies that
cannot be overridden by a tenant administrator. In this instance, the email address assigned to a public folder
when it is mail-enabled is derived from the primary domain for the tenant. Some companies have vanity email
domains (like Office365ITPros.com) that they want to use for communication with external clients, but you
cannot assign SMTP addresses for the vanity domains to mail-enabled public folders. Any attempt to do so
will be resisted with a firm admonition that the object is under the control of an email address policy (that you
can't change) and so the address can't be updated.

Public Folder Moderation


Although the EAC user interface does not expose the necessary properties needed to create a moderated
mail-enabled public folder, the necessary steps can be accomplished with PowerShell. Moderated public
folders are an old collaborative mechanism, but they might still be needed for some purposes. Moderation
happens in the same way as for other mail-enabled recipients. Any item mailed to the folder is intercepted
and sent to a moderator for a decision to be made whether the item should be posted. The moderator can
accept or reject the post.
You can follow these steps to set up moderation for a public folder. First, ensure that the public folder is mail-
enabled. Next, ensure that the people who will post to the public folder have the necessary access rights to do
so. For example, this command assigns permission to create items in the specified folder to members of the
“Finance Planners” distribution group:
[PS] C:\> Add-PublicFolderClientPermission –Identity "\Departments\Finance\Budget 2016"
–User "Finance Planners" –AccessRights CreateItems

When you're happy that the right people have access to the folder, go ahead and populate the moderation
properties for the public folder.
[PS] C:\> Set-MailPublicFolder –Identity "\Departments\Finance\Budget 2016"
–ModeratedBy "JSmith", "Bowens" –ModerationEnabled $True –SendModerationNotifications Always

Messages posted to the public folder after moderation becomes effective are redirected to the moderators.
Note that direct posting to the public folder is not controlled by moderation. Users will be able to add the
public folder to their folder favorites in Outlook or OWA and then post items into it.
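To review which mail-enabled public folders have moderation in place and who the moderators are, a quick check of the same properties works:
[PS] C:\> Get-MailPublicFolder | Format-Table Name, ModerationEnabled, ModeratedBy -AutoSize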

Public Folder Clients


Outlook and OWA are the only clients that support public folders. No mobile client supports public folders
because the necessary interfaces are not included in protocols such as ActiveSync. Outlook gives the most
comprehensive support for public folders. You can add and manage public folders, update folder permissions,
and include public folders in the set of "favorite" folders. Figure 7-2 shows Outlook 2013 being used to
browse the contents of a reasonably large public folder, which is used as an archive for discussions in the very
important Exchange MVP distribution group. Public folder items look and behave much like items in any other
folder with the understanding that all access is online: Outlook does not cache public folders for offline
access. This can slow down access if you need to access a public folder across a low-speed or low-quality
connection.
Outlook's normal "Mail" view is shown in Figure 7-2. If you want to browse the public folder hierarchy, you
have to switch into the "Folders" view (click […] in the bottom menu bar). This then allows full access to the
set of public folders determined by the permissions assigned to the user.

Figure 7-2: Accessing Office 365 public folders from Outlook 2013
Outlook also allows users to manipulate permissions for folders that they own. Figure 7-3 shows how
permissions are set through Outlook. In this case, we see that the default permission is "Author", meaning that
any user can access and add items to the folder.

Figure 7-3: Using Outlook to assign public folder permissions


If your account holds the "Create subfolders" permission for a public folder, you can use Outlook to create
new folders under this root. In Figure 7-4, a new public folder is being created under the "France" folder. The
default is to create folders that hold "Mail and Post items", which means that they are used for email.
However, the drop-down list allows the user to select other types of items, of which calendar, contact, and
tasks are the most common. Creating folders to hold these types of items can only be done through Outlook
as the EAC does not support the creation of public folders holding anything other than mail items. Given that
Exchange Online supports more modern ways to share information (such as group mailboxes), support for
public folders that hold calendar and contact information is for backwards compatibility only.

Figure 7-4: Using Outlook to create a new public folder


Even though the future might not see many new public folders used for purposes like shared calendars, there
are many legacy public folders that will be migrated to Office 365 that hold this kind of information because
using public folders to hold non-mail items is a common practice in Exchange deployments.

Figure 7-5: Adding a public folder to Outlook favorites


OWA includes a picker (Figure 7-5) to allow public folders to be added to the set of folder favorites. However,
OWA is designed to only work with mail folders, so the picker doesn’t allow you to select public folders that
hold either calendar or contact items. Public folders holding these types of items can be added as a favorite
using Outlook 2016, but they won’t be listed as a favorite when you work with email. Instead, both Outlook
and OWA show public folder calendars under “Other Calendars” and public folder contacts under “Other
Contacts” in the Calendar and People sections respectively.

Public Folders and Compliance
Public folders do not enjoy the same breadth of support for the compliance features available in Office 365 as
mailboxes do. Table 7-1 lists the major areas of compliance features available in Office 365 and the degree of
support available for these features in public folders. As you can see, apart from the ability to include public
folders in content searches, the picture is not particularly good. Note that you cannot specify individual public
folders to be a search source – all public folders are included. In addition, only public folders that exist in Office 365 can be a search source.
Compliance feature                          Public folder support
Mailbox or Microsoft 365 Retention policies Unsupported
Content searches                            The contents of public folders are indexed and fully discoverable.
                                            Folders can be added as a source for a content search and a hold
                                            can be placed on the results. If an attempt is made to alter content
                                            in a public folder that is on hold, details of the change are kept in
                                            the DiscoveryHolds folder in the public folder mailbox and remain
                                            indexed and discoverable.
eDiscovery cases                            Public folders can be included in the searches and holds used by
                                            eDiscovery cases.
Data Loss Prevention policies               Unsupported
Administrative auditing (unified auditing)  Public folder administrative and user events are not recorded in
                                            the Office 365 audit log.
Table 7-1: Public folder support for Office 365 compliance features
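For example, to include all public folders in a content search with PowerShell, the PublicFolderLocation parameter accepts the value All (run from a Security & Compliance PowerShell session; the search name and query here are examples only):
[PS] C:\> New-ComplianceSearch -Name "PF Budget Check" -PublicFolderLocation All -ContentMatchQuery "Budget 2016"
[PS] C:\> Start-ComplianceSearch -Identity "PF Budget Check"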

Discovery mailboxes
Discovery mailboxes are special repositories to hold items retrieved through Exchange eDiscovery searches. In
many respects, they function like a regular shared mailbox, but often receive special attention because of their
involvement in confidential discovery operations. Microsoft is replacing Exchange eDiscovery searches with
Office 365 content searches and eDiscovery cases.

Inbox Rules
Exchange Online supports both server-based rules and client-side rules. Server-based rules, or Inbox Rules as
they are referred to in the documentation, are kept in the user mailbox and are accessible to all clients. Inbox
rules run whether a client is connected to the mailbox or not and have priority over client-side rules (which are
specific to a client) because they are processed as part of the delivery pipeline. Inbox rules also have priority
over server-based processing like the Focused Inbox and offer the significant benefit of being able to process
inbound messages no matter what client is used. On the other hand, client-side rule implementations like the
one found in Outlook differ across client types and are highly specific to that client. In other words, you
cannot take a rule created on one client and expect to be able to use it with another.
You can also create Inbox rules with Outlook if the actions performed by the rule can be executed on the
server. For instance, a rule that calls for an item to be moved to a folder in a PST cannot be processed by
Exchange Online because the server does not usually have access to PSTs. In all cases, client-side rules can
only process whatever messages are left in the inbound mail stream after the transport system has finished its processing and Outlook is running.
The processing steps that can be performed with Inbox rules include:
• Delete an inbound message.
• Move or copy the message to a folder.
• Mark the message with a category, importance level, or as read.
• Redirect or forward the message to another email address.
The conditions and exceptions that can be incorporated into an Inbox rule include if the message:
• Was sent by a specific person or addressed to a specific user.
• Has specific words in the subject, body, sender’s address, recipient’s address, or message header.
• Includes the recipient’s name in the To: or CC: box or is the only recipient listed.
• Is marked with a specific importance or sensitivity level.
• Has an attachment, is of a specific type, is classified, or flagged as something.
• Is a specific size.
• Is received within a specific date range.
Users create and edit Inbox rules by selecting a message and using the “Create rule” option in OWA.
Alternatively, they can manage rules through the Automatic processing section of OWA options (Figure 7-6).

Figure 7-6: Creating a server-side rule with OWA


Both approaches end up using the same wizard to build the rule. OWA divides rules into Inbox rules, which
check inbound mail and apply processing based on the characteristics of the messages, and sweep rules,
which run in the background to move messages from a mailbox into the Deleted Items folder. Only the
mailbox owner can create sweep rules. To do this, select a message using OWA and click Sweep in the
navigation bar (Figure 7-7). When saved, you see the sweep rule in OWA options under Inbox rules. However,
you cannot change sweep rules. To change a sweep rule, you must remove and recreate the rule. In addition,
sweep rules cannot be accessed with Outlook.

Figure 7-7: Creating a sweep rule

Using PowerShell to Manage Rules


If an administrator needs to set up rules for a user, they can do so by logging into the mailbox and managing rules through OWA options. Administrators can also use PowerShell to manage rules on behalf of users with the *-InboxRule cmdlet set, including the ability to create new rules. For example, this command
tells us whether any rules exist for a mailbox and lists the rule name, the priority order, and whether rule
processing stops when a rule fires:
[PS] C:\> Get-InboxRule –Mailbox "Kim Akers" | Format-Table Name, Priority, StopProcessingRules
–AutoSize

Given the complexity of the conditions and exceptions that can be defined for a rule, the easiest approach to
creating a rule is to use the OWA interface. For instance, you could create the rule for your own mailbox, make
sure that the rule works as expected, and then replicate it for a user mailbox. When the rule is ready, you can
see all its properties with a command like this:
[PS] C:\> Get-InboxRule –Mailbox "Kim Akers" –Identity "Remove Rubbish" | Format-List

You need to note all the conditions and exceptions for the rule and then use them as the basis for the
parameter values provided to the New-InboxRule cmdlet. For example, this rule looks for inbound messages
that have “Hello World” in their subject and forwards the messages to a different SMTP address, unless the
message is marked with high importance. In this case, we assign a name to the rule to show that this is an
administrator-created rule. Using a naming convention like this is not mandatory, but it is a good practice
because, in the case of disputes, it helps to isolate user-created rules from administrator-created rules. It is
always a good idea to secure agreement with a user before you create a rule on their behalf – or at least to
tell them that a rule exists after creation.
[PS] C:\> New-InboxRule –Name "Process Email (Created by Admin)" –Mailbox "Kim Akers"
–StopProcessingRules $True –SubjectContainsWords "Hello World" –ExceptIfWithImportance High
–ForwardTo "Jill.Smith@Office365ITPros.com"

When creating the new rule, Exchange Online places it at the top of the rule priority order by assigning it a
priority value of 1. You might not want this if other rules should be processed first. In that case, you can change the priority order with the Set-InboxRule cmdlet. Note that you must always tell Exchange Online which mailbox the rule belongs to.
[PS] C:\> Set-InboxRule –Mailbox "Kim Akers" –Identity "Process Email (Created by Admin)"
–Priority 3

Outlook can also manage user rules. Because it supports actions and conditions that take advantage of the client’s capabilities, Outlook offers a more comprehensive set of rule conditions and actions than OWA does. For this reason, Outlook can be a better choice for rule creation. For instance, you can create an Outlook
rule to have Exchange Online send a template message in response to inbound mail (the “have server reply
with a specific message action”). This feature is not supported by the *-InboxRule cmdlet set. Outlook rules
that use advanced options cannot be edited through OWA.
The standard quota for the storage of inbox rules in a mailbox is 64 KB. This is usually enough, especially as
anyone who uses the Focused Inbox can probably remove many rules that filter unwanted messages. Once
the quota is reached, a user won't be able to create new rules. You can increase the rule quota to a maximum
of 256 KB by running the Set-Mailbox cmdlet. For example:
[PS] C:\> Set-Mailbox –Identity 'Kim Akers' –RulesQuota 256KB

PowerShell for Sweep Rules


You can retrieve a list of sweep rules for a mailbox with the Get-SweepRule cmdlet. For example, this
command returns a list of rules and tells you whether each rule is enabled and which sender the rule
processes.
[PS] C:\> Get-SweepRule -Mailbox TRedmond | Format-Table Name, Enabled, Sender

You can also create and remove sweep rules. However, an administrator can only create a sweep rule for the
mailbox that they own. Here is an example of using the New-SweepRule cmdlet to create a new rule:
[PS] C:\> New-SweepRule -Name "Messages from Microsoft Tech Community" -Provider Exchange16
-KeepForDays 7 -SourceFolder "Kim Akers:\Inbox" -Mailbox "Kim Akers"
-DestinationFolder "Kim Akers:\Deleted Items" -Sender "notift@mstechcommunity.onmicrosoft.com" -Enabled $True

On the other hand, an administrator can remove a sweep rule from any mailbox if they know the identity of the rule, which can be retrieved with Get-SweepRule. The identity for a rule is composed of the mailbox name and a
unique string. For example:
TRedmond\CzzdSOaFuESmQGBLtw/yZw==
We can remove the rule as follows:
[PS] C:\> Remove-SweepRule -Identity TRedmond\CzzdSOaFuESmQGBLtw/yZw==

Calendar Sharing
Users often ask for the ability to share their calendars with external recipients. This is possible
with Office 365, but only after you enable calendar sharing through the Office 365 Admin Center (Figure 7-8).
To control calendar sharing, go to Settings, then Services & Add-ins, and then Calendar.

Figure 7-8: The choices to control how users share calendars
Sharing can take three forms. Users control how much data is revealed when they issue a sharing invitation to
another person. The options are:
• View when I am busy: The calendar shows slots to indicate when the person is free, busy, or has a
tentative appointment. This is the same free/busy data used to establish whether a potential attendee
is available for a meeting.
• Titles and locations: The calendar shows booked time slots with some information – time, subject, and
location.
• All details: All the information held in calendar events is available for view.
After a tenant allows calendar sharing, users can click the Share button in the OWA calendar or Share
Calendar in Outlook to generate a sharing invitation to people within the tenant or in external organizations.
Within the tenant, individual users can assign delegate permission to other users to allow them to process
calendar events on their behalf. If you assign someone to be a delegate, you can restrict access to calendar
events marked as private.
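Delegate access can also be granted with PowerShell. Here is a minimal sketch using the Add-MailboxFolderPermission cmdlet; the mailbox and delegate names are placeholders, and the SharingPermissionFlags parameter (which requires the Editor access right) controls whether the delegate can see events marked as private – omit CanViewPrivateItems to keep those events hidden:
[PS] C:\> Add-MailboxFolderPermission -Identity "Kim Akers:\Calendar" -User "Jill Smith" -AccessRights Editor
-SharingPermissionFlags Delegate, CanViewPrivateItems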
Sometimes administrators like to know which users are sharing their calendars. This PowerShell command
generates a list of all calendar sharing, internal and external, and is one way to find out what is happening
inside the tenant.
[PS] C:\> ((Get-Mailbox –RecipientTypeDetails UserMailbox).Alias) | % { Get-MailboxFolderPermission
–Identity ($_+":\Calendar") } | Where-Object { $_.User –notlike "Anonymous" } | Format-Table
Identity, FolderName, User –AutoSize

Identity                  FolderName User
--------                  ---------- ----
Deirdre Redmond:\Calendar Calendar   Tony Redmond
TRedmond:\Calendar        Calendar   Deirdre Redmond
TRedmond:\Calendar        Calendar   ExchangePublishedUser.Someone@gmail.com

Any entry under “User” that starts with “ExchangePublishedUser” indicates that a calendar is shared externally
with the email address that follows.
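To narrow the report down to external sharing only, a small variation of the command shown above filters for these entries:
[PS] C:\> ((Get-Mailbox -RecipientTypeDetails UserMailbox).Alias) | % { Get-MailboxFolderPermission
-Identity ($_+":\Calendar") } | Where-Object { $_.User -like "ExchangePublishedUser*" } | Format-Table
Identity, User -AutoSize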
Strictly speaking, shared mailboxes do not support calendar sharing. However, you can set up calendar
sharing for a shared mailbox with these steps:
• Grant an account Full Access to the shared mailbox (a PowerShell example appears after this list).
• Use OWA to log into the shared mailbox with the account.
• Set up sharing using the Calendar as normal.
• Check that everything works.
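For the first step, granting Full Access can be done with PowerShell. A minimal sketch (the shared mailbox and account names are examples):
[PS] C:\> Add-MailboxPermission -Identity "Customer Service" -User "Kim Akers" -AccessRights FullAccess -AutoMapping $False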
A detailed discussion of how to convert a mailbox type from shared to regular and back again appears in the next chapter.

The Default or My Organization calendar permission: The Default permission for calendars sets the
level of access that other users in the tenant or other federated organizations have to free and busy data.
Typically, this information is used by the Scheduling Assistant when it displays the set of available slots for
a requested meeting. The normal setting is to allow free and busy slots to be displayed, in which case the assistant
can show when a person is available, but it can be changed to display more information, such as the titles
and locations for events. The default permission is also used if a user opens another person’s calendar. As
part of the new sharing model, the default permission is renamed “My Organization” to clarify that this is
how the permission is used. In addition, Microsoft is removing the ability of clients to remove this
permission from calendars.

Allowing Cross-Tenant Free/Busy Sharing


Being able to share calendars is one way to approach the need to enable cross-tenant scheduling. You can
also create a relationship with other Office 365 tenants to allow users to browse free/busy information for
people in the other tenant when they schedule meetings. To allow bi-directional access, you need to create an
organization relationship in both tenants. You can do this through the Organization section of the EAC.
Navigate to Sharing and create a new relationship under Organization Sharing. You can then enter the
domains you want to share with, what degree of calendar free/busy information you want to share, and
whether to share for all users in the tenant or just those who are members of a specified security group.
Remember to have the other domain allow your users to access free/busy information for their users in return.
You can also use PowerShell to create an organization relationship. This command is run in a tenant to create an organization relationship with another tenant called “Domain1.” The free and busy access level is “Limited Details,” meaning that users can see the name of a meeting, its location, and its start and end dates, but they cannot see the attendees. Information about meetings marked as private stays confidential.
[PS] C:\> Get-FederationInformation -DomainName Domain1.onmicrosoft.com |
New-OrganizationRelationship -Name FreeBusyDomain1 -Enabled $True -FreeBusyAccessEnabled $True
-FreeBusyAccessLevel "LimitedDetails" -FreeBusyAccessScope $Null

To complete the process, run the same command inside the other tenant, changing the domain name to point
to the first tenant and the name of the relationship appropriately. To test that everything works, create a new
meeting and use the scheduling assistant to browse the free and busy information for a user in the other
tenant. If you can see some details, you know that the relationship works. Alternatively, use the Test-
OrganizationRelationship cmdlet to verify that the configuration with another organization is correct.
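For example, this sketch tests the relationship created above on behalf of a specific user (the user principal name is a placeholder):
[PS] C:\> Test-OrganizationRelationship -Identity "FreeBusyDomain1" -UserIdentity Kim.Akers@office365itpros.com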

Resource Mailboxes
Exchange Online supports both room and equipment mailboxes, collectively referred to as resource mailboxes. A
room mailbox is a mailbox created to allow users to reserve a room for a meeting. In other words, the
calendar of the room mailbox is used to reserve a time slot for the meeting to take place in the room. An
equipment mailbox is very similar except that the mailbox represents a piece of equipment such as a TV set,
projector, or other moveable device. Experience indicates that room mailboxes are used more than equipment
mailboxes. Apart from anything else, room mailboxes can be used to build room lists (see the section in the
Groups chapter).

Resource mailboxes are valid Office 365 users, but they don’t need to be licensed because no one logs into
their user accounts. They can be created from the EAC or PowerShell, or indeed from the Office 365 Admin
Center under "Meeting Rooms". Here’s how to create the two types of resource mailbox:
[PS] C:\> New-Mailbox –Name "Sonoma Conference Room" -Room
[PS] C:\> New-Mailbox –Name "TV Set" –Equipment

After we create some resource mailboxes, the list of resource mailboxes that exist in a tenant can be
discovered with this command:
[PS] C:\> Get-Mailbox -RecipientTypeDetails RoomMailbox, EquipmentMailbox |
Format-Table DisplayName, RecipientTypeDetails

Resource mailboxes operate on the basis that they have a calendar and can be scheduled for meetings just
like regular user mailboxes. Meeting requests sent to resource mailboxes are processed by the Resource
Booking Assistant, a background process responsible for arbitrating requests to schedule resources. The
assistant works with the Exchange Calendar Attendant, another background process that is responsible for
coordinating responses from meeting attendees, to ensure that the resource calendar is correctly updated
and accurate. Direction as to how inbound meeting requests should be processed is given to the Resource
Booking Assistant by updating the calendar processing properties of resource mailboxes. You can view these
options through the EAC (Figure 7-9) or use the Get-CalendarProcessing cmdlet to view the properties and the
Set-CalendarProcessing cmdlet to set them.

Figure 7-9: Viewing the calendar processing options for a room mailbox
For example:
[PS] C:\> Get-CalendarProcessing –Identity "Sonoma Conference Room" | Format-List

AutomateProcessing : AutoUpdate
AllowConflicts : False
BookingWindowInDays : 180
MaximumDurationInMinutes : 1440
AllowRecurringMeetings : True
EnforceSchedulingHorizon : True
ScheduleOnlyDuringWorkHours : False
ConflictPercentageAllowed : 0
MaximumConflictInstances : 0
ForwardRequestsToDelegates : True
DeleteAttachments : True
DeleteComments : True
RemovePrivateProperty : True
DeleteSubject : True
AddOrganizerToSubject : True
DeleteNonCalendarItems : True
TentativePendingApproval : True
EnableResponseDetails : True
OrganizerInfo : True
ResourceDelegates : {}
RequestOutOfPolicy : {}
AllRequestOutOfPolicy : False
BookInPolicy : {}
AllBookInPolicy : True
RequestInPolicy : {}
AllRequestInPolicy : False
AddAdditionalResponse : False
AdditionalResponse :
RemoveOldMeetingMessages : True
AddNewRequestsTentatively : True
ProcessExternalMeetingMessages : False
RemoveForwardedMeetingNotifications: False
Identity : Sonoma Conference Room

Based on user feedback that too many options existed to control calendar processing, Microsoft simplified
matters so that the EAC does not reveal all the properties that can be set for the mailbox. However, you can
still manipulate them with PowerShell.
Table 7-2 describes the most important parameters used by Exchange Online to automate the processing of
meeting requests that arrive in a resource mailbox.
AutomateProcessing: The default value is AutoAccept, meaning that the Calendar Attendant will process inbound meeting requests and create a meeting in the calendar in the resource mailbox if the requested meeting is within policy. AutoUpdate creates a tentative meeting in the calendar, which is then processed by the Resource Booking Assistant to confirm the meeting based on the policy in force for the mailbox. None means that all meeting requests sent to the mailbox must be manually processed by a delegate.
AllowConflicts: The default value is False, meaning that conflicts are not allowed. If set to True, the ConflictPercentageAllowed parameter then governs the maximum percentage of conflicts allowed for recurring meetings. The default for this parameter is 0 (zero).
BookingWindowInDays: The default is 180, meaning that the resource can be booked up to 180 days in advance. This parameter is used in conjunction with the EnforceSchedulingHorizon parameter, which controls how recurring meetings are processed. If True (the default), an attempt to schedule a recurring meeting past the booking horizon is automatically declined. If False, the meeting is accepted but the number of recurring events is limited so that meetings will not occur after the booking horizon.
MaximumDurationInMinutes: The maximum length for a meeting. The default is 1440 minutes (24 hours), which seems excessive.
DeleteAttachments: If set to True (the default), the Resource Booking Assistant will delete any attachments included with a meeting request. Similar parameters (DeleteComments, DeleteSubject, DeleteNonCalendarItems) strip information from meeting requests and remove non-calendar items that arrive in the resource mailbox. The idea is that the calendar should only store free/busy slots to indicate when the room is available and should not be used to store non-calendar items. In addition, stripping non-essential data from calendar requests preserves the privacy of the scheduled appointments from others who can view the calendar.
AllBookInPolicy: Controls whether meeting requests from all users should be automatically approved if compliant with the policy (like the length of the meeting and the booking window). The default is True.
AllRequestInPolicy: Controls whether users must submit meeting requests for manual approval by a room delegate. The default is False. It is overridden if AllBookInPolicy is True. A similar parameter (AllRequestOutOfPolicy) controls whether users can submit meeting requests that do not comply with policy. The default for this parameter is False.
AdditionalResponse: Specifies some customized text to add to the message sent back to meeting organizers when a meeting is accepted.
ProcessExternalMeetingMessages: Controls whether meeting requests from external domains will be processed. The default is False, but this should be set to True in hybrid deployments to enable correct cross-platform processing of meeting requests.
Table 7-2: Parameters to automate room calendar processing
The default values of the calendar processing parameters are set in such a way that you should not have to
change them unless you have good reason to do so.
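If you do have a reason to change them, Set-CalendarProcessing can update several properties in a single command. As a sketch, this example (with illustrative values) shortens the booking window, caps meetings at four hours, and adds a custom response for organizers:
[PS] C:\> Set-CalendarProcessing -Identity "Sonoma Conference Room" -BookingWindowInDays 90
-MaximumDurationInMinutes 240 -AddAdditionalResponse $True
-AdditionalResponse "Bookings for this room are limited to four hours."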

Controlling Access to Rooms


Some meeting rooms are restricted and access to them needs to be tightly controlled, perhaps because the
room is in a sensitive area or its use should be reserved for specific people. The easiest way to restrict access
is to take these steps (a PowerShell example follows the list):
• Set up some delegates to have access to the room mailbox. The delegates are needed to approve
incoming meeting requests when automatic approval is not enabled for the room.
• Set the AutomateProcessing property of the calendar to “AutoUpdate”.
• Provide text in the AdditionalResponse property to tell users that restrictions are in place for the room
when they attempt to book it. You cannot change any other text in the notification message so this is
an important way to communicate that a meeting might or might not be accepted because of a
policy applying to the room.
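Assuming the delegates and the response text are known, a minimal sketch of the configuration described in these steps might look like this (the room and delegate names are examples):
[PS] C:\> Set-CalendarProcessing -Identity "Board Room" -ResourceDelegates "Kim Akers", "Jill Smith"
-AutomateProcessing AutoUpdate -AllBookInPolicy $False -AllRequestInPolicy $True -AddAdditionalResponse $True
-AdditionalResponse "This room is reserved for meetings of the corporate financial planning staff."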
The following steps occur when someone schedules a meeting in a room controlled by delegates.
• The meeting request goes to the delegates listed for the room.
• A notification message goes to the meeting organizer to tell them that a meeting request is pending
approval. The text set for the calendar mailbox in the AdditionalResponse property mentioned above
is used to inform the meeting organizer if any restrictions exist for room bookings. In this case, the
text (also sent to the calendar delegate, perhaps as a reminder to them too) tells us that the
conference room is restricted to meetings of the corporate financial planning staff.
• One of the delegates decides to accept or decline the meeting request. If the meeting is accepted, it
is inserted into the room calendar. As some of the meeting data might be confidential, the Booking
Assistant only retains the time and date of the meeting, the name of the organizer, and the number of
attendees in the item created in the room calendar.
• A notification message is sent to the meeting organizer to tell them that their request is accepted or
denied.
The situation is a little different if you enable a room to auto-accept meetings. If the room is configured to
accept meeting requests automatically, incoming meeting requests will be accepted as long as no conflict
occurs and the meeting organizer will receive a confirmation message. If a conflict happens because two or
more people want to book the room at the same time, some human intervention is required to sort out who
gets the room. If delegates are assigned to the room, the following occurs:
• A tentative meeting is created in the room calendar. Again, any confidential data is removed from the
meeting entry.
• It would be unreasonable to expect calendar delegates to stay fixated on a room calendar, forever
scanning for new meetings to pop up for approval. Exchange Online therefore sends a notification
message to calendar delegates to ask them to accept or reject the meeting request.
• Another notification will go to the meeting organizer to tell them that the booking is regarded as
tentative and is awaiting confirmation.
• A delegate processes the meeting request to confirm or reject the tentative meeting and a
notification of their decision is sent to the meeting organizer.
This goes to prove that a certain amount of testing is required to settle on a room calendar booking
configuration that matches the need of a particular organization!

Processing room requests for important people


You can automate the processing of requests to book a room even further by creating a list of users whose
requests will be automatically accepted if their requests comply with the policy. The EAC doesn’t allow you to
set up such a list, but you can do it with PowerShell. For example, this command allows any request from a
member of the Executive Committee distribution group and another specific user to be accepted and
confirmed without needing approval from a delegate. Meetings created by a delegate are also automatically
confirmed.
[PS] C:\> Set-CalendarProcessing –Identity "Sonoma Conference Room"
–BookInPolicy "Executive Committee", "Ben Owens"

Custom Room Properties


Exchange Online supports two properties to help users decide which room they would like to book. The first is
ResourceCapacity, a strictly notional property that contains a numeric value entered by an administrator that
might or might not be accurate. It certainly is not the legal capacity as set by whatever regulations govern the
location of the room. Room capacity is shown by Outlook when the All Rooms list is displayed in the address
book. However, it is not used by either Outlook or OWA when scheduling meetings. You can update the room
capacity for a room as follows:
[PS] C:\> Set-Mailbox –Identity "Las Vegas Conference Room" –ResourceCapacity 121
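
To review the capacities recorded for all the room mailboxes in a tenant, a quick sketch:
[PS] C:\> Get-Mailbox -RecipientTypeDetails RoomMailbox | Format-Table DisplayName, ResourceCapacity -AutoSize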

The other property is ResourceCustom. Originally this was intended to provide more context for those
scheduling meetings so that they knew what resources are available in different rooms. This property is not
supported by Exchange Online, nor is the associated Set-ResourceConfig cmdlet. One way around this
restriction is to create a MailTip for the room mailbox.
[PS] C:\> Set-Mailbox –Identity 'Las Vegas Conference Room'
–MailTip 'Capacity 121; contains video conferencing equipment'

Equipment Mailboxes
Equipment mailboxes are like room mailboxes in that they have a calendar that can be scheduled as part of a
meeting. At one point in time it must have seemed like a good idea to provide a facility to add mobile
equipment like a projector to meeting requests and allow booking delegates to arbitrate incoming requests
for their use. Unfortunately, practice demonstrates that equipment mailboxes are not used extensively.
You can create new equipment mailboxes through the EAC or PowerShell. As shown below, the PowerShell
command is very simple. Delegate access can be configured after the new equipment mailbox is created.
[PS] C:\> New-Mailbox –Name "VCR Recorder" –Equipment

Exchange Online Mailbox Retention Policies


Our recommendation is to use Office 365 Retention Policies whenever possible because these policies cover
Exchange Online, SharePoint Online, OneDrive for Business, Microsoft 365 Groups, Teams, and Public Folders.
However, sometimes a mailbox retention policy is a better choice because it can:
• Use a default retention tag to impose retention for every item in the mailbox that doesn’t have a
personal or folder tag. Office 365 retention policies work like default tags.
• Use a default archive tag to move items into the mailbox’s archive. Office 365 retention policies don’t
support a move to archive action.
• Use folder tags to process items in default folders, like the Inbox or Sent Items folder. Office 365
retention policies don’t recognize default folders.
• Use personal tags to select items for specific retention. Office 365 retention labels work like personal
tags.
This section was originally in the compliance chapter in the main book.

Benefits of a Mailbox Retention Policy


The need for user coaching and training to communicate the value of a good retention policy and the benefits
it brings to users is obvious. The benefits include:
• Automatic clean-up of mailbox contents: An argument can be advanced that the automatic
removal of information from a mailbox is bad but given the volume of email that users must cope
with nowadays, having old items removed from a mailbox after a reasonable period seems like
goodness. Given the very large mailboxes that are now in use, users can argue that it is a waste of
time to go through their mailbox to clean it up on a regular basis. Automatic clean-up is an example
of how intelligent technology helps to save time for users.
• Ability to control retention: Users can control how long items remain in mailboxes by applying
personal tags to folders and individual items. Including personal tags that prevent items ever being
removed or archived from folders is popular with users, even if they never actually use the tags.
• Ability to achieve the desired effect of corporate retention policies: Retention policies can help
users to keep information that is necessary to follow regulatory or legal guidelines. On the other
hand, retention policies can also be employed to remove information from mailboxes that would
otherwise be embarrassing or unhelpful under the harsh spotlight of legal discovery. Personal tags
allow users to clearly mark items needed for audit or other regulatory purposes and default tags can
be deployed to remove non-essential items that the company considers should not be retained.
Of course, operational needs differ enormously from company to company and a general-purpose retention
framework such as that in Exchange Online can only deliver value if work is done to understand the business
requirements and then adapt the available functionality in the most effective way to meet those requirements.
Gaps might exist, but it is always surprising quite how much can be done with off-the-shelf software.

Different Types of Retention Tags


A retention policy is composed of a set of retention tags. Exchange Online supports three distinct types of
retention tags:
• Retention policy tags (sometimes called “folder tags”) apply to the default folders found in every mailbox. You only assign retention policy tags to the default folders that you want MFA to process.
The set of default folders are:
o Inbox, Sent Items, Deleted Items, Calendar, Contacts, Tasks, RSS Feeds, Recoverable Items,
Clutter, Archive, Conversation History, Journal, Junk E-Mail, Notes, Outbox, Sync Issues
Microsoft does not support the creation of retention policy tags for non-default folders. It does not
make sense to create a folder tag for the Outbox because messages only stay here while they wait to
be transmitted by the server.
• Personal tags allow users to exercise control over specific items or non-default folders. A policy can
have many personal tags, but it is good practice to avoid user confusion by limiting the tags to
between 7 and 10 in total.
• Default tags tell MFA how to process any item that is not covered by a retention policy tag or a
personal tag. A policy can have up to three default tags, each of which serves a different purpose:
o The delete tag: Instructs MFA to move items into the Recoverable Items folder or to
permanently remove them from the mailbox.
o The archive tag: Instructs MFA to move items into the archive mailbox. This tag is only
effective if the mailbox is archive-enabled.
o The voicemail tag: Gives specific instruction as how to handle voicemail messages. The reason
for this tag is that many companies have different retention requirements for voicemail than
they do for regular messages.
When label policies publish labels for use with Exchange Online mailboxes, the labels show up in clients and
behave much like personal tags.

The meaning of retention: People who approach Exchange retention policies for the first time can be
forgiven for misunderstanding what “retention” means in this context. Retention means “keep what you
need”. There are two ways of achieving the goal. You can either make sure that you preserve what is
needed or you can remove information that is not needed. Tenants often use Exchange retention policies
to remove information after the items reach a certain age. You can also use retention policies to preserve
items by moving them from primary mailboxes to archive mailboxes where the items can be kept for a
further period. Policies can combine delete actions and archive actions to achieve the retention goals of
the organization. Unlike Office 365 retention policies, Exchange retention policies do not place holds on
content.

The Default Retention Policy


To make it easier for administrators to implement a messaging records management strategy and keep
mailboxes under control, Exchange Online includes a default retention policy called the “Default MRM Policy.”
This is the same policy used with Exchange 2013 and Exchange 2016. However, while an administrator must take action to apply an MRM policy to mailboxes in an on-premises deployment, Exchange Online
automatically applies the Default MRM Policy to every mailbox when it is created, including the mailboxes
that are moved to Exchange Online from an on-premises server.
The set of tags included in the Exchange Online version of the Default MRM Policy are a good example of the
diversity and use of tags usually found in a well-designed retention policy. Table 7-3 lists the retention tags
that are included in the Default MRM Policy.
1 Month Delete (personal tag): Items move to Recoverable Items after 30 days.
6 Month Delete (personal tag): Items move to Recoverable Items after 180 days.
1 Week Delete (personal tag): Items move to Recoverable Items after 7 days.
1 Year Delete (personal tag): Items move to Recoverable Items after 365 days.
5 Year Delete (personal tag): Items move to Recoverable Items after 1825 days.
Never Delete (personal tag): Prevents MFA deleting items. Note: MFA can still archive items if dictated by an archive tag.
Personal 1 year move to archive (personal tag): Moves an item into the archive after 365 days.
Personal 5 year move to archive (personal tag): Moves an item into the archive after 1825 days.
Personal never move to archive (personal tag): Prevents MFA moving items into the archive mailbox.
Default 2 year move to archive (default tag): Items not under the control of another tag move to the archive (if available) after 730 days.
Deleted Items (folder tag): Items in the Deleted Items folder move to Recoverable Items after 30 days. Microsoft stopped processing this retention tag for the Default MRM Policy in early 2015.
Junk Email (folder tag): Items in the Junk Mail folder move to Recoverable Items after 30 days.
Recoverable Items 14 days move to archive (folder tag): Moves items in the Recoverable Items folder (plus sub-folders) to the archive after 14 days.
Table 7-3: Retention tags in the Default MRM Policy
The design of the default retention policy is quite interesting. A total of thirteen tags exist, but only three of
the default folders are covered (Deleted Items, Junk Email, and Recoverable Items). Many retention policies
implemented by companies focus on removing old content from the Inbox and Sent Items folders and include
retention tags for this purpose, possibly moving items after 60 days or so. No default delete tag is included in
the policy, so MFA leaves items in their folders unless they are explicitly tagged for deletion.
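To check exactly which tags the policy in your tenant contains, you can examine it with PowerShell. For example (a simple sketch):
[PS] C:\> Get-RetentionPolicy -Identity "Default MRM Policy" | Select-Object -ExpandProperty RetentionPolicyTagLinks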
Although no default delete tag is present, the Default MRM policy does contain a default archive tag. As we
know, a default tag tells MFA how to process any item that is not stamped with a more specific tag. In this
case, items that do not inherit a tag from their current folder or have not been assigned a tag by the user are
moved to a folder of the same name in the archive mailbox after 2 years, or 730 days. If the mailbox is not
archive-enabled, MFA ignores the default archive tag.
Because newly-migrated mailboxes are stamped with the Default MRM Policy, the potential exists that the
retention tags that govern how the MFA processes mailbox items will change when the mailbox becomes
active in Exchange Online. Consider the situation where a company runs Exchange 2016 on-premises servers
and changes the Default MRM Policy to limit the period that items can be kept in common folders such as the
Inbox and Sent Items. Everything works in a satisfactory manner for on-premises mailboxes. A decision is then
made to introduce Office 365 and a hybrid connection is configured to enable interoperability between the
on-premises and cloud components. When mailboxes move to the cloud, Exchange Online assigns the Default
MRM Policy to them. No tags are present in this policy to cover the Inbox and Sent Items folder with the
result that a different retention model runs for the newly-moved mailboxes. The potential exists for no one to
notice the issue for months or years afterwards, which might or might not create a compliance issue.
For this reason, experienced consultants often recommend that customers do not use the Default MRM Policy
and create and apply a new specially-tailored retention policy to all mailboxes. The new policy can be
imported into Exchange Online and stamped onto mailboxes as they are moved to the cloud to ensure that
the same retention regime is applied on both sides of the on-premises/cloud divide. Some added scripting is
necessary to ensure that the correct policy is assigned to mailboxes, but this is not difficult to do. We discuss how to assign retention policies to mailboxes later.
Figure 7-10 shows a set of retention policies as viewed through the compliance management section of the
EAC. A custom retention policy called "Management retention policy" is highlighted and the tags in the policy
are shown in the details pane. This retention policy has a mixture of default retention tags available in every
Office 365 tenant and some custom tags created to meet specific business needs.

Figure 7-10: Details of the retention tags contained in a mailbox retention policy

Discovering How Many Items Move into The Archive


Knowing that a default archive tag is included in the Default MRM Policy, it might strike you that a lot of items
already have been moved into archives for mailboxes in your tenant. The easiest way to find out is to run the
Get-MailboxFolderStatistics cmdlet to check out the archiving activity for a mailbox. The code below retrieves
information for the copies of the Inbox and Sent Items folder in the archive. Apart from noting the number of
items in each folder, we also report the creation date of the newest and oldest items were added to the folder,
which should give some insight into how retention works. For instance, the newest items added to the archive
are dated from early June 2016. Given a “2-year default move to archive” tag in the policy, this is what you
would expect to see if you looked in June 2018. The creation date of the oldest items in the folders depends on
the default delete tag applied to the mailbox. In this case, because the oldest item dates from March 2011, we
know that the tag allows items to be kept for at least seven years.
[PS] C:\> Get-MailboxFolderStatistics –Identity "Kim Akers" –Archive -IncludeOldestAndNewestItems |
Where-Object {$_.Name –Like "Inbox" –or $_.Name –Like "Sent Items"} | Format-Table Name,
ItemsinFolder, FolderSize, NewestItemReceivedDate, OldestItemReceivedDate –AutoSize

Name ItemsInFolder FolderSize NewestItemReceivedDate OldestItemReceivedDate
---- ------------- ---------- ---------------------- ----------------------
Inbox 15642 1.845 GB (1,981,302,727 bytes) 05/06/2016 23:38:01 25/04/2012 00:40:18
Sent Items 12946 1.685 GB (1,809,115,575 bytes) 05/06/2016 09:11:00 12/04/2011 16:52:41

You can also use the Get-MailboxStatistics cmdlet with the Archive switch to list the properties of an archive
mailbox. For example:
[PS] C:\> Get-MailboxStatistics -Identity "Kim Akers" -Archive | Format-List

Changing the Default MRM Policy for a Tenant


The Default MRM policy is stamped onto every new user, shared, room, or resource mailbox when it is
created. The policy is not applied to the mailboxes used by Microsoft 365 Groups. Exchange Online finds the
correct policy to use by consulting the mailbox plan that is assigned to the mailbox. The configuration settings
for each tenant list the mailbox plans that are available, and each plan has an associated retention policy. For
instance, the tenant in this example has three plans and each plan has the Default MRM Policy as the default
retention policy:
[PS] C:\> Get-MailboxPlan | Format-Table Name, RetentionPolicy

Name RetentionPolicy
---- ---------------
ExchangeOnline-12c139bc-eafa-4a43-b4d2-e285f83e075d Default MRM Policy
ExchangeOnlineDeskless-bc1e76cc-4c0b-491c-a518-3a0a43cbf78e Default MRM Policy
ExchangeOnlineEnterprise-8fc1c029-5e32-485e-9810-179fb4701447 Default MRM Policy

In an on-premises Exchange organization you can change the default retention policy by setting the IsDefault
flag to $True:
[PS] C:\> Set-RetentionPolicy –Identity 'New Retention Policy' –IsDefault $True

However, this approach does not work in Exchange Online. The command to reset the IsDefault flag
completes but Exchange Online continues to use whatever retention policy is assigned in the mailbox plan. If
you want to ensure that new mailboxes get a different retention policy, you can update the retention policy
defined in the mailbox plans used within the tenant. For example, to assign a different retention policy to the
plan used for users holding Office 365 E3 and E5 licenses, we run this command:
[PS] C:\> Set-MailboxPlan -Identity ExchangeOnlineEnterprise-8fc1c029-5e32-485e-9810-179fb4701447
-RetentionPolicy "Management Retention Policy"

You can then check that the correct retention policy is assigned to the plan by running the Get-MailboxPlan
cmdlet. The final check is to create a new user account and assign them an E3 or E5 license. When Exchange
Online creates the user’s mailbox, it should assign the “Management Retention Policy”. We check this with:
[PS] C:\> Get-Mailbox -Identity NewUser | Select RetentionPolicy

To make sure that the intended retention policy stays assigned to mailboxes, it is sensible to check from time
to time. For example, this PowerShell code looks for user mailboxes that do not have the right policy assigned
(or no policy assigned) and then assigns the correct policy:
[PS] C:\> Get-Mailbox –RecipientTypeDetails UserMailbox -Filter {RetentionPolicy –ne "Preferred MRM
Policy" –or RetentionPolicy –eq $Null} | Set-Mailbox –RetentionPolicy "Preferred MRM Policy"

Retention Actions and Retention Periods


The two most important settings in a retention tag are the retention period and the retention action. The
period is the length of time in days that must elapse before MFA will enforce the defined action. Figure 7-11
shows the properties of a retention tag created to control how long items stay in the Inbox. These properties
are:
• Tag name. Folder and default tags are never shown to users, so their names do not matter as much
as the personal tags that appear in client user interfaces (see discussion later). However, in all cases,
the tag name should show its purpose and in the case of personal tags, they should tell users the
length of the retention period. “1 Week Delete” is a reasonable example of a personal tag name that
combines both pieces of information. Some companies prefer the simplicity of names like “Required
for Audit” to communicate the use of a personal tag without bothering users with details of what this
means in terms of time.
• Retention action. This is the instruction given to MFA to execute when the retention period expires.
Exchange retention policies support three actions:
o Delete and Allow Recovery: MFA moves the item into the Recoverable Items\Deletions folder.
o Permanently Delete: MFA removes the item from the database unless it comes under the
control of a hold. This action can be dangerous unless you are certain that you want to
remove items. Given the storage quota available to mailboxes, it is safer to use the Delete and
Allow Recovery action as users can then recover items if necessary.
o Move to Archive: MFA moves the item into a folder of the same name in the archive. This
action only applies when an archive mailbox is available. MFA ignores the tag if an archive is
not enabled for the mailbox. The Move to Archive action is only available for default tags and
personal tags.
• The retention period. This sets the age of an item before MFA will process the retention action in the
tag. The period can be “Never” or between 1 and 24,855 days. Microsoft has published an interesting
description explaining how MFA calculates retention dates for items. If a tag uses Never as a retention
period, it means that MFA will never execute the retention action because the item will never expire.
Tags that have a retention period of Never also have their RetentionEnabled property set to False.
• Comment. Free-form text used to explain the purpose of a retention tag when administrators work
with the tags through EAC. The comment is also seen by users when they use OWA options to add
personal tags to the set of tags made available to them through policy.

Figure 7-11: Properties of a retention tag
Folder tags are always associated with one of the default folders while default tags apply to items across the
entire mailbox. You must define the tag type when you create a new retention tag and you cannot change a
tag to another type thereafter. If you make a mistake, you will have to remove the bad tag and recreate it as
the right type.
You can also create retention tags with PowerShell. To begin, here is an example of a personal retention tag
that applies to all types of items. The effect of the tag is to move items to the archive after 180 days.
[PS] C:\> New-RetentionPolicyTag –Name "Project Contoso Documents" –AgeLimitForRetention 180
–Comment "Archive documents required for Project Contoso" -RetentionEnabled $True
–Type Personal –RetentionAction MoveToArchive

Remember that the MoveToArchive retention action can only be specified for default and personal tags and is
not available for folder tags that apply to one of the default folders. Items that are processed by this retention
action are moved to a folder of the same name in the archive mailbox.
The first retention tag created in the next example is for the Inbox folder and removes items that are more
than 30 days old. The items can be recovered if necessary. The second is to remove items from the Deleted
Items folder after they are 120 days old. Again, the mailbox owner can recover items if necessary.
[PS] C:\> New-RetentionPolicyTag –Name "Inbox 30 day expiry" –AgeLimitForRetention 30
–Comment "Move items out of Inbox after 30 days" –RetentionEnabled $True
–RetentionAction DeleteAndAllowRecovery –Type Inbox

[PS] C:\> New-RetentionPolicyTag –Name "Deleted Items 120" –AgeLimitForRetention 120
–Comment "Move items out of the Deleted Items folder after 120 days" –RetentionEnabled $True
–RetentionAction DeleteAndAllowRecovery –Type DeletedItems

Finally, here is how to create two types of default retention tag. The first removes any item in a mailbox that is
more than seven years old. The second specifically processes voicemail messages (as defined in the message
class) and will remove these items after 7 days.
[PS] C:\> New-RetentionPolicyTag –Name "Delete Items after 7 years" –AgeLimitForRetention 2555
–Comment "Clean up mailboxes by removing items older than 7 years" –RetentionEnabled $True
–RetentionAction PermanentlyDelete –Type All

[PS] C:\> New-RetentionPolicyTag –Name "Voicemail Removal" –AgeLimitForRetention 7
–Comment "Clean up voicemail after 7 days" –RetentionEnabled $True
–RetentionAction PermanentlyDelete –Type All –MessageClass Voicemail

Although it is always best to come up with a logical name that makes sense to end users, you can give a
retention tag any name you like and can set translated values for display through different language
interfaces. This must be done through PowerShell as the EAC does not allow you to manipulate language
settings. Unfortunately, OWA does not support language translations, but Outlook does. In this example, we
configure French, Dutch, and Spanish values for the “Keep for Audit” tag:
[PS] C:\> Set-RetentionPolicyTag –Identity "Keep for Audit"
–LocalizedRetentionPolicyTagName "fr-FR: Conserver pour verification", "nl-NL: Houden voor
controle", "es-ES: Mantenga la auditoria"

Remember that a retention tag is not unique to a policy. Multiple policies can use the same retention tag, so it
is best to create tags with the intention in mind that they can be used in many policies. Users can also add
personal retention tags to the set available to their mailbox through OWA Options.

Retention Processing
Although date calculation differs slightly for different item types, to simplify the discussion, we can say that a
retention period begins at the point when an item is first created in a mailbox (for messages, this is when the
item is delivered to the mailbox).
If you remove a default or folder tag from a policy, MFA will remove the tag from all items to which it
previously applied and will restamp these items with whatever tag now applies. This might be a default tag or
no tag. Clearly MFA must do a lot of work to update the items in a mailbox, so it is not recommended to
remove tags without some consideration of the possible impact. It is always less disruptive to change the set
of tags contained in a policy than it is to remove a policy and start again.
Any change made to the default tags or the retention policy tags in a policy will force the MFA to assess every
item in a mailbox to which the updated tags are applied and potentially update the properties of the item to
reflect a new retention expiry date. For example, changing the retention period for a tag invokes similar
processing in that MFA must find every item stamped with the tag and then update its retention period and
expiry date. The same is true if you change the retention action defined in a tag.
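For instance, this sketch extends the retention period for the Inbox tag created earlier from 30 to 60 days (in practice, you would probably also rename the tag to reflect the new period):
[PS] C:\> Set-RetentionPolicyTag -Identity "Inbox 30 day expiry" -AgeLimitForRetention 60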
In addition, for Outlook desktop clients, if MFA updates an item because of a change to its tag, Outlook must
synchronize that update from the server to the cached copy in the OST. Although the amount of data involved
to synchronize the updated property for an item is small (less than 1 KB per item), the number of items in user
mailboxes and the number of mailboxes affected by the change might combine to create an unexpected load
on the network. The load is temporary and will pass once synchronization is complete.
An exception exists for personal tags. These tags stay in place on whatever items to which they have been
applied and MFA will continue to process the tags as before. The idea is that the user explicitly placed these
tags on items or folders and Exchange Online should continue to follow their wishes. However, personal tags
that are removed from the retention policy will be invisible to the user and cannot therefore be applied to
newly-created items unless the user chooses to add them back as optional tags using OWA settings.

Removing a tag forces its removal from every retention policy known to Exchange Online and creates a great
deal of server processing to remove the now-deleted tag from items. MFA cannot process a non-existent tag
so as it processes mailboxes, it finds items that were previously stamped with the defunct tag and restamps
the items with whatever tag is now applicable (usually a default tag).
You can disable a tag by setting its retention period to Never. MFA will leave these tags in place but will
ignore the items stamped with the tag because it knows that they will never expire. Another way of forcing
MFA to ignore previously-stamped items is to set the RetentionEnabled property to False (this is what happens
when you use the EAC to set the retention period for a tag to "Never"). For example:
[PS] C:\> Set-RetentionPolicyTag –Identity "Keep for Audit" –RetentionEnabled $False

If you remove a retention policy, Exchange Online removes the policy from its configuration and removes it
from any mailbox to which it applied. This means that MFA will no longer process these mailboxes. You can
find out if any mailboxes have not been assigned a retention policy and apply a policy to these mailboxes by
running the command:
[PS] C:\> Get-Mailbox –Filter {RetentionPolicy –eq $Null} |
Set-Mailbox –RetentionPolicy "New Default MRM Policy"

Renaming a retention policy does not force any processing on the part of MFA. Exchange Online uses the
policy GUID to associate the policy with mailboxes and the GUID is unchanged if a policy is renamed. The
name of the policy is simply a convenience to allow the administrator to know the intent and purpose of the
policy, so the policy can be renamed as often as you wish. For instance, here is how you can rename the
Default MRM Policy.
[PS] C:\> Set-RetentionPolicy –Identity 'Default MRM Policy'
–Name 'Tenant Default Mailbox Retention Policy'

If you rename the default mailbox retention policy, be aware that MFA will process and remove items from the
Deleted Items folder.

How MFA Processes Retention Policies and Labels


MFA performs the following processing for retention policies and labels:
• Makes retention labels available to clients so that users can apply labels to items in the same way as
personal tags.
• Processes mailbox contents per the settings of retention policies assigned to mailboxes. A mailbox
can have both an Exchange mailbox policy and a retention policy assigned. If the settings in the two
policies clash, the MFA uses the rules of retention explained earlier to resolve the differences.
Administrators do not have to do anything to force MFA to execute the added tasks. It all happens
automatically.

The Recoverable Items Folder Tag


The Recoverable Items folder tag is a special case. The EAC does not display this tag when listing available retention tags, nor does it appear in client user interfaces, but you can see details of the tag by running the Get-RetentionPolicyTag cmdlet. Another special case is that only the “move to archive” action is available for the
tag. This is logical because it does not make much sense to use the “delete and allow recovery” action as the
items already exist in the Recoverable Items structure. A case could be made that the “permanently delete” action should be supported, but it fails because Exchange Online uses a different method (the deleted items retention period) to
remove deleted items from mailboxes. The effect of the Recoverable Items folder tag in the Default MRM
policy is:

• If a mailbox has an archive, MFA moves deleted items into the archive after 14 days. The items are
stored in corresponding folders in the Recoverable Items structure in the archive mailbox. The items
remain in the archive indefinitely because the retention policy does not contain a default delete tag to
instruct that items should be permanently removed after a set period. In addition, the normal deleted
items retention period does not apply to archive mailboxes.
• If the mailbox is not archive-enabled, MFA ignores the instruction contained in the tag.
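You can inspect the tag with a command like this (the tag name shown is the one used by the Default MRM Policy, per Table 7-3):
[PS] C:\> Get-RetentionPolicyTag -Identity "Recoverable Items 14 days move to archive" |
Format-List Name, Type, RetentionAction, AgeLimitForRetention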

A cause of possible conflict: A retention policy can have default tags for both deletion and archival. Tags
applied to items in the primary mailbox stay active when items move into the archive. When both a delete
tag and an archive tag are stamped on items, care must be taken that the retention periods cause actions
to be taken in the correct sequence. For example, if the delete tag specifies that an item is to be
permanently removed after 365 days and the archive tag causes items to be moved into the archive after
730 days, logic dictates that nothing will end up in the archive because items will be permanently removed
long before they can be old enough to be archived. Logic works if the retention periods for the two tags
are reversed because in that case, MFA will move items to the archive after 365 days and then remove
them after a further year.

Why Items Stay in The Deleted Items Folder


The Default MRM Policy in on-premises Exchange versions has a retention policy tag to instruct the Managed
Folder Assistant to remove items from the Deleted Items folder after they are 30 days old. This also used to be
the case for Exchange Online. Now the MFA for Exchange Online ignores that tag when a mailbox is stamped
with the Default MRM Policy. The tag is processed if it is included in any other retention policy, including if
you rename the standard policy.
Microsoft does not use database backups inside Office 365. Instead, the protection delivered by multiple
database copies spread across multiple datacenters and other native data protection features ensure that
information is protected for as long as it is needed. Evidence from support calls proved that many users
removed items in error and later found that they could no longer recover those items because they had been
expunged from the database. At this point the items are irrecoverable because backups are not available. It is
unreasonable for users to understand the complexities involved in a backup-free regime and this led to many
calls to Microsoft Support and many instances of unhappy customers who could not get their data back. This
became a prime factor in Microsoft’s decision to stop removing items from the Deleted Items folder.
Other reasons include:
• Users often triage Inbox items by deleting them. In other words, if a message is unimportant or does
not have to be dealt with at once, removing the item from the Inbox allows the user to concentrate
on items that need action. Later, the user can go to Deleted Items and review items there to see if
anything needs attention. This is a primitive form of the Focused Inbox feature that exists in almost all
mail systems. Problems occur when items disappear from the Deleted Items and cannot be recovered.
• Many small companies use Office 365. Small companies tend to be less sophisticated in terms of IT
policies than large enterprises and have fewer IT resources to create and manage policies. However, a
support call from a small company costs the same energy, time, and expense to process and resolve.
• Although Exchange Online includes a wide variety of compliance tools, it takes time and knowledge
to master these tools. Given the choice of activities, an administrator would probably take care of day
to day tasks rather than plunging into the details of how to build and deploy a well-crafted retention
policy.
• The added storage needed to keep items is not a concern because Office 365 mailboxes are assigned
large quotas.

Compliance features are designed to support the regulatory and legal requirements of large enterprises. In
this instance, Microsoft seems to have done the right thing by building retention policies into Exchange
Online and assigning a default policy to user mailboxes. The only retention tags in the default policy that
could be construed to interfere with user management of personal data are the folder tag that removes items
from the Deleted items folder and the default archive tag that moves items from the primary mailbox to the
archive (if enabled) after two years. Looking at the arguments advanced above, you can see how a case can be
made to stop cleaning out the Deleted Items folder.
By default, users can accumulate deleted items for as long as they like. If you decide that this is not sensible
and believe that the Deleted Items folder needs to be managed, you can:
• Rename the Default MRM Policy to use any other name. MFA will recognize the new name and
understand that it should process the Deleted Items tag as before.
• Create a new retention policy that has the standard Deleted Items tag and apply that policy to all
mailboxes. You might want to increase the retention period (to between 60 and 120 days) to allow
users more time to recover items. If someone has not worked out that they have lost something
important after an extended period, it is possible that the information might not be so important after
all.
• Include a default delete tag in the Default MRM Policy that removes items in all folders after a certain
period. This approach is not as precise as applying a folder tag, but the Deleted Items folder will come
under the control of the default delete tag if no other more specific tag is in place.
If users empty the Deleted Items folder, normal processing will continue, and items will stay in Recoverable
Items until the deleted items retention period expires. Given human nature, you can expect that users will not
empty the folder (unless someone tells them how to do it) and so will be able to recover deleted items for as
long as they need.
The change in policy means that items will continue to accumulate in the Deleted Items folder over time.
Apart from increasing the size of the OST synchronized to workstations, this should not be a concern because
a single folder in an Exchange Online mailbox can hold up to one million items and adequate quota is
available to mailboxes to keep the data for many years. Most users will accumulate two or three gigabytes of
deleted information annually, so this should not be a problem in an era of 100 GB mailboxes. And if an archive
mailbox is available, the default archive tag in the policy will mean that items are moved to the Deleted Items
folder in the archive after two years.

Some ramifications from keeping Deleted Items around: Although the notion that you can keep a
deleted item for as long as you want might seem like a promising idea, many large companies consider
this to be perfectly horrible because of the impact on their compliance strategy. Items that stay in
mailboxes stay discoverable and items that are discoverable can be found by investigators. Companies do
not want to break any regulations but equally they do not want to keep anything around that might be
embarrassing or useful in eDiscovery situations. If you are in this situation, you need to create a new
retention tag to govern Deleted Items and include that tag in the retention policies that are applied to user
mailboxes. The steps needed to create a new retention tag for the Deleted Items folder are explained above.

A further issue is created for those running hybrid environments as MFA behaves differently on-premises
to the way that it processes online items. For example, if you move a mailbox from Exchange Online back
to an on-premises server, MFA will begin to process items in the Deleted Items folder according to
whatever on-premises retention policy is applied to the mailbox. If the on-premises retention policy
applies a retention action to these items, many items might be removed from the mailbox. The other issue
when moving mailboxes around between cloud and on-premises environments is that the dramatically
increased size of the Deleted Items folder means that moving a mailbox from Exchange Online will take
much longer than before. It is always best to ensure that information is treated the same on both
platforms, so you should make sure that the same retention policies that apply the same retention actions
for the same periods are used for all mailboxes.

Creating a Mailbox Retention Policy


A retention policy is created by linking together a set of retention tags. Any retention tag defined within the
tenant can be added to a new retention policy, subject to the following constraints:
• Only a single folder tag can be added for each default folder. For example, a policy cannot have two
folder tags to instruct MFA how to process the Inbox.
• Only a single default tag can be defined for each of the supported conditions:
o Delete (either permanently or delete and allow recovery).
o Move to Archive.
o Voicemail. The default tag to cover voicemail items must be defined with a message type of
“voicemail.”
• Any number of personal tags can be included to allow users to preserve information according to
their business needs. However, due to the limited space available to display tags in client user
interfaces, you should restrict this set to between 7 and 10 tags.
If you assign a tag to a folder, the tag is inherited by any sub-folders that might exist or will be created in the
future and the tag will apply to all items in those folders. You can override the influence of inheritance by
assigning a personal tag to an item or folder. It is sensible to write down the business justification to create a
new retention policy and to chart out the set of tags to be included in the policy before you create anything.
You should also be able to define the set of mailboxes to which the policy will be applied. If you cannot make
a cogent case to create a new retention policy, some doubt exists about whether the new policy is needed. A
profusion of policies can create confusion and uncertainty in those who are asked to administer a tenant after
you have happily moved to your next job.
To create a new retention policy, go to the compliance management section of the EAC, select retention
policies, and click [+]. Figure 7-12 shows the screen used to collect the information needed to create a new
policy. As you can see, all we have is the name of the new policy (invisible to users), and the set of tags that
form the policy.

Figure 7-12: Creating a new retention policy with EAC
You can also create a new retention policy with PowerShell. The command is very simple because all it must
do is associate the policy with a set of links to the tags.
[PS] C:\> New-RetentionPolicy –Name "Business Analysts"
–RetentionPolicyTagLinks "Keep for Audit", "6 Month Delete", "1 Year Delete", "Inbox 30 day expiry",
"Never Delete"

If you forget to include a tag or need to remove a tag from a policy, you can add or remove it with the Set-
RetentionPolicy cmdlet. When reviewing the example shown below, you might ask why the complete set of
tags is passed to the cmdlet instead of trying to add or remove a single item from the list. The answer is that
experience shows that this is the safest way to update a set of tags in a policy. Too many instances have
occurred where administrators tried to update a policy and ended up with just a single tag, which then causes
MFA to do a lot of work to remove tags from mailbox items. If you do not like this approach, use the EAC
instead. (In fact, if you look at the command log, you will see that the EAC always writes out the complete set
of tags when it updates a retention policy).
[PS] C:\> Set-RetentionPolicy –Identity "Business Analysts"
–RetentionPolicyTagLinks "Keep for Audit", "6 Month Delete", "1 Year Delete", "Inbox 30 day expiry",
"Never Delete", "Junk Email"

The Remove-RetentionPolicy cmdlet can be used to remove a retention policy from a tenant. When this
happens, MFA removes the tags in the policy from items the next time it processes mailboxes. To ensure that
automatic maintenance continues smoothly for mailboxes, make sure that an alternate policy is assigned to
mailboxes before you remove a policy.
[PS] C:\> Remove-RetentionPolicy –Identity "A bad retention policy"

After you remove the policy, you can use the Get-RetentionPolicy cmdlet to view the set of retention
policies defined in the tenant. The RetentionPolicyTagLinks property shows the set of tags linked to each
policy.

[PS] C:\> Get-RetentionPolicy | Format-Table Name, RetentionPolicyTagLinks -AutoSize

Name RetentionPolicyTagLinks
---- -----------------------
ArbitrationMailbox {AutoGroup, ModeratedRecipients, Never Delete, AsyncOperationNotification}
New MRM Policy {Keep for Audit, Keep 10,000 days, Recoverable Items 14 days move to archive,
Junk Email...}
Business Analysts {Inbox 30 day expiry, 1 Year Delete, 6 Month Delete, Keep for Audit...}
Default MRM Policy {5 Year Delete, 1 Year Delete, 6 Month Delete, Personal 5 year move to
archive...}

You might wonder why a retention policy called “ArbitrationMailbox” is present. Do not remove this policy. It
is used by Exchange Online to look after the contents of arbitration mailboxes, which are used for different
background processes such as message moderation and OAB generation. Although removing this policy does
not cause irreversible harm, it stops MFA from processing the arbitration mailboxes, which will consequently
swell up with unremoved content.

System tags: The set of tags defined for the ArbitrationMailbox policy have some special tags like
ModeratedRecipients, AutoGroup, and AsyncOperationNotification that are not available for use in normal
retention policies. You can view these tags by running the Get-RetentionPolicyTag cmdlet and passing the
IncludeSystemTags parameter. Do not remove these tags!
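For example, this command (a quick sketch) lists all tags in the tenant, including the system tags:
[PS] C:\> Get-RetentionPolicyTag -IncludeSystemTags | Format-Table Name, Type, SystemTag -AutoSize
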

Assigning retention policies to mailboxes


A retention policy is assigned to a mailbox with the Set-Mailbox cmdlet or through the EAC. Using a GUI is an
effective way to apply a policy to one mailbox but the navigation from mailbox to mailbox becomes tiresome
if more than a few mailboxes must be processed. To assign a retention policy to a mailbox with EAC, select the
mailbox, edit its properties, go to Mailbox Features, select the retention policy, and then Save.
You do not have to assign a retention policy to a mailbox; to remove an assignment, select "No policy" and then Save. You
can also select a group of mailboxes and assign the same retention policy to the selected mailboxes in one
operation. The Advanced Search facility can be used to select a group of mailboxes based on their
department, city, country, or region, or one of the customized attributes. You can then bulk-apply a retention
policy to those mailboxes.
Even though the EAC includes search facilities to find mailboxes to which you want to apply a retention policy,
PowerShell is often the fastest and most efficient way to do the job, especially when the need arises to update
retention policies for many mailboxes. For example, this command searches for a set of mailboxes based on
the Office location and then assigns the same policy to each mailbox in the set:
[PS] C:\> Get-Mailbox –Filter {Office –eq "NYC"} | Set-Mailbox
–RetentionPolicy "Retention Policy for NYC Users"

Preventing Retention Policies from Running


Exchange Online supports the use of retention holds to suspend the processing of the retention policy
assigned to a mailbox. The retention hold is designed for situations such as an extended vacation or illness
when a user might not be able to access their mailbox for an extended period and might therefore return to
work to find that the Managed Folder Assistant had removed many items from the mailbox.
You can only set a retention hold through PowerShell. This example sets a retention hold on Rob Young's
mailbox between 21:00 on 3 March and 09:00 on 1 April. A retention comment gives some information to
other administrators who might wonder why the hold is in place.
[PS] C:\> Set-Mailbox –Identity 'Rob Young' –RetentionHoldEnabled $True
–StartDateforRetentionHold '03/03/2015 21:00' –EndDateforRetentionHold '01/04/2015 09:00'
–RetentionComment 'Rob Young on vacation until 1 April'

A retention hold can exist alongside a litigation or in-place hold. The retention hold only stops the processing
performed by the Managed Folder Assistant; the other holds stop items being removed from mailboxes until
the holds elapse. Exchange Online automatically places any recovered inactive mailbox on retention hold for
30 days after the recovery. See the Exchange Online chapter in the main book for more information on how to
recover inactive mailboxes.
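To review which mailboxes are currently on retention hold, and to clear a hold when it is no longer needed,
you can use commands like these. This is a sketch; the client-side Where-Object filter is simple but slow in
large tenants:
[PS] C:\> # Find mailboxes with a retention hold in place
[PS] C:\> Get-Mailbox -ResultSize Unlimited | Where-Object {$_.RetentionHoldEnabled -eq $True} |
Format-Table DisplayName, RetentionComment, EndDateForRetentionHold

[PS] C:\> # Clear the hold (and the comment) when the user returns
[PS] C:\> Set-Mailbox -Identity 'Rob Young' -RetentionHoldEnabled $False -RetentionComment $Null
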

How Clients Use Mailbox Retention Policies


The tags contained in a mailbox retention policy (including Office 365 labels published through label policies)
become available to a client after MFA has processed a mailbox to apply the policy. Part of the processing is
the creation of a hidden “folder associated item” (FAI) of message class IPM.Information.MRM in the Inbox to
hold details of the retention policy, including its tags. Although you cannot see the FAI unless you use a utility
like MFCMAPI to examine its contents, the XML data contained in the PR_ROAMING_XMLSTREAM property of
the item tells clients that a retention policy is in place for the mailbox and gives the necessary information to
reveal retention tags through the interface. If Office 365 labels are in use, their details are combined into the
information held about retention policies in a mailbox and they are shown to users in the same way as
personal tags. The MFA updates the hidden item each time it processes the mailbox to add or remove tags or
to stamp the mailbox with a different retention policy.
The next time a client opens a mailbox, the retention policy tags are available for display. Not all tags are
shown for every folder and item as the client limits the set to the tags that the user can apply. Figure 7-13
shows how OWA displays retention tags. In this case, we are in the Inbox folder, which is a default mailbox
folder. In some respects, this is a bad example because the user is forced to select from a large set of tags.
The set includes the parent policy (whatever tag applies to the Inbox), personal tags, and Office 365 labels.
Each tag displays its retention period, making it easy for the user to know when the retention action will
happen. The current state of the selected item is “Use parent folder policy” (a tick appears beside this entry),
which means that if the user does not select another tag, MFA applies whatever policy exists for the folder. As
the item is in the Inbox, this might be a default folder tag for the Inbox (if one exists in the policy). If not,
whatever default tag is in the policy is applied.

Figure 7-13: How OWA displays retention tags
Unfortunately, experience proves that most users have no idea whatsoever what personal tags mean or what
happens when they are applied to folders or items. They have no concept of the
difference between an archive policy and a retention policy. Indeed, most users have never explored how to
use “Assign Policy” to tag an item. In addition, if they use a mobile client, they will never see this information
because the client does not have the necessary user interface to expose policy information.
You can apply personal tags to folders in an archive mailbox. However, when this happens, the MFA checks to
see if a personal tag is stamped on the same folder in the primary mailbox and if a difference exists, the tag
used for the primary mailbox is replicated to the archive folder. This is done deliberately to ensure that a
difference does not exist in processing behavior for the two folders.
Users are sometimes confused to discover that Outlook uses a slightly different approach. However, the same
principles apply. Select an item and use Assign Policy to see the available tags. In the case shown in Figure 7-
14, a different mailbox is open, and a different set of retention tags are available. The tags are divided into
archive and retention (which is how Outlook refers to what OWA calls labels). Again, the retention period is
clearly visible. In this case, because a tick does not appear beside any tag, we know that this folder does not
have an explicit tag assigned to it. Note that if Outlook’s user interface cannot display the full set of tags
available to the user, they see an entry for More retention policies, which they can select to see all the tags.
If you select a tag and apply it to an item, Outlook checks the retention period and compares it to the age of
the item. If the item is older than the retention period, Outlook warns the user that applying the tag will cause
the item to expire. In other words, the next time that the MFA processes the mailbox, it will apply the
retention action to this item. The user can then decide to go ahead and apply the tag or not. OWA does not
check items in the same way and it is possible for a user to assign a tag using OWA that will cause an item to
expire without warning.

Figure 7-14: How Outlook 2016 displays retention tags


Set Folder Policy allows users to view what policy is assigned to a folder and set the retention and archive
policy for the folder (Figure 7-15). You cannot assign a policy to a default folder as these folders are governed
by folder tags that can only be set through a retention policy assigned to the mailbox. However, you can set
the archive policy, even for default folders. Policies can also be viewed and set by selecting a folder and
viewing its properties. Another choice offered by Outlook is to View Items Expiring Soon, which conducts a
search of the mailbox to find items whose retention period will expire in the next month.

Figure 7-15: Viewing the policies assigned to a folder

Revealing the actual retention dates: Although you might have read and understood the information
published online to explain how the Managed Folder Assistant calculates the retention period stamped
onto items, there is nothing quite like checking to satisfy yourself that the right period is being used. You
can confirm how long an item will be kept by examining its properties with MFCMAPI. Look for the
PR_RETENTION_DATE property and you will find the date and time that the item will expire. Once expired,
the Managed Folder Assistant will invoke the retention action and the item will be removed or archived.

Optional Personal Tags


Exchange does not limit users to the personal tags included in the retention policy assigned to their mailbox.
In fact, if the MyRetentionPolicies option is set in their user role assignment policy, they can use any personal
tag defined within the tenant. The set of personal tags is revealed through the "Retention Policies" section of
OWA Options. Here the user can see the set of tags that are already available to them through the retention
policy that is assigned to their mailbox. They cannot remove any tag that is inherited from the assigned policy, but
they can see an explanation as to what a tag does and they can click [+] to add personal tags from the set of
tags available in the tenant that are not already available to the user by being included in the policy currently
assigned to their mailbox (Figure 7-16). When a user adds a personal tag and saves the new set, Exchange
Online updates the MRM information held in the hidden FAI in the Inbox folder. Once MFA has processed the
mailbox to update its retention policy data, the tag becomes available to the user the next time they
restart Outlook or OWA.

Figure 7-16: Using OWA Options to select personal tags to use with items

Blocking the ability for users to select personal tags. The retention requirements for your company
might not accommodate the ability of users to select personal tags to apply to items because you want to
restrict users to whatever tags exist in the retention policies that are applied to their mailboxes. It is easy to
prevent users from being able to add personal tags by editing the default role assignment policy, the role-
based access control policy that controls the options available to users. To block users from selecting
personal tags:
1. Go to the Permissions section of the EAC
2. Select User Roles
3. Edit the Default Role Assignment Policy and scroll down to the MyRetentionPolicies section
4. Uncheck MyRetentionPolicies and save the policy
The restriction will be enforced the next time the user logs on to OWA.
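You can apply the same restriction with PowerShell by removing the MyRetentionPolicies role assignment
from the policy. This is a minimal sketch that assumes the policy still uses its default name:
[PS] C:\> Get-ManagementRoleAssignment -RoleAssignee "Default Role Assignment Policy"
-Role MyRetentionPolicies | Remove-ManagementRoleAssignment -Confirm:$False
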
An administrator can also add a tag to a set available to a user by running the Set-RetentionPolicyTag cmdlet.
In this example, two personal tags are added to the mailbox of Kim Akers and the Get-RetentionPolicyTag
cmdlet is then used to check that the tags are in the set available to the mailbox.
[PS] C:\> Set-RetentionPolicyTag –Mailbox "Kim Akers" –OptionalInMailbox "Keep 10,000 days", "Keep
for Audit"

[PS] C:\> Get-RetentionPolicyTag –Mailbox "Kim Akers" | Format-Table Name

Name
----
5 Year Delete
1 Year Delete
6 Month Delete
Personal 5 year move to archive
Personal 1 year move to archive
Personal never move to archive
1 Week Delete
1 Month Delete
Never Delete
Keep 10,000 days
Keep for Audit
Default 2 year move to archive

To remove optional personal tags from a mailbox, run the Set-RetentionPolicyTag cmdlet and null the list. This
is also a good way to force a refresh of the MRM configuration data for a mailbox.
[PS] C:\> Set-RetentionPolicyTag –Mailbox "Kim Akers" –OptionalInMailbox $Null

Managing Hybrid Mailbox Retention Policies


The Hybrid Configuration Wizard (HCW) can transfer retention policies and tags from an on-premises
Exchange organization to Exchange Online. The transfer process does not overwrite policies and tags that
already exist within Exchange Online. The transfer is on a one-time basis and the HCW does not synchronize
the policies and tags created within Exchange Online thereafter. The two platforms run separate and distinct
retention regimes, so manual intervention is needed to keep the two sides aligned if you make changes
afterwards on either platform.
A major difference between the two platforms is that Exchange Online assigns the “Default MRM Policy”
(more on this soon) to all newly-created mailboxes, including those transferred from on-premises servers,
while Exchange on-premises leaves it to the administrator to decide whether to assign retention policies to
new mailboxes.

Office 365 and Groups


The term “groups” covers a multitude of different objects that tenants can deploy and use in Office 365. The
term is therefore prone to cause confusion. As summarized in Table 7-4, the groups available in Office 365
are:
• Email distribution list: the “traditional” type of group composed of one or more mail-enabled
recipients and used to route messages to all members of the group. An email distribution list can be
nested within another group.
• Security group: used to define a set of users or other security principals that can then be assigned
permissions over other objects. For example, every RBAC management role group deployed inside
Exchange Online is underpinned by an Azure Active Directory security group.
• Mail-enabled security group: an upgraded form of the email distribution group that also acts as a
security group. Although you can add recipients like mail-enabled contacts to a mail-enabled security
group and use that group for access control, these recipients are ignored when the group is used to
assign or evaluate permissions for objects.
• Dynamic distribution group: an email distribution group used to route messages to members
calculated by the dynamic expansion of an OPATH query associated with the group. Each time
someone sends a message to the group, Exchange Online executes a query against Azure Active
Directory to identify the mail-enabled recipients to receive copies of messages sent to the group.
• Microsoft 365 group: Microsoft also refers to these groups as “modern” or “unified”. Microsoft often
refers to the original (November 2014) implementation of Office 365 Groups as “Groups in Outlook”
(or even “Outlook Groups”) to differentiate them from Yammer Groups, which also use the Microsoft
365 Group service to manage membership and access. Unlike traditional email distribution groups,
Microsoft 365 Groups manage content in addition to acting as a distribution mechanism. That content
comes from multiple sources within the service including a group mailbox and a SharePoint Online
document library. Members who subscribe to a Microsoft 365 group receive copies of group
conversations via email, much like they would receive mail sent to a distribution group. You cannot
nest an Office 365 group inside another group and you cannot use an Office 365 group as a security
group. These groups do not exist in the on-premises version of Exchange, but you can synchronize
the objects back to an on-premises Exchange deployment using Azure AD Connect, where they
behave like a regular distribution group.
• Yammer group: the basis for sharing information within Microsoft’s enterprise social networking
product. From a communications perspective, Yammer groups are functionally equivalent to
Microsoft 365 Groups. The big difference between the two is that Yammer groups store conversations
in the Yammer data store while Microsoft 365 Groups use Exchange. Both types of groups enjoy the
same level of access to SharePoint and Planner.
Users can also create their own contact groups within Outlook desktop. We ignore these groups because they
are personal and not system objects.
Group type                                       On-Premises   Hybrid   Cloud
Universal Mail-enabled Distribution List (UDG)   Yes           Yes      Yes
Universal Mail-enabled Security Group (USG)      Yes           Yes      Yes
Microsoft 365 Group                              No            Yes      Yes
Yammer Group                                     No            No       Yes
Security Group                                   Yes           Yes      Yes
Dynamic Distribution Group                       Yes           Yes      Yes
Table 7-4: Types of Groups supported in Office 365
The confusion that can occur because so many types of groups exist inside Office 365 is due to legacy,
development, and acquisition reasons. Some types, like email distribution lists (or universal mail-enabled
distribution groups) have been around since Exchange first appeared. Some are more recent, like dynamic
distribution lists, but are specific to Exchange.

PowerShell Cmdlets for Groups


For historic and other reasons, different sets of PowerShell cmdlets interact with the various sorts of groups.
Inside Office 365, the cmdlet sets that are most important are:
• *-DistributionGroup: manage Exchange email distribution groups.
• *-DistributionGroupMember: manage the membership of Exchange email distribution groups.
• *-DynamicDistributionGroup: manage Exchange dynamic distribution groups.
• *-UnifiedGroup: manage Microsoft 365 groups and Yammer groups that use the Microsoft 365
Groups service.
• *-UnifiedGroupLinks: manage the membership of Microsoft 365 Groups.
• *-AzureADGroup: manage Azure Active Directory groups. All groups are Azure Active Directory
objects, and these cmdlets deal with the basic properties of groups. This cmdlet set replaces the older
*-Group cmdlets used with on-premises Active Directory, which are still supported inside Azure Active
Directory for backward compatibility.
• *-AzureADMSGroup: some cmdlets for Azure Active Directory are based on the Microsoft Graph API
and therefore use a different prefix.
The PowerShell module for Teams also includes cmdlets that create, modify, and remove Microsoft 365
Groups. See the chapter on Managing a tenant with PowerShell in the main book for more information.
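As a quick illustration of the Microsoft 365 Groups cmdlets, this sketch lists the groups in a tenant and then
the members of one group. The group name used here ("Project Falcon") is a hypothetical example:
[PS] C:\> Get-UnifiedGroup -ResultSize Unlimited | Sort-Object DisplayName |
Format-Table DisplayName, PrimarySmtpAddress, ManagedBy

[PS] C:\> Get-UnifiedGroupLinks -Identity "Project Falcon" -LinkType Members |
Format-Table DisplayName, PrimarySmtpAddress
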

Managing POP and IMAP Clients
POP3 and IMAP4 are standards-based email access protocols that are now quite old. However, because they
are standards-based, many organizations and individuals continue to use clients based on POP3 or IMAP4.
Both protocols support access to mailboxes, but because they don't include any functionality to send email,
clients must also configure an SMTP connection to send messages. The difference between the two is that POP3
(the older) downloads and removes messages from the server while IMAP4 allows clients to synchronize with
a server-based mailbox. Because it is more functional and is still under development, IMAP4 is the
recommended choice.
Users can connect clients using these protocols to Exchange Online mailboxes using a variety of software
including Mozilla Thunderbird, Eudora, eM Client, and the Mail client for Mac OS. People use these clients for a
variety of reasons including:
• They prefer to use a simpler client than the feature-rich Outlook or OWA (many newer versions of
IMAP4-based clients include functionality like OWA). It’s important to recognize that any functionality
that depends on a specific understanding of Exchange Online will never appear in IMAP4 clients.
Focused Inbox and support for retention policies and tags are two examples.
• They run on a platform like Linux and prefer to use a standards-based client.
• They’ve used a specific client for years and wish to connect it to several servers, including Exchange
Online.
• Their Office 365 subscription doesn’t include access to Outlook (for example, Office 365 F1 or
Exchange Online Plan 2).
Outlook supports both protocols to allow it to connect to other email systems like Gmail, but there’s no
reason to connect Outlook to Exchange with IMAP or POP. Several mobile email applications also use POP
and IMAP instead of ActiveSync to connect to mailboxes. POP and IMAP are also commonly used by third
party applications that need to retrieve email items from a mailbox for processing.
Out of the box, POP and IMAP are insecure protocols that transmit all client-server communications (including
usernames and passwords) in clear text that can be read by any intermediate network device. To secure the
use of POP and IMAP, Exchange Online enforces SSL/TLS encryption for all POP and IMAP client connections.
While on-premises Exchange servers permit administrators to relax the SSL/TLS requirement, Office 365 allows
no such choice to customers. You must use the protocols securely with SSL/TLS if you want to use them at all.

Client settings for POP and IMAP


Table 7-5 lists the POP and IMAP settings for Office 365. The SMTP settings are included because POP and
IMAP clients use SMTP to send email.

Protocol   Server                  Port   Encryption
POP3       outlook.office365.com   995    SSL
IMAP4      outlook.office365.com   993    SSL
SMTP       smtp.office365.com      587    TLS
Table 7-5: POP and IMAP settings for Office 365
As an example, Figure 7-17 shows the correct Office 365 settings configured for IMAP and SMTP connectivity
in the Mozilla Thunderbird email client.

Figure 7-17: Setting up IMAP access to Office 365

POP and IMAP protocol defaults


The POP and IMAP protocols have configuration settings that can be changed to suit the organization. These
settings are applied on a per-mailbox basis. By default, each Exchange Online mailbox is configured with the
settings shown below, as viewed using Get-CASMailbox.
[PS] C:\> Get-CASMailbox ServiceDesk | Format-List pop*,imap*

PopEnabled : True
PopUseProtocolDefaults : True
PopMessagesRetrievalMimeFormat : BestBodyFormat
PopEnableExactRFC822Size : False
PopSuppressReadReceipt : False
PopForceICalForCalendarRetrievalOption : False
ImapEnabled : True
ImapUseProtocolDefaults : True
ImapMessagesRetrievalMimeFormat : BestBodyFormat
ImapEnableExactRFC822Size : False
ImapSuppressReadReceipt : False
ImapForceICalForCalendarRetrievalOption: False

These default settings work just fine for most mailboxes. But there are always a few mailboxes that need some
extra customization, particularly when it comes to mailboxes being accessed via IMAP or POP by software
systems. Before any custom configuration for a mailbox takes effect, the relevant *UseProtocolDefaults setting must
be set to $False. You can set this for POP only, IMAP only, or for both protocols if required.
[PS] C:\> Set-CASMailbox ServiceDesk -PopUseProtocolDefaults $False
-ImapUseProtocolDefaults $False

For both POP and IMAP, the EnableExactRFC822Size settings that are visible in the Get-CASMailbox output are
not configurable in Exchange Online. The Set-CASMailbox cmdlet simply doesn’t make that parameter
available to you, unlike on-premises Exchange. That leaves the following settings that can be configured.

Messages Retrieval MIME Format


The *MessagesRetrievalMimeFormat setting is used to specify the format for messages sent to the mailbox
from other internal senders. This setting has no impact on messages sent from an external sender. The default
value is BestBodyFormat, and you can also choose from:
• TextOnly.
• HtmlOnly.
• HtmlAndTextAlternative.
• TextEnrichedOnly.
• TextEnrichedAndTextAlternative.
• BestBodyFormat.
• Tnef.
Unless you have encountered a specific problem, the default of BestBodyFormat can be left as is. However, if
you do identify an issue, the format can be changed to one of the other values. An example of this would be a
service desk ticketing system that connects to a mailbox and ingests email items into its own database for
processing. If the service desk system is having issues reading the email messages, then the MIME format can
be configured to a different setting using Set-CASMailbox.
[PS] C:\> Set-CASMailbox ServiceDesk -PopUseProtocolDefaults $False
-ImapUseProtocolDefaults $False

[PS] C:\> Set-CASMailbox ServiceDesk -ImapMessagesRetrievalMimeFormat TextOnly
-PopMessagesRetrievalMimeFormat TextOnly

Force ICal for Calendar Retrieval Option


When a POP or IMAP client receives a meeting request from a sender within the organization, the meeting
request contains a link within the body of the email that takes the recipient to OWA to accept or decline the
meeting. However, some POP and IMAP clients are compatible with ICAL messages, and so people who use
those clients may prefer to receive meeting requests as ICAL messages instead.
[PS] C:\> Set-CASMailbox Kim.Akers -PopUseProtocolDefaults $False
-ImapUseProtocolDefaults $False

[PS] C:\> Set-CASMailbox Kim.Akers -PopForceICalForCalendarRetrievalOption $True
-ImapForceICalForCalendarRetrievalOption $True

This option has no impact on meeting requests from external senders, which will arrive as an email message
with an .ICS file attachment instead.

Suppress Read Receipt


When a mailbox is accessed using either POP or IMAP, there are two triggers for read receipts to be
generated: when the email message is downloaded by the client, and when the mailbox user opens the email
message to read it.
The second trigger (when the mailbox user reads the email message) is user controllable. The user can
configure their email client to never send read receipts, to always send read receipts, or to prompt each time a
read receipt is requested. Given that this is something that the user can control, there are no administrative
controls to influence it on a per-mailbox basis.
That just leaves the read receipt triggered when the message is downloaded from the mailbox. The read
receipt generated by that event may not be desirable in every situation. Just because an email message has
been downloaded by a piece of software doesn't mean it has actually been read. Furthermore, if the mailbox
is a high-volume transactional mailbox, such as an ordering system or help desk system, and the email
messages are being accessed by a software system, the volume of read receipts generated could be incredibly
high while serving no actual benefit at all.
For those types of scenarios, you can suppress the read receipts generated when email messages are
downloaded via POP or IMAP. Suppressing the read receipt works for both internal and external senders.
[PS] C:\> Set-CASMailbox ServiceDesk -PopUseProtocolDefaults $False
-ImapUseProtocolDefaults $False

[PS] C:\> Set-CASMailbox ServiceDesk -ImapSuppressReadReceipt $True
-PopSuppressReadReceipt $True

Managing the Focused Inbox


Microsoft’s introduced the “Clutter” mechanism in 2015 to help users avoid the need to waste time on
processing unimportant messages. A product of the machine learning work done by Microsoft Research,
Clutter attempts to find messages that are unimportant to a mailbox owner and moves them out of the Inbox
into the Clutter folder. The idea is that users then see what is important in front of them rather than having to
deal with an overflowing Inbox cluttered up with unimportant notifications, update messages, marketing
bulletins, and so on. Over time, users train Clutter to become more precise by moving items into the Clutter
folder to show that they are unimportant. Conversely, they can mark messages that end up in the Clutter
folder as important by moving them back to the Inbox or another folder.
Time and organizational politics have a habit of changing strategy. Microsoft’s acquisition of Acompli in 2014
created the opportunity to explore a different approach because the Outlook for iOS and Android clients
supported a feature called the “focused inbox” that was popular with their users. Much the same idea existed:
to filter items that did not require immediate user attention to a place where the user could process those
messages when they had some time. The mechanism used to filter messages was simpler than Clutter, but it
delivered a great user experience that worked well, especially on mobile devices.

Client Updates
Microsoft has deployed two different approaches for intelligent email filtering for Exchange Online. The client-
side approach implemented in the Focused Inbox delivers a good user experience. The server-side approach
implemented in Clutter is potentially more powerful and flexible in terms of machine learning and extensibility
and is also easier to administer centrally. However, Clutter never achieved a high degree of acceptance with
users. As part of the introduction of Focused Inbox, Microsoft rolled out updates for Outlook clients. The
changeover will complete when everyone uses a version of Outlook that supports the Focused Inbox.
Microsoft will then remove Clutter from Office 365 in January 2020.
Outlook was the last major client to support the Focused Inbox, largely because this client has the most
complex of all user interfaces. Support became available in Office ProPlus from March 2017. Only the click-to-
run version of Outlook in Office Pro Plus supports the Focused Inbox: the MSI version of Outlook 2016 does
not. However, Microsoft says that the MSI version of Outlook 2019 will include the necessary code.
Because the protocol does not support folder views, ActiveSync (EAS) clients do not support the Focused
Inbox. The EAS architecture is based on folder storage and retrieves and displays information on that basis.
Many third-party mobile device vendors such as Apple and Samsung license and use EAS to allow their email
clients to access Exchange mailboxes. Even if Microsoft released an updated version of the protocol to
support folder views, no guarantee exists that any of the mobile email client vendors would ever implement
the feature in their clients.

How Focused Inbox marks important items


The basic idea embodied in the Focused Inbox is that it separates Inbox items into two views. Items deemed
to be important are shown in the Focused view while all other items are shown in the Other view, where they
can be left until the user has some time to process them. Behind the scenes, Exchange Online uses a MAPI
property (tag 0x1213003) to indicate the view to which an item belongs. That property, which is visible with
the MFCMAPI utility, is stamped on items when they are delivered to the Inbox and processed by algorithms
that assess the characteristics of messages to make the decision as to which view an item belongs. Figure 7-18
illustrates how the Focused Inbox tag for an item in the Inbox appears when viewed through the MFCMAPI
utility. In this case, the value of tag 0x1213003 is 1 (one), meaning that the item belongs to the Other view. If
the tag value is 0 (zero), the item belongs to the Focused view. If you move an item from one view to the
other, its tag is updated with the appropriate value so that all clients display the item in the right place.
Because the item property governing the Inbox view to which an item belongs can be read by the majority of
the client protocols supported by Exchange (MAPI, EWS, and Microsoft Graph, but not IMAP4 or POP3), clients
can construct their own versions of the views as deemed appropriate for the hardware form factor and client
user interface in use. The property is reset if an item is moved from one view to the other. Because any client
can move an item from one folder to another, it is possible to use any client based on a supported protocol to
train the algorithms that make the decision about how to categorize new mail.

Figure 7-18: Using MFCMAPI to view the Focused Inbox tag for a mailbox item

Rules have priority


Inbox rules have been supported by Exchange for a very long time. Some users are devoted to using rules to
organize their mailbox and have built their own set of rules to process inbound email. Because they are
created by a user and are assumed to be an absolute directive as to how certain messages are processed,
rules have priority and are executed against new email by Exchange Online first. New items are then subject to
Focused Inbox classification.

Acquiring intelligence
The Focused Inbox gains knowledge about the importance someone assigns to different messages by
gradually building up a training model based on how the mailbox owner deals with messages. The model is
based on over 40 distinct characteristics of messages, including the sender, the recipients, the subject, priority,
and so on. Over time, the characteristics of the messages left by the user in the Focused view create a more
reliable model against which new messages can be assessed. On the other hand, messages moved by the user
to the Other view give a strong signal that these messages are relatively unimportant.
To ensure that the Focused Inbox works even for new mailboxes, a basic set of rules is applied to messages
from the start. For instance, anything that looks like a notification message is directed to the Other view while
mail from someone’s boss (using a lookup against Azure Active Directory) is considered important and
remains in the Focused Inbox.
Outlook clients support other mail systems like Gmail and Outlook.com in addition to Exchange Online and
Exchange on-premises servers. The same model is used for all servers and is implemented within Office 365
using the architecture deployed by Microsoft to support background processing for mobile clients. Microsoft
plans to make the Focused Inbox available to users of on-premises Exchange and non-Microsoft email servers
but they have not yet announced when this will be possible.
If you used Clutter, the transition to the Focused Inbox is easy. All the work you did in the past to train Clutter
is carried over to the Focused Inbox. Items in the Clutter folder stay there until you remove them. Once you
begin using the Focused Inbox, the Clutter folder loses its status as a system folder, which means that you can
remove all the items out of the folder and then remove it. In fact, if the Clutter folder is empty when a mailbox
switches to the Focused Inbox, the folder is removed automatically.
If a message arrives in the Focused view that should not be there, the user can flag the message as
unimportant by using the “Move to Other inbox” option in the context-sensitive menu exposed by a right
click. It is also possible to rescue an important message that ends up in the other view and move it back to the
Focused view (Figure 7-19). In either case, you can either move the selected conversation or all messages from
the sender (Always move). Unlike Clutter, the Focused Inbox does not send notifications to users telling them
about messages that Clutter recently categorized as unimportant and moved to its folder. The notifications
are unnecessary because the two views are always available in the Inbox.

Figure 7-19: Marking an unimportant message to move to the Other view

Focused Inbox Administration


Tenants can manage the Focused Inbox feature both at the organization level and for individual mailboxes. You
enable or disable the feature for the tenant by updating the Exchange Online organization configuration. The
default state is that the Focused Inbox is enabled, which can be checked by running this command:
[PS] C:\> Get-OrganizationConfig | Format-Table FocusedInbox*

FocusedInboxOn FocusedInboxOnLastUpdateTime
-------------- ----------------------------
               1/1/0001 12:00:00 AM

The FocusedInboxOn setting indicates the overall state of the feature. As you’d expect, this is True when
enabled or False when disabled. If the value is not set, the mailbox-specific settings take precedence (see
below). FocusedInboxOnLastUpdateTime is a timestamp recording when an administrator last updated the
FocusedInboxOn setting. It’s quite normal to see a blank value for FocusedInboxOn and a rather strange date
value for the last update timestamp as shown in the example. This is because Microsoft doesn’t explicitly
enable the Focused Inbox for any tenant and a blank value means that it’s enabled by default. In fact, the
tenant configuration setting really exists to allow a tenant to block the Focused Inbox if required. To disable
the Focused Inbox for a tenant, use the following command:
[PS] C:\> Set-OrganizationConfig -FocusedInboxOn $False
To enable the feature again, we set the property back to $True:
[PS] C:\> Set-OrganizationConfig -FocusedInboxOn $True

If you examine the organizational configuration after manipulating the Focused Inbox organizational setting,
you’ll see that the date and timestamp is updated.
[PS] C:\> Get-OrganizationConfig | Format-Table FocusedInbox*

FocusedInboxOn FocusedInboxOnLastUpdateTime
-------------- ----------------------------
True 10/11/2016 11:15:06 AM

After the feature is enabled for a tenant, all new mailboxes are automatically enabled to use the Focused
Inbox. The settings are configured when a new user first logs onto their mailbox and learning commences as
soon as messages start flowing into the mailbox. The learning model will gradually improve over time and
refine the initial basic classification as more messages are received. Basic classification means that items such
as notification messages will end up in the Other view while personal messages are more likely to be placed in
the Focused view. You can control the Focused Inbox for individual mailboxes by running the Set-
FocusedInbox cmdlet. For example, here is the command to disable the feature for a mailbox:
[PS] C:\> Set-FocusedInbox -Identity TRedmond -FocusedInboxOn $False

When the cmdlet executes, it updates the FocusedInboxOn setting and stamps the current date and
time into the FocusedInboxOnLastUpdateTime property for the mailbox. We can check the status of mailboxes by
running a command like this:
[PS] C:\> Get-Mailbox -RecipientTypeDetails UserMailbox | Get-FocusedInbox | Format-Table
MailboxIdentity, FocusedInboxOn, FocusedInboxOnLastUpdateTime

To force a switchover for all user mailboxes in a tenant, we can construct some code to find all user mailboxes
and then use the Set-FocusedInbox cmdlet to enable the Focused Inbox and the Set-Clutter cmdlet to disable
Clutter. A slight downside exists in this approach in that disabling Clutter is a signal to the learning algorithms
that the user is unhappy at the way that Clutter functions, which might in turn change the way that the
learning model behaves in the future. However, the experience is that no great harm follows from taking this
approach.
Because the cmdlets used to manipulate the settings are slow, this code will take some time to complete. As
explained above, the tenant setting controls whether new mailboxes are enabled for Focused Inbox, so you
don’t have to worry about maintaining settings for those mailboxes.
[PS] C:\> $Mbx = Get-Mailbox -RecipientTypeDetails UserMailbox -ResultSize Unlimited
$Mbx | Foreach {
    $DN = $_.DistinguishedName
    $Status = (Get-FocusedInbox -Identity $DN).FocusedInboxOn
    If ($Status -eq $False -or $Status -eq $Null) {
        Write-Host "Processing Focused Inbox for" $_.DisplayName "current status" $Status
        Set-FocusedInbox -Identity $DN -FocusedInboxOn $True
        Set-Clutter -Identity $DN -Enable $False
    }
}

Aside from a desire to switch all mailboxes at one time, disabling Clutter for a mailbox should not be done
without good reason. The preferred way for a user to switch from Clutter to the Focused Inbox is to wait for a
client to offer the switch and then accept the option. If someone prefers to continue using Clutter, you can
disable the Focused Inbox for their mailbox and make sure that Clutter is enabled to revert to the older
feature. Clutter will continue working until Microsoft finally deprecates it (expected in about a year or
so), at which time Clutter will be removed from Office 365.

Sorting out the various Focused Inbox settings: The mixture of organization and mailbox settings for
Focused Inbox might be confusing, so here are the rules:
• If the Focused Inbox setting does not exist for a mailbox, the tenant configuration applies.
• If an explicit value for the Focused Inbox is not set at either the mailbox or tenant level, the Focused
Inbox is enabled.
• If the feature is blocked for the tenant, then the mailbox setting is ignored.
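A quick way to check both levels before troubleshooting a specific user is to run a pair of commands like
these (a sketch using the cmdlets described above):
[PS] C:\> Get-OrganizationConfig | Format-Table FocusedInboxOn, FocusedInboxOnLastUpdateTime
[PS] C:\> Get-FocusedInbox -Identity TRedmond | Format-Table FocusedInboxOn, FocusedInboxOnLastUpdateTime
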

Stamping Messages as Important


As with Clutter, an administrator can create conditions to mark messages that should always be deemed to be
important, no matter what the user thinks. For instance, messages sent by the CEO or the HR department are
always important. Both Clutter and the Focused Inbox use transport rules to mark important messages with an
x-header that can be used to recognize their status. Here’s an example that stamps any message with “CEO
Communication” in the message subject with the x-header to force Focused Inbox to present the message in
the Focused view.
[PS] C:\> New-TransportRule -Name "Focused Inbox Bypass" -SubjectContainsWords "CEO Communication"
-SetHeaderName "X-MS-Exchange-Organization-BypassFocusedInbox" -SetHeaderValue "true"

To check that the rule works, create a message with the bypass key words in the subject and send it to a user.
Use a tool like MessageHeaderAnalyzer (Figure 7-20) to check the received copy in the user’s mailbox and
look for the X-MS-Exchange-Organization-BypassFocusedInbox: header, which should contain a value “true.”
See this article for more information on viewing message headers.

Figure 7-20: Examining a message header to check that the bypass rule is working

To ensure backwards compatibility with Clutter, any message stamped with the old X-MS-Exchange-
Organization-BypassClutter header is marked as important.

Some Focused Inbox Issues


The deployment of any new technology invariably creates several issues that need to be understood so that
they can be handled if encountered. Focused Inbox is no different, largely because of the change from folder-
based classification to view-based classification. Swapping out a folder, as used by Clutter, for a view seems
quite a small step, but the change has some consequences that might not be expected. Here are some issues
that flow from the change:
• Until updated mobile clients are available and deployed, ActiveSync-based clients will download all
messages from the Inbox, including those deemed to be unimportant, which might lead to additional
work to process those messages as well as extra synchronization activity (only really an issue across
slow networks).
• The Clutter folder is a system folder and a retention tag can be applied to the folder that is different
to the retention tag applied to the Inbox. For instance, you might remove items from the Inbox after
120 days but use a 30-day retention period for Clutter. That’s not possible now because retention
tags can only be applied to folders, not views within folders.
• Outlook rules created by mailbox owners do not take the Focused Inbox views into account. You
can’t, for instance, create a rule to move specific messages into the Focused view.
• The PowerShell cmdlets that run against Exchange Online mailboxes are built to expect folders rather
than views. This should not cause too many problems, but it is something to remember. For example,
if you run Get-MailboxFolderStatistics against the Inbox, the data returned is for the Inbox and is not
broken down by view.
• Searches (eDiscovery, those performed with the Search-Mailbox cmdlet, or Office 365 content
searches performed through the Security and Compliance Center) do not tell you whether found
items belong to a specific view. If you want to find out this level of detail about a found item, you
need to examine its properties with the MFCMAPI utility.

Reporting Exchange Administrative Audit Data with PowerShell

Exchange sends mailbox and administrative audit data for ingestion into the Office 365 audit log, and that’s
where you should search to find audit data. However, in some cases, you might want to use the older
methods that come from Exchange on-premises to concentrate on the raw audit data generated by Exchange
Online – or use scripts developed for on-premises use to process cloud data. In either case, you end up using
the Search-AdminAuditLog and New-AdminAuditLogSearch cmdlets.
The Search-AdminAuditLog cmdlet can be used to access and extract the last 90 days of audit data. This is the
easiest and quickest way to find out what might have happened in the recent past. If you want to search over
an extensive period, the New-AdminAuditLogSearch cmdlet searches administration logs in the background.
The “export the admin audit log” report in EAC uses this approach. The result is an email holding an XML
attachment with the discovered audit entries delivered to specified recipients. The message will not arrive
immediately because the searches are queued to run when demand on the infrastructure is low, but you can expect to
receive the message containing the data within 24 hours. An example command to create a report detailing
all new mailboxes, distribution groups, and dynamic distribution groups created in the last 14 days is shown
below. The users specified to receive the emailed report do not have to belong to the Office 365 tenant.

[PS] C:\> New-AdminAuditLogSearch –StatusMailRecipients Jan@contoso.com, Jim@contoso.com
–Cmdlets New-Mailbox, New-DistributionGroup, New-DynamicDistributionGroup
–StartDate (Get-Date).AddDays(-14) –EndDate (Get-Date)

The audit report that you receive from a background search is in XML format and audit events appear as
shown below. The example reports the creation of a room mailbox. Although the information is easily
recognizable in terms of what it conveys, some work is necessary to transform the XML content into
something more digestible for those who do not speak fluent XML.
<Event OriginatingServer="AM3PR04MB0775 (15.01.0049.002)" ExternalAccess="false"
ObjectModified="EURPR04A002.prod.outlook.com/Microsoft Exchange Hosted
Organizations/Domain.onmicrosoft.com/Sonoma Conference Room" Succeeded="true" RunDate="2015-01-
12T09:47:49+00:00" Cmdlet="New-Mailbox" Caller="EURPR04A002.prod.outlook.com/Microsoft Exchange
Hosted Organizations/Domain.onmicrosoft.com/TRedmond">
<CmdletParameters>
<Parameter Value="SonomaConferenceRoom@Domain.onmicrosoft.com" Name="WindowsLiveID"/>
<Parameter Value="<*******>" Name="Password"/>
<Parameter Value="Sonoma Conference Room" Name="Name"/>
Parameter Value="True" Name="Room"/>
</CmdletParameters>
</Event>
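One way to make the XML more digestible is to load the attachment into PowerShell and select the
interesting attributes from each event. This is a minimal sketch; the file name is an assumption and you
should check the structure of the attachment you receive before relying on it:
[PS] C:\> # Load the saved attachment and list the key attributes of each audit event
[PS] C:\> [xml]$AuditData = Get-Content -Path .\SearchResult.xml -Raw
[PS] C:\> $AuditData.SelectNodes("//Event") |
Select-Object RunDate, Caller, Cmdlet, ObjectModified, Succeeded | Format-Table -AutoSize
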

Common Audit Search Operations


Here are some common reasons why the administrator audit log might be usefully interrogated interactively.
To audit actions performed by one or more users. Each of the users you want to look for is identified by
alias, email address, display name, or distinguished name. Separate the names with commas.
[PS] C:\> Search-AdminAuditLog –UserIds Administrator, "Tony.Redmond@contoso.com" | Out-GridView

To audit specific actions. In this example, you want to know who has recently modified mailboxes or
distribution groups. To find the audit records, you specify the cmdlets that are used for these purposes. The
ObjectModified property returned in each audit log tells you the name of the mailbox or group that was
operated on.
[PS] C:\> Search-AdminAuditLog –Cmdlets Set-Mailbox, Set-DistributionGroup | Out-GridView

To audit within a date range. In this case, you want to find out who has been creating new mailboxes over a
specified period. It is important to check the Succeeded property in the output because it is possible that some
attempts to run the New-Mailbox cmdlet were unsuccessful. Figure 7-21 shows the kind of information you
can expect to see in the output grid generated by the Out-GridView cmdlet (the easiest way to go through
audit entries). The names of the new mailboxes are clearly visible.
[PS] C:\> Search-AdminAuditLog –StartDate "01-Jan-2018 00:00"
–EndDate "15-Jun-2018 23:59" –Cmdlets New-Mailbox | Out-GridView

Figure 7-21: The creation of new mailboxes as reported in Exchange Online administrative audit records
To analyze what the Office 365 Administrators are doing with your tenant. The Office 365 administrators
do a lot of maintenance behind the scenes that you might not be aware of. You can gain some insight into
their activities by adding –ExternalAccess $True to a Search-AdminAuditLog command. To see all the
commands executed by Office 365 administrators over a month, use a command like this:
[PS] C:\> Search-AdminAuditLog –StartDate "01-Mar-2015" –EndDate "31-Mar-2015"
–ExternalAccess $True | Out-GridView

A lot of the output is mundane, but some of it is very interesting!


To analyze the cmdlets that administrators are running. This might not seem very useful, but it can return
some interesting and unexpected results. This random example from a tenant revealed that a lot of mailbox
permissions were being set, which prompts the question "why"? Remember that only cmdlets that add,
change, or remove data will feature in this list as they are the only cmdlets for which Exchange Online
generates audit entries.
[PS] C:\> Search-AdminAuditLog | Sort CmdletName | Group CmdletName |
Format-Table Count, Name –AutoSize

Count Name
----- ----
10 Add-MailboxPermission
4 Add-RecipientPermission
3 Enable-AddressListPaging
3 Install-AdminAuditLogConfig
3 Install-DataClassificationConfig
3 Install-DefaultSharingPolicy
3 Install-ResourceConfig
1 New-DistributionGroup
3 New-ExchangeAssistanceConfig
5 New-Mailbox
3 New-OutlookProtectionRule
4 Remove-Mailbox

A useful script to help interrogate and interpret the administration audit log is available here.

Reporting Mailbox Audit Data with PowerShell

The Search-MailboxAuditLog cmdlet creates ad-hoc reports from mailbox audit data. You can search across all
mailboxes (a slow operation) or specify the mailboxes you want to search. For instance, this command
searches all mailboxes to look for audit entries recorded when someone used delegate access to a mailbox to
send a message using the SendAs permission:
[PS] C:\> Search-MailboxAuditLog -ShowDetails -StartDate "5-Feb-2018" -EndDate "25-Feb-2018"
-LogonTypes Delegate -Operations SendAs | Format-Table LastAccessed, MailboxOwnerUPN,
LogonUserDisplayName, ItemSubject

As an example of a more focused search, here we search for audit entries in the Customer Services mailbox for
delete actions from the Inbox performed by a delegate. As with all searches, the more specific you make the
criteria, the better, so specifying a date range and the operations you are interested in knowing about is
always a good idea. A busy shared mailbox might accumulate thousands of audit entries and examining them
all can quickly become tiresome.
[PS] C:\> Search-MailboxAuditLog –Identity "Customer Services" –ShowDetails
–StartDate "1-Jan-2017" –EndDate "28-Feb-2017" –LogonTypes Delegate –ResultSize 1000 |
Where-Object {$_.Operation –eq "SoftDelete" –and $_.FolderPathName –Like "\Inbox"} |
Format-Table LastAccessed, Operation, LogonUserDisplayName, FolderPathName, SourceItemSubjectsList –AutoSize

LastAccessed        Operation  LogonUserDisplayName FolderPathName SourceItemSubjectsList
------------        ---------  -------------------- -------------- ----------------------
02/01/2017 13:57:51 SoftDelete Tony Redmond \Inbox All done and dusted
02/01/2017 13:32:39 SoftDelete Tony Redmond \Inbox A funding issue
02/01/2017 13:31:32 SoftDelete Tony Redmond \Inbox Complaint
02/01/2017 18:37:18 SoftDelete Tony Redmond \Inbox Critical problem

This output certainly helps to understand what might have happened within a mailbox. An alternate approach
is to pipe the search results to the Out-GridView cmdlet to create a grid containing information about all the
objects that match the search criteria. You can then drill down into the results set as you wish, including the
ability to sort items via the different headings.
For some reason, Out-GridView does not display the LastAccessed property (the date and time when an
operation occurred), the ItemSubject property (the subject line of a message), or the ClientInfoString property
(which can reveal information such as the type of mobile device used with an ActiveSync transaction). Even
with these restrictions, it is still a very useful tool.
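One workaround is to select the properties you care about into custom objects before piping the data to
Out-GridView, which usually persuades the grid to show them. A minimal sketch using the same delegate
search as above:
[PS] C:\> Search-MailboxAuditLog -Identity "Customer Services" -ShowDetails -LogonTypes Delegate |
  Select-Object LastAccessed, Operation, LogonUserDisplayName, ItemSubject, ClientInfoString |
  Out-GridView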
Exchange Online also supports the New-MailboxAuditLogSearch cmdlet. Instead of interrogating audit entries
interactively, this cmdlet performs a batch search and returns the information found to a nominated email
address. Here is an example of how to run a batch search to examine delegate access audit entries for the
Customer Services mailbox.
[PS] C:\> New-MailboxAuditLogSearch –Name "Review of Delegate Access"
–LogonTypes Delegate –Mailboxes "Customer Services" –StartDate "1-Dec-2016"
–EndDate "31-Dec-2016" –StatusMailRecipients "Tony.Redmond@Office365ITPros.com"

The exact time when the email holding the result returns depends on the current workload within Office 365.
As you might expect, Office 365 assigns batch jobs a relatively low priority and the results might not arrive for
quite some time. When the email arrives, you will find an XML format attachment with the search results. The
data are complete, but some help is necessary to make sense of it.
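While you wait, you can confirm that the batch search exists and review the values used to create it with the
Get-MailboxAuditLogSearch cmdlet. A quick sketch (a wildcard is used for the date fields because the exact
property names can vary):
[PS] C:\> Get-MailboxAuditLogSearch | Format-List Name, *Date*, StatusMailRecipients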

You might have to wait: New mailbox audit events might not be accessible to searches created with the New-
MailboxAuditLogSearch cmdlet. For several reasons, including caching for performance, it takes a
little time before audit events show up. What is frustrating is that the audit records exist in the user
mailboxes. In any case, to ensure accuracy of audit information, it is best to wait for a couple of hours after
the period you are concerned with elapses, just to give Exchange Online a little time to expose all the
necessary information. The same delays occur for on-premises Exchange servers.
A useful script by Paul Cunningham (available in the TechNet gallery) interrogates the mailbox audit log for a
selected mailbox and mails the results back in the form of a nicely formatted report. Apart from the value that
the script delivers, its code is an excellent example of how to extract information from Exchange Online and
format it so that the data is much more consumable for non-technologists, such as a legal investigator who
might be interested in this kind of information.
Bypass auditing: Sometimes you have service accounts or other accounts that access the mailboxes of other
users to retrieve information. This is less likely within Office 365 than it is in an on-premises situation because
fewer add-on products, which often use this approach, are supported. If you run into the situation where it
does not make sense to accumulate all the audit entries recording mailbox access by a service account, you
can bypass auditing for that account with the Set-MailboxAuditBypassAssociation cmdlet. For example:
[PS] C:\> Set-MailboxAuditBypassAssociation –Identity "ServiceAccount"
–AuditBypassEnabled $True

To discover whether any mailboxes are enabled for audit bypass, use the command:
[PS] C:\> Get-MailboxAuditBypassAssociation | Where {$_.AuditBypassEnabled –eq $True} |
Format-Table Name

Exchange DLP policies


An Exchange DLP policy is essentially a container holding a set of specialized transport rules to check
messages as they pass through the transport pipeline. Transport rules incorporate DLP policy checking
through the condition "if the message contains sensitive data," which tells the rule to check for a certain type
of sensitive data together with what to do if the rule encounters sensitive data.
Using transport rules means that tenants have assurance that DLP checking will process any message sent
from any client for compliance with company policy and block it whenever a rule finds a violation. In
addition, details of messages with sensitive content can be captured in incident reports and analyzed later to
ensure that the policy is working and that users are not including an inappropriate amount of sensitive
information in email. Other transport rule actions such as copying messages to a user’s manager can also be
used to increase awareness of the compliance policy in both employees and management. The mixture of soft
education through visual clues and hard enforcement through transport rules is effective at telling users why
blocks are in place and ensuring that sensitive information does not leave the organization.
To detect problems in content, Exchange uses a deep content analysis engine that can check for sensitive data
types in message bodies and attachments (to some 30 levels deep). The engine runs within the transport
system and a modified version is available inside Outlook desktop. The content analysis occurs as the user
enters text into the Outlook compose message window and is enough to detect whether a problem might
exist. A much deeper content analysis occurs when messages are processed by transport rules. The analysis
combines several analytical methods to find content including keyword searches, pattern matches, searches
against dictionaries, and specific tests for certain items such as credit card numbers.
As we will explore in the example described later, you can implement a DLP policy in a single rule to check for
a single sensitive data type. However, it is more common to find that organizations deploy more complicated
DLP policies composed of multiple rules, each of which checks for a specific sensitive data type under
different conditions. For example, one rule might capture audit data for messages that have credit card
information sent to internal recipients while a second rule blocks messages having the same credit card
information when messages go to external recipients. The logic here is that it might
be OK to circulate sensitive information internally but not so good to allow it to go outside. Another common
rule found in DLP policies is one that blocks messages with unreadable attachments. Exchange Online can
decipher most file formats in widespread use, so if a strange format or password protected attachment is
presented it could be a sign that someone is trying to hide sensitive information. DLP is capable of checking
attachments many levels deep in a message and can detect attempts to hide information contained in a zip
file in a zip file in a zip file and so on.

Designing an Exchange DLP policy


When you create a new ETR-based DLP policy, you can choose a template and Exchange Online will populate
the new policy from the settings in that template. You can alter the policy afterward by removing unwanted
rules or adding new rules as your needs dictate.
The steps in defining a new DLP policy are:
1. Decide the types of sensitive data to protect.
2. Decide the set of rules needed to protect sensitive data:
a. Conditions (recipients inside or outside the organization, what kind of sensitive data is
involved).
b. Actions (block or pass, allow override, generate incident report).
c. Exceptions (are all users subject to the same policy?).
3. Create a new DLP policy:
a. Based on a Microsoft template.
b. Created from scratch.
4. Test and refine the DLP rules.
5. Ensure that the DLP rules do not interfere with other transport rules.
6. Enforce the DLP policy.
7. Audit the DLP policy by checking incident reports and DLP reports.
The starting point for defining any policy is to understand what the policy is designed to do. DLP policies
protect the company against inadvertent disclosure of sensitive data, so the first step is to decide what types
of sensitive data need to be protected. You then must think about the conditions that you want to check for,
whether any exceptions exist, and what action to take if a policy violation is detected. Because the
implementation of a DLP policy might affect users, it is sensible to get buy-in from the HR and legal
departments, and to inform users about the new policy when implementation time comes around.

Blocking Actions
One question that you need to answer is what should happen when sensitive data is detected in messages.
The set of available actions that can be specified in a DLP rule is described in Table 7-6. The NotifySender
value is used with PowerShell to create a new DLP rule (New-TransportRule) or to amend the action specified
in an existing rule (Set-TransportRule).
• Notify (NotifyOnly): Notify the user that a problem exists because sensitive data has been found in a
message, but allow the message to be delivered as normal.
• Reject and block (RejectMessage): Transport blocks the message and notifies the user with a non-delivery
notification.
• Reject unless false positive (RejectUnlessFalsePositiveOverride): Transport blocks the message unless the
user marks it as a false positive.
• Reject but allow override (RejectUnlessSilentOverride): Transport blocks the message unless the user
overrides the policy and decides to send the message even though some sensitive data is detected.
• Reject but allow override with justification (RejectUnlessExplicitOverride): Transport blocks the message
unless the user overrides the policy and gives a business justification that can be logged for auditing.
Table 7-6: Available DLP actions (the value in parentheses is the NotifySender value used with New-TransportRule or Set-TransportRule)
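As a quick illustration of amending the action for an existing DLP rule with PowerShell (the rule name here is
hypothetical), this command relaxes a hard block to an override with business justification:
[PS] C:\> Set-TransportRule -Identity "Contoso DLP: Credit Card Data"
-NotifySender RejectUnlessExplicitOverride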

Policy Modes
Exchange Online supports three modes for DLP policies. The modes are:
• Test without policy tips: This is the default mode when a new policy is created. It allows the
transport system to check messages for the sensitive data types dictated by the policy and to track
the actions that would have been taken if the policy was enforced. This data can be viewed through
DLP reports.
• Test with policy tips: This mode exposes policy tips to Outlook clients to allow you to know whether
the policy tips have the desired effect of informing users about the need to protect sensitive data.
Actions associated with the rules are not applied. Some functionality does not work in test mode. For
example, you cannot override a rule by giving a business justification.
• Enforce: The actions in DLP rules are active and enforced as messages pass through the transport
system.
If you change the mode for a policy, all the rules associated with the policy are updated to reflect the new
mode. However, you can selectively set a different mode for a rule by editing its properties. The processing of
an individual rule can also be limited by a start and end date.
DLP policies also have an enable state. When enabled, the policy is in force according to its mode as
described above. When disabled, the rules associated with the policy are disabled within the transport
pipeline.
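When you manage modes through PowerShell, the Set-DlpPolicy cmdlet uses the values Audit (test without
policy tips), AuditAndNotify (test with policy tips), and Enforce. A sketch using a hypothetical policy name:
[PS] C:\> Set-DlpPolicy -Identity "Contoso Financial Data" -Mode AuditAndNotify

[PS] C:\> Get-DlpPolicy | Format-Table Name, State, Mode -AutoSize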

Starting with a DLP Template


Although you can create a DLP policy from scratch to reflect the customized needs of a company, it is often
easier to start with a template, especially when your policy is going to be based on the need to satisfy a
certain set of regulations. Each template is designed to meet the requirements set out in well-known
scenarios describing the protection of sensitive data. For example, the Australia Financial Data template “Helps
detect the presence of information commonly considered to be financial data in Australia, including credit cards,
and SWIFT codes,” while the U.S. Health Insurance Act (HIPAA) template “Helps detect the presence of
information subject to United States Health Insurance Portability and Accountability Act (HIPAA), including data
like social security numbers and health information.” Each template contains a set of DLP-enabled transport
rules to check for the data types covered by the chosen national regulations. Third parties can create suitable
DLP templates designed for use in certain industries or circumstances. These templates can then be given to
customers in an XML format and imported to create a new DLP policy.
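To see the templates available in a tenant and create a policy from one of them, you can use the
Get-DlpPolicyTemplate and New-DlpPolicy cmdlets. A sketch (the policy name is an example; the policy starts in
audit mode so that it does not block anything while being tested):
[PS] C:\> Get-DlpPolicyTemplate | Format-Table Name

[PS] C:\> New-DlpPolicy -Name "Patriot Act Checks" -Template "U.S. Patriot Act" -Mode Audit -State Enabled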
To illustrate the rules that are created when a template is used to instantiate a DLP policy, Table 7-7 describes
the five rules created by using the standard “U.S. Patriot Act” template.
• Rule 1: If the message has the word “Override” in the subject, set the special message header X-Ms-
Exchange-Organization-Dlp-SenderOverrideJustification to “TransportRule override” to make the override
known to rules that process the message later in the pipeline.
• Rule 2: If the message is sent to an external recipient and has a low (fewer than 4) count of these sensitive
data types: credit card number, U.S. Bank Account Number, U.S. Individual Taxpayer Identification Number
(ITIN), or U.S. Social Security Number (SSN), set the audit severity level to “Medium” and notify the user that
the message violates a DLP policy, but allow the message to be sent.
• Rule 3: If the message is sent to an external recipient and has a high (4 or more) count of the same sensitive
data types, set the audit severity level to “High” and notify the user that the message cannot be sent because
of the sensitive data in its content. Allow the user to give an override; if an override is not provided, reject the
message with DSN status code 5.7.1.
• Rule 4: If the message has an attachment that cannot be fully processed for some reason, set the audit
severity level to “High” but allow the message to be delivered.
• Rule 5: If the message has an attachment that cannot be opened, set the audit severity level to “Medium”
but allow the message to be delivered.
Table 7-7: Rules created by the U.S. Patriot Act DLP template
You might decide that the rules created from a template are insufficient or unsuitable for your DLP policy. If
so, you can edit rules to add, remove, or change conditions and actions, remove rules, and add your own rules
to the base set inherited from the template.
DLP rules are executed alongside other transport rules as messages pass through the transport pipeline.
Although an organization can use multiple DLP policies, this can create confusion and the possibility that rules
might interfere with each other. For this reason, it is recommended to use a single DLP policy that
encompasses all the sensitive data types of concern to the organization and to make sure that the DLP rules
execute without causing problems for other transport rules.
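To review how DLP rules sit alongside other transport rules in the pipeline, list the rules in priority order
together with the DLP policy (if any) that owns them. The DlpPolicy property is blank for rules that are not part
of a DLP policy:
[PS] C:\> Get-TransportRule | Sort-Object Priority | Format-Table Priority, Name, State, DlpPolicy -AutoSize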

Creating a New DLP policy for Exchange


DLP policies for Exchange Online are created and managed through the compliance management section of
the EAC. Here you can create and edit DLP policies and the transport rules associated with those policies.
Underpinning the EAC is a set of Exchange PowerShell cmdlets such as New-DLPPolicy that you can use to
programmatically create and control DLP policies should you prefer. The normal *-TransportRule cmdlet set is
used to update the transport rules linked to DLP policies.
Before we get into the details of how to create comprehensive DLP policies, we can examine what happens
when a very simple policy consisting of just one rule is put in place. We can create this policy with two
PowerShell commands. The first creates the DLP policy; the second creates a new transport rule to check for
credit card data and associates the rule with the policy. The rule blocks any message found to have credit card
numbers unless the user gives an override reason. We enable the policy by setting its mode to “Enforce”. The
scope of the rule means that it only fires when messages have external recipients.
[PS] C:\> New-DLPPolicy -Name "Irish Personal Documents"
-Description "Protect against the transmission of Irish personal documents in email"
-State Enabled -Mode Enforce

[PS] C:\> New-TransportRule –Name "Irish Personal Documents: Credit Card Data"
–DLPPolicy "Irish Personal Documents" –Mode Enforce
–MessageContainsDataClassifications @{"Name" = "Credit Card Number"}
–SetAuditSeverity High –NotifySender RejectUnlessExplicitOverride
–SentToScope NotInOrganization

Soon after the new policy and rule are created, they become active within the transport pipeline. To explore
what happens then, we can use Outlook and OWA to create and send some messages holding some credit
card information.

Outlook and DLP


When Outlook clients initialize and connect to Exchange Online, they check for the presence of a DLP policy
for the organization. If found, Outlook downloads the policy details for use in the current session. Outlook is
also capable of operating offline, so to make sure that messages can be checked when offline, Outlook stores
DLP policy information in two local XML files in %USERPROFILE%\Appdata\Local\Microsoft\Outlook. The files
are:
• PolicyNudgeRules_guid.XML: a small file containing the text of customized policy tips used by the
rules in the policy.
• PolicyNudgeClassificationDefinitions_guid.XML: a much larger file holding the set of sensitive data
classifications including local language translations of the name of the sensitive data type. The extract
shown below is some of the international names for the French National Identity Card.
<Resource idRef="f741ac74-1bc0-4665-b69b-f0c7f927c0c4">
<Name langcode="am-et" default="false">የፈረንሳይ ብሔራዊ መለያ ካርድ (CNI)</Name>
<Name langcode="ar" default="false">‫( بطاقة الهوية في فرنسا‬CNI)</Name>
<Name langcode="ar-sa" default="false">‫( بطاقة الهوية في فرنسا‬CNI)</Name>
<Name langcode="bg-bg" default="false">Френска национална идентификационна карта (CNI)</Name>
<Name langcode="bn-bd" default="false">ফ্রান্স জাতীয় পরিচয়পত্র (CNI)</Name>
<Name langcode="bn-in" default="false">ফ্রান্স জাতীয় পরিচয়পত্র (CNI)</Name>
<Name langcode="ca-es" default="false">Targeta d'identificació nacional (CNI) de França</Name>
<Name langcode="cs-cz" default="false">Národní identifikační karta (Francie) (CNI)</Name>
<Name langcode="cy-gb" default="false">Cerdyn ID Cenedlaethol Ffrainc (CNI)</Name>
<Name langcode="da-dk" default="false">Nationalt fransk id-kort (CNI)</Name>
<Name langcode="de" default="false">Französischer Personalausweis (CNI)</Name>
<Name langcode="de-de" default="false">Französischer Personalausweis (CNI)</Name>
<Name langcode="el-gr" default="false">Εθνική ταυτότητα Γαλλίας (CNI)</Name>
<Name default="true" langcode="en-us">France National ID Card (CNI)</Name>

The guid part of these names is a unique identifier for the policy. Outlook refreshes these files every 24 hours.
You can force Outlook to refresh the DLP policy information by editing the system registry and removing the
HKCU\Software\Microsoft\Office\16.0\Outlook\PolicyNudges key. Substitute “15.0” for “16.0” if you use Outlook
2013.
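If you prefer PowerShell to the registry editor, a one-line sketch does the same job (close Outlook first; the key
is recreated the next time Outlook downloads the policy):
[PS] C:\> Remove-Item -Path "HKCU:\Software\Microsoft\Office\16.0\Outlook\PolicyNudges" -Recurse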
As the user types text into a message body, Outlook can scan the text for sensitive data types, including
custom sensitive data types created by document fingerprinting. When sensitive data is detected, Outlook
consults the policy to decide how to tell users that such data is present and what they should do about it. The
Policy Tip might display a message saying the email cannot be sent because of the presence of some sensitive
data, or the Policy Tip might just be advisory (e.g., "you shouldn't really be sending that kind of data"). It is
also possible to allow users to override a policy by asking them to give a justification for the override. That
justification can then be sent together with information about the message to a compliance manager, who
can decide on what action to take after they examine the full context of the message. For example, an HR
representative might need to send a message with a social security number from time to time. An override
received from an HR representative is OK, while an override from an administrator working in another
department who circulated some HR data might not be. This is not an approval process that eventually leads
to the release of a message; it is simply noting that a policy violation was detected and overridden by
someone.

Figure 7-22 shows what happens when some credit card information of the type covered by the one-rule DLP
policy we created is found in an Outlook message. The cheery policy tip shown above the recipients is a
customized policy tip. We will discuss how you can customize the standard policy tips shortly. Because our
rule allows users to override, that choice is available. Clicking the override link in the policy tip invokes the
dialog shown in the center. You can see that the user has input what they believe to be a valid business
justification to override the restriction on sending credit card data through email. Click Override to complete
the process. Outlook then displays a different policy tip to inform the user that even though they have opted
to override and send the message, that decision might be later reviewed by the organization. In other words,
an audit report (called a DLP incident report) is created and might be reviewed by a compliance manager.
The user can choose to go ahead and send the message and it will be delivered.

Figure 7-22: How Outlook displays DLP prompts

OWA and Exchange DLP Policies


Because OWA is an online client, it does not keep a cache of DLP policy information like Outlook does. In
addition, OWA does not include the deep content analysis capability incorporated into Outlook. Exchange
Online performs DLP checking each time OWA saves a draft of a message (roughly every minute) or when the
user adds a new recipient to the message. It is entirely possible to send a message with sensitive data if the
user composes and sends the content before Exchange Online has an opportunity to check the message.
However, in this case, the DLP validation performed in transport rules will intercept the message and take
whatever action is defined by policy.
Figure 7-23 is a composite image showing how OWA displays DLP policy tips and allows a user to override a
DLP check. In this instance, Exchange Online has detected that the message has some credit card information
and the recipients include an external addressee (our travel agent). The external recipient is the problem
because the DLP policy does not allow the inclusion of credit card information in messages to external people.
The original policy tip informed the user that the message cannot be sent because it appears to hold sensitive
data. More information, shown at the top of the message header, is revealed when the user clicks Show
details in the policy tip to discover that the travel agent is the problem. Clicking Learn more will tell the user
what kind of sensitive data OWA has detected, even if this is clear from the content. Note that OWA does not
use customized text for policy tips.

Figure 7-23: How OWA displays DLP policy prompts


Clicking Override exposes the input form shown in Figure 7-23 to allow the user to input a justification for the
override. The options are to give a free-form business justification or to say that the message is a false
positive and does not hold any sensitive information. The rule allows the user to override in this manner and
once an override reason is provided, the message will be delivered as normal. The override reason and
message details will be kept for auditing purposes.
Supporting mobile clients: Remember that DLP checking is implemented as a predicate for transport
rules and that all the other actions and options available in transport rules can be combined in the rules
you create for DLP. For instance, because mobile client interfaces do not support policy tips (the same is
true for the Outlook for Mac client), you could create an exception in the transport rule to allow mobile
clients to override DLP checking when needed by including a code word in the message subject. The same
override will work for all clients, including Outlook and OWA. This is the approach taken by the “allow
override rule” found in many of the rule sets created when a DLP policy is created from a template (see
rule 1 in Table 7-7). Including an exception is a quick and effective way to support mobile clients and to
avoid support calls that come in after messages are rejected and users call up to find out why this
happened. You can decide whether to communicate up-front to explain how users can override DLP
checking or to accept that the help desk will receive calls from users. The latter approach is often better
because it reduces the number of overrides and gives the chance to explain why DLP policies are in place
and why a message was bounced. Incident reports can be generated by the rule to allow overrides to be
checked and, if necessary, for further user education to occur.

Customizing Standard Policy Tips


By default, Outlook and OWA use a set of four policy tips when messages holding sensitive data
controlled by a DLP policy are detected. The policy tips are invariably polite and pertinent in terms of
informing users when they have erred and included some sensitive information in a message, but sometimes
you want to emphasize the point by using different language. You can do this by customizing the policy tip.
The four DLP policy tips are:
• Notify the sender: “This message appears to contain sensitive data. Make sure all recipients are
authorized to receive it.”
• Allow the sender to override: “This message can’t be sent because it appears to contain sensitive data.”
• Block the message: “This message can’t be sent because it appears to contain sensitive data.”
• Provide a link to compliance URL: Used to configure a URL to a web page that hopefully explains the
organization’s compliance policy and how sensitive data types should be handled in email. This link
shows up in OWA when the user clicks Learn More in response to a policy tip “View details about the
information that appears sensitive.”
Exchange Online provides local language translations for the default policy tips. You can see the default text
for a language with the Get-PolicyTipConfig cmdlet. For example, to show the original (excluding any custom
text) values for the English language (or locale), use the command:
[PS] C:\> Get-PolicyTipConfig –Original –Locale en

You cannot change the default text for policy tips, but you can input custom text for Exchange Online to use
instead. The EAC offers the choice to customize policy tips through an icon that resembles a document with
check marks and a superimposed gear. The navigation from this point is straightforward and leads you
through adding a new custom policy tip, selecting the language, and finally inputting the custom text. As can
be the case, it might be faster to do the job with PowerShell. Here is how to change the text shown when
some problematic data is detected in a message and to set up a URL for users to learn more about our
compliance policy. Note that Exchange Online is picky about the capitalization of “Url”.
[PS] C:\> New-PolicyTipConfig –Name en\NotifyOnly
–Value "Oh dear! We have a problem. Some of this text is sensitive data. What shall we do next?"

[PS] C:\> New-PolicyTipConfig Url –Value "http://www.Office365ITPros.com/Compliance.html"

Quite logically, you cannot have multiple custom text entries for a policy tip, so if custom text already exists
for a policy tip, we must use the Set-PolicyTipConfig cmdlet to overwrite it with new text.
[PS] C:\> Set-PolicyTipConfig –Identity en\NotifyOnly
–Value "Oh dear! We have a problem. Some of this text is sensitive data. What shall we do next?"

To remove a custom policy tip:


[PS] C:\> Remove-PolicyTipConfig –Identity en\NotifyOnly

Updates to policy tips take a little while to become effective because caches must be refreshed before clients
see the new text. Although you can change the text shown in policy tips, you cannot change the text shown
under the tips to inform users about actions they can take such as removing an external recipient or
overriding the policy.

DLP Incident Reports


It is often difficult to understand how effective a data loss prevention policy is in action. The Reports section
of the Microsoft 365 Compliance Center includes several DLP reports that provide an overview of the number
of incidents that have been detected, including:
• DLP policy matches.
• DLP false positives and overrides.
• DLP incidents.

DLP incident reports give the necessary information to fine-tune DLP policies and improve user education on
the subject. An incident report is emailed to an incident manager whenever a DLP transport rule has the
optional “generate incident report” action. Figure 7-24 shows a rule being changed to add this action. As you
can see, you can specify the information that should be included in the incident report, including a copy of
the original message.
Incident reports can be sent to any user mailbox. However, using a “normal” mailbox to hold incident reports
is bad practice and should only be done during testing. Any instance when oversight is applied to user
messages needs to be done under controlled circumstances and incident reports can hold confidential
information. It is therefore best if a special incident mailbox is created for this purpose and access to the
mailbox is restricted on a need-to-know basis. To avoid paying for a license, you might think about using a shared
mailbox to hold incident reports. Note that you can nominate a distribution group to receive incident reports,
so it is possible to ensure that people who need to know about the reports receive them along with a copy
going to the dedicated mailbox.
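You can add the same action to a rule with PowerShell through the GenerateIncidentReport and
IncidentReportContent parameters of the Set-TransportRule cmdlet. A sketch, assuming a shared mailbox
called DLP.Incidents exists to receive the reports and using a hypothetical rule name:
[PS] C:\> Set-TransportRule -Identity "Contoso DLP: Credit Card Data"
-GenerateIncidentReport DLP.Incidents@office365itpros.com
-IncidentReportContent Sender, Recipients, Subject, Severity, Override, RuleDetections, AttachOriginalMail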

Figure 7-24: Amending a rule to add the generation of DLP incident reports
The value of an incident report is that a human being can review the content to decide whether this is a real
violation of policy or something more benign. Perhaps the policy is detecting too many potential problems
and needs to be updated, possibly by increasing the threshold for sensitive data types that can be included in
messages. On the other hand, incident reports might show that rules are not firing as you would expect, so
they are a useful way to confirm that rules work properly.
From an auditing perspective, incident reports hold the justifications given by users when rules allow for an
override. The provided reason needs to be considered in the light of the recipients for the message, its
content, and the business rationale that can be construed by reading the content. It would seem reasonable
to send some credit card information to an external address when a credit card is needed to complete a
justifiable business purchase. It might even be acceptable when someone sends email with a credit card
number for personal purposes, like paying for a child’s after-school activity. It is a completely different matter
if someone is circulating batches of credit card numbers for no good reason to people outside the company.
Standard DLP reports tell you that incidents occur; the incident reports tell you why they occurred.

Building out an Exchange DLP policy


Our single-rule policy works and is effective at preventing the transmission of credit card information to
external recipients. However, it is a very specific policy at this point and only covers a single scenario when
sensitive data is sent through email. As such, the policy probably does not have the depth necessary to
achieve the kind of comprehensive protection needed in most organizations. As we learned by examining the
set of rules in Table 7-7 generated by Exchange Online when a template is used to create a DLP policy, it is
common to need a broad collection of rules to handle different circumstances and conditions that might
occur when sensitive data are detected in email. Let’s explore how to add a couple of additional rules to
expand and enhance the policy.
An obvious rule we might add is one to deal with credit card information circulated internally. Generally, we
are happy to have users include credit card numbers in internal email, but we would still like to keep an eye
on what is happening. An example of a suitable rule is shown in Figure 7-25. This rule generates incident
reports to keep our compliance team aware of what is happening in mail traffic but does not block messages.
An exception is present for the Accounting Department group because these users will probably deal with
credit card numbers in their normal activities.

Figure 7-25: Amending a rule for a sensitive data type


Note the right-hand screen showing details of the sensitive information types processed by the rule. To get
here we click on the sensitive data type selected within the rule. The screen allows us to edit the lowest
number of occurrences of a sensitive data type that must be present in a message before a rule fires. In this
case, we can see that just one instance of a credit card number is enough for the rule to fire. If we wanted a
less sensitive rule, we could adjust the threshold and set it to 2 or more instances. You cannot change the
confidence levels as these conditions depend on other processing decided by the provider of the sensitive
data type definition.

Like the rule described above, the process of building out a DLP policy involves understanding the conditions,
actions, and exceptions for each scenario in which we need to give protection. Common scenarios that might
be considered include:
• Allow clients to override by including a code word in the subject. An incident report should always be
gathered to ensure that this facility is not abused.
• Allow some level of sensitive data to be circulated within the organization with different thresholds
used for external recipients.
• Block messages that cannot be scanned for some reason. These are usually file formats that cannot be
accessed by the set of filters supported by Microsoft Search, such as the files generated by an
unsupported application.
You might also decide that protection is needed for a sensitive data type that Microsoft has never heard of
because it is unique to your company. A custom DLP document type can help here, but it is often easier to
create a document fingerprint from a sample of a specific document.
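As a sketch of the fingerprinting approach (the file path and names are examples), you read a sample
document, create a fingerprint from it with New-Fingerprint, and then publish the fingerprint as a custom
sensitive information type with New-DataClassification so that DLP rules can reference it:
[PS] C:\> $SampleFile = [System.IO.File]::ReadAllBytes('C:\Temp\EmployeeContractTemplate.docx')
[PS] C:\> $Fingerprint = New-Fingerprint -FileData $SampleFile -Description "Employee contract template"
[PS] C:\> New-DataClassification -Name "Contoso Employee Contract" -Fingerprints $Fingerprint
-Description "Detects documents based on the standard company contract template"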

Hybrid Exchange DLP


Transport rules are specific to a platform. If you run a hybrid environment, you will have one set of transport
rules, including those used for DLP, for on-premises Exchange and one for Exchange Online. To ensure that
policies are implemented in a consistent manner across both platforms, you must make sure that the same
transport rules are enabled everywhere. Exchange includes the Export-DlpPolicyCollection cmdlet to export
DLP policy information from an on-premises or cloud deployment to an XML file. The XML data can then be
imported using the Import-DlpPolicyCollection cmdlet. This is a manual process and you should be aware that
importing a set of DLP rules using the Import-DlpPolicyCollection cmdlet will overwrite any DLP policies and
associated transport rules that exist in the tenant. This example shows how to export a DLP policy collection
from an on-premises organization:
[PS] C:\> Set-Content -Path '.\DLPPolicies.xml' -Value (Export-DlpPolicyCollection).FileData
-Encoding Byte

To import (in this example, to Microsoft 365):


[PS] C:\> Import-DlpPolicyCollection -FileData ([Byte[]]$(Get-Content -Path '.\DLPPolicies.xml'
-Encoding Byte -ReadCount 0))

The Big Funnel Mailbox Index


The “Big Funnel” replaces the Search Foundation content index used by on-premises Exchange servers. The
major difference between the two technologies is that the Search Foundation indexes items on a database
level while Big Funnel does so for an individual mailbox and stores its indexes as hidden items within the
mailbox. Holding indexes within the mailbox makes searches faster and more precise; it also means that
should a database failure occur, the activation of another database copy is never delayed by the need to
rebuild an index because the index is always present in the copy of the mailbox within each database. To see
information about the indexes within a mailbox, use the following command (output edited for space):
[PS] C:\> Get-MailboxStatistics -Identity TRedmond | Format-List *funn*

BigFunnelIsEnabled : True
BigFunnelUpgradeInProgress : False
BigFunnelMessageCount : 321633
BigFunnelIndexedSize : 6.021 GB (6,464,565,475 bytes)
BigFunnelPartiallyIndexedSize : 124.5 MB (130,597,574 bytes)
BigFunnelNotIndexedSize : 3.816 KB (3,908 bytes)
BigFunnelCorruptedSize : 0 B (0 bytes)
BigFunnelShouldNotBeIndexedSize : 417.7 MB (437,972,533 bytes)
BigFunnelIndexedCount : 321487
BigFunnelPartiallyIndexedCount : 141
BigFunnelNotIndexedCount : 1
BigFunnelCorruptedCount : 0
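To get a quick sense of index health across a set of mailboxes, you can loop through them and report any
mailbox with items that are not indexed or only partially indexed. A minimal sketch (this can be slow in a large
tenant):
[PS] C:\> Get-Mailbox -RecipientTypeDetails UserMailbox -ResultSize Unlimited | ForEach-Object {
  Get-MailboxStatistics -Identity $_.UserPrincipalName | Select-Object DisplayName, BigFunnelMessageCount,
    BigFunnelIndexedCount, BigFunnelPartiallyIndexedCount, BigFunnelNotIndexedCount } |
  Where-Object {$_.BigFunnelNotIndexedCount -gt 0 -or $_.BigFunnelPartiallyIndexedCount -gt 0} |
  Sort-Object BigFunnelNotIndexedCount -Descending | Format-Table -AutoSize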

Using the Search-Mailbox Cmdlet


Note: Microsoft deprecated the Search-Mailbox cmdlet along with other legacy Exchange Online eDiscovery
features on July 1, 2020. See this post for more information. When last checked on 13 September 2021, the
cmdlet continues to work.
The focus for eDiscovery searches within Microsoft 365 is on multi-workload content searches and eDiscovery
cases. However, the need to search and remove content from mailboxes existed long before eDiscovery
searches. This simpler kind of search is still available in Exchange Online, albeit only through PowerShell.
Based on the number of appeals for help in constructing the correct syntax to scan user mailboxes for
inappropriate content or to remove copies of messages sent in error within the organization, Search-Mailbox
is an under-appreciated cmdlet. Search-Mailbox works against both cloud and on-premises Exchange and can:
• Remove content from user mailboxes (including shared mailboxes). When Search-Mailbox removes
items from user mailboxes, the removal is permanent (unless the mailbox is on hold). Permanent
removal means that an item is irrecoverable, so you must use Search-Mailbox with care as not even
Microsoft can recover items removed by Search-Mailbox. Being able to purge items permanently is
useful when you need to do something like remove spam or malware from a set of mailboxes or be
sure to eliminate some objectionable content. Microsoft 365 content searches can have a purge
action added to remove items found by searches. However, content search actions can remove only
10 items at a time. See the eDiscovery chapter for more information on content searches.
• Search against a wide variety of message properties using KQL syntax (the same search syntax used
elsewhere in Exchange Online and SharePoint Online). Note that searches conducted against on-
premises Exchange 2010 servers with the Search-Mailbox cmdlet use a different syntax.
• Copy messages from source to target mailboxes. Administrators often need to do this to recover
items from user mailboxes before removing the items.
• Search-Mailbox can search user and shared mailboxes. It cannot search group mailboxes or public
folder mailboxes. Use Office 365 content searches for this purpose. Search-Mailbox can find Teams
compliance records in user mailboxes, but not the compliance records for channel conversations
stored in group mailboxes.
In July 2019, Microsoft began the deprecation procedure for Search-Mailbox. In January 2020, they announced
that the cmdlet would no longer be available within Exchange Online from April 1, 2020. However, they have not
yet removed the cmdlet.

Only Exchange Mailboxes


Search-Mailbox hasn’t been updated by Microsoft for Office 365, so it can only handle mailbox content. If you
need to search for content across multiple Office 365 workloads, use a content search. Used wisely, Search-
Mailbox is a tremendously useful weapon for administrators, but you should always remember that fools rush
in where cautious administrators take their time to tune and test before they start removing items.

Restricting a search to specific folders: Search-Mailbox always searches the full mailbox. You can’t
restrict the search to specific folders. You can limit the search to the primary mailbox by passing the
DoNotIncludeArchive parameter. If you need a finer degree of control, you can use Exchange Web Services
as this API allows the scope of a search to be defined. An example of how to use EWS to perform mailbox
searches can be found here.

RBAC Requirements for Search-Mailbox
Like other Exchange cmdlets, the ability for anyone to run Search-Mailbox is controlled by RBAC. Two RBAC
roles exist that include Search-Mailbox:
• The Mailbox Search role allows users to search mailboxes. This role is included in the Discovery
Management role group.
• The Mailbox Import Export role allows users to remove items from mailboxes. The Mailbox Import
Export role is not included in any standard role group. To use it, you must add the role to an existing
role group (like Organization Management) or create a dedicated role group and include the role.
Only members of the role group that includes the Mailbox Import Export role can run searches and
remove the results of those searches. The restriction exists to ensure that only nominated
administrators can remove content from user mailboxes.
To discover who has access to the Mailbox Import Export role, we run the Get-ManagementRoleAssignment
cmdlet:
[PS] C:\> Get-ManagementRoleAssignment -Role 'Mailbox Import Export'
-GetEffectiveUsers | ? {$_.AssignmentMethod -eq "RoleGroup"} | Format-Table
EffectiveUserName, Name

EffectiveUserName Name
----------------- ----
TRedmond Mailbox Import Export-Organization Management-Delegating
James Redmond Mailbox Import Export-Organization Management-Delegating
Marc.Vigneau Mailbox Import Export-Organization Management-Delegating
Brian Weakliam Mailbox Import Export-Organization Management-Delegating
TempAdmin Mailbox Import Export-Organization Management-Delegating
Administrator Mailbox Import Export-Organization Management-Delegating
TRedmond Import Export Org Management
James Redmond Import Export Org Management
Marc.Vigneau Import Export Org Management
Brian Weakliam Import Export Org Management
TempAdmin Import Export Org Management
Administrator Import Export Org Management

In this case, the Mailbox Import Export role is part of the Organization Management role group and has been
delegated from that role group to the special Exchange Administrators role group that’s populated when
users are nominated to be an Exchange administrator through the Office 365 Admin Center. It is sensible to
review the list of those assigned the Mailbox Import Export role periodically to ensure that the right people
have access.

Running Search-Mailbox
Although you can use Search-Mailbox to search a single mailbox, you can also pass a set of mailboxes
retrieved with the Get-Mailbox or Get-Recipient cmdlets to search up to a maximum of 10,000 mailboxes at
one time. Up to 10,000 results can be returned for a query, which means that if more results are available, you
need to split the search. For example, if you need to remove 50,000 items from a mailbox, you will have to run
the cmdlet to find and remove items five times.
The goal is always to be as precise as possible with the search used to find messages. This can be a little
challenging at times because no GUI is available to help construct complex queries in KQL syntax, but you can
build queries for content searches in the Compliance Center and use the same queries with Search-Mailbox.
To help understand what Search-Mailbox can do, let’s look at some common examples of its use.

Estimate Only
As you develop search queries to find, copy, and even remove mailbox content, you can use the
EstimateResultOnly parameter to estimate how many items a search is likely to retrieve. For example:
[PS] C:\> $Search = Search-Mailbox -Identity "Customer Services" -SearchQuery "My Best Customer"
-EstimateResultOnly

Putting the output in a variable allows us to use the different results returned by the search.
[PS] C:\> Write-Host "The search found" $Search.ResultItemsCount "of size" $Search.ResultItemsSize
The search found 630 of size 32.33 MB (33,897,743 bytes)

Remember that the result returned is an estimate and that the actual results returned when you execute a
search might differ.

Find a Message Based on Subject


The need often arises to find and remove messages with specific subjects. In this example, we search a set of
mailboxes for messages that have “Kazuma” in the subject. This is the kind of search you might do if some
malware penetrated your defenses and arrived in user mailboxes.
[PS] C:\> Get-Mailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox | Search-Mailbox
-SearchQuery {Subject:"Kazuma*"} -TargetFolder "Malware Searches" -TargetMailbox MailboxSearches
-LogLevel Full -SearchDumpster

We fetch all user mailboxes in the tenant and pipe them to Search-Mailbox to search for messages with a
string starting with Kazuma in the message subject (using the wildcard character to say that we want to find
any message with this word in the subject). Search-Mailbox copies the messages it finds to the target folder in
the mailbox specified. We also pass the SearchDumpster parameter to force Search-Mailbox to look through
items in the Recoverable Items folder.
Any messages that match the search query are copied to the target folder in the target mailbox. If we don’t
want this to happen, we pass the -LogOnly parameter. The example pipes all user mailboxes to be processed
by Search-Mailbox. If you only want to process a single mailbox, pass its identifier directly to Search-Mailbox:
[PS] C:\> Search-Mailbox -Identity James.Ryan

Building a Search Query to Find Messages


Another common reason for using Search-Mailbox is to rescue the situation when someone sends a message
that they didn’t intend sending or they send it to the wrong set of people.
[PS] C:\> Get-Mailbox –Filter {Office -eq "Dublin"}| Search-Mailbox -TargetMailbox Searches
-TargetFolder "Retrieve Email" -SearchQuery {Body:"*I have announced today the promotion of John
Baker*"} -LogLevel Full -SearchDumpster

When people ask for messages to be “taken back” from recipients’ mailboxes, you should be able to find out a
lot of information about the problem message, including its originating email address, so we can improve the
search query by including the sender:
[PS] C:\> Get-Mailbox –Filter {Office -eq "Dublin"}| Search-Mailbox -TargetMailbox Searches
-TargetFolder "Retrieve Email" -SearchQuery {Body:"*I have announced today the promotion of John
Baker*" From:"Senior.Manager@outlook.com"} -LogLevel Full -SearchDumpster

In this case, we use the SMTP address of the sender. If we know what the display name (resolved name) of the
sender is, we can use it instead. If we are unsure of the exact SMTP address but know the domain the message
came from, we can use it instead, as in:
From:"*outlook.com"

If we know the sent date for a message and its subject, we can include those pieces of information in the
query:
[PS] C:\> Get-Mailbox –Filter {Office -eq "Dublin"}| Search-Mailbox -TargetMailbox Searches -
TargetFolder "Retrieve Email" -SearchQuery {Body:"*I have announced today the promotion of John
Baker*" From:"Michael McArthur" Sent:"14-Aug-2018" Subject:"Promotions"} -LogLevel Full
-SearchDumpster

When you use multiple keywords within a query, like the one above, Exchange combines them with AND
operators when it executes the search. You can include the OR operator in a query. For example:
From:"Michael McArthur" OR From:"Kim Akers"

In effect, what we’re doing here is building up a complex search query to focus in on the exact message we
are interested in. This might not be so important when you only search and copy messages, but it is critical
when the time comes to remove the messages from the source mailboxes.

Looking for Attachments


Another common scenario is when someone asks to find messages with an attachment or a certain
attachment. Perhaps a user attached and sent the wrong file, or the attachment has a known virus lurking
within it.
To find messages with an attachment, we include the HasAttachment keyword.
HasAttachment -eq $True

For example:
[PS] C:\> Get-Mailbox –Filter {Office -eq "Dublin"} | Search-Mailbox -SearchQuery {Sent: "13-Aug-
2018" HasAttachments -eq $True} -TargetMailbox Searches -TargetFolder AttachmentSearch
-LogLevel Full -SearchDumpster

To look for messages with a specific attachment, specify the AttachmentNames keyword and the name of the
attachment in the search query. For instance:
[PS] C:\> Get-Mailbox –Filter {Office -eq "Dublin"} | Search-Mailbox -SearchQuery {Sent: "13-Aug-
2018" AttachmentNames:Sales_data_2018-7-1.CSV} -TargetMailbox Searches -TargetFolder
AttachmentSearch -LogLevel Full -SearchDumpster

Keyword queries also support wildcard matching for attachment names. For example, you can look for all
spreadsheets starting with "SalesForecast" by including "AttachmentNames:SalesForecast*" in the search
query. A variant is to look for messages that have an attachment of a specific type. For instance, to look for
messages with PDF attachments, use AttachmentNames -like "*.pdf". You can even look for messages of a
certain size. For example, to look for messages that have .DOCX attachments and are larger than 10 MB, you’d
use
{AttachmentNames -like "*.DOCX" Size -gt 10 MB}

Searching by Date Range


Because Search-Mailbox uses KQL to interrogate the content indexes for mailbox databases, some queries can
be complex. Because people often use dates to find items, KQL includes some interesting date-based filters to
look for items. In this example, we look for items received from 1 June through 14 August 2018 (U.S. date
format) sent from Outlook.com addresses:
[PS] C:\> Get-Mailbox –Filter {Office -eq "Dublin"} | Search-Mailbox –SearchQuery
{Received:"06/01/2018..08/14/2018" From: "*.outlook.com"} -TargetMailbox Searches -TargetFolder
DateSearch -LogLevel Full -SearchDumpster

The date format used depends on the default format used on your workstation. You can spell the dates out if
you want to avoid problems with different date formats. For example
Received:"1-Jan-2021..1-Sep-2021"

Here’s an example of specifying a canned period in a search. In this case, we want to look for items received in
the previous year:
[PS] C:\> Search-Mailbox –SearchQuery {Received: "last year"}

Other date intervals supported by KQL include "today", "yesterday", "this week", and "this month." Note that
although KQL supports ISO 8601 DateTime data types in searches, you cannot use date/time values when
searching against fields such as “Read” and “Sent” because the parser used by Exchange Online drops the
time segment and uses only the date. This means that you cannot search for a message sent or received at a
specific time using a value such as Read:”20-July-2019T19:53.”
You don’t have to enclose the clauses in a search query within curly braces unless the KQL parser needs to
resolve something, for example when the query contains an AND or OR logical operator. In most cases, you can
proceed without the braces and add them if the KQL parser complains about a query.

Using Different Queries for a Set of Mailboxes


Sometimes you might have to search a set of mailboxes and use a different search for each mailbox. The
easiest way to do this is to build a CSV file holding the mailbox alias and the parameters for the search. You
can then read the file to get the list of mailboxes to process and the query to use and construct the full search
command for input as a script block to the Invoke-Command cmdlet. Here’s an example of how to approach
the problem. In this case, the CSV file has columns for name, a word or phrase to search against the Subject
property, and optional start and end dates.
[PS] C:\> $Users = Import-Csv "C:\temp\people.csv"
CLS
ForEach ($i in $Users) {
   # Build the query from the subject and, when both dates are present in the CSV row, a date range
   $Search = 'Subject: "' + $i.Subject + '"'
   If ($i.StartDate -and $i.EndDate) {
      $Search = $Search + ' Received:"' + $i.StartDate + '..' + $i.EndDate + '"' }
   # Run the search for this mailbox and copy any matching items to the CServices mailbox
   $Command = { Search-Mailbox -Identity $i.Name -SearchQuery $Search -TargetFolder Search
      -TargetMailbox CServices -LogLevel Full }
   Write-Host "Searching" $i.Name "using query" $Search
   Invoke-Command -ScriptBlock $Command }
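For reference, a sample people.csv might look like this (the names and values are hypothetical); leave the
StartDate and EndDate columns empty for rows where no date range is needed:
Name,Subject,StartDate,EndDate
Kim.Akers,Project Aurora,1-Jun-2018,30-Jun-2018
James.Ryan,Salary Review,,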

KQL Properties for Exchange


KQL queries can run against both Exchange Online mailboxes and SharePoint Online/OneDrive for Business
sites. However, each repository has a different set of searchable properties that match their usage. For
example, Exchange messages might have attachments, but SharePoint documents never do. For more
information about the properties you can use with content searches, see this support article. You can use any
of the Exchange properties in a search performed with Search-Mailbox.

Using GUIDs as Identities


Because Search-Mailbox is an Exchange Online cmdlet, it is very flexible about the identities it accepts to find
mailboxes. Along with the usual identities (name, display name, alias, distinguished name, SMTP address), you
can use the following GUIDs to find mailboxes to process:
• ExchangeGuid: The Exchange Online identifier for the mailbox.
• ExternalDirectoryObjectId: The Azure Active Directory object identifier for the account owning a
mailbox. Objects unique to the Exchange Online Directory that do not exist in Azure Active Directory
(like public folders and inactive mailboxes) do not have this identifier.
• ArchiveGuid: The Exchange Online identifier for the mailbox archive (if one exists).
• Guid: Another identifier used by Exchange Online.
To find the identifiers, run the Get-Mailbox cmdlet. For instance, to find the Azure Active Directory account
identifier for all mailboxes, run this command:
[PS] C:\> Get-Mailbox -RecipientTypeDetails UserMailbox | Select DisplayName,
ExternalDirectoryObjectId

No matter what identifier you use with Exchange Online cmdlets like Search-Mailbox, Exchange resolves the
identifier to find the associated mailbox and then searches the mailbox. If an archive mailbox is available, it is
included unless you specify the DoNotIncludeArchive parameter. You cannot use an identifier to make Search-
Mailbox process a group mailbox.
When should you use identifiers to find mailboxes for Search-Mailbox to process? Sometimes there are many
mailboxes with the same or similar names. The identifiers are always unique, so by passing an identifier you
can be sure that Exchange will process the precise mailbox you want to search or remove items from.
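For example, to search a specific mailbox by its ExchangeGuid rather than by name (the mailbox name and
query here are hypothetical):
[PS] C:\> $Guid = (Get-Mailbox -Identity "Kim Akers").ExchangeGuid
[PS] C:\> Search-Mailbox -Identity $Guid -SearchQuery {Subject:"Project Aurora"} -EstimateResultOnly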

Output from Search-Mailbox


Search-Mailbox runs interactively and can be quite slow to search large mailboxes. Three factors contribute to
the search time: the numbers of mailboxes processed, the size of those mailboxes, and the number of items
copied from the mailboxes to the target mailbox. When it completes a search, Search-Mailbox reports its
results for each mailbox:
Identity : TRedmond
TargetMailbox : Search Investigations
Success : True
TargetFolder : \Searches\Tony Redmond-16/05/2018 15:16:08
ResultItemsCount : 3514
ResultItemsSize : 188.1 MB (197,276,026 bytes)

This output tells us the following:


1. A search results folder (Searches) is created in the target mailbox (Search Investigations). Under the
search root, you find an entry for each mailbox scanned by the search and the folders where items
were found. In this instance, the search scanned a single mailbox, and the results of the search are in
the folder “Tony Redmond – 16/05/2018 15:16:08” (the date and time when the search started). A
separate folder is created for each mailbox scanned, if some results are found in that mailbox. Under
this folder you’ll find sub-folders for the primary mailbox and archive mailbox (if one exists and the
search processes it), and then the set of folders to hold copied items.
2. The search copied 3,514 items totaling 188.1 MB that matched the search criteria. The search results
include items found in system folders and the Recoverable Items folder (if you specify the
SearchDumpster parameter). Search-Mailbox copies found items to folders in the target mailbox as it
processes the source mailbox, so you can check the target mailbox as the search proceeds to see
what it finds. The names of the folders holding the copied items are the same as those in the source
mailboxes where the items were found. For instance, if items are found in the Inbox and Sent Items
folders in the source mailbox, the copied items are in the Inbox and Sent Items folders.
In addition, the search generates a log report in the top-level target folder. If you specify “Full” for the
LogLevel, Exchange generates a ZIP file holding a spreadsheet (CSV) detailing the results for individual
mailboxes and attaches it to the log report.

Removing Mailbox Content


You can remove items from mailboxes with the Search-Mailbox cmdlet by specifying the DeleteContent
parameter. Because Search-Mailbox marks the items found by the search for permanent removal, it’s easy to
imagine the potential havoc that might be wreaked on user mailboxes, so you need to be careful that the
correct items are found before you try to remove anything. You should make sure to take account of the
following points:
• You cannot use the DeleteContent parameter unless your account is part of a management role group
that holds the RBAC “Mailbox Import Export” role.
• Items removed by Search-Mailbox go into:
o If items are not subject to a hold, they go into the Purges sub-folder in Recoverable Items and remain
there until the single item recovery period for the mailbox lapses.
o If items are subject to a litigation or in-place hold, they go into the
SearchDiscoveryHoldsFolder folder in the Recoverable Items structure of their host mailboxes
until all retention requirements elapse.
When all holds expire on an item, the Managed Folder Assistant permanently removes it. While the
items are in Recoverable Items, they are inaccessible by users, but the items still are available for
search and compliance purposes. If you need to remove items under hold, follow the guidance in this
article.
• If you need to keep a copy (for instance, for investigation purposes), you can specify the name of a
target mailbox in the TargetMailbox parameter. In this case, Search-Mailbox copies the items to the
target mailbox before removing them from the source mailboxes. Note that if the logging level for
the search is set to Basic or Full, you must supply the name of a target mailbox and folder to store the
log file. To remove messages without copying them, omit the LogLevel, TargetMailbox, and
TargetFolder from your command.
• Have management backing in place and have approval clearly documented before removing any
content from user mailboxes. Make sure that removals follow the data governance policy for your
organization.
• Test, test, and test again before you begin to remove content. The WhatIf switch for the Search-
Mailbox cmdlet is especially useful here, as is the EstimateResultOnly parameter, which gives an
estimate of the total number and size of found items. Obviously, you should check the accuracy and
precision of any search which returns thousands of items before using it to remove data. Make sure
that the search criteria are as exact as possible and run them against a test mailbox to validate that
they will work when used. It would be a shame to clean out everything in the CEO’s mailbox just
because of a slip in a PowerShell command. Remember that Exchange Online does not use backups,
so if you permanently remove an item, it is irrecoverable.
For example, this command searches all user mailboxes and permanently removes the items found by the
search query. Search-Mailbox prompts for confirmation that deletions should go ahead unless you include the
Force parameter:
[PS] C:\> Get-ExoMailbox -RecipientTypeDetails UserMailbox | Search-Mailbox -SearchQuery {Subject:
"Spam Email" AND Received:1-Apr-2018..1-Aug-2018} -DeleteContent -Force

An example script to demonstrate how to use Search-Mailbox to report estimate results for a search query
and remove items if required is downloadable from GitHub.

Auto-Expanding Archives: Microsoft recommends that if you have mailboxes with auto-expanding
archives, you should be very careful when removing items using the Search-Mailbox cmdlet. You can
certainly remove items from primary mailboxes, but if you remove items from archives that are in the
middle of the expansion process, a small chance exists that some data loss might occur.

Cleaning Up Big Utility Mailboxes


Search-Mailbox is often used to clean out items from utility mailboxes. In some cases, several hundred
thousand messages might accumulate in these mailboxes, which can then pose a problem if the action fills
the Purges or Deletions folders in Recoverable Items. If you just want to delete messages without having the
items pass through Recoverable Items, consider reducing the deletion item retention time for the mailbox.
Don’t do this with mailboxes where it’s important to retain deleted items for compliance purposes. It’s strictly
something that should only happen where a good reason exists to clean out a mailbox. This command tells
Exchange that deleted items should be retained for zero days and disables the single item recovery feature (as
the deleted item retention period is set to zero, the feature wouldn’t apply anyway). With these settings in
place, running Search-Mailbox to remove items means that Exchange removes the items immediately and
permanently from mailboxes.
[PS] C:\> Set-Mailbox -Identity BigMailbox -RetainDeletedItemsFor 0 -SingleItemRecoveryEnabled
$False

It can take a couple of hours before the updated single item recovery setting becomes active. To return the
mailbox to the normal state, run the command:
[PS] C:\> Set-Mailbox -Identity BigMailbox -RetainDeletedItemsFor 14 -SingleItemRecoveryEnabled
$True

Comparing Content Searches and Search-Mailbox


Given that Microsoft has deprecated Search-Mailbox, you should start using Microsoft 365 content searches to
find and remove items from user mailboxes. Content searches use the Purge search action to hard-delete or
soft-delete mailbox items. You can find more information about how to use content searches to remove items
in the eDiscovery chapter in the main book. Table 7-8 compares the functionality of the two methods.
• Number of items that can be removed from a mailbox: Search-Mailbox removes all items that match the
search query; a content search purge removes up to 10 items per mailbox.
• Number of mailboxes that can be processed in a single run: Search-Mailbox maximum 10,000; content
search maximum 50,000.
• Speed: Search-Mailbox is single-threaded, so processing multiple mailboxes can take some time; multi-
threaded distributed work makes content searches more scalable and faster.
• GUI to create search criteria: Search-Mailbox no; content search yes.
• Process inactive mailboxes: Search-Mailbox no; content search yes.
• Copy items to another mailbox: Search-Mailbox yes; content search no.
• Ignore archive mailbox: Search-Mailbox yes; content search no.
• Search Recoverable Items only: Search-Mailbox yes; content search no.
• Search a specific folder in a mailbox: Search-Mailbox no; content search yes.
• Logging only: Search-Mailbox yes; content search no (but preview and statistics are available).
Table 7-8: Differences between Search-Mailbox and Content Search purges
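As a brief sketch of the content search approach (run after connecting to the compliance endpoint; the search name and query shown here are placeholders, and the full procedure is covered in the eDiscovery chapter of the main book), a purge is created and executed with the compliance cmdlets:
[PS] C:\> New-ComplianceSearch -Name "Remove Spam Email" -ExchangeLocation All -ContentMatchQuery 'Subject:"Spam Email" AND Received:1-Apr-2018..1-Aug-2018'
[PS] C:\> Start-ComplianceSearch -Identity "Remove Spam Email"
[PS] C:\> New-ComplianceSearchAction -SearchName "Remove Spam Email" -Purge -PurgeType SoftDelete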

Audit Records for Search-Mailbox


Exchange Online captures two types of audit records for Search-Mailbox and writes them into the Office 365
audit log:
• Records for each mailbox processed by the Search-Mailbox cmdlet: Exchange Online captures
this record even if no data is removed. It reveals the search query and other parameters used. These
records can be retrieved by specifying “Search-Mailbox” in the Operations parameter. For example,
this command finds all the audit records created for Search-Mailbox operations run by a specified user
on a specific day.
[PS] C:\> [array]$Records = Search-UnifiedAuditLog -StartDate 13-Sep-2021 -EndDate 14-Sep-2021
-UserIds ComplianceAdmin@office365itpros.com -Operations Search-Mailbox -Formatted -ResultSize 1000

• Records logging the removal of items: The record reveals who ran the command and what items
were removed from a mailbox. If more than a few items are removed and the details of the items
cannot fit in the audit data part of a record, Exchange captures a series of records. These records can
be found using the “HardDelete” operation.
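Following the same pattern as the previous example (with the same illustrative account and date range), a command like this retrieves the HardDelete records:
[PS] C:\> [array]$Records = Search-UnifiedAuditLog -StartDate 13-Sep-2021 -EndDate 14-Sep-2021 -UserIds ComplianceAdmin@office365itpros.com -Operations HardDelete -Formatted -ResultSize 1000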
See the auditing and reporting chapter for details about how to search the Office 365 audit log.

Determining Mailbox Location


Office 365 runs in data centers around the world. New data center regions come online on an ongoing basis
to deliver enough processing power to handle demand and serve different markets. When you create a new
tenant, you associate the tenant with a specific country. That decision then drives the choice of data center
region as the home of the tenant. Depending on the region, multiple data centers become available to host
the physical location of the mailbox databases that hold tenant mailboxes and other tenant data. Tenants who
use the multi-geo capability can distribute mailboxes across multiple data center regions and move mailboxes
between those regions to meet local data regulations.
Given the utility nature of Microsoft 365, users are generally unaware of the physical location of their data, but
administrators can check the country associated with a tenant at any time by running the Get-MgOrganization
or Get-AzureADTenantDetail cmdlets. For example, this tenant is in the United States. The country letter code
follows the ISO standard, so you see values such as “IE” (Ireland), “AU” (Australia), “BE” (Belgium), and so on.
[PS] C:\> Get-MgOrganization | Format-Table DisplayName, CountryLetterCode

DisplayName CountryLetterCode
----------- -----------------
Office365ITPros US

Inside a Data Center


In all regions, Microsoft pairs the primary data center with a secondary data center to ensure that Exchange
services can continue to run should a major outage occur. Exchange Online stores mailboxes in databases
within a DAG, which distributes database copies across at least two data centers in the same region. If a data
center outage occurs, Exchange Online activates the databases in the secondary data center, just as it would
happen with a stretched data center implementation for on-premises Exchange. You can see some
information about the status of a mailbox with the Get-Mailbox cmdlet. For example:
[PS] C:\> Get-Mailbox -Identity TRedmond | Format-List Database, ServerName

Database : EURPR04DG049-db159
ServerName : vi1pr04mb3039
It is difficult to tell exactly which data center hosts the database because the “EUR” prefix in the database
name is non-specific. We know that the database is in the EMEA region, but the mailbox could be active on a
server in any data center within the region. The database name is composed of the DAG name and the
database within the DAG, so we can say that this mailbox is in the DB159 database in the EURPR04DG049
DAG. Each database has four copies within the DAG, so we still do not know which copy is active and what
server hosts that copy.
The ServerName property is a hangover from on-premises Exchange and only tells us the server where
Exchange Online originally provisioned the mailbox. You cannot control where Exchange Online creates a new
mailbox within a region or where the mailbox is active afterward. The first two characters of the server name
indicate the data center that owns the original server. For instance, “db” is a server in the Dublin data center
while “vi” belongs to the Vienna data center. Exchange does not update the ServerName property when it
moves mailboxes across servers within a data center to rebalance load or to transfer mailboxes to a different
server within a DAG, nor when databases move between regions because a tenant relocates to a newly
available data center region or decides to use the multi-geo capability. Because of its limited
usefulness, the Get-ExoMailbox cmdlet does not return the ServerName property.
Despite these limitations, it is still interesting to see how Exchange Online distributes mailboxes (at least at
the time of the last update). This code shows that, even for a small tenant, Exchange places mailboxes in all
the data centers within a region (in this case, EMEA).
[PS] C:\> $Mbx = Get-Mailbox -RecipientTypeDetails UserMailbox -ResultSize Unlimited | Select
DisplayName, ServerName
Write-Host $Mbx.Count "user mailboxes found"
$Mbx | Group {$_.ServerName.SubString(0,2)} | Select @{Name="Data center";Expression={$_.Name}},
Count

32 user mailboxes found

Data center Count
----------- -----
am             11
vi              4
he              5
db             12
In addition to the data center region to which a tenant belongs, each user account has a country setting,
created when an administrator assigns a license to their account. The license governs the set of services that
Microsoft can deliver to the user.
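This country setting corresponds to the usage location property on the account. To check it, you can run a command like this one (the user principal name is a placeholder):
[PS] C:\> Get-MgUser -UserId Kim.Akers@office365itpros.com -Property DisplayName, UsageLocation | Format-Table DisplayName, UsageLocation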

Chapter 8: Office 365 Information
This chapter discusses some Office 365 topics that we couldn’t fit into the main book.

Importing PSTs into Office 365


With generous mailbox quotas and archiving features in Office 365, it makes sense to move data that has
accumulated in user PSTs to Exchange Online mailboxes or archives, especially if you want to ensure that the
data is available for compliance and eDiscovery purposes.
The Office 365 Import Service is available through the Data Migration section of the Office 365 admin portal.
Before using the Import Service, you must collect user PST files from user PCs or your network environment.
Once the PSTs are ready, you can either upload them to Microsoft over the Internet or, if you have a large
amount of data that makes a network transfer unfeasible, package the files on 3.5-inch SATA II/III drives
(currently limited to 4 TB capacity) and ship the drives to a Microsoft datacenter.
The mailbox used as the target for a PST import can be an Exchange Online primary mailbox, an archive
mailbox, or an inactive mailbox.
You can test the Import Service process by importing a few PSTs to see how the process works and to satisfy
yourself that the procedure is suitable for your tenant.

Configuring the RBAC Permissions for the Office 365 Import Service
Before creating a new import job, you first need to assign the Mailbox Import Export role to the Office 365
administrator account that you will be using for the task. You can either add the role to an existing
management role group or create a new management role group that can then be assigned to the accounts
responsible for processing import jobs. To create a new management role group, go to the Exchange Admin
Center, navigate to Permissions, and select Admin Roles. Click on the “plus” icon to create a new role group.
Give the new role group a meaningful name such as “Mailbox Import Export”, then click the “plus” icon below
Roles.
From the list of available roles choose Mailbox Import Export and click Add. Click OK, then click the “plus”
icon below Members to add the user accounts that you wish to grant the mailbox import/export permissions
to. When you have added all the users that you need click Save to complete the task of creating the new role
group. The new permissions will take effect for the users the next time they log in to the Microsoft 365 admin
portal.
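If you prefer PowerShell, you can create the role group with the New-RoleGroup cmdlet after connecting to Exchange Online. This is a minimal sketch; the member account shown is a placeholder:
[PS] C:\> New-RoleGroup -Name "Mailbox Import Export" -Roles "Mailbox Import Export" -Members "Administrator@office365itpros.com"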

Figure 8-1: Creating a new management role group to control PST imports

Migrating PST Data with the Drive Shipping Method


When you use the “drive shipping” method to transfer data to Microsoft, the data on the drives is protected
by BitLocker encryption and includes a mapping file that associates each PST with a user account. Each drive is
prepared using a special Azure Import/Export tool that creates a journal file for the drive containing the drive
ID and the BitLocker key used to protect the data.
When the drives arrive at Microsoft, they are loaded into Azure and made available to the tenant
administrator. At this point, the tenant administrator can invoke an import job to start importing the data
from the PSTs into the target mailboxes. The administrator who runs the job must possess the RBAC Mailbox
Import Export role and have access to the journal files. Once launched, the import job runs on Azure to
process the PSTs found on the drive and uses the mapping file to move content from the PSTs into the target
mailboxes. The mapping file can direct information to either primary or archive mailboxes. You can monitor
progress of the import job from the Office 365 Admin console.

Migrating PST Data with the Network Transfer Method


If you have a high-capacity connection to the Internet and don't have a lot of PSTs to process, you can
consider moving the PSTs over the network to a Microsoft datacenter instead. In this scenario, the data is still
protected because:
• It is uploaded over HTTPS, so it is encrypted in transit
• It is uploaded to storage that is encrypted at rest, and you have the keys

Once the data is uploaded, you can then run an import job to process the PSTs and transfer the content to
user mailboxes.
If you have a requirement for additional encryption of the individual PST files, you can use Azure Rights
Management (Azure RMS) to encrypt the files before they are uploaded. This follows a similar process to
uploading unencrypted files but uses the Office 365 Import Tool to perform both the encryption and the upload
instead of AzCopy, which only performs the upload (although AzCopy is still installed as part of this process).

Uploading Unencrypted PST Files to Office 365
In the Office 365 admin portal select Users, then Data Migration from the left menu, and then choose
Upload PST files. Click the “plus” icon to start a new import job (Figure 8-2). For this example, we’re choosing
to “Upload email messages (PST files)”. The option to ship data on physical hard drives might not appear if
your Office 365 tenant is hosted in a datacenter that doesn't support that option yet.

Figure 8-2: Creating a new PST import job


A wizard appears to guide you through the steps for creating the PST import job. If you haven’t already done
this, you need to download and install the Azure AzCopy tool from Microsoft. Install AzCopy on a workstation
or server that has access to the location where the PST files have been collected. Next, click on Show network
upload SAS URL to retrieve the secure storage account key (Figure 8-3). You should protect this key in a secure
location as you would with any other administrative usernames and passwords for your environment.

Figure 8-3: Accessing the network upload SAS URL
Next, we use AzCopy to upload the PST files to Azure. In this example the PST files are stored on a shared
folder on the network called \\MGMT\PST. The source path, destination (secure upload URL), and secure
storage account key need to be provided as command line options for the AzCopy tool. The AzCopy tool can
be run from CMD.exe.
C:\>cd "Program Files (x86)\Microsoft SDKs\Azure\Azcopy"

C:\Program Files (x86)\Microsoft SDKs\Azure\Azcopy> azcopy.exe /source:\\mgmt\pst


/dest:"SECURE_UPLOAD_URL" /S /V:C:\Admin\PSTupload.log

The network upload URL must be placed in quotes or you will receive an error message. Some additional
command line parameters are also used in this example:
• /S tells AzCopy to also upload files in subfolders of the source directory
• /V specifies the file to output logging information to

A full list of AzCopy command line options is available from Microsoft.

Warning: You may be tempted to enter the command line into a batch file to make it easier to run. However,
if you do then you must keep the batch file secure, as it contains the highly sensitive secure upload URL and
storage key for Azure that will be used for the PST file uploads.
AzCopy runs as an interactive process, not a background service. You will need to keep the CMD window
open and running while the PST files are uploaded. However, you do not need to keep the web browser
window open that provides you with your storage key and secure upload URL. Those can be retrieved again at
any time in the Import section of the Office 365 admin center by clicking on the “key” icon (Figure 8-4).

Figure 8-4: Accessing secure information for the import job
Naturally the amount of time it takes to upload all the PST files will largely depend on the speed of your
network connection to the internet. While AzCopy is running it stores two journal files in the
%LocalAppData%\Microsoft\Azure\AzCopy folder so that it can detect whether a transfer succeeded or failed.
If you encounter timeout errors, you can simply run the command again and any incomplete transfers will
resume.

Real World: AzCopy uses default parameters that are optimized for high bandwidth network connections. If
you’re attempting to upload PST files over a low bandwidth connection, you may see the operation
repeatedly failing due to time out errors. If that occurs, you can reduce the number of concurrent operations
to avoid overloading the network.

When the file upload is complete you’ll see a summary confirming the number of successful, skipped, or failed
file transfers.
Finished 3 of total 3 file(s).
[2016/05/09 17:03:55] Transfer summary:
-----------------
Total files transferred: 3
Transfer successfully: 3
Transfer skipped: 0
Transfer failed: 0
Elapsed time: 00.00:52:13

Uploading Encrypted PST Files to Office 365


For organizations that require additional encryption of the individual PST files before they are uploaded, it is
possible to use Azure RMS to achieve that goal. Azure RMS is not available in all Office 365 subscriptions, so
you should first verify that you’re eligible to use the service.

Next, connect to Exchange Online (refer to the PowerShell chapter in the main book to learn how to do this).
After connecting to Exchange Online with PowerShell, configure Information Rights Management (IRM) to use
Azure RMS. The RMS key sharing location URL depends on the location of your organization, and Microsoft
publishes the URLs for the different regions.
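As a rough sketch of the classic configuration commands as they were documented at the time (substitute the key sharing URL that Microsoft publishes for your region), the steps look like this:
[PS] C:\> Set-IRMConfiguration -RMSOnlineKeySharingLocation "<RMS key sharing URL for your region>"
[PS] C:\> Import-RMSTrustedPublishingDomain -RMSOnline -Name "RMS Online"
[PS] C:\> Set-IRMConfiguration -InternalLicensingEnabled $True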
Preparing to Encrypt PST Files
On the computer or server where you’ll be encrypting and uploading the PST files, install the Rights
Management Service Client. Open PowerShell and connect to Office 365.
PS C:\> Connect-MsolService

Generate a new encryption key that will be used for encrypting the PST files. This is a symmetric key, meaning
it can be used for both encryption and decryption. You should protect this key as you would any other
password. When you run the New-MsolServicePrincipal command shown below, the key will be output to your
PowerShell console.
PS C:\> New-MsolServicePrincipal -DisplayName PstEncryptionPrincipal

The output from the command also includes an application identifier, displayed as the AppPrincipalId
attribute. You will also need to save that value for use later in this process. If you neglected to save the
information you can retrieve it later by running the following command.
PS C:\> Get-MsolServicePrincipal | Where {$_.DisplayName -eq "PstEncryptionPrincipal"}

Connect to the Azure RMS service and run the following commands to retrieve the BPOSId and the
LicensingIntranetDistributionPointUrl values. You’ll need these values in an upcoming step as well.
PS C:\> Connect-AadrmService

PS C:\> Get-AadrmConfiguration | Select BPOSId

BPOSId
------
2b9bca49-687e-4e5f-8a52-21350b719b06

PS C:\> Get-AadrmConfiguration | Select LicensingIntranetDistributionPointUrl

LicensingIntranetDistributionPointUrl
-------------------------------------
https://e1180438-e46b-4695-9cde-cafbcbf06b36.rms.eu.aadrm.com/_wmcs/licensing

Encrypting and Uploading PST Files


Encrypted PST file uploads start in much the same way as the unencrypted method. Log in to the Office 365
admin center and navigate to Users, Data migration, and then choose Upload email messages (PST files).

Figure 8-5: Starting a new upload


A wizard appears to guide you through the steps for creating the PST import job. If you haven’t already done
this, you need to download and install the Azure AzCopy tool from Microsoft. Install AzCopy on a workstation
or server that has access to the location where the PST files have been collected.
AzCopy is used to upload the encrypted PST files, but first the files need to be encrypted by the Office 365
Data Encryption and Import Tool. At the time of this writing, Microsoft’s own documentation describes how to
encrypt and upload the files using a single tool called O365ImportTool.exe. However, that tool is not available
as a download. Instead, the O365Protect.exe tool is made available. Microsoft has not added a link to the tool
in the new admin center, but you can access it here. You’ll need to be logged into the portal in your web
browser for that link to work. There’s no installation required for the encryption tool, just download the zip file
and extract the O365Protect.exe file to a folder on the computer where AzCopy is installed, and that you’ll be
uploading the PST files from.
Next, click on Show network upload SAS URL to retrieve the secure storage account key. You should protect
this URL in a secure location as you would with any other administrative usernames and passwords for your
environment.
The syntax for O365Protect.exe is as follows:
C:\Admin> O365Protect.exe /sourcefolder:<source folder> /sourcepattern:"*.pst" /rmsserver:<RMS URL>
/tenantid:<BPOS Id> /ownerid:<owner Id> /key:<Symmetric Key>

Use the following values for your environment:


• Source folder – the folder containing the PST files to be encrypted. This can be the UNC path to a file
share, or a local path. You must have access to modify files in this location.
• Source pattern – Use “*.pst” to encrypt all PST files in the source folder. If PST files are stored in sub-
folders use the /recurse switch as well.
• RMS server – use the LicensingIntranetDistributionPointUrl value you retrieved earlier.
• Tenant Id – use the BPOSId value you retrieved earlier.
• Owner Id – use the AppPrincipalId value you retrieved earlier for the MsolServicePrincipal.
• Key – use the symmetric key value you generated earlier when you ran New-MsolServicePrincipal.

Wrap each of the parameter values in quotes when you construct your own command line. When you’re
ready, run the O365Protect.exe command line you’ve constructed, and watch the output to determine
whether encryption was successful or not.
Encrypting \\mgmt\pstimport\Import Demo 1.pst
100%
Elapsed Time: 00:00:00.1933768
Encrypting \\mgmt\pstimport\Import Demo 2.pst
100%
Elapsed Time: 00:00:04.2313517
Encrypting \\mgmt\pstimport\Import Demo 3.pst
100%
Elapsed Time: 00:00:03.9624163
Total Elapsed Time: 00:00:20.1379127

After the encryption process has finished, you can use AzCopy to upload the PST files following the same
steps outlined earlier in this guide for uploading unencrypted files.

Starting the Import Job


Uploaded PST files are retained for 30 days, allowing you time to perform multiple uploads before you begin
the import process itself. Once you are happy that you’ve uploaded all the PST files that you want to import,
you can provide the Import Service with a mapping file and then commence the import process itself.

Creating a Mapping File


After the PST files have been uploaded, the next step is to create a CSV file that maps each uploaded file to an
Office 365 user. Microsoft provides a sample PST mapping file that you can download and use to ensure that
you have the correct data entered. Each line of the CSV file maps one PST file to one mailbox (Figure 8-6). If
you have multiple PST files for a mailbox then you will need to use multiple lines in the CSV file, one for each
PST file. The file path format needs to be changed by removing leading backslashes and replacing all other
backslashes with forward slashes. For example, if a PST file was in the root of the path \\MGMT\PST then the
file path in the CSV should be left blank, whereas if the file was in \\MGMT\PST\Server01, then the file path
in the CSV file should be “/Server01”. The IsArchive field is used to specify whether the PST file data is
imported into the primary mailbox or the archive mailbox. If you are importing to the archive mailbox the
mailbox user must be archive-enabled first, because the Import service will not automatically archive-enable
mailboxes for you. An IsArchive value of “FALSE” will import into the primary mailbox. Whether you’re
importing to the primary mailbox, or to an archive mailbox, you can use the TargetRootFolder field to specify a
folder to import the data to.
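For illustration, a minimal mapping file might look like this. The columns shown are a subset of those in Microsoft’s sample file, and the mailbox address, file names, and folder values are placeholders:
Workload,FilePath,Name,Mailbox,IsArchive,TargetRootFolder
Exchange,/Server01,Import Demo 1.pst,kim.akers@office365itpros.com,FALSE,/ImportedPst
Exchange,,Import Demo 2.pst,kim.akers@office365itpros.com,TRUE,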

Figure 8-6: A PST mapping file

Starting the Import Process


Return to the Office 365 admin center. If you had already closed the browser window, simply click the “plus”
icon again to create a new import job. This time we’ve already completed the upload of PST files and
prepared the mapping file, so we can tick the two confirmation boxes and then click Next to continue to the
next step.
Enter a simple name for the import job when prompted, taking care to comply with the specific naming
standards for import jobs, and click Next to continue. Now you can upload the mapping file that was created
earlier. Click the “+” icon and navigate to the CSV file that contains the mapping information (Figure 8-7). CSV
files of less than 100 rows need to be validated first, and if anything causes the validation to fail, you can click
on the link in the Status column to download a validation report that will explain why it failed.
When the file has finished uploading and has been validated, tick the box to agree to the terms and
conditions of the import service. Finally, when everything is ready, click the Finish button to finish creating the
import job.

Figure 8-7: Uploading mapping files

Monitoring the Import Process
After the import job begins, from the Import Service area of the Office 365 admin center you can click the
View details link to view the progress of the data import and see any errors that have occurred. The most
likely error you’ll run into is a PST file that either hasn’t been uploaded or can’t be found by the import job
because the path you specified in the CSV file is not correct.
The PST file names and paths are also case sensitive, so you should take care to use the correct case in the
CSV file when you create it. As the import proceeds and information is loaded into user mailboxes, the owners
of the mailboxes will see the imported data appear in the root folder that you specified in the CSV mapping
file (Figure 8-8).

Figure 8-8: Newly imported information shows up in a user mailbox

Real World: Importing PST data into Office 365 is only part of the process. Naturally you need to find the
PST files first, using tools such as the Microsoft Exchange PST Capture Tool. You’ll also need to disconnect any
Outlook profiles from the PST files, work out who owns each PST file that you find, and prevent creation of
new PST files by users in your environment.

Completing the PST Migration


After you’ve completed the importing of PST files into Exchange Online you may wish to take additional steps
to eliminate the use of PST files within your network. Microsoft provides no specific tools for disconnecting
PST files from Outlook profiles. However, you can use Group Policy to prevent users from adding any more
data to their PST files and to prevent them from creating new PST files.
The Group Policy administrative templates for Outlook 2007 and later include options under
Miscellaneous/PST Settings that can be used to:
• Prevent users from adding more items to an existing PST file
• Prevent users from connecting any more PST files to their Outlook profiles

Figure 8-9: Group Policy options to control PST files
If you have multiple versions of Outlook in your environment, you’ll need to configure Group Policies that use
the administrative templates for each specific version.

Microsoft Forms
Forms allows users to collect information from other people through surveys, quizzes, and polls. The person
who creates the form decides what type of form it is and the questions it contains. They then publish the form
online or through an application like Teams for the target audience to complete. After the answers come in,
the form’s author can then evaluate and analyze the responses using the built-in analytics or export the
answer data for further analysis using Excel or Power BI. Forms is intended for simple forms processing and
not designed to replace something more comprehensive like SharePoint Lists.
Forms is available to all Office 365 business and enterprise tenants and is in public preview for consumer
Office 365 plans. By default, every licensed user in a tenant can access Forms by using their tenant credentials
to sign into the Forms portal or select Forms from the apps listed in the Office 365 waffle menu. If you want to
stop specific users from accessing Forms, you must disable the Forms option in their license through the Office 365
Admin Center or with PowerShell. Forms data is stored in Office 365 datacenters in the U.S. (for U.S. and non-
European tenants) and EMEA (for European tenants).
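As a rough sketch of the PowerShell method, you could disable the Forms service plan in a user’s license as shown below. The tenant SKU and the service plan name vary by subscription (FORMS_PLAN_E3 is the plan name in the Office 365 E3 SKU), so verify the names returned for your own licenses, and note that the DisabledPlans value replaces any existing set of disabled plans for that license:
[PS] C:\> $Options = New-MsolLicenseOptions -AccountSkuId "office365itpros:ENTERPRISEPACK" -DisabledPlans "FORMS_PLAN_E3"
[PS] C:\> Set-MsolUserLicense -UserPrincipalName Kim.Akers@office365itpros.com -LicenseOptions $Options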

Forms Admin Settings


Four settings are controllable in the Microsoft Forms section under Settings in the Office 365 Admin Center
(Figure 8-10). The sliders control different aspects of how Forms works. If you turn off the first setting, Forms
doesn’t do very much.

Figure 8-10: Forms settings in the Office 365 Admin Center
• Send and collect responses allows users to create forms.
• Share to collaborate allows users to share their forms with users inside and outside the organization.
• Share as a template allows users to share forms as template so that others can use the forms to create
new forms.
• Share form result summary allows users to share the summary results for a form.
An individual user can create up to 200 forms. Each form can receive up to 50,000 responses.

Creating a Form
The best way to understand an application like Forms is to see it in action. Let’s go ahead and create, share,
and complete a form. When you go to Forms for the first time, you have the choice of creating a new form or
a new quiz. The difference between the two is that a form leads respondents through a series of questions to
collect ad-hoc information while a quiz is composed of a series of questions with set answers that the
respondent is graded against. You can assign different points for the correct answer to each question and
Forms will tot up the total and report the result when a respondent finishes the quiz.
Click New form to begin the creation process. In Figure 8-11, we see that the title, description, and a picture
for the form are populated. You don’t have to add a picture, but this is a good way to highlight the purpose of
the form or the organization responsible for the form (if you use its logo). The title for a form can be up to 90
characters and the subtitle up to 1,000 characters. Forms automatically saves your work as you make changes.

Figure 8-11: Entering a title, description, and picture for a new form
The […] (ellipsis) menu has settings controlling how the form is used (Figure 8-12), including the ability to set a
start and end date for responses, and if users can submit multiple responses. If someone tries to respond
outside the date range, they are told that the form isn’t accepting responses at present. Another option is to
shuffle questions so that each respondent sees them in a different order.

Figure 8-12: Settings for a form


If you choose Anyone with the link can respond, then only anonymous responses are possible as Forms
cannot record personal information for people outside your organization. If you keep everything internal, you
can choose to record the name for each response. In all cases, avoid asking respondents to input anything
that could be construed as personal data such as credit card numbers, tax identifiers, and so on. If you need
to collect this type of information, use another tool that’s sanctioned by your HR and IT departments. If you
create a quiz form, the available settings include the option to Show results automatically, which means that
respondents see the correct responses after they submit their answers. The next step is to add some
questions. Forms supports the following types of questions:
• Choice.
• Text.
• Rating.
• Likert.
• Date.
• Ranking.
• File upload.

Choice Questions
These questions ask the respondent to choose an answer from one or more values. In Figure 8-13, we ask the
respondent to choose how often they want to see updates appear for an eBook. If we move the Multiple
answers slider to On, they can pick multiple options. The Required slider controls whether the respondent
must answer the question before they can move to the next question. The Other option allows respondents
to input their own answer. If added context is needed for a question, click the ellipsis menu and add a sub-
title. This menu also exposes the choice to shuffle answers for each respondent.

Figure 8-13: Adding a choice question


If you’re working with a quiz form, you also have the option to mark the correct answer from the set of
choices and to assign the points that the respondent receives if their answer is correct.
Note the set of controls at the top of each question to:
• Copy a question and use it as the basis for another question.
• Delete a question.
• Move a question up and down in the list.
You can also add a graphic to each question to make the form more visually attractive.

Text Questions
These questions ask a respondent to enter an opinion about a stated topic. In Figure 8-14 we see that a long
answer is selected, which means that we want to allow plenty of room for our respondent to tell us about
their answer.

Figure 8-14: Adding a text question


The Restrictions choice in the ellipsis menu is useful when a response to a text question contains a number.
For example, you might ask how many days a conference should last and set a restriction on the answer so
that it must be between 1 and 5.

Rating Questions
These questions ask respondents to rate a topic on a scale using stars or numbers (you can decide how many
points to use in the scale). In Figure 8-15 we use the Label option (in the ellipsis menu) to tell respondents
what 1-star and 5-star values mean.

Figure 8-15: Adding a rating question

Likert Questions
A Likert scale is often used to measure how people feel about different subjects with a more subtle nuance
than is possible with a simple yes/no response. Likert questions ask respondents how strongly they agree with
a set of supplied statements. As you can see from Figure 8-16, you can change the heading for each choice to
convey the true meaning of the respondent’s view about a statement. You can also add as many statements
and responses as you want, subject only to always avoiding any chance that the respondent will fall into a
state of boredom before they reach the end.

Figure 8-16: Adding a Likert question

Other Question Types
Forms also supports these question types:
• Date. These questions ask respondents to select a date value as their response.
• Ranking. These questions ask respondents to move a set of supplied values up and down in a list to
rank them in order of importance.

File Upload
A file upload question allows respondents to upload an Office, image, video, or PDF file as the answer for a
question. Up to ten files can be uploaded at a time and the size of a single file can be between 10 MB and 1
GB. Uploaded files are stored in:
• Personal forms: In a sub-folder of the Apps\Microsoft Forms folder in the form owner’s OneDrive for
Business account.
• Group forms: In a sub-folder of the Apps\Microsoft Forms folder in the group’s SharePoint Online
document library.
In both cases, the name of the account that uploads a file is appended to the file name to make it unique. If
you move a form from personal to group ownership, any files uploaded to OneDrive for Business are not
automatically transferred to SharePoint Online. Instead, Forms treats the form as brand-new with a new set of
responses. Because uploaded files are stored inside Office 365, all the normal protection features that might
be active within the tenant such as Advanced Threat Protection and Data Loss Prevention apply.
Forms does not support guest users, so once a form includes a file upload question, it is automatically
restricted to tenant users.

Adding Media to a Form


Adding media to a question is a good way to explain complex issues and help create the right context for the
question you’re asking, especially in an educational situation. To add media, select a question and click Insert
Media at the right-hand side of the question title. You can then choose to add either an image or a video (but
not both to the same question).
Images can come from your workstation, OneDrive for Business account, or Bing. A video can come from
YouTube or Stream. To insert a video, you need to know its URL. The ability to add Bing images or YouTube
videos is controlled by a Microsoft Forms setting accessible in the Settings section of the Microsoft 365 Admin
Center. The default for the setting is checked (on), which allows users to add these elements to a form. If
unchecked (off), previously added YouTube videos will be converted to a link that takes users to YouTube
instead of using an embedded video.

Finalizing Your Form


Now that all the questions are defined, we can make a few final tweaks before we share the new form with the
world.
First, you should preview the form to see how it looks to a respondent. You can choose to see the form as it
will appear on a workstation (browser on a PC or Mac) or smartphone (mobile). Forms is optimized for IE10+,
Edge, the latest versions of Chrome and Firefox, including Chrome on Android, and Safari on iOS.
Second, you might want to change the theme of the form. You can select from one of the precanned themes
or upload your own graphic. Forms uses the graphic for the background when it displays your form.
Last, you can think about the flow of questions. You’ve already set down a basic order where the questions
are asked sequentially, and you might have selected the option to shuffle questions, but you can also use
Branching (in the ellipsis menu for the form) to take respondents into a different flow of questions depending
on their responses. This is a good way to bring respondents through a set of associated questions.

Figure 8-17: Branching controls for questions

Sharing a Form
When you are sure that the form is ready to go, click Share and select how you’d like to make people aware
that you’d like them to complete a form. You can use the following methods to share a form:
• URL: Forms generates a unique hyperlink to the form that you can paste into email, a web page, or a
Teams conversation. The URL looks like this:
https://forms.office.com/Pages/ResponsePage.aspx?id=PzFitvwUokOaetLif080eFjN9O-
4G5lIlN55X2VrShhUOERRQVlaUTRGUFBYRTdWQ1BUVllPQVJTWC4u
• QR-code: Forms generates a QR-code that you can display on a web page or elsewhere.
• Embedded code: Forms generates code suitable for insertion in a web page or Sway. The form
displays in an iFrame.
• Email: Forms switches to the default email client defined for your workstation and creates a new
message, inserting the URL to the form (same as above) along with some boilerplate text. All that
remains to be done is to personalize the text and address the message.
Form authors can decide whether the link can be used only by other users in the same organization or by
anyone who has the link. An author can also share a link that allows other users in the tenant to work on the
design of a form, for instance to help refine the wording of questions.
You can also add Forms as an app to a team. This allows you to create a direct link to a form through a tab in
a channel. Go to the Teams app store, select Forms, then select the team and the channel that you want to
use Forms in, and then select Tab. You can then choose to create a new form or open an existing form. You
can also add a Forms bot to a team and use it to create a simple questionnaire (Figure 8-18). In all cases, the
form needs an internet connection to access data.

Figure 8-18: Creating a quick questionnaire in Teams with the Forms bot
When Forms opens a survey or quiz, it renders the questions and layout based on the client type. Figure 8-19
shows how the form appears on an iPhone.

Figure 8-19: Completing a form on an iPhone

Using a Form with Stream


Stream supports the ability to add a form to a video’s Interactivity tab. The idea is that video owners can
embed one or more quizzes or surveys for people to take after watching a video to give feedback to the
creators. To associate a form with a video, access the Interactivity tab and click Add New. Then input the URL
of the form (copied from the Share option of Forms) and select the point in the video timeline (drag the
timeline slider to that point) at which you want viewers to see the form. When Stream plays the video, it will halt at
the appointed time and display the form to collect viewer input.

Analyzing Responses
If the notification option is selected, the author receives email notifications as users respond to the form. As responses come in, the
author can see what kind of results are obtained by clicking the Responses tab for the form (Figure 8-20). You
can also click View results to go through each of the individual responses, or Open in Excel to have Forms
generate an Excel file holding the results, including the date and time for each completion. For more
complicated analysis, you can use Excel’s data analysis features or import the file into Power BI.

Figure 8-20: Viewing responses for a survey

Group Forms
So far, we have discussed personal forms, which are under the control of a single user. Forms also supports
the notion of Group Forms, where a form is owned by an Office 365 Group. Click the Group forms tab to
access the group forms belonging to groups that you belong to. To create a new group form or edit an
existing group form, click the down arrow beside Recent group forms. Forms now displays the set of Office
365 Groups that you are a member of to allow you to select the target group to host the form. Figure 8-21
shows that three groups have associated forms. If you use the Forms bot in Teams to create a quick poll, the
resulting form is listed here but is read-only.
The process of creating and editing a group form follows the same process as a personal form. The only
difference between the two is that any member of the associated Office 365 group can see and manage a
group form.

Figure 8-21: Listing Group forms


You can also convert a personal form to become a group form with the move feature. When executed, you
select a target Office 365 group to become the new owner of the form. Forms then moves the form to the
group and makes it available to all members of the group. You can’t reverse the process to make a group
form into a personal form.

Recovering a Form for Another User


If someone leaves the organization, a global administrator can recover their forms and transfer ownership of
the forms to another user or group. The transfer must be done within 30 days of the user’s account being
deleted or disabled. The process is explained on this page.

Sway
Microsoft calls Sway a digital storytelling application that allows people to gather, format, and share their
ideas and experiences using a mixture of different media from text to video. Elements are arranged on an
electronic canvas. Originally launched as a consumer application, Sway is now part of the Office 365 suite.

Users log on to the Sway site using their Office 365 credentials to collect the content that they want to
combine into their story there. Everything is stored in the Sway cloud service; apart from source elements
uploaded as input for a Sway, nothing needs to be held on a user device. Up to ten people can co-author a
single sway.
A big selling point of Sway over other presentation applications such as PowerPoint is its ability to
dynamically rearrange elements to display in an attractive manner on different device form factors. Initially,
this led to some speculation that Sway was a replacement for PowerPoint, which is inaccurate. Instead, Sway
offers a unique way to present information such as reports, training material, newsletters, brochures, briefing
documents, or even personal travel logs. Sway is very suitable for subjects that are graphic-rich and involve
elements such as video clips.
Creating a new Sway is easy. You need to know the elements that you want to combine in the story. These
include:
• Text
• Pictures and other graphics
• Video
• Tweets
• Charts
To begin, click the Sway icon in the app menu to connect to the Sway service. You can then select an existing
file or create a new Sway. In either case, you work with a card-based storyline to assemble the different
elements of the story and arrange the cards in order. Each card holds an element such as an image plus the
ability to add some text (such as a caption) plus some controls to allow for actions such as adding a web link
or emphasizing the element (for instance, to make an image larger when it is displayed). You can also add
“focus points” for images to let Sway know where the most important parts of the image are found.
Elements can be fetched from online sources such as YouTube or Flickr or uploaded from your PC. You can
also cut and paste text from other documents into Sway or input and edit text directly there. Basic text
formatting controls are provided, but it is usually better to use purpose-designed tools like Word to create
content before moving it over to Sway. Graphics should be in their desired form before being imported to
Sway as no tools are available to crop or otherwise adjust them within Sway. However, a nice search tool is
available to look for images available on the Internet that are covered by the Creative Commons license, so it
should be possible to find a suitable image for almost any purpose if you do not have a suitable one of your
own. Videos that match the search term will also be found and can be dropped into the Sway too. You can
upload a video file of up to 256 MB from a local drive or 768 MB from OneDrive for Business. You can also
add voice recordings to a Sway through the web app to explain more about the topic or make the Sway more
engaging.

Formatting and Displaying Sways


Sway applies themes to format elements for display. If you do not like the results generated by the suggested
theme, you can click the Remix! button to try another out and keep on doing this until you find something
you like. If that does not work, you can dive down into the details of design to create your own theme. The
Sway describing Office 365 administration created by Microsoft is a good example of what can be done with
the application.
Options are available to share Sways using social media such as Facebook and Twitter. You can also copy a
URL that can be embedded into any web page to allow users to access the Sway from there. From a privacy
perspective, the sharing can extend to your tenant, to anyone who can access the link, or to the wider Internet
(the link and its contents are discoverable by search engines). From March 2017, Sway records view statistics to
allow authors to track how many people have viewed their content.

When posting a Sway for display in a place like a booth in a trade show, people often want it to auto-play on
a continual loop. To do this, click the Settings icon, select Autoplay, and set the delay and then Start.
Sway is an application that is evolving fast. It is user-centric with limited administrative capabilities. The
content of Sway files is not exposed to Search and is therefore invisible to compliance features or Delve. Sway
does capture audit events that reveal who is creating and updating files, and these events show who is using
the application. None of this is surprising given the consumer focus from which Sway originated, and no doubt
some administrative infrastructure will be put in place over time.

Administrative settings for Sway: A number of administrative settings are available to allow control over
how the application behaves. You can disable Sway completely for a tenant, control whether Sways can be
shared externally, use licenses to control access on a per-user basis, and block or allow different content
sources (such as YouTube) that users can search to add content to their Sways.

Sway Clients
Sway supports all modern browsers. Microsoft recommends that you use the latest version of:
• Edge
• Chrome
• Firefox
• Internet Explorer
• Safari
Ideally, sways need a reasonable amount of screen real estate to display to best effect. The application can
resize the elements of the sway to fit small screens, but the best results are seen on larger screens (1600 x
1200 or better). Figure 8-22 shows the composition of a Sway with the storyline revealed. You can see the
kind of editing controls that are available to the user.

Figure 8-22: Composing a Sway storyline using a browser

In addition to browsers, a mobile Sway client is available for iOS and a UWP app is available in the Microsoft
Store for Windows. You can view and edit files using the mobile client. However, editing can be a little
frustrating on a small iPhone. Figure 8-23 shows how Sway displays the same content on different device form
factors. The larger PC (left) is obviously able to display more of the picture and text while the more
constrained iPhone (right) still gets the message across, albeit with a little less elegance. Regrettably, Microsoft
is retiring the iOS app from December 17, 2018. The app will no longer be available in the iOS store from
October 19, 2018.

Figure 8-23: Sway displays content on a PC (left) and iPhone (right)

Sway Data location: At the time of writing, the Azure service that holds the data for Sway runs in
Microsoft’s U.S.-based datacenters. This fact might create a deployment issue for non-U.S. companies. If
concerned about this issue, you should check with your local Microsoft office to verify the current
situation.

Deprecated SharePoint Online and OneDrive for Business Features
Like the other workloads, SharePoint Online is in a state of constant evolution. This means that Microsoft adds
new features, improves existing functionality, and deprecates some features. Deprecation means that the
feature will be eliminated in the future. Table 8-1 shows a list of deprecated features in SharePoint Online and
OneDrive for Business since 2018. This list comes from the information published in the Microsoft 365
Message center.
• Public Newsfeed in SharePoint Online: In June 2018, Microsoft started to remove one of the native social
features in SharePoint Online: the SharePoint Public newsfeed. The Public newsfeed is set to read-only, so
it is not possible to post new news there or reply to existing items, and the option to implement this
feature has been removed from the SharePoint Online administration. As an alternative, Microsoft
recommends using Team News, Communication Sites, and/or Yammer.
• Machine Translation Services for SharePoint Online: Beginning September 2018, Microsoft started to
remove the user interface entry point for Machine Translation Services from SharePoint Online. It is not
possible to perform manual or on-demand translation requests from SharePoint Online, nor to create new
variation labels for multilingual scenarios based on the variations feature in SharePoint Online. As an
alternative to this feature, Microsoft recommends using Microsoft Translator APIs.
• Classic Popularity Reports: Classic usage and popularity reports for classic sites are not available anymore
after February 29, 2020.
• SharePoint classic and Delve blogs: SharePoint classic and Delve blogs are not supported after July 17,
2020. After that date, users cannot create classic SharePoint blogs and Delve blogs are removed from
Delve profiles. Delve blogs creation was disabled on January 18, 2020.
• Activity column in OneDrive for Business and shared libraries: The Activity column in both OneDrive for
Business and shared libraries is being retired by Microsoft. Once the feature is retired, the Activity column
will no longer be visible. As an alternative, Microsoft recommends using the Activities section in the
Details pane for a selected file or folder.
• Custom Forms creation with SharePoint Designer: After identifying a code issue related to the creation of
custom forms in SharePoint Online, Microsoft deprecated the feature on April 25, 2020. The decision was
based on the impact to data integrity the issue could cause and the difficulty in developing a fix
guaranteed not to impact other SharePoint components.
Table 8-1: Deprecated features in SharePoint Online and OneDrive for Business

PowerShell for Power Apps


The cmdlets used to work with Power Apps and Power Automate are in a common module. There are around
90 cmdlets dedicated to working with Power Apps. The complete list of cmdlets can be generated using the
command Get-Command *power*. For more information about prerequisites and installation instructions for
PowerShell for Power Apps, see the PowerShell support for Power Platform page in the Microsoft documentation.
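If the modules are not already installed, they can be downloaded from the PowerShell Gallery. A sketch, assuming an elevated PowerShell session and the standard module names published in the gallery:
# Install the administration module (Get-AdminPowerApp and friends) and the maker module
[PS] C:\> Install-Module -Name Microsoft.PowerApps.Administration.PowerShell
[PS] C:\> Install-Module -Name Microsoft.PowerApps.PowerShell -AllowClobber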
The Power Apps cmdlets support access to the created apps in the service. To list all known apps in the tenant
use:
[PS] C:\> Get-AdminPowerApp | Select -ExpandProperty DisplayName

And to get the properties of only one of the apps, use the same cmdlet filtering by the name of the app:
[PS] C:\> Get-AdminPowerApp "Help Desk"

You can combine native PowerShell cmdlets with the Power Apps cmdlets to manipulate data. The following
example uses Select-Object to isolate the Owner attribute from the set returned by the Get-AdminPowerApp
cmdlet, and then pipes that output into a second Select-Object call to extract the owner's display name.
Finally, piping the result into Group-Object returns a table that includes a count of the number of apps
owned by each person:
[PS] C:\> Get-AdminPowerApp | Select -ExpandProperty Owner | Select -ExpandProperty DisplayName | Group

Reviewing Users with Trial Licenses
You can export and review information about users with trial licenses using the PowerShell cmdlet: Get-
AdminPowerAppLicenses. The exported file contains both self-service internal trial plans signed up for by
individual users and plans sourced from Azure Active Directory. Internal trial plans are not visible to admins in
the M365 admin center and are only visible through exporting via PowerShell.
[PS] C:\> Get-AdminPowerAppLicenses -OutputFilePath 'licenses.csv'
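
Because the output is a CSV file, you can review it afterwards with standard PowerShell cmdlets. A simple sketch that makes no assumptions about the column names in the export:
# Count the exported license records and preview the first few rows
[PS] C:\> Import-Csv 'licenses.csv' | Measure-Object
[PS] C:\> Import-Csv 'licenses.csv' | Select-Object -First 5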

Chapter 9: Directory Synchronization
To synchronize identities from your on-premises directory with Office 365, you must enable directory
synchronization in the Office 365 tenant and install the appropriate directory synchronization tool.
More information about the synchronization process in general, its various features, the supported
synchronization tools, and how to manage the synchronization process can be found in Chapter 3 of the main
book. Read through the chapter to familiarize yourself with the core concepts before following the steps
outlined below.

The Basics of Directory Synchronization


To access Microsoft 365 workloads, a user must have a cloud identity. Microsoft 365 lets you use your existing
on-premises AD with Azure AD through the process known as directory synchronization; in this process, some
of the details and attributes of on-premises users, groups, contacts, and other types of objects are
synchronized from the on-premises AD to Azure AD. Azure AD Connect is Microsoft’s tool for implementing
directory synchronization to Azure AD.
Many small organizations rely solely on standalone identities. However, organizations that have existing on-
premises AD are much more likely to synchronize their on-premises identities to Azure AD. There is no
magical number that states what the threshold is for implementing directory synchronization, nor is there any
lower limit. Obviously, the larger the on-premises organization, the more sense it makes to deploy directory
synchronization. Ultimately, if your users have an on-premises AD account, chances are you will want to use
hybrid identity, so you do not need to manage the user’s account (and password) in two places. The first step
to establish a hybrid identity infrastructure is to configure directory synchronization.
In addition to attributes of users, Azure AD Connect can also include the synchronization of password hashes
(salted hashes) to give end users a “same sign-on” experience in which they use the same username and
password to logon to both on-premises and online services without storing the actual passwords in
applications. Directory synchronization is also required for certain features. For instance, if you want to use
federation, or you are planning to configure a hybrid Exchange connection, you must configure directory
synchronization.

Real World: Directory synchronization underpins the Staged and Hybrid migration scenarios for Exchange
Online. When a cutover migration is performed, directory synchronization cannot be used before or during
the migration. However, synchronization can be implemented once the migration is complete. In situations
where third-party migration tools are used, it is critical that the documentation provided by the vendor is
carefully reviewed before any migration activity commences. In many cases, the Microsoft directory
synchronization tools can’t be implemented before or when a third-party migration tool is in use, as the
vendor might provide their own synchronization tools.
Because of the critical role directory synchronization plays in your ongoing identity management strategy,
several decisions must be made before you implement directory synchronization. These decisions include:
• Are you synchronizing objects from a single AD forest or from multiple forests?
• What kind of authentication will you use? Should you use password hash synchronization, pass
through authentication (PTA), or federation?

• Should you synchronize every object from AD or only a subset of objects?
• Do you need to use a dedicated SQL Server instance or is the built-in SQL Server LocalDB instance
sufficient for your environment?
• Should Azure AD Connect be installed on an existing server, a dedicated server, or even in Microsoft
Azure?
These decisions can lead to some potential for confusion when the time comes to implement Azure AD
Connect. We will now take a deeper look at Azure AD, how directory synchronization works, and the role that
directory synchronization plays in a hybrid deployment.

Note: Although we focus here on directory synchronization in the context of Azure AD, directory
synchronization can also be used to enable federation and single sign-on for a wide range of other
software as a service (SaaS) applications from Microsoft and other vendors.

Microsoft Directory Synchronization Tools


Three Microsoft directory synchronization tools are available:
• Azure Active Directory Connect (Azure AD Connect or AAD Connect): Azure AD Connect is the
recommended synchronization tool to use to connect on-premises and cloud directories. It is also the
tool that is offered as a download when you configure directory synchronization from the Microsoft
365 admin center.
• Azure AD Connect Cloud Sync: Microsoft has a new architecture for directory synchronization called
Azure AD Connect Cloud Sync. Unlike traditional Azure AD Connect, Cloud Sync uses a lightweight
agent that runs on-premises. The agent is configured from the Azure Portal. Unique to Cloud Sync is
the ability to synchronize multiple AD forests that are not accessible from a single server. This
capability is very useful in companies comprised of multiple acquisitions that do not share a single
connected network. Cloud Sync has a variety of limitations today, but we expect those limitations to
be eliminated over time. If you have multiple AD forests that are not all reachable from one network
location, you should explore Cloud Sync.
• Microsoft Identity Manager (MIM) 2016: MIM is the successor to Forefront Identity Manager 2010
(FIM). Prior to Azure AD Connect, there were specific directory synchronization use cases that could
not be fulfilled with Azure AD Connect’s predecessors. These use cases have been addressed in Azure
AD Connect and thus it is recommended to use Azure AD Connect over MIM.
The synchronization engine in Azure AD Connect is newer and has more features geared specifically
towards synchronization with Azure AD. There is no new engineering work being performed on the
Azure AD connector for MIM.

Azure AD Connect Technical Concepts


When describing the inner workings of Azure AD Connect, there are several concepts that play a key role in
the entire process.
• Metaverse: The metaverse is the consolidated view within Azure AD Connect of all the linked (joined)
identities from the various data sources (connector spaces). It combines the identity information for
an object and is stored in a SQL Server database.
• Connectors: Previously referred to as management agents, connectors are the modules within Azure
AD Connect that connect to data sources such as the on-premises AD and Azure AD.
• Connector space: Think of connector spaces as a cache that sits between a connected data source
and the metaverse. Any additions, changes or deletions are stored in the connector space until the
next synchronization operation occurs. The connector space does not contain the actual synchronized

object, but rather a shadow copy with the subset of the object's attributes that are marked to be
synchronized. Each connector has its own connector space and defines what objects and attributes
are stored within the connector space.
• Attribute flow: This is the process that copies data from one connected data source to another.
• Source anchor: The sourceAnchor is the attribute of the on-premises object used to link it to an
object in Azure AD. By default, the value of the object's objectGUID attribute is used. The value of the
objectGUID attribute is stored as a base64-encoded string in the ImmutableID property of the
corresponding object in Azure AD. As of version 1.1.524.0 of Azure AD Connect, the tool uses the
msDs-ConsistencyGuid attribute instead of the read-only objectGUID attribute if you have not
populated the msDS-ConsistencyGuid attribute for other purposes. When Azure AD Connect uses the
msDS-ConsistencyGuid, it takes the value of objectGUID and writes it back into msDs-ConsistencyGuid
in binary form. Doing this enables you to manually update the reference later; for instance, when you
need to link the object to another (already existing) cloud object, or if you have multiple AD forests
and need to re-link the account from another forest to the matching object in the cloud after a user
moves from one forest to another.
Azure AD Connect stores the source anchor value in its metaverse in the sourceAnchor attribute. The
chosen attribute must not change during the life of the object or it will cause synchronization
problems, authentication problems if you are using federation, and other unwanted effects on
Microsoft 365 services.
The basic concepts outlined here apply regardless of the version of the directory synchronization tool in use,
and you are likely to encounter them in any directory synchronization project.

Understanding Source of Authority


Only a handful of properties of a synchronized object can be managed directly in the cloud. Attempting to
edit other synchronized properties through the Microsoft 365 Admin center or PowerShell will generate an
error. For example, this error is flagged when an administrator tries to hide a synchronized user from the
address book by running the Set-MailUser cmdlet in Exchange Online:
[PS] C:\> Set-MailUser britta.simon -HiddenFromAddressListsEnabled $True

The operation on mailbox "Britta Simon" failed because it's out of the current user's write scope.
The action 'Set-MailUser', 'HiddenFromAddressListsEnabled', can't be performed on the object 'Britta
Simon' because the object is being synchronized from your on-premises organization. This action
should be performed on the object in your on-premises organization.

This happens because directory synchronization is a mostly one-way operation that flows from the on-
premises AD forest to Azure AD. The on-premises version of the object is the source of authority. Objects that
are created directly in Microsoft 365 are considered cloud-only objects and are, by default, not synchronized
back to the on-premises AD. The only exception to that rule is when you have a writeback feature enabled.
Writeback is a capability in Azure AD Connect which permits the synchronization of some object types (such
as Groups) and attributes (such as device partnerships) back to the on-premises AD forest. Note that the
decision to implement writeback can affect how hybrid recipients are managed. The various writeback
capabilities and their limitations are discussed later.
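For instance, to resolve the error shown above, you make the change against the on-premises object and let directory synchronization carry it to Azure AD. A sketch, assuming the account is represented on-premises by a remote mailbox object and the command runs in the on-premises Exchange Management Shell:
# Hide the user on-premises; the change flows to Azure AD at the next synchronization cycle
[PS] C:\> Set-RemoteMailbox britta.simon -HiddenFromAddressListsEnabled $true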
Despite the general rule that objects synchronized from on-premises cannot be managed using online tools,
some mailbox features can be managed in both environments. More specifically, this applies to features for
which the synchronization process will write back attributes to the on-premises environment, or for which no
on-premises management capabilities exist. For example, a litigation hold can be enabled on an Exchange
Online mailbox associated with a hybrid identity because the update made to the mailbox attributes can be
synchronized back to Exchange on-premises.

Identifying which objects are synchronized from AD is straightforward. If you manage user objects from within
the Microsoft 365 admin center, you can add a column called "Sync status" which states whether the
account is a standalone or hybrid identity. Similarly, you can use the PowerShell Module for Azure AD to run
the following query:
[PS] C:\> Get-AzureADUser -Filter "DirSyncEnabled eq true"

ObjectId                             DisplayName  UserPrincipalName
--------                             -----------  -----------------
4f9f6f4e-f12c-4528-949a-86bfb588048c Marc Spencer mspencer@o365itpros.com
7561973e-3ec1-4c42-bdbc-aa336dededc4 Billy Weaver bweaver@o365itpros.com

As this blog post points out, there is a quirk in the behavior of the Get-AzureADUser cmdlet when retrieving
details of synchronized or non-synchronized users: it returns true or null to indicate whether an account is
synchronized. In the above example, the filter property uses server-side filtering to find matching objects
before passing retrieved objects to the pipeline. This can significantly increase performance when working
with larger sets of objects.
However, server-side filtering does not support all operators. For example, you cannot use
“DirSyncEnabled ne true” to query for non-synchronized accounts. As explained in the blog, when using
client-side filtering, all (user) objects go to the pipeline first. While this approach can consume more time
and memory, especially when working with large object sets, it allows for more flexibility, such as using the
NotEquals (-ne) operator to filter for non-synchronized users.
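For example, the following client-side filter returns cloud-only accounts. This is a sketch; note that DirSyncEnabled is null rather than false for accounts that were never synchronized, which is why the test uses -ne $true:
# Retrieve all accounts first, then keep the ones that are not synchronized from on-premises AD
[PS] C:\> Get-AzureADUser -All $true | Where-Object {$_.DirSyncEnabled -ne $true} | Select DisplayName, UserPrincipalName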

Synchronization Interval
Azure AD Connect synchronizes Azure AD with the on-premises directory every 30 minutes. An administrator
can control various aspects of the synchronization schedule, such as the interval between synchronizations
(subject to a Microsoft-enforced minimum interval of 30 minutes) and the type of synchronization. Administrators can
also force a synchronization to occur outside the regular schedule. This topic is explained later.
In some cases, the period between change and effect will be longer. For example, when you enable a cloud-
based archive for an on-premises mailbox, the archive will be unavailable until directory synchronization has
run twice: one cycle to transit the change from the cloud to on-premises, and one cycle to synchronize back,
which means a longer delay could exist between enabling the archive and it being accessible in client
interfaces.

Note: The synchronization delay only applies to synchronized objects, such as user accounts. Password
hashes, which are synchronized if password hash synchronization is enabled, are synchronized every two
minutes.

Supported Synchronization Topologies


Best practice for AD has long been to keep the design as simple as possible, reflected by the recommendation
to use a single AD forest whenever possible. However, this has not always been possible, and some
organizations still use a legacy infrastructure that is more complex than necessary. For instance, mergers and
acquisitions might force you to maintain multiple on-premises AD forests. Some of these directories might
only contain user accounts, and others might also have Exchange Server deployed. Thankfully, Microsoft
supports synchronizing multiple AD forests into a single tenant. Although Azure AD Connect greatly simplifies
things, by allowing you to easily add multiple on-premises domains to be synchronized with Azure AD
through a step-through wizard, Microsoft still recommends that you maintain a single synchronized forest
whenever possible. Using multiple forests only increases the chance of synchronization errors, and lots of
other issues resulting from those errors. The synchronization scenarios described in the following sections are
supported by Microsoft.

Single forest, Single Azure AD tenant
This scenario is the simplest and most common synchronization scenario. A single on-premises AD forest is
synchronized with Azure AD through a single Azure AD Connect server. Except for staging mode server(s)
(more on this later), you cannot use multiple directory synchronization servers to synchronize with Azure AD.
Even though you could theoretically configure multiple synchronization servers to each only synchronize a
subset of objects by configuring exclusions, it will not work properly.

Multiple forests, Single Azure AD tenant


In this scenario, multiple on-premises AD forests are synchronized with a single Azure AD tenant. You can
synchronize by connecting multiple domains through a single Azure AD Connect server, but it is not
supported to connect multiple Azure AD Connect servers to a single Azure AD tenant.

Single forest, Multiple Azure AD tenants


In this scenario, a single on-premises forest (with multiple domains) is synchronized to multiple Azure AD
tenants. For example, consider a large multinational company that has separate domains for each of the
countries where it operates and wants to use separate tenants. Because a 1:1 relationship exists between an
Azure AD Connect server and an Azure AD tenant, a separate server must be used for each Azure AD tenant.
Additionally, each Azure AD Connect server must be configured to exclude objects that are synchronized by
the other servers to ensure that objects are only synchronized to one Azure AD tenant.
In addition to being able to only synchronize a subset of users to each Azure AD tenant, this synchronization
scenario also has other limitations. For instance, you cannot use the same UPN suffix across various Azure AD
tenants, because a domain can only be registered in a single tenant. As such, you must ensure that the
synchronized objects all use a different UPN suffix, depending on the Azure AD tenant they are synchronized
to. This namespace limitation also carries forward into Microsoft 365, more specifically with regards to mail
flow.

Preparing for Directory Synchronization


Before implementing synchronization with Azure AD Connect, there are some system requirements that you
will need to meet, as well as other preparation tasks to help make sure that the on-premises AD environment
is ready to be synchronized to Azure AD.
• The AD forest must run in Windows Server 2003 forest functional mode or higher. The domain
controllers (DCs) must also be running Windows Server 2003 SP1 or higher. It does not matter
whether the DCs are 32-bit or 64-bit. However, given that Windows Server 2003 and Windows Server
2008 R2 are now out of support, it is recommended to use at least Windows Server 2012 R2 DCs, or
preferably Windows Server 2016 or newer DCs. However, it is important to remember that the
different versions of DCs do not add or remove functionality from directory synchronization tools,
except for password writeback.
• Azure AD Connect can be installed on a server running Windows Server 2012 or newer. In some
scenarios, you might want to install the directory synchronization component on an existing server.
This can be any server in your environment from a domain controller to a file server. You cannot
install Azure AD Connect on Windows client operating systems. Although installing Azure AD Connect
on a DC is supported, it is generally not recommended for security and performance reasons.
• Make sure that the account(s) that you will use to configure directory synchronization have the
appropriate permissions. What permissions are required, and how to configure them, is explained
later.
• Decide whether you want to use object filtering. You can define filter rules, as described below, to
synchronize only a subset of your on-premises objects to Azure AD. Because you can enable object

filtering at any time, it is not critical that you decide this during the planning process. Note that if you
decide to use group based filtering, you must make that decision at the time of installation.
• Review on-premises objects and make sure they do not contain illegal characters or duplicate values.
It is important that no objects with duplicate or conflicting proxyAddresses attributes are synchronized
to Azure AD as this could disrupt the service for the user. The list of characters that are supported
varies from attribute to attribute. A full list of unsupported characters is available here. Microsoft
provides a variety of tools which can review and remediate potential deployment blockers. These
tools, and how to use them, are explained in the section “Review and remediate directory
synchronization blockers” later.
• You must also make sure that the UPN suffix for the on-premises user objects is an Internet-routable
domain such as yourdomain.com, for example john.doe@yourdomain.com. Preferably, the UPN should
also match the user’s email address. This is because the UserPrincipalName attribute is used to sign in to
Microsoft 365.
To synchronize the value from the on-premises environment you must have previously validated the
domain in your tenant. Given that you can only register public domains in your tenant that you can
verify that you own by adding DNS records as directed by Microsoft, the on-premises value must
match a domain that has been registered in your tenant. If you cannot update AD to use an Internet-
routable UPN suffix, you will need to modify the default Azure AD Connect configuration prior to
completing the installation.

Sizing the Azure AD Connect Server


Assigning too few resources to a directory synchronization server will not prevent it from synchronizing to
Azure AD, but it could slow synchronization down to the point that synchronization cycles cannot complete
within a reasonable period. If password synchronization is enabled, an underperforming synchronization
server might also cause password changes to reach Azure AD with a significant delay, thus impacting the
end-user experience.
Microsoft publishes a list of minimum requirements, which is a great starting point. The amount of resources
required depends almost entirely on the number of objects that are synchronized to Azure Active Directory.
Because the number of objects is likely to be in flux all the time, make sure to account for future growth.

Azure AD Connect and SQL Server


A Microsoft SQL Server database is required to hold the objects manipulated by Azure AD Connect. By
default, Azure AD Connect will install and configure a database running on SQL Server Express LocalDB, a
lightweight version of SQL Server Express. The LocalDB installation is valid for up to 100,000 total
synchronized objects (not just 100,000 user objects). This is not a hard limit, but there is a hard limit of 10GB
on the SQL database size. This size limit is where the 100,000-object limit comes from. To remain officially
supported by Microsoft, if you have more than 100,000 objects, you are required to use a full Microsoft SQL
Server database. That does not mean that, if you have 100,005 objects, Azure AD Connect will break.
However, the database for a metaverse containing more than 100,000 objects is very likely to grow beyond
the 10 GB database size limit of LocalDB. If you specify a separate SQL Server, the account that is used to run
the wizard must have permissions to create a database (the dbcreator role) on the SQL server.

Review and Remediate Directory Synchronization Blockers


To remediate possible issues, you can use the IdFix tool to bulk-edit conflicting or unsupported objects.
Figure 9-1 illustrates two unsupported objects in the IdFix tool. The attribute field shows which attribute
contains conflicting or unsupported values. The value field shows the current value and the update field
contains a suggested value to remediate the issue.

Figure 9-1: The IdFix tool
The Action drop-down menu allows you to select what action you want to perform on the object. If you select
Edit and then click Apply in the top-level menu bar, the on-premises objects will be updated with the value in
the Update field. Note that the value in the Update field is only a suggestion. As an administrator, you can
put in any value you like.
The thought of bulk-editing on-premises objects might sound scary at first. In a way, it is. However, the IdFix
tool will only display non-critical objects. Nonetheless, you should make sure that any change you make does
not have an adverse effect, for example to a line-of-business application that references AD attributes.

Real World: The IdFix tool covers the most common synchronization errors. However, there are some
conditions that can break the directory synchronization process but that IdFix does not check or fix. For
instance, it does not check whether different accounts in the environment have a matching UPN and
proxyAddresses.

Imagine there are two users in your organization with a similar name: Jane Doe and John Doe. Jane Doe's
UPN is jdoe@contoso.com and John Doe's UPN is DoeJ@contoso.com. However, for some reason, one of
Jane Doe's email addresses (in the proxyAddresses attribute) is DoeJ@contoso.com. Although potentially
confusing, this is a fully supported on-premises scenario. However, because of the importance of the UPN
in Azure Active Directory, the directory synchronization process perceives these two objects as a conflict,
because the UPN of one user is on the proxyAddresses list of the other. This scenario might not be very
common, but because it is not reported by either of Microsoft's tools, it is also very hard to detect
upfront. Another scenario that is not covered by IdFix is checking whether a synchronized attribute value is
too long. For instance, if you have an extensionAttribute that holds 1,200 characters, it will not be reported
by IdFix, nor will it be synchronized to Azure Active Directory because the value is too long.

Directory Synchronization and Duplicate Object Values


Previously the synchronization tools could not synchronize objects that have specific duplicate or conflicting
attribute values. This would, for instance, be the case when synchronizing a user object with an email address
or UPN that is already assigned to another Microsoft 365 account. Today, Azure AD Connect Health can
handle a variety of common directory synchronization problems. By default, instead of failing to create or
update the object, Azure AD uses a feature called Duplicate Attribute Resiliency to quarantine the conflicting
attribute value and continues to create or update the object. What action is taken depends on the type of
conflicting attribute:
• If the conflicting attribute is required to create the object (for example the user’s UPN), Azure AD
creates the object, but uses a placeholder value to replace the conflicting attribute. For instance, if the
conflicting UPN is John.Adams@office365itpros.com, the replacement value could be
john.adams6743@office365itpros.com. The addition of the random 4-digit number removes the
conflict and allows the object to be synchronized or created. An administrator can then revisit the
object and manually resolve the original conflict later.
• If the conflicting attribute is not required to create the object (for example a secondary email proxy
address), the object is created or updated without the conflicting value.
When the Duplicate Attribute Resiliency feature is disabled, the synchronization error report and all
subsequent reports include not only the current synchronization errors but also all previous, still
outstanding, errors. This is because, despite previous unsuccessful synchronization attempts, the
synchronization process continues to try to synchronize all objects until the conflict is resolved and the
action succeeds. When the Duplicate Attribute Resiliency feature is enabled, objects may be created
successfully despite possible conflicts and therefore the synchronization errors only show up in the report
once. Because it is possible that an administrator misses the report, the list of conflicting objects and
attributes is available through the MSOnline PowerShell module:
[PS] C:\> Get-MsolDirSyncProvisioningError -ErrorCategory PropertyConflict

In large environments, the above command can potentially return a lot of results. If needed, you can reduce
the number of results by using filtering parameters like PropertyName and PropertyValue. The following
example only shows synchronization errors caused by a conflicting proxy address. Note that more filtering
parameters and options are available and can help to further refine results.
[PS] C:\> Get-MsolDirSyncProvisioningError -ErrorCategory PropertyConflict -PropertyName
ProxyAddresses
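
You can also check whether the resiliency features are enabled for the tenant. A quick sketch using the MSOnline module, assuming it is already connected:
# Check the status of the UPN and proxy address resiliency features
[PS] C:\> Get-MsolDirSyncFeatures -Feature DuplicateUPNResiliency
[PS] C:\> Get-MsolDirSyncFeatures -Feature DuplicateProxyAddressResiliency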

Depending on the synchronization error and the conflicting attribute, various strategies exist to resolve the issue.
Some suggestions to tackle different conflict scenarios are available in this article. You can also use Azure AD Connect
Health in the Azure portal to visually review and download a list of synchronization failures.

Understanding Exchange Online Hybrid Writeback


Although the synchronization of objects is usually a one-way operation to Azure AD, a handful of object
attributes are written back into the on-premises organization if you enable Exchange hybrid deployment
support while setting up Azure AD Connect. The writeback attributes are necessary to support some features
in a hybrid deployment, such as the ability for an on-premises mailbox to access a cloud-based archive.
Currently, there is no way to select which attributes are synchronized back into the on-premises organization
from Azure AD. Table 9-1 lists the various attributes that are currently written back into the on-premises AD
from Azure AD for Exchange hybrid deployments.
msExchArchiveStatus: Enables the hosted online archive feature for an on-premises mailbox.
msExchUCVoiceMailSettings: Used in on-premises Lync or Skype for Business deployments to show if a user has voicemail in Exchange Online.
msExchUserHoldPolicies: Enables Exchange on-premises to figure out which (online) mailboxes have litigation hold enabled. This is important to ensure that the litigation hold is persisted when a mailbox is moved back from Exchange Online.
proxyAddresses (LegacyExchangeDN as X500): Used to ensure that mail flow continues to work after a mailbox is moved back from Exchange Online to the on-premises Exchange organization.
msExchSafeSendersHash, msExchBlockedSendersHash, msExchSafeRecipientsHash: The information that Outlook collects as part of its safe list aggregation is written into these attributes and then synced back. This is particularly important in hybrid deployments that have the centralized mail flow option enabled and use an on-premises filtering solution which can take advantage of these attributes; for instance, an Exchange Edge Transport Server.
msDS-ExternalDirectoryObjectID: This attribute is introduced in Windows Server 2016 (or Exchange Server 2016). It is derived from the ObjectID attribute in Azure Active Directory. The value of this attribute equals the ObjectID of the connected user account in Azure AD, prepended with “User_”. For instance: User_fca22808-8f7c-4443-af29-ddc3d10bb8f5. Although the documentation states the attribute is new in Exchange Server 2016, it also becomes available if you introduce a Windows Server 2016 DC into your on-premises AD environment. As such, you might find that you run Exchange Server 2010 and need to configure the appropriate permissions to allow synchronization to write this attribute too!
publicDelegates: This attribute stores information about who has (delegate) access to a mailbox. Exchange Online uses the cloudPublicDelegates attribute to control which users have Send-On-Behalf-Of permissions, while on-premises Exchange uses the publicDelegates attribute. Versions 1.1.553.0 and later of Azure AD Connect enable writeback of this attribute, which means that you can now synchronize sending permissions in hybrid organizations.
Table 9-1: Writeback attributes

Note: Microsoft is working to enable the ability to write back more than the limited set of attributes that
synchronization currently supports. When Microsoft delivers that capability, it might enable scenarios
where hybrid recipients can be managed from both environments. Today, various writeback capabilities
are available either under general availability or in preview. Each of these features has its own set of
objects and object attributes that are written back into the on-premises directory. We will discuss the
various writeback capabilities later.
The idea that the directory synchronization process can potentially modify object attributes in the on-
premises environment can be scary at first. However, it is important to understand that updates happen
securely. First, secure sockets layer (SSL) encryption protects communications between the on-premises
directory synchronization server and Azure AD. Secondly, the account created in the installation of Azure AD
Connect only holds the minimum set of permissions required.

Configuring Writeback Permissions


If you let the Azure AD Connect setup wizard create the AD account used for synchronization, the setup
wizard will automatically grant all the permissions necessary to support various writeback capabilities (and
normal synchronization). If you choose to use an account that already exists, or create a new account
manually, you must delegate the required permissions manually. While delegating permissions manually
provides an opportunity to grant the least amount of privilege, it also introduces additional opportunities for
error. A full list of the permissions necessary and the steps for configuring them is available in Microsoft’s
documentation.
Azure AD Connect includes a PowerShell module called ADSyncConfig containing cmdlets to configure the
necessary permissions for various components of Azure AD Connect. Before you can use any of the cmdlets,
you must first import the additional ADSyncConfig module that is available on the Azure AD Connect server.
To do this, you must first open an administrative PowerShell session:
[PS] C:\> Import-Module 'C:\Program Files\Microsoft Azure Active Directory
Connect\AdSyncConfig\AdSyncConfig.psm1'

Then, you can continue using the cmdlets. For example, to configure permissions for password writeback:
[PS] C:\> Set-ADSyncPasswordWritebackPermissions -ADConnectorAccountName "<service account>"
-ADConnectorAccountDomain "<fully-qualified domain name>"
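
The module contains similar cmdlets for the other permission sets. For example, to grant the connector account the permissions needed for Exchange hybrid writeback, a sketch using the same placeholder values as above:
# Grant the connector account write access to the Exchange hybrid writeback attributes
[PS] C:\> Set-ADSyncExchangeHybridPermissions -ADConnectorAccountName "<service account>" -ADConnectorAccountDomain "<fully-qualified domain name>"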

Object Matching for Existing Objects


The ImmutableID property plays a key role in the directory synchronization process as it is the attribute that
links an on-premises object to an object in Azure AD. When you run Azure AD Connect for the first time, the
sourceAnchor attribute is created automatically and its value is stamped on the ImmutableID property for the

cloud object. However, by the time you start Azure AD Connect, you may already have created cloud objects
in Azure AD.
To match existing on-premises user accounts to pre-existing users in Azure AD, the directory synchronization
process uses two different approaches: hard matching and soft matching.

Hard-matching Existing Azure Active Directory Users


Hard matching is when you link two objects by manually setting the ImmutableID on the cloud object. This
requires you to calculate the correct sourceAnchor value. When Azure AD Connect runs, it will automatically
detect the relation between the objectGUID and the ImmutableID and confirm the link between both objects
in its metaverse through the sourceAnchor. You can only set the ImmutableID on a cloud object in a tenant
where directory synchronization is not enabled. This technique can be useful if you are transitioning from
cloud-only accounts to synchronized accounts.
Generally, you will not want to do this if you don’t have to, because it requires manual or scripted action; soft-
matching (explained below) is an easier and simpler way to tie on-premises and cloud objects together.
If you need to use hard matching, you can use PowerShell to extract the ImmutableID from the ObjectGUID
(or any other attribute) and then verify if it matches the object in Azure AD. In the following PowerShell
example, the ObjectGUID for a user called Britta Simon is retrieved and then base64-encoded:
[PS] C:\> $objectGUID = Get-ADUser -Filter "Name -eq 'Britta Simon'" | Select ObjectGUID

[PS] C:\> [System.Convert]::ToBase64String($objectGUID.ObjectGUID.ToByteArray())


28yuIZMiVUuT2GJj6+qPDg==

To save you some typing, you can use a PowerShell script to generate the ImmutableID for an Active Directory
user object. Next, connect with PowerShell to Azure AD and update the ImmutableID for the cloud-based
object that you wish to link to the on-premises object:
[PS] C:\> Set-MsolUser -UserPrincipalName britta.simon@Office365ITPros.onmicrosoft.com -ImmutableId
28yuIZMiVUuT2GJj6+qPDg==

After you have manually configured the ImmutableID, you run Azure AD Connect. Once the process
completes, Azure AD Connect will link both objects together.
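To verify the result, you can compare the ImmutableID stored in Azure AD with the value you calculated. A sketch using the MSOnline module:
# Confirm that the cloud object carries the expected ImmutableID
[PS] C:\> Get-MsolUser -UserPrincipalName britta.simon@Office365ITPros.onmicrosoft.com | Select DisplayName, ImmutableId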

Soft-matching Existing Azure AD Users


Azure AD will automatically try to match users together by looking for objects whose primary SMTP address
or mail attribute match. This is known as soft-matching. For this to work, the following requirements must be
met:
• The existing user account must have an email address. It does not require a mailbox, but the
PrimarySmtpAddress or mail attribute must be configured.
• There cannot be more than one object with the same primary SMTP address in the directory. This will
cause the soft-matching process to fail as Azure AD is unable to determine with which object it
should match.
• The user object to match in Azure AD must not have a value for the ImmutableID property.
If Azure AD cannot match a user based on the SMTP address, it can attempt to match user accounts based on
the UPN instead. By default, however, this feature is disabled, and you must run the following PowerShell
command to enable it:
[PS] C:\> Set-MsolDirSyncFeature -Feature EnableSoftMatchOnUpn -Enable $true
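
To confirm the current state of the feature before or after changing it, you can query it with the matching Get cmdlet:
[PS] C:\> Get-MsolDirSyncFeatures -Feature EnableSoftMatchOnUpn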

In the scenario of an Exchange Online cutover migration, both the on-premises and Azure AD user object
should already have a matching SMTP address. This means that, once the migration completes, you can

configure directory synchronization and it should be able to soft-match all existing objects with little to no
effort if you do not have duplicate SMTP addresses. Duplicate addresses may exist when you have contacts in
the on-premises AD with the same email address as an Exchange Online mailbox. In this situation, you will
receive a directory synchronization error telling you that duplicate objects exist, and that Azure AD Connect
was unable to create or update the object. If you only have objects in Azure AD and wish to link on-premises
objects to them, copy the value of the email address from the object in Azure AD onto the on-premises
object.

Installing Azure AD Connect


Azure AD Connect is much more than just a synchronization tool; it can also take care of the installation and
configuration of AD FS and Web Application Proxy (WAP) servers. In addition, every installation of Azure AD
Connect also configures the Azure AD Connect Health Agent which interacts with the Azure AD Connect
Health service and provides administrators with valuable insights into the synchronization process and the
health of the synchronization components.
When installing Azure AD Connect, you have two installation options: Express and Custom. The Express
installation is the most basic configuration, and only installs the synchronization components with default
settings. The Custom installation allows for a much more granular configuration, including defining your own
service account, using a SQL Server database, etc. Aside from the additional configuration parameters, the
Custom installation can also install and configure AD FS and WAP servers for you.

Overview of Express Installation of Azure AD Connect


By default, the Azure AD Connect installation wizard suggests the Express Settings option, which installs and
configures the following options:
• Configure the directory synchronization engine to synchronize Azure AD with a single local Active
Directory forest. By default, all attributes are synchronized.
• Enable password hash synchronization.
• Enable Auto Upgrade; the feature ensures that Azure AD Connect maintains itself and automatically
upgrades to the latest version when available.
When you perform an express installation, the wizard guides you through a few basic steps, which include
specifying administrative credentials to your tenant and the on-premises AD.
The on-premises and cloud administrative credentials that you specify in the wizard are not used to run the
synchronization tool. Instead, the wizard uses these accounts to create the necessary service accounts. First,
Azure AD Connect will create a new service account in your tenant. The format of the service account looks
like Sync_ServerName_Guid@domain.com and the account is granted limited administrative permissions to
allow it to be able to write objects and attribute changes into Azure AD. Similarly, an on-premises account is
created in the default Users container in AD. This account is typically named MSOL_UniqueID, as shown in
Figure 9-2:

Figure 9-2: Azure AD Connect service account
When the account is created, it is granted permissions in your environment to execute the tasks which are
configured as part of the configuration wizard.

Real World: If permissions inheritance is blocked somewhere in your domain tree, you must manually set
the permissions on the organizational unit that has inheritance disabled. If inheritance is blocked again at
lower levels, you must repeat this step for each OU that does not automatically inherit permissions.

Overview of Custom Installation of Azure AD Connect


As an alternative to the Express Settings installation, you can customize the installation of Azure AD Connect
during setup by clicking the Customize button instead. The following installation options are available:
• Specify a different installation location. By default, the Azure AD Connect synchronization tool is
installed in "C:\Program Files\Microsoft Azure AD Sync".
• Define whether a separate SQL Server database should be used and, if so, what credentials to use to
access it.
• Specify an (existing) service account. Unlike the Express installation option, the account that you
specify will be used to run the synchronization service afterwards. The account does not need high-
level permissions and can be a standard user account; it only requires read access to AD for most of
its functionality. However, if support for a hybrid deployment, password hash synchronization, or any
of the writeback features are required, additional permissions must be granted to the service account
so that it can perform the necessary tasks such as updating the hybrid writeback attributes or reading
the password hash. More detail on how to configure these permissions is provided in the earlier
section “Configuring Writeback Permissions.”
• Specify custom synchronization groups. These AD groups control what administrative actions
someone can perform in the directory synchronization tool. By default, Azure AD Connect creates
local groups on the server, not in AD.
The Azure AD Connect installation wizard also gives you the option to automatically install and configure AD
FS. When you select to install and configure federation with AD FS, you must have at least two Windows
Server 2012 R2 or newer servers. One is used as the AD FS server; the other is installed as a WAP. You should
also prepare a public SSL certificate that you would like to use for AD FS.
The wizard can configure a new AD FS server farm or add servers to an existing farm. You will notice that the
wizard is very similar to the manual setup of AD FS servers. During the installation, you are asked to provide
the SSL certificate for service communications, and you must specify a service account or group managed
service account which is used to run the AD FS service.

Note: To install the AD FS and WAP servers remotely, the target servers must have Remote Management
(WinRM) enabled. Additionally, if the WAP server(s) is/are an untrusted, non-domain joined computer, you
must add the target host to the list of trusted hosts on the machine that is used to run the configuration
wizard. For more information, review this Microsoft article.

In addition to the above options, the custom installation also enables you to configure the following
capabilities:
• Object Filtering. Object filtering enables you to select what objects are synchronized to Azure AD.
The different filtering options are explained in Object Filtering later.
• Multi-Forest synchronization. You can add multiple on-premises domains to synchronize to Azure
AD. For each domain, Azure AD Connect creates a new connector. You can also specify how objects
are matched between the various domains. This is particularly useful when accounts are represented
in more than a single forest. This would for example be the case in a resource forest scenario: users
are represented once in the accounts forest, and a second time with a disabled account in the
resource forest.

Staging Mode
In the last step of the Azure AD Connect configuration process, you can configure directory synchronization in
Staging Mode. If enabled, staging mode will prevent the directory synchronization tool from writing
(exporting) data to either Azure AD or the on-premises AD. All the read-only matching and data collection
steps are still performed but nothing will be written either to Azure AD or the on-premises AD. This can be
particularly useful in two scenarios: when performing a swing upgrade from one version of Azure AD Connect
to another or when you want to have a second directory synchronization server configured as a warm
standby, ready to take over in case the primary server experiences an extended outage.
One can argue whether a real need exists for a highly available directory synchronization setup. By default, directory
synchronization happens every 30 minutes. If an Azure AD Connect outage lasts less than that time span,
there is little to no noticeable impact. Who and what functionality is impacted depends on the features
enabled in Azure AD Connect. For instance, if you have password hash synchronization enabled, no password
changes will be synchronized to Azure AD during the outage. The longer the outage takes, the more calls the
helpdesk will get about passwords or other user attributes being out of sync. Password writeback will also fail
to work if Azure AD Connect is unavailable since it requires a real-time communication channel with Azure AD
Connect.
If the Azure AD Connect server cannot be recovered, the simple solution is to deploy a new server. The
installation of Azure AD Connect, not including the time to build a new server from scratch, only takes a few
minutes. However, depending on the size of your organization, the initial synchronization can take several
hours (or much longer for very large organizations). The more objects that are synchronized, the longer it
takes. Depending on the situation, the time to wait for the initial synchronization to complete might no longer
be a viable option.
Staging mode allows you to have a second (or even third or fourth) synchronization server running in parallel
with the active instance. In case you need to activate the second server, all you must do on the staging server
is rerun the setup wizard to disable staging mode. At the next synchronization, the second server will now
start functioning as the “real” Azure AD Connect server, writing changes to both Azure AD and the on-
premises environment. Because the synchronization server has been running in parallel with the active
instance, but not previously writing changes, it will pick up synchronization where the formerly active server
left off.
The only caveat to this approach is that you must make sure that no two synchronization servers are active at
the same time, as this can yield unexpected results. With the exception of the sync scheduler configuration (which
is stored in Azure AD), the configuration of both synchronization servers must be kept identical, manually. This
means that any custom synchronization rules that are configured on the primary synchronization server must
also be applied to the second server to keep them consistent.

The staging mode option is also used during a swing migration. It allows you to install a second server and
verify that it is configured correctly by monitoring the synchronization process. Once the second server is
setup correctly, you can disable the legacy synchronization server and disable staging mode on the new
server. Finally, you can install a new staging mode server.
If you have a large Azure AD tenant and an in-place upgrade of the Azure AD Connect server will trigger a full
synchronization, you can also begin by upgrading the staging mode server. Once the staging server upgrade
is complete, you can place the legacy Azure AD Connect server in staging mode and activate the upgraded
server. Finally, upgrade the version of Azure AD Connect on the legacy server.

Optional Azure AD Connect Features


Some of the optional features, such as password hash synchronization, support for an Exchange Online hybrid
deployment, or password writeback were already covered. In addition, Azure AD Connect also includes other
functionality to unlock a variety of extra Azure AD capabilities, including:
• Azure AD app and attribute filtering. By default, all default attributes are synchronized to Azure
Active Directory. This feature allows you to control what attributes are synchronized, by either
selecting which workloads you will use in Azure AD, or by selecting which attributes you (do not) want
to synchronize. Mandatory attributes, required to make synchronization function properly, cannot be
filtered.
• Device writeback. Devices that are registered to a user’s account in Azure AD are synchronized back
into the on-premises directory, enabling you to use conditional access policies to shield access to on-
premises applications through integration with AD FS. Device writeback is also necessary for using
hybrid certificate trust with Windows Hello for Business.
• Group writeback. Group writeback enables on-premises recipients to interact with Microsoft 365
Groups. Without group writeback, Microsoft 365 Groups do not show up in the GAL and on-premises
mailboxes are not able to send email to them unless you manually create corresponding contact
objects for each group. The topic of dealing with Microsoft 365 Groups in hybrid environments is
covered in Chapter 5 of the companion volume.
• Directory extension attribute sync. Although most attributes are synchronized by default, some
organizations might have extended the on-premises directory with extra, non-standard, attributes.
Through this feature, those attributes can also be synchronized to Azure AD. Additional attributes are
not consumed by Microsoft 365 workloads and are only available to the Microsoft Graph and certain
Azure AD features.

Customizing Azure AD Connect


Whenever possible, we recommend that you use the default Azure AD Connect configuration. If you need to
customize the configuration, try to limit your customizations to the minimum number possible. This will
greatly reduce your risk when it comes time to upgrade Azure AD Connect. Common customizations include
limiting the objects synchronized to Azure AD and implementing custom synchronization rules to transform
data from AD before synchronizing it to Azure AD. If you customize Azure AD Connect or have a complex
configuration, you can import and export your settings.

Object Filtering
Except for some system accounts, the default mode of directory synchronization will synchronize every user,
group, and contact in your on-premises AD to the cloud. Although having the accounts in Azure AD will not
cost you money (since no licenses are consumed), some organizations do not want to have unnecessary
objects in Azure. This leads to the requirement to control which objects are synchronized and which objects

are excluded. While filtering can be used for this purpose, it is important to consider the tradeoff of added
complexity when you customize Azure AD Connect.
If the added complexity is acceptable, four methods can be used to apply filtering to your directory
synchronization:
• Domain-based – in a multi-domain forest you can select which domains should be synchronized with
Azure AD. This is common in scenarios where all the users, contacts, and groups are in a single
“account” domain within the forest. In this situation, it is more efficient to only synchronize this
domain, and ignore other resource domains.
• Organizational Unit (OU)-based – this allows you to specify which OUs contain objects that will be
synchronized to Azure AD. This method of filtering is a straightforward way to enable a slow, phased
approach to directory synchronization, in which objects are moved in small batches into an OU that is
in scope for directory synchronization.
• Attribute-based – this approach allows you to specify which object attributes (for example, the
department attribute for users) should be used to determine which objects synchronize to Azure AD.
Attribute-based filtering can be applied using either inbound or outbound synchronization rules.
Inbound rules are generally recommended as these will be preserved in an upgrade of Azure AD
Connect, while outbound rules will be overwritten during upgrades.
• Group-based – this allows you to specify a group in the local AD. Only member objects of this group
are synchronized to Azure AD. This option is only available when performing a custom installation of
Azure AD Connect. Note that you can only configure group-based filtering when first running the setup
wizard. Once you’ve completed the setup, you cannot reconfigure group-based filtering using the wizard.
Detailed steps to apply each type of filtering in Azure AD Connect are available on Microsoft's website. The
filtering options covered here are only available if you choose a custom installation. If you implement filtering,
understand that you are potentially changing the way users interact with on-premises recipients. This could be
the case in a hybrid deployment where you decide to only synchronize user objects with an Exchange Online
mailbox. Microsoft 365 users may not see most or all the on-premises mailboxes in the GAL.

Real World: If filtering is enabled, you must tread carefully with moving on-premises objects in AD. For
instance, if you excluded an OU from synchronizing to Azure AD, every object moved into that OU will not
be synchronized to Azure AD. If an object which was previously synchronized with Azure AD is moved to
that OU, it will automatically be deleted in Azure AD at the next synchronization interval.

Custom Synchronization Rules


The Azure AD Connect Synchronization Rules Editor allows you to make changes to attributes as they are
synchronized from AD to Azure AD. There are several dozen pre-defined sync rules that, together, implement
the standard Azure AD Connect sync functionality. Much like the registry, you can modify these rules to
change sync behavior—but you must be cautious when doing so.
Why would you want to make these changes? For example, many large organizations use the last name, first name format for the user DisplayName attribute so that the Exchange Global Address List (GAL) is ordered by last name. Exchange Online takes a more user-friendly approach to how applications refer to users: the default is to create user accounts with the DisplayName formed by the first name, a space, and the last name. You can accomplish this transformation with a rule.
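As a hedged illustration of such a rule (a sketch rather than a prescribed configuration; sn and givenName are the standard AD attributes for surname and first name), the expression you would flow to the displayName attribute in an inbound rule built with the Synchronization Rules Editor could look like this:

Trim([sn]) & ", " & Trim([givenName])

The square brackets reference source attributes on the connector space object, and the & operator concatenates the values.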
For all synchronization rule changes, the basic process is simple:
1. Pause scheduled synchronizations in Azure AD Connect.
2. Create a new rule or modify an existing rule by first duplicating it.
3. Preview the rule on a single object. You do not have to do this, of course, but it’s an excellent idea. Fix
any errors you find and verify that the target object is modified in the way that you expect.
4. Run a Full Synchronization. Fix any errors that you find and check a selection of objects to verify that
the rule did what you thought it would.
5. Re-enable the synchronization scheduler.
You can access the history of synchronization rule changes using the Get-ADSyncRuleAudit cmdlet (introduced
in version 1.6.2.4 of Azure AD Connect).
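A minimal PowerShell sketch of steps 1, 4, and 5, run from the Azure AD Connect server (steps 2 and 3 happen in the Synchronization Rules Editor itself):

[PS] C:\> Set-ADSyncScheduler -SyncCycleEnabled $false

# Create or modify the rule in the Synchronization Rules Editor and preview it on a single object

[PS] C:\> Start-ADSyncSyncCycle -PolicyType Initial

[PS] C:\> Set-ADSyncScheduler -SyncCycleEnabled $true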

Managing and Monitoring Directory Synchronization
After you implement directory synchronization, you naturally need to manage and monitor it on an ongoing
basis.

Checking the Synchronization Interval


Azure AD Connect's synchronization schedule is managed by a scheduler built into the synchronization tool. The scheduler can only be managed through PowerShell. To check the current settings, run the following command from the directory synchronization server:
[PS] C:\> Import-Module ADSync

[PS] C:\> Get-ADSyncScheduler

AllowedSyncCycleInterval : 00:30:00
CurrentlyEffectiveSyncCycleInterval : 00:30:00
CustomizedSyncCycleInterval :
NextSyncCyclePolicyType : Initial
NextSyncCycleStartTimeInUTC : 2/28/2019 4:47:40 PM
PurgeRunHistoryInterval : 7.00:00:00
SyncCycleEnabled : False
MaintenanceEnabled : False
StagingModeEnabled : False

• AllowedSyncCycleInterval specifies the minimum time between (automatic) synchronizations that Azure AD will allow. Note that this is a fixed value and cannot be changed.
• CurrentlyEffectiveSyncCycleInterval denotes the current interval. If a custom schedule was configured,
the CustomizedSyncCycleInterval will contain the currently used interval.
• NextSyncCyclePolicyType shows the type of the next synchronization. This can be either Delta or Initial. Delta means only changes will be picked up. Initial means the next synchronization will be a full import and full synchronization, which also includes changes. Full synchronizations re-apply synchronization rules to every object, even if the object has not changed. A full synchronization can take an extended period to complete.
• NextSyncCycleStartTimeInUTC displays the time the next synchronization is scheduled to start (in Coordinated Universal Time (UTC)).
• PurgeRunHistoryInterval controls how long the run history should be kept. The default is 7 days. The
run history can be useful for troubleshooting purposes, but also consumes space in Azure AD
Connect’s SQL Server database.
• SyncCycleEnabled indicates whether the scheduler is active.
• MaintenanceEnabled shows if maintenance mode is enabled.
• StagingModeEnabled shows if this synchronization server is configured in Staging Mode.

Forcing Synchronization
The default synchronization interval is 30 minutes. However, sometimes you might need to manually trigger a
synchronization sooner, for example if an urgent change needs to be synchronized.
To start a new delta synchronization cycle, run this PowerShell command on the Azure AD Connect server:
[PS] C:\> Start-ADSyncSyncCycle –PolicyType Delta

If you need to trigger a full synchronization, replace Delta with Initial in the command shown above.

Changing the UPN for a Synchronized User


Historically, the directory synchronization process did not update the User Principal Name (UPN) for a synchronized user once a Microsoft 365 license was assigned to that user. Although you could still change the UPN, doing so required a manual update in Azure AD, which is not very practical if you need to update many UPNs at once.
Tenants created after June 15, 2015 automatically update the UPN of a synchronized account in Azure AD
through the directory synchronization process. Tenants created before that date must enable the feature by
issuing the following command in Azure AD PowerShell:
[PS] C:\> Set-MsolDirSyncFeature -Feature SynchronizeUpnForManagedUsers -Enable $true

The change in how the synchronization service handles UPN updates for managed (synchronized) accounts
does not apply to accounts that are part of a federated domain. These accounts automatically handle UPN
updates without any administrator intervention.
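If you manage a tenant created before the cutoff date, a quick way to confirm whether the feature is already enabled is to query the directory synchronization features. A minimal sketch using the same MSOnline module (the -Feature parameter simply filters the output to the feature of interest):

[PS] C:\> Get-MsolDirSyncFeatures -Feature SynchronizeUpnForManagedUsers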

Triggering Full Password Hash Synchronization


When password hash synchronization is first enabled, it will sweep through AD and bulk-update all the user
passwords in Azure AD so that they are in sync with their on-premises counterpart. Generally, it is not
required to repeat this process. However, you can force a new full password hash synchronization (not to be
confused with a full synchronization). Be aware that while the full password hash synchronization is running,
no new password changes will be synced to the service. Users who change their passwords will not have those
changes replicated until the full password hash synchronization finishes and that could be days (or longer!) in
a large environment.
If you are certain that you need to (re-)synchronize all password hashes, run the following PowerShell
commands from the Azure AD Connect Server:
[PS] C:\> $onPremConName = "office365itpros.com"
$onPremCon = Get-ADSyncConnector -Name $onPremConName
$cloudConName = "office365itpros.onmicrosoft.com - AAD"

$pwSyncSettings = New-Object Microsoft.IdentityManagement.PowerShell.ObjectModel.ConfigurationParameter "Microsoft.Synchronize.ForceFullPasswordSync", String, ConnectorGlobal, $null, $null, $null
$pwSyncSettings.Value = 1

$onPremCon.GlobalParameters.Remove($pwSyncSettings.Name)
$onPremCon.GlobalParameters.Add($pwSyncSettings)

$onPremCon = Add-ADSyncConnector -Connector $onPremCon

Set-ADSyncAADPasswordSyncConfiguration -SourceConnector $onPremConName -TargetConnector $cloudConName -Enable $false
Set-ADSyncAADPasswordSyncConfiguration -SourceConnector $onPremConName -TargetConnector $cloudConName -Enable $true

After completing these commands, you should see multiple events with ID 650 in the Application event log.
Each of these events indicates a batch of users whose passwords are being synchronized to Azure AD.
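If you prefer to check progress from PowerShell rather than the Event Viewer, a minimal sketch to list the most recent of these events is:

[PS] C:\> Get-WinEvent -FilterHashtable @{LogName='Application'; Id=650} -MaxEvents 10 | Format-Table TimeCreated, Message -AutoSize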

Using the Azure AD Connect V2 Endpoint


In December 2020, Microsoft introduced a new API endpoint for Azure AD Connect. The most important change that you may notice in this update is the ability to synchronize groups with up to 250,000 members to Azure AD. If you have large groups or experience synchronization performance issues due to a large directory,
you should consider evaluating the new API endpoint. For information on how to use this endpoint and how
to synchronize large groups, review this document. Microsoft plans to re-configure Azure AD Connect servers
that have auto-upgrade enabled to use the new endpoint automatically. If you do not have auto-upgrade
enabled or available to you, you must manually update the API endpoint for your Azure AD Connect servers.
New installations, beginning with the 1.6.2.4 update to Azure AD Connect, use the V2 endpoint by default.
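For older installations that must be switched manually, Microsoft documents a small set of cmdlets that ship with Azure AD Connect for this purpose. A hedged sketch, assuming the default installation path (pause the scheduler first and run a full synchronization afterwards):

[PS] C:\> Import-Module 'C:\Program Files\Microsoft Azure AD Sync\Extensions\AADConnector.psm1'
[PS] C:\> Set-ADSyncScheduler -SyncCycleEnabled $false
[PS] C:\> Set-ADSyncAADConnectorImportApiVersion 2
[PS] C:\> Set-ADSyncAADConnectorExportApiVersion 2
[PS] C:\> Set-ADSyncScheduler -SyncCycleEnabled $true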

Monitoring Synchronization
Once directory synchronization has been set up and the initial synchronization completes, the synchronization process runs again every 30 minutes. The history of synchronization attempts is called the “run history”
and is stored in the SQL Server database used by the tool. Monitoring the run history is an effective way to
keep track of your directory synchronization deployment. The run history is exposed through the
Synchronization Service Manager tool on the Azure AD Connect server, which provides the administrative
interface for managing the tool. If you have Azure AD Premium, you can also check Azure AD Connect Health
in the Azure portal for a more in-depth view of the status of your Azure AD Connect servers.
If you want more detail, you can always use the Synchronization Service Manager, which exposes far more
controls than an administrator typically needs. You should always use caution when viewing information in
this tool. A good rule to stick to is “look but don’t touch”, unless you are making a supported change using a
Microsoft documented procedure. The Synchronization Service Manager is found on the Start Menu. After the
tool is open, you can see the run history from the Operations tab (Figure 9-3).
Multiple entries are logged for each synchronization cycle. Depending on the number of domains and
whether you are running a hybrid deployment or not, you will see different entries per cycle. Each step within
a synchronization cycle has its own status that shows the overall result of the step. Of course, "success" is what you are hoping to see, but other statuses like "completed-export-errors" can often be observed too. Those are the types of events that tell you where to start troubleshooting.

Figure 9-3: Viewing synchronization operations

Monitoring Azure AD Connect, AD FS, and AD DS with Azure AD
Connect Health
Azure AD Connect Health is a solution for monitoring Azure AD Connect that also allows tenants to monitor
federation servers (AD FS) and AD domain controllers (AD DS). The agent that uploads diagnostic information
to Azure is automatically installed with Azure AD Connect. As a result, no other configuration is required to
start using the service. The analytics and monitoring information collected by Azure AD Connect Health for
AD FS and AD DS is detailed and often exceeds what other off-the-shelf monitoring systems can provide.
From the health dashboard, the administrator can navigate through different widgets that give insights into
various aspects of the synchronization process, like detailed information about the latest synchronization
attempts, connection latency, the number of synchronized objects, and synchronization errors. If needed, the
administrator can also configure notifications so that an email is sent for all detected problems. Even if you already have a monitoring platform for AD FS or AD DS, we recommend that you strongly consider whether deploying Azure AD Connect Health will add value. It never hurts to have more information rather than less!
Note that you must have at least one Azure AD Premium license to access the dashboard shown in Figure 9-4.
If you have more than one Azure AD Connect server or other servers monitored by Azure AD Connect Health,
you must have an additional 25 Azure AD Premium licenses per monitored server. Microsoft documents this
licensing model in their FAQ for Azure AD Connect Health.

Figure 9-4: Azure AD Connect Health Dashboard


To monitor AD FS servers, the Azure AD Connect Health agent must first be installed on the federation servers
and audit logging must be enabled. Once the servers have been configured to report into the Health Service,
you will be able to find a variety of information on the federation servers and the authentication process. The
latter is especially useful because it allows you to track suspicious activity such as failed logons etc.
If you do not have Azure AD Premium licenses, you can still monitor the synchronization process, albeit in a
more basic way as the Microsoft 365 admin center includes a section on the synchronization process status in
its dashboard.

Upgrading Azure AD Connect
New versions of Azure AD Connect are released as new features are added, and bugs are fixed. The in-place upgrade is the easiest way to upgrade. After launching the installer, Azure AD Connect will detect that it is already installed and offer to upgrade to the latest version. While the upgrade process will preserve most
customizations, we recommend that you:
• Make as few customizations to the synchronization service and synchronization rules as possible
• Export your custom synchronization rules using the Synchronization Rules Editor before running any
upgrade
• Document all other customizations so that you can reapply them after an upgrade
Synchronization is disabled during the upgrade, which usually takes anywhere from 10 to 30 minutes depending on the size of the existing metaverse. During the upgrade process, you cannot make any configuration changes. The credentials that you specify to connect to Azure AD and the on-premises AD must match what was used before.
Some upgrades to Azure AD Connect may require a full synchronization to be performed. In a large
environment, this process can take a long time to complete. To prevent extended outages, consider
leveraging a staging mode server to perform the upgrade process.
It is important to stay current with Azure AD Connect updates. Microsoft generally supports Azure AD
Connect versions for a year after their release. From March 2024, Microsoft requires customers to run a
version newer than 1.1.751.0.

Azure AD Connect Automatic Upgrades


Because of the frequent updates to Azure AD Connect, it can be cumbersome having to manually update the
installation each time an updated version is available. Automatic upgrade replaces the need to manually
update Azure AD Connect by enabling each Azure AD Connect server to automatically pull updates when
Microsoft releases them, in the same way that Windows Update works. Automatic Upgrade is enabled by
default when the following conditions are met:
• Azure AD Connect was configured by an Express installation and no custom service account is used
• There are no more than 100,000 objects in the metaverse
• The SQL Server Express LocalDB database is used
• The AD account is the default MSOL_ account created by Express installation
To view if Automatic Upgrade is enabled, run this PowerShell command on the Azure AD Connect server:
[PS] C:\> Get-ADSyncAutoUpgrade
Enabled

The cmdlet can return three values: Enabled, Disabled or Suspended. Both Enabled and Disabled clearly show
their meaning. Suspended means that the system determined that it is no longer eligible for automatic
upgrades. For instance, this is the case when Azure AD Connect was reconfigured with custom settings.
If your system is enabled for automatic upgrades, but you do not want to automatically install new versions,
you can run the following command to manually disable it:
[PS] C:\> Set-AdSyncAutoUpgrade –AutoUpgradeState Disabled

Real World: Even when automatic upgrade is enabled, you might not see the latest version installed. This
is because Microsoft also must enable customers for automatic upgrade in the service. It might take a
while before all customers have been enabled for automatic updates (in the service). Until then, you can
still upgrade manually.

Chapter 10: The Hybrid
Configuration Wizard
A fundamental part of creating a hybrid Exchange connection with Office 365 is running the Hybrid
Configuration Wizard. However, before you can do so you must carefully consider the implications of a hybrid
connection and configure the necessary prerequisites, all of which is covered in Chapter 4 of this book.
Once you have successfully configured all prerequisites, you are ready to run the Hybrid Configuration Wizard.
Note that when running the wizard for the first time, the steps you will have to go through might be a little
different from when you run it for a second or third time. This is mainly because some aspects of the Hybrid
Configuration, like the federation trust with Azure AD Authentication System, only need to be configured once
and never change afterwards.
Some situations will prevent you from successfully running the HCW and force you to revert to PowerShell first. For instance, if an Exchange server which was previously part of the hybrid configuration is uninstalled without being removed from the hybrid configuration first, the HCW will display an error message and will not let you continue until you have corrected the situation. To do so, you must manually remove the server from the existing hybrid configuration using the following PowerShell command:
[PS] C:\> Set-HybridConfiguration –ReceivingTransportServers @{Remove="Server1"}

The following steps describe how to step through the HCW for the first time. Even though the HCW is a web-
based wizard, you should run it from an Exchange server because some components need local interaction
with the server.

Starting the HCW


When you click configure on the hybrid page of the on-premises EAC, you are automatically redirected to a landing page within the Exchange Online Control Panel. During this process, you will be asked to sign in as a Global Administrator for your tenant. While the redirection might seem redundant, it serves several purposes.
First, the page links to the latest public version of the Hybrid Configuration Wizard used to invoke the actual
wizard. Secondly, the page allows Microsoft to provide targeted customers with an alternative link to a
different version of the Hybrid Configuration Wizard. For instance, customers participating in Microsoft’s
Preview Program might get a link pointing to a preview version of an updated wizard. Thirdly, the landing
page allows Microsoft to detect conditions that might affect your setup experience, such as the browser you
use and/or if you have a popup blocker activated. Once you click the link, the necessary components will be downloaded onto your server and the wizard will start. At the time of writing, the total size of the download was around 12 MB. If you have previously run the wizard and no updated version is available, the wizard will re-use the previously downloaded bits. Click the click here link on the landing page to continue.

Real World. The sources of the Hybrid Configuration Wizard are downloaded from
https://mshrcstorageprod.blob.core.windows.net/o365exchangehybrid/. If you have a locked down
environment or if Internet Explorer's Enhanced Security is enabled, you will not be able to run the applets unless you add this hostname to Internet Explorer's trusted sites.
Another issue for automatic configuration is that it requires the wizard to be run from an Exchange server.
Although most administrators will likely do this anyway, the EAC could be run from any workstation/server
in the organization.

At this point, the latest HCW code is downloaded. You will be prompted to install the wizard, and you must
click Install to continue. Depending on your connection, this might take anywhere between a few seconds to
a few minutes.
After downloading the latest bits and initializing its applets, the HCW will automatically start. The first page in
the wizard is the Welcome page. This page serves no purpose other than to announce that the wizard is about
to start and to give some information about what the wizard does in the form of TechNet links. Click next.

Server Detection
The first real stage in wizard processing is server detection. Unlike earlier versions of the wizard, which connected to a server based on the existing Remote PowerShell connection, you can now determine from which server you want to run the wizard. Typically, this is the server you are using to run the HCW from. On this
page (Figure 10-1), you can also select what version of Office 365 you use. Today, there are several options
ranging from Office 365 Worldwide, to various sovereign or government versions of Office 365 like China,
Germany (the sovereign German “Black Forest” region), or GCC. The different versions of Office 365 exist to
meet specific regional requirements. For instance, if you select 21Vianet, the federation trust is not configured.
Click next to continue.

Figure 10-1: Starting HCW


Now, enter the credentials to connect to both Exchange on-premises and Exchange Online (Figure 10-2).
Unlike previous versions of the wizard, the credentials are verified before you can continue to the next step. If
the wizard determines it cannot connect using the credentials you provided, it will allow you to go back and
re-enter them. If an issue is detected in Exchange Online which may impact your ability to complete the
Hybrid Configuration Wizard successfully, a note stating Hybrid Configuration Service may be limited will
appear right under the email address of the Office 365 account you have used. However, it will not prevent
you from stepping forward in the wizard itself. Click next to continue.

Figure 10-2: Entering credentials for HCW


If the credentials were verified successfully, click next again to have the wizard validate the credentials by
making a connection to the on-premises and cloud environments.

Licensing a Hybrid Server: From July 2018 on, the HCW is able to obtain and assign a license
automatically for the on-premises hybrid server. A new Detect the optimal Exchange server step performs a
license check on the target server and will license the server if needed. If you choose not to license the
target server, you must choose another server before HCW can proceed.

Cutover or Hybrid
Depending on the size of your organization, you might be presented with a web page stating that Microsoft
does not believe that creating a hybrid connection is the right option for you (Figure 10-3). If you don’t agree
with this assessment, skip this step. While there is no fixed threshold that determines if and when to use a hybrid connection, it is true that for very small environments other (migration) options might be a better fit. To continue, check the check box next to I understand that a Hybrid Configuration and then click next.

Figure 10-3: The cutover choice

Trusts and Domains


If this is the first time you have run the wizard, and you have not yet configured a trust with the Azure AD
Authentication System, the wizard guides you through an additional step where it searches for a domain
name which is shared across the Office 365 tenant and the on-premises Exchange organization. For each
domain in scope (configurable), you are given a TXT record value which serves as proof of ownership of the
domain.
If you have multiple domains that could be included in the hybrid configuration, the wizard allows you to add
or remove them from the hybrid configuration, and you can designate one of the domains as a so-called
Autodiscover domain. This is the domain that will be used to determine the on-premises Autodiscover
endpoint which the HCW then uses to configure the hybrid configuration and some of its components like
Organization Relationships.
Even though you already had to go through a similar process when adding the domain(s) to the tenant, you must do so again here. This is because Exchange federation, which is configured here, uses a separate infrastructure from Office 365 and thus does not know that the domain has already been verified in Office 365. During the domain federation process, when you copy the value for the TXT record from the Domain
Ownership page (Figure 10-4), the wizard now only copies the actual value of the record instead of including
all the metadata. This minimizes the risk of someone accidentally including the metadata in the value of the
record. Once you have added the TXT records to the public-facing DNS zone for the domain(s), select I have
created a TXT record for each token in DNS and then click verify domain ownership. If the domains have
been validated successfully, click next.

Figure 10-4: Domain ownership

Real World. While verifying domain ownership, the wizard must obtain federation information for the on-
premises organization. As part of this process, the wizard uses Autodiscover to discover that information.
Sometimes, Autodiscover is not configured correctly for the internal organization, which caused previous versions of the wizard to fail. In an attempt to minimize failures during that process, the wizard now automatically attempts an external DNS query by connecting to an external DNS server whenever it detects a problem with the internal Autodiscover process. The wizard then tries to connect to the external Autodiscover endpoint and, hopefully, acquires the required information to proceed. Of course, if
Autodiscover is broken both internally and externally, this will not help.

Minimal or Full Hybrid


On the next page, you have to choose what type of hybrid configuration you want to configure. There are two
options:
• The Minimal Hybrid option configures the necessary mail flow connectors so that messages can flow from on-premises to Exchange Online and vice versa. Note that messages flowing between the two environments through a minimal hybrid connection are treated as external messages. This is an important difference with the full hybrid experience, and should be explained to your users. The Wizard also enables the MRS Proxy component, allowing you to leverage hybrid mailbox moves to move content to Exchange Online. Any other feature that provides users with a richer coexistence experience is not configured. Due to its limitations, the Minimal Hybrid option is intended for organizations that want to move to Exchange Online without having to go through the complexity of setting up a full hybrid configuration. Because of the reduced end-user experience, the time between setting up the hybrid connection and moving all mailboxes to Exchange Online should be kept as short as possible.
• As the name implies, the Full Hybrid option configures all the features required to provide a rich coexistence experience between the on-premises Exchange servers and Exchange Online. This includes advanced mail flow scenarios (messages are marked as internal, with support for Edge Transport servers), support for a variety of cross-premises permissions, the ability to perform Free/Busy lookups, and support for hybrid mailbox moves.

Organization Configuration Transfer


On the same page, you also have the option to enable Organization Configuration Transfer (OCT). This feature
copies specific settings from the on-premises environment to Exchange Online to ensure that the same values
are used in both systems and a more coherent experience is provided to the end user. As part of the copy
process, OCT copies the following policies to Exchange Online:
• ActiveSync Mailbox Policy.
• ActiveSync Device Mailbox Policy.
• ActiveSync Device Access Rules.
• ActiveSync Organization Settings.
• Address List policies.
• Exchange Data Loss Prevention (DLP) policies (as explained in the DLP chapter of the main book,
these are not the same as Office 365 DLP policies and only apply to Exchange messages).
• OWA Mailbox Policy.
• Malware Filter Policy
• Exchange Organization Configuration.
• Policy Tips.
• Mailbox Retention Policies and retention tags.
When it runs, OCT looks for policies in Exchange Online and compares them to the on-premises policies. If a conflict is found, you will later be able to select which policies you want to override in Exchange Online and which ones you do not. The copy process is a one-time action; policies are not automatically kept in sync. However, each time you run the HCW, you can re-run the Organization Configuration Transfer to update the policies in Exchange Online.

Figure 10-5: Configuring Hybrid Features

Mail Routing
On the following page, you select the options for mail routing. The default value is to only configure selected
Client Access and Mailbox servers. However, you can opt to include the configuration of Edge Transport
servers as well as to enable centralized mail transport (Figure 10-6). Click next.

Figure 10-6: Configuring mail transport
Next, you select the Exchange Servers for which the HCW will configure Receive Connectors. Using the drop-
down menu, you can select one or more servers (Figure 10-7) and then click next.

Figure 10-7: Selecting Exchange servers for the receive connector configuration
The next page (Send Connector Configuration) is similar to the previous one, except that you now select the servers for which the HCW will create a Send Connector (Figure 10-8). Note that you can only select servers that run either Exchange 2013 or Exchange 2016. Click next to continue.

Figure 10-8: Send connector configuration
After you select the servers to include in the hybrid configuration, you must select the appropriate transport
certificate. On the Transport Certificate selection page, the wizard allows you to pick the correct certificate
from a pre-populated list. The list of available certificates (Figure 10-9) only includes certificates that are
present on all selected servers and have successfully been installed and configured for the SMTP service on those servers. If the wizard cannot find a certificate that matches these criteria, a warning is displayed that a valid certificate could not be found. You cannot move forward until you correct the issue. Once you have selected the correct certificate (mail.office365lab.be in this example), click next.

Figure 10-9: Selecting a certificate to secure mail transport

Organization
On the Organization FQDN page, you must enter the endpoint which connects to the on-premises internet-
facing Exchange Servers (Figure 10-10). The endpoint is used for sending and receiving mail to and from
Office 365, and will also be configured as the first migration endpoint for mailbox moves. For example, enter
hybrid.domain.com if your public namespace pointing to your on-premises Exchange servers is
hybrid.domain.com. Then, click next.
As part of the features selected for HCW, you can choose to transfer some organization configuration settings
from the on-premises organization to Exchange Online. This is a one-time, one-way transfer of mailbox
retention policies and retention tags, ActiveSync and mobile device policies, and OWA mailbox policies. The
intention is to make sure that Exchange Online uses the same settings as the on-premises organization.
However, the HCW will only create objects inside Exchange Online if those objects do not already exist. No
synchronization of the settings occurs after the transfer, so if settings change on one side or the other, you
will have to decide whether to replicate the change to the other platform.

Figure 10-10: Entering the organization FQDN

Starting the Configuration


The last page informs you that you are about to make changes to your hybrid configuration (Figure 10-11). If
you are unsure about the changes you have made, you can always go back. Otherwise, clicking update will
start the configuration process.

Figure 10-11: Ready to create the hybrid configuration
After all input has been gathered, the Hybrid Configuration Engine makes the necessary configuration
changes. As part of the process, the engine collects information about the existing environments by executing
a set of PowerShell cmdlets, very much like how an administrator would do it manually. The gathered
information contains details on items such as Accepted Domains, Current Organization Configuration and
Organization Relationships.
When the wizard detects the configuration of your environment, it uses the ADPropertiesOnly switch to gather
EWS Virtual Directory information. This makes the process significantly faster. In previous versions, this
process would sometimes take over 8 hours to complete in large environments and cause the wizard to time
out as a result. The problem was caused by how the Get-WebServicesVirtualDirectory cmdlet queries information from a remote server. Without the ADPropertiesOnly switch, the cmdlet requests the data from the IIS metabase on the remote server, resulting in delays that can span multiple minutes per server.
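To illustrate the difference, a minimal sketch of the AD-only variant of the query (EX01 is a hypothetical server name):

[PS] C:\> Get-WebServicesVirtualDirectory -Server EX01 -ADPropertiesOnly | Format-Table Name, ExternalUrl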
Once the engine finishes collecting the information, it determines the difference between the existing and requested configuration. If there is a difference, which is the case if you run the wizard for the first time or when you make configuration changes, the engine continues to update the environment's configuration.
First, the service domain is added as a Remote and Accepted Domain to the on-premises environment.
Typically, the service domain will have the format of tenantname.mail.onmicrosoft.com. In addition, the wizard
also adds the tenant name (tenant.onmicrosoft.com) as a Remote Domain to the environment.
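If you want to confirm these changes afterwards, a minimal sketch from the on-premises Exchange Management Shell:

[PS] C:\> Get-AcceptedDomain | Format-Table Name, DomainName, DomainType

[PS] C:\> Get-RemoteDomain | Format-Table Name, DomainName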
Next, the engine modifies the default Email Address policy to include the service domain, so that all recipients
in the organization get stamped with the additional proxy address that matches the service domain. This is
required to ensure mail flow continues after mailboxes are moved to or from Office 365. To ensure that
existing objects are also stamped, the Email Address policy is re-applied to all recipients.

Real World. When the email address policy is updated, the wizard uses the UpdateSecondaryAddressesOnly switch to ensure that no primary email address is changed and that only a new proxy address is added. The
configuration engine updates the default email address policy assuming that it is applied to each recipient
in the organization, and that each recipient is configured to automatically update email addresses based on
an email address policy. In many environments, the check box used when setting mailbox properties to
automatically update addresses is unchecked and email addresses are added manually on an as-needed
basis. Before you can move a mailbox to Exchange Online, it must be stamped with a proxy address based
on the service domain and the changes must be synchronized to Office 365. The solution is to either add
the proxy address manually or to re-enable automatic updating of the recipients.

After the wizard completes, you can use the following PowerShell code to verify if a recipient was stamped
correctly:
[PS] C:\> Get-Mailbox smorris | Select -ExpandProperty EmailAddresses | Format-Table
SmtpAddress,Prefix,IsPrimaryAddress -AutoSize

SmtpAddress                               Prefix IsPrimaryAddress
-----------                               ------ ----------------
smorris@hybridexlab1.mail.onmicrosoft.com SMTP   False
smorris@o365.exchangedemo.info            SMTP   True

After recipient configuration, the wizard creates the Organization Relationships between both environments.
The Organization Relationships underpin the Exchange Federation capabilities (such as exchanging Free/Busy
information) and exist in both the on-premises Exchange organization and Office 365. In both environments, the objects are named similarly: On-Premises to O365 - <GUID> and O365 to On-Premises - <GUID>.
The GUID that is used in the Organization Relationships stems from the Organizational GUID of the on-
premises organization and can be queried using the following cmdlet in the on-premises organization:
[PS] C:\> Get-OrganizationConfig | Select guid

Guid
----
5b394b2f-8d59-4df4-8eb4-a784111a2fe4

After the Organization Relationships have been created, the engine will also configure an Availability Address space. Although the address space is not used in a native Exchange 2010, Exchange 2013, or Exchange 2016 environment, or in mixed Exchange 2010/2013/2016 environments, it is added in case a legacy Exchange 2007 server still exists. Of course, this would only apply to Exchange 2010 or 2013 as Exchange 2016 cannot be installed in an
Exchange 2007 organization. Adding the availability address space ensures that requests for Free/Busy
information originating from an Exchange 2007 mailbox are proxied through an Exchange 2010 or 2013
server.
Lastly, the wizard makes the necessary configuration changes with regards to mail flow by making the
following changes in the on-premises Exchange organization:
• A new send connector by the name of Outbound to Office 365. The connector is configured to Force
TLS encryption (-RequireTLS: $true) and updated with the certificate information from the Hybrid
Configuration Wizard.
• The Default Receive Connector is modified to enforce TLS for hybrid mail flow and ensure that SMTP
headers from Exchange Online are maintained by setting the AcceptCloudServicesMail attribute to
$true

Similarly, in Exchange Online the following changes are made:
• A new Inbound Connector named Inbound from <GUID> to which the certificate information of the
on-premises organization is added so that only connections coming from the on-premises
organization are accepted.
• A new Outbound Connector named Outbound to <GUID>. If no centralized mail flow is selected, the
connector will be scoped to the accepted domains selected in the Hybrid Configuration Wizard only.
In a centralized mail flow, the scope includes all internal and external domains and is depicted by a
wildcard. The connector also includes smart host and certificate information which was entered in the
Wizard. The latter is used to ensure that mail is encrypted and only sent to the on-premises Exchange
servers.
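A hedged sketch to verify the connectors after the wizard completes (connector names include a GUID, so wildcards are used here). Run the first command in the on-premises Exchange Management Shell and the other two in an Exchange Online PowerShell session:

[PS] C:\> Get-SendConnector "Outbound to Office 365*" | Format-List Name, AddressSpaces, RequireTLS, TlsCertificateName

[PS] C:\> Get-InboundConnector "Inbound from *" | Format-List Name, ConnectorType, TlsSenderCertificateName

[PS] C:\> Get-OutboundConnector "Outbound to *" | Format-List Name, RecipientDomains, SmartHosts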

The Hybrid Configuration wizard also configures OAUTH if it detects that only Exchange 2013 and Exchange 2016 servers are present in the environment. There is no need to run a separate OAUTH wizard, as used to be the case before. If you still have older Exchange servers in the environment, like Exchange 2007 or Exchange 2010, the wizard will not configure OAUTH. It is up to the administrator to complete those steps manually afterwards.
As part of the OAUTH configuration, an Intra-Organization Connector is created in both the on-premises
environment and Exchange Online. These connectors work in a similar way as the Organization Relationships.
In addition to the connectors, several other elements are configured too. If the OAuth configuration wizard fails to successfully configure OAUTH, it automatically disables the Intra-Organization Connectors and notifies you of the problem. This is to ensure that the environment is not left with a corrupt OAUTH configuration, as this can cause all sorts of issues.

Completing Configuration
Once the wizard has configured both environments, you see a page to confirm the wizard ran successfully. If the process was able to complete, but could not perform one or more non-critical actions, you might see a page similar to Figure 10-12. In this case, the HCW was unable to communicate with the on-premises Autodiscover endpoint, which prevented it from updating some properties on the Organization Relationship in Office 365. In order not to fail the entire process, the HCW used an autodiscover endpoint which is based on
the domain name(s) included in the hybrid configuration. If you specified multiple domain names, the
autodiscover endpoint for the domain designated as the Autodiscover domain is used.

Figure 10-12: The wizard completes
At this point, you have completed your hybrid configuration and you can continue by testing various features such as secure mail flow or moving mailboxes to Office 365.

Modern Hybrid Architecture


In February 2019, Microsoft announced the public preview of the new hybrid topology that was previously
announced at the 2018 Ignite conference. This new architecture greatly simplifies the interconnectivity
between Exchange Online and the on-premises Exchange organization. Modern hybrid reached general
availability in July 2019 and the documentation is available online.
The previous hybrid connection requires multiple network ports to be opened from the Internet (Exchange
Online) into the on-premises organization to support a myriad of functionalities such as Free/Busy lookups,
mailbox moves and hybrid mail flow. Whilst this might not be an issue for some organizations, it can be a real
struggle for heavily-regulated industries and highly-secured environments such as financial institutions. They
often have very strict rules and requirements regarding how traffic is allowed to connect to their environment.
The modern architecture is based on the same principles as the Azure AD App Proxy. An agent installed in the on-premises organization creates and maintains an outbound connection to Exchange Online over TCP port 443 (HTTPS). During the setup, the agent registers an endpoint (URL) that only Exchange Online can
connect to. As such, connections originating from Exchange Online to your on-premises Exchange servers will
be routed to the endpoint, over the outbound connection to the on-premises agent(s). From there, the agent
routes the traffic to the backend Exchange Servers. To ensure the solution is highly available, you can deploy
multiple agents; incoming traffic will automatically be load-balanced over the installed agents.

Unfortunately, the new agent does not solve all challenges. Today, the agent can only connect from Office 365 to your on-premises servers, and it is limited to HTTPS traffic (used for Free/Busy lookups and hybrid
mailbox moves). Messages still have to flow either directly from Exchange Online to your on-premises
Transport Servers or – depending on your topology - Edge Transport servers. Over time, Microsoft will
expand the functionality of the hybrid agent and also include other workloads such as mail flow or perhaps
even a solution to remove the last Exchange servers from the on-premises environment after you have moved
all mailboxes to Exchange Online.
Now that the new agent is available, customers that still need to set up a hybrid connection can choose which architecture they would like to use. Customers that have already deployed a hybrid connection do not need to upgrade, as the new agent will bring them little value, if any.

The Modern Hybrid Topology


The process to use the modern hybrid topology is not very different from a traditional hybrid configuration.
Most steps in the configuration wizard are the same. After choosing either a minimal or full hybrid
deployment, you will be presented with a different page from where you can choose to Use the Exchange
Modern Hybrid Topology. Doing so will prompt you to agree to specific rules (EULA) for using the new
architecture. Note that, as long as the feature is in Preview, you should not expect much help from Microsoft
when running into issues.
After clicking Next, you will be asked for credentials to the on-premises Exchange servers. These credentials
will be used to make the required configuration changes. Click Next again.
Now, the wizard will provide you the option to install and configure the Hybrid Agent. The whole process to
automatically install and configure a single agent takes a couple of minutes. First, it downloads and installs the
latest version of the agent from Office 365. The agent (or connector) is hosted on the same domain as the
Hybrid Configuration wizard itself (hybridconfiguration.blob.core.windows.net/connector). If you have not yet
signed in to Office 365, you will be prompted for your tenant administrator credentials.
Once the installation is completed, the wizard registers the agent with Office 365 and creates a unique
migration endpoint which Exchange Online uses to move mailboxes from. Should you be interested, the URL
of that endpoint can be retrieved from the HCW log files but is not very useful. Even though the endpoint is
exposed to the Internet, you need specific certificates for authentication. These certificates are only available
to Exchange Online.
Upon successful completion of the agent installation, you can click Update to finalize the Hybrid
Configuration and make additional changes to the environment as needed.

Real World. When you decide to trial the modern hybrid topology, you may want to deploy additional
agents. To do so, you should re-run the HCW, and select a different Exchange Server during the initial steps
of the wizard. This will allow you to repeat the above process and install the agent on that particular server.

Chapter 11: Active Directory
Federation Services
This chapter gives information about how to configure Active Directory Federation Services (AD FS). It should be read in conjunction with Chapter 3 of the main book.

Configuring Active Directory Federation Services
Configuring single sign-on for Office 365 is done in two steps. First, the AD FS infrastructure must be
configured. Then, you enable one or more domains for federation. Active Directory Federation Services is a
built-in role of Windows Server 2008 R2, 2012, 2012 R2 and 2016. Windows Server 2008 R2 was released with
version 1.1, which is not supported by Office 365. If you want to deploy your AD FS farm on Windows Server
2008 R2, you must first configure version 2.0, which is available for free from Microsoft as a separate
download. From an authentication perspective, there is no difference which version of AD FS you use.
However, from a functionality standpoint, AD FS in Windows Server 2012 R2 and later offers some benefits,
including:
• Enhanced security because a dependency on IIS no longer exists
• Simplified deployment through a new UI wizard which also supports integrating with a separate SQL
server (instead of the built-in database)
• Remote installation and configuration
• Enhanced AD FS sign-in experience through extensive customization and branding capabilities
• Built-in support for multi-factor and other authentication types which makes it easier to configure
conditional access (for example, location-based)
• Faster and easier creation of custom claim rules
• Easier password management (for Office 365 users)
In the following sections, we describe how to configure AD FS with Office 365. Although Azure AD Connect
can take care of most of these tasks for you, going through the motions manually provides you with a better
understanding of how everything comes together, making troubleshooting easier if you ever run into issues.

Real world: In August 2018, the team at Okta (which, coincidentally, makes a product which competes with
AD FS) identified a security vulnerability in AD FS when used with MFA. An attacker who can capture one
user’s password and MFA authentication code can use the same MFA code as a second factor when
logging in as any other user in the organization. Microsoft patched it quickly (see the relevant KB articles),
and because the patch is included in the monthly security rollup, anyone who’s regularly patching their AD
FS servers will get the patch. The specific vulnerability here sounds horrifying but in practice is not quite as
bad as press reports make out. An attacker who wants to compromise Alice’s account using this
vulnerability must have the user name and password for Alice’s account and the user name, password, and
MFA code for Bob’s account. While this is certainly possible, it is fairly unlikely.

Configuring AD FS
The following example assumes that you use a Windows Server 2016 machine. You can enable the built-in AD
FS role using the Server Manager UI or through PowerShell:
[PS] C:\> Install-WindowsFeature ADFS-Federation

Success Restart Needed Exit Code Feature Result
------- -------------- --------- --------------
True    No             Success   {Active Directory Federation Services}

After restarting the server, you can launch the AD FS configuration wizard (Figure 11-1) from Server Manager.
The wizard will guide you through a series of steps and questions to configure the server.

Figure 11-1: The AD FS Configuration Wizard


1. Welcome: allows you to choose to create a new farm or add servers to an existing farm.
2. Connect to AD DS: requires you to enter credentials for an account that is member of the Enterprise
Administrators security group. This account is only used to create the AD FS configuration container
in Active Directory during setup.
3. Specify Service Properties: here you specify the SSL certificate which the AD FS server should use to
secure communications using HTTPS. The certificate must be issued by a public, trusted Certification
Authority and the subject name of the certificate must match the host name that will be used to
publish the AD FS farm onto the Internet.
4. Specify Service Account: this page allows you to specify a service account which will be used to run
the AD FS service on the AD FS servers. The account should be a regular domain user account and
does not require any elevated permissions. All servers in a farm use the same account and Windows
will manage its password. You can also specify a Group Managed Service Account, if your on-
premises Active Directory supports it.
5. Specify Database: on this page, you can choose what database type should be used to store the
configuration data.
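If you prefer to script the configuration instead of stepping through the wizard, the same steps can be performed with the Install-AdfsFarm cmdlet. A minimal sketch, assuming the SSL certificate is already installed on the server; the thumbprint, federation service name, and Group Managed Service Account shown here are hypothetical placeholders for your own values:

[PS] C:\> Install-AdfsFarm -CertificateThumbprint "0123456789ABCDEF0123456789ABCDEF01234567" -FederationServiceName "sts.office365itpros.com" -GroupServiceAccountIdentifier 'OFFICE365ITPROS\adfs-gmsa$'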

As explained before, you can create some resilience for the AD FS deployment by configuring multiple servers
in a single farm. A load balancing solution such as Windows Network Load Balancing or an external load
balancer must be in place to distribute incoming requests across the different servers in the farm. Microsoft
recommends the deployment of at least two federation servers for redundancy. A single AD FS server can
handle the connections created by approximately 15,000 users. Beyond that, you should add servers into the
farm to increase capacity up to the expected workload. Of course, these are only rough numbers. If you want to properly size your AD FS environment, use Microsoft's ADFS Capacity Planning Guide for AD FS 2.0. The advice also applies to more recent versions of AD FS.

Validating the AD FS Installation


In prior versions of AD FS, you could navigate to https://<servicename>/adfs/ls/IdpInitiatedSignon.aspx after configuring the server. You could then use the page to test if the AD FS server functioned properly (i.e., that it could talk to Active Directory and authenticate the user). In Windows Server 2016 this feature is disabled by default. To access the page, you must first run the following PowerShell command from the AD FS server, after which you can test the functionality:
[PS] C:\> Set-AdfsProperties -EnableIdPInitiatedSignonPage $true

If Windows Integrated Authentication is enabled, the web page might automatically sign you in (SSO) and not
display the authentication form. Note that once you confirm that AD FS works as expected, you should
probably disable the feature again. The only reason to leave it enabled is if you want to use AD FS to provide
SSO for third-party applications that allow users to initiate sign in from their pages. For example, you can
configure Amazon Web Services, BambooHR, or the Roadmunk or Aha! product planning tools in this mode.
If you can successfully authenticate, it means the AD FS servers can successfully talk to Active Directory, validate user credentials, and produce valid authentication tokens. At this point, however, you cannot accept logon requests from Office 365. For that, you must first configure a trust relationship with Office 365 so that the service knows where to send logon requests, and your AD FS farm is willing to accept them.

Enabling domains for federation


By default, when you add and validate a custom domain to Office 365, it will be marked as a “managed”
domain, meaning that identities in that domain must be authenticated directly by the service. In order to use
AD FS, you must federate your domains. You cannot mark a domain as federated when you create it, and
there is no GUI to configure federation. You must federate domains using PowerShell.
Because authentication for Office 365 configuration is handled by Azure AD, you must make a remote
PowerShell connection to Azure AD first. This can only be done after you install the following components:
• Microsoft Online Services Sign-In Assistant
• PowerShell module for Azure Active Directory
After these components have been installed, open PowerShell and run the following commands to connect to
Azure AD and convert the tenant domain to a federated domain. The first command is only required if you are
running on a Windows Server 2008 R2 computer:
[PS] C:\> Import-Module MsOnline

[PS] C:\> Connect-MsolService –Credential (Get-Credential)

[PS] C:\> Convert-MsolDomainToFederated –DomainName "o365itpros.com"

Successfully updated 'o365itpros.com' domain.

Running the Convert-MsolDomainToFederated cmdlet should result in a message that the domain was
updated successfully. Note that running this command requires the use of the 'legacy' Azure AD PowerShell
module. This is because the newer version (using the *-AzureAD* prefixes) does not support this functionality
yet.

Warning. Running the Convert-MsolDomainToFederated command will immediately change the authentication method for that domain. Don't run it until you've tested your AD FS configuration and are ready to put it into production.
The PowerShell example assumes that the code is executed from one of the AD FS servers in your
environment. If you are running the Azure AD PowerShell module from another non-AD FS server or
management workstation, you must execute the following command prior to converting the domain. This will
ensure that the AD FS configuration information can be read during the configuration of the domain
federation. In this example, the administrator is prompted for credentials. Note the account used must have
administrative access to the AD FS servers.
[PS] C:\> Set-MsolADFSContext –Computername <ADFS Server> -ADFSUserCredentials (Get-Credential)

When Convert-MsolDomainToFederated runs, a trust relationship is created between the on-premises federation server farm and the authentication platform in Azure. As part of creating this trust, specific
information from the on-premises AD FS environment is uploaded to Azure. To view which information was
stored during the process, you can run the Get-MsolDomainFederationSettings and Get-
MsolFederationProperty cmdlets. The information includes:
• Certificate details including information about the current token signing certificate
(TokenSigningCertificate) and the future token signing certificate (NextTokenSigningCertificate). The
token signing certificate is used by the AD FS server to digitally sign the tokens it generates. Office
365 uses the public key of the certificate to verify and validate the token before accepting is as a
proof of authentication.
• The federation endpoint information includes hostname(s) of the on-premises federation endpoint.
These different endpoints are important to the AD FS operations as they allow to differentiate
between the different methods of authentication (active, passive, etc.).
The NextTokenSigningCertificate value is used only when the automatic certificate rollover feature in AD FS is
used. The automatic rollover will ensure that the new (future) token signing certificate is automatically used
shortly before the current certificate expires. This step prevents an outage in case the administrator forgets to
renew the certificate. There is a catch though. The trust only maintains information on the current and a single
future certificate. As a result, after each certificate rollover, the domain federation information in Office 365
must be updated using the Update-MsolFederatedDomain cmdlet.
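A minimal sketch of that update, using the office365itpros.com domain from the earlier examples (add the SupportMultipleDomain switch if the domain was originally federated with it):

[PS] C:\> Update-MsolFederatedDomain -DomainName "office365itpros.com"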

Validating the AD FS Configuration


Once you have run the Convert-MsolDomainToFederated command, users should be able to login to Office
365 through your federation servers. You can easily test the authentication by signing in to Office 365 and
verifying that you are redirected to your federation service endpoint, and that you can successfully
authenticate using your on-premises credentials.

Note: To test federated authentication, you must have at least one user from your on-premises directory
synchronized with Azure AD and the synchronized user's UPN suffix must match the federated domain.
Alternatively, you can use Microsoft's Remote Connectivity Analyzer, which includes a section specifically
created to test authentication to Office 365 through federation servers. The test can be found under the Office 365 tab and is called Office 365 Single Sign-On test. To check if and for which domains federation has been enabled, you can look at the Domains section in the Office 365 portal, or connect to Azure AD PowerShell and
run the following command:
[PS] C:\> Get-AzureADDomain | ?{$_.AuthenticationType -eq "Federated"}
Name AvailabilityStatus AuthenticationType
---- ------------------ ------------------
Office365itpros.com Federated

To get a list of all federated identities (users who will authenticate through the federation solution), run the
following commands. Note that these commands might take a while to complete in larger environments:
[PS] C:\> $federatedDomains = Get-AzureADDomain | ?{$_.AuthenticationType -eq "Federated"}
[PS] C:\> foreach($domain in $federatedDomains){ Get-AzureADUser -All $true | ?{$_.UserPrincipalName -like ("*@" + $domain.Name)} | Select DisplayName,UserPrincipalName}

DisplayName UserPrincipalName
----------- -----------------
Tony Redmond tredmond@office365itpros.com
Paul Cunningham pcunningham@office365itpros.com
Michael Van Horenbeeck mvanhorenbeeck@office365itpros.com

Federating Sub-domains
Sometimes a customer might want to enable federation only for some of its users. This is common in the
education market, where a large university might have 50,000 or more student accounts in Office 365, but
where it is undesirable to have those students leverage federation because it could generate a heavy load on
their AD FS infrastructure. Another scenario is when an organization seeks to segregate control over domain
authentication. Controlling the authentication per subdomain allows you to specify a different AD FS endpoint
for each domain. For example, federation requests for europe.domain.com can be controlled by an AD FS
server farm in Europe while requests for us.domain.com are handled by a server farm in the US.
When a domain is federated, all its sub-domains are automatically federated with it. However, it is possible to
selectively enable federated authentication for sub-domains by executing the following steps in order:
1. Register the child domain(s) in the Office 365 tenant
2. Register parent domain in the Office 365 tenant
3. Configure authentication per (sub-)domain
It is crucial that you register the child domains before registering the parent domain. If your tenant already
has domains configured, you can use the following PowerShell command connected to Azure AD to
determine if a sub-domain was added before or after the parent domain:
PS C:\> Get-MsolDomain -DomainName childdomain.parent.com | Format-List *

ExtensionData : System.Runtime.Serialization.ExtensionDataObject
Authentication : Federated
Capabilities : None
IsDefault : False
IsInitial : False
Name : childdomain.parent.com
RootDomain : parent.com
Status : Verified
VerificationMethod : DnsRecord

If the RootDomain property points to a parent domain, it means that the child domain and parent domain are
linked. As such, if you modify the authentication type for the parent domain, it automatically changes for the
child domain as well. Unfortunately, there is no uncomplicated way to switch to the scenario where you can
control the child domain separately from the parent domain. If you find yourself in that situation, you should
open a ticket with Microsoft support and seek their help. If the RootDomain property is empty, you can
control the authentication type of the parent and child domain separately. To convert the authentication type
for one or more subdomains, without affecting the parent domain or vice-versa, you must use the
SupportMultipleDomain switch when executing the Convert-MsolDomainToFederated cmdlet as illustrated
below:

PS C:\> Convert-MsolDomainToFederated -DomainName childdomain.parent.com -SupportMultipleDomain
Successfully updated 'childdomain.parent.com' domain

The above approach works perfectly if you only have a single domain tree, like child1.domain1.com,
child2.domain1.com, child3.domain1.com, and so on. In larger, more complex environments where multiple
domain trees are used, things become a little trickier if you want to use the same AD FS infrastructure for all
domains; in some conditions authentication will fail as soon as you add another domain that belongs to a
different domain tree.

The AD FS configuration database


AD FS servers use a configuration database to hold a set of parameters and data to control server operation.
By default, the configuration database is stored in the Windows Internal Database (WID) that comes with
Windows Server 2008 and later. Alternatively, the configuration data can also be stored in a separate
Microsoft SQL Server database. A Windows Server 2016 farm based on the WID is limited to a maximum of 30
servers, provided there are no more than 100 trust relationships (Relying Party Trusts), which is far more than
you need to support Office 365 alone. However, if there is a need to scale out beyond these limitations, you
should configure AD FS to use an external SQL database. In addition to being more scalable, using a SQL
database allows you to enable additional security features such as token replay detection. These features can
be very useful to protect other applications that leverage AD FS, but they do not provide much value for
Office 365.
If you choose to use the WID during the AD FS configuration, the first server that is joined to the farm will
automatically create the database. This server then continues to assume the role of ‘Primary AD FS Server’.
Because of the importance of the configuration database, all servers that are subsequently added to the farm
will replicate the contents of the configuration database from the primary federation server to make sure that
a local copy is available. This approach ensures that each server can function independently and increases the
tolerance against failures of a single server in the farm. After the initial replication, each “secondary”
federation server will poll the primary federation server every five minutes to check for updates and – if
necessary – synchronize them.
In the WID scenario, configuration changes can only be made from the AD FS Management Console on the
primary federation server. If for some reason that server is unavailable, the other servers will continue to
service requests, but you cannot make any changes to the configuration until the primary server is brought
back online. If necessary, a secondary federation server can be promoted to become the primary server as
shown in the following example:
[PS] C:\> Set-AdfsSyncProperties -Role PrimaryComputer

Next, all other servers in the farm must be reconfigured to now synchronize from the new primary federation
server:
[PS] C:\> Set-AdfsSyncProperties -Role SecondaryComputer
-PrimaryComputerName <FQDN of primary federation server>
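To verify the role each server now holds, and the interval (in seconds) at which secondary servers poll the primary,
you can inspect the synchronization properties on each server. The following check is a reasonable way to confirm the
configuration:
[PS] C:\> Get-AdfsSyncProperties | Select-Object Role, PrimaryComputerName, PollDuration, LastSyncTime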

Switching from WID to SQL


Changing from the Internal Database to using a SQL database is not an easy task. Instead, it is better to
ensure you choose the correct database from the start. However, sometimes requirements change, or you
only find out later that you need more than 100 Relying Party Trusts, for example. If you find yourself in that
scenario, you have two options:
1. Create a new federation server farm and configure it to use SQL by specifying the SQL database
information during the setup.
2. Migrate the WID to SQL. From a high-level, this process includes the following steps:

a. Stop the AD FS Windows service on the primary AD FS server.
b. Use the SQL Management Tools to connect to the Windows Internal Database and detach the
AD FS databases.
c. Copy the database files to the SQL server and attach the databases. Before you do so, ensure
that the AD FS service account has a login on the SQL server.
d. Grant the AD FS service account appropriate permissions to the recently-attached AD FS
databases.
e. Point the AD FS server farm to the SQL database instead of the WID, and then restart the AD
FS service on the primary federation server.
f. Stop the AD FS Windows service on secondary federation servers in the farm and update the
AD FS properties to point to the SQL database, one by one. On each server, restart the AD FS
Windows service.
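The repointing in step e is typically done through the AD FS WMI provider. The following commands are a sketch of
the general idea; the SQL server name and the database name (which varies by AD FS version) are placeholders that
must match your own environment:
[PS] C:\> $adfs = Get-WmiObject -Namespace root/ADFS -Class SecurityTokenService
[PS] C:\> $adfs.ConfigurationDatabaseConnectionString = "Data Source=SQL01;Initial Catalog=AdfsConfigurationV3;Integrated Security=True"
[PS] C:\> $adfs.Put()
[PS] C:\> Restart-Service adfssrv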

AD FS proxy servers
Another component to consider deploying is the AD FS Proxy server or alternatively, for AD FS in Windows
Server 2012 R2 and later, the Web Application Proxy (WAP). Although AD FS proxy servers aren't required, it is
highly recommended to deploy them to increase security of the AD FS deployment and unlock additional
features. Like the AD FS servers, you can deploy one or more proxy servers in a load balanced proxy server
farm for redundancy. That farm is then configured to connect to the internal AD FS farm.
A good reason to deploy proxy servers is to avoid exposing the AD FS servers directly to the Internet. The AD
FS proxy server does not have to be a domain-joined server, so it acts as a barrier between clients and the AD
FS servers. The proxy accepts token requests from users, passes the information securely to the internal AD FS
servers, and, if the authentication succeeds, receives tokens from the internal AD FS servers and passes them
back to the user. For this reason, proxies are frequently deployed in a perimeter network.
Deploying a proxy does not change how AD FS works or the hostname that is used to publish the farm onto
the Internet. However, it does allow additional functionality, such as making a distinction between
authentication requests coming from inside or outside the organization's network.
In Windows Server 2008, 2008 R2 and 2012, the AD FS proxy came as part of the additional AD FS 2.0
download. In Windows Server 2012 R2 and later, Microsoft replaced the dedicated AD FS Proxy server role
with the Web Application Proxy, which is part of the Remote Access service. Some load balancer vendors
such as F5 or Citrix offer alternatives for the AD FS Proxy servers. Instead of deploying additional Windows
Servers, a service within their solution can take on the function of the AD FS Proxy. Of course, it is always a
clever idea to check with Microsoft if the solution you are considering is supported or not.
The process to deploy an AD FS Proxy Server or Web Application Proxy is very similar to how a regular AD FS
server is deployed. The following example guides you through the setup of a Web Application Proxy on
Windows Server 2016. The Web Application Proxy Role Service is part of the Remote Access Server Role and
can be installed using the Server Manager UI or through PowerShell:
[PS] C:\> Install-WindowsFeature web-application-proxy

Success Restart Needed Exit Code Feature Result
------- -------------- --------- --------------
True    No             Success   {Remote Access, Web Application Proxy}

In addition to the Web Application Proxy, it is recommended that you also install the Remote Access
Management Tools to provide access to the Web Application Proxy wizard. After the tools have successfully
been installed, but before running the Web Application Proxy Wizard, you must import the communications
certificate on the server. This certificate is selected in the wizard and is used to secure communications with
clients. The certificate should, at the very least, include the hostname of the federation endpoint. For instance,
adfs.office365itpros.com. When the certificate is imported, open the Remote Access Management Console
and click on Web Application Proxy in the navigation pane. Then click, Run the Web Application Proxy
Configuration Wizard from the results pane and follow the steps on screen:
1. Welcome: informs you that you are about to install a Web Application Proxy.
2. Federation Server: requires you to enter the federation service name and credentials for an account
that is a local administrator on the federation servers. This account is only used to create the AD FS
Proxy trust with the federation servers.
3. AD FS Proxy Certificate: here you specify the SSL certificate which the Web Application Proxy server
should use to secure communications (HTTPS). The certificate must be issued by a public, trusted
Certification Authority and the subject name of the certificate must match the host name that will be
used to publish the AD FS farm onto the Internet.
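For unattended deployments, the same configuration can be performed with the Install-WebApplicationProxy cmdlet
instead of the wizard. In this sketch, the federation service name is an example and the certificate thumbprint is a
placeholder for the thumbprint of the certificate you imported:
[PS] C:\> $cred = Get-Credential
[PS] C:\> Install-WebApplicationProxy -FederationServiceName "adfs.office365itpros.com" -FederationServiceTrustCredential $cred -CertificateThumbprint "<thumbprint of the imported certificate>"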
If the wizard completed successfully, you should be able to observe a series of events in the AD FS Admin
Event log. First an event ID 391 will be logged, indicating that a trust was successfully established with the
federation service, followed by an event 245, and 252 denoting that the Web Application Proxy was able to
retrieve and update its endpoint information. Finally, an Event ID 198 indicates the federation proxy started
successfully.

Extranet Lockout
Because the Web Application Proxy role allows you to differentiate internal from external access, you can also
benefit from the built-in extranet lockout policy, which further hardens your deployment. The feature
functions as a protection against brute force guesses, but also as a barrier against DDOS attacks.
This feature is a little misnamed. When the maximum number of failed logon attempts is reached, the account
itself isn’t locked out; instead, the AD FS server will stop processing authentication attempts for that account
for the duration specified. This choice was intentional to avoid blocking the user from accessing resources
internally.
By default, the extranet lockout policy is disabled, and it can only be enabled through PowerShell using the
Set-ADFSProperties cmdlet. When configuring the extranet lockout policy, the following options are available:
• ExtranetLockoutThreshold: this parameter defines the maximum number of failed logons before an
account is temporarily locked out.
• ExtranetObservationWindow: this value serves multiple purposes. It defines the time span during
which the AD FS servers keep track of failed logons and also defines how long federation servers will
stop processing requests for the affected account once the maximum number of failed logons has
been reached.
In the following PowerShell example, the extranet lockout feature is enabled, and the threshold for failed
logon attempts is set to 10 failed attempts in 20 minutes:
[PS] C:\> Set-AdfsProperties -EnableExtranetLockout $true
-ExtranetLockoutThreshold 10 -ExtranetObservationWindow (New-Timespan -Minutes 20)
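After making the change, you can confirm the configuration with Get-AdfsProperties; the three properties below
reflect the extranet lockout settings:
[PS] C:\> Get-AdfsProperties | Select-Object ExtranetLockoutEnabled, ExtranetLockoutThreshold, ExtranetObservationWindow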

Changing the token lifetime


In a default Microsoft AD FS implementation, the token returned as part of the authentication process is valid
for up to 8 hours. The SSOLifeTime attribute controls the duration of the token validity and can be viewed by
running the Get-ADFSProperties cmdlet on one of the AD FS servers in the farm. This token lifetime has
nothing to do with the lifetime of tokens issued by Office 365 once a user logs on; it’s possible for a user to
have both an expired AD FS token and a valid Office 365 token at the same time.
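To check or change the value, you can run commands along these lines on a federation server. The lifetime is
expressed in minutes, and 600 is simply an example value:
[PS] C:\> Get-AdfsProperties | Select-Object SsoLifetime
[PS] C:\> Set-AdfsProperties -SsoLifetime 600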
Once a connection is authenticated, the same token can be reused to login to Office 365 services within the
specified timeframe. However, each service or application can impose different timings. Unfortunately, the
documentation on what the exact timings are per application is non-existent. As such, you might find that
closing and re-opening Outlook will trigger the authentication process entirely again whilst doing the same
with OWA will allow you to continue to access other Office 365 workloads using the same token you received
previously. For an end user, the difference might not be visible at all. Even if the token is still valid, the
authentication platform in Office 365 will still redirect the client’s request to the organization’s federation
endpoint where – instead of re-authenticating the request – the existing token is simply re-used when
redirecting the user back to Office 365.

Restricting access to Office 365 through AD FS


By default, when AD FS is configured for Office 365, all users in the environment can receive an AD FS token
for Office 365. Although a user can authenticate, it doesn't necessarily mean that each user will have access to
Office 365-based services; if you don't assign a license to a user in Office 365, they won't be able to use any
services. One symptom of this is when users log in to the Office 365 portal and see none of the service icons,
just the interface for changing their individual account settings.
Sometimes an organization might want to limit access to Office 365 using a variety of conditions, like a user's
group membership or possibly the location from where a user is trying to authenticate inside the corporate
network or outside (from the Internet). To allow such granularity, AD FS can be configured with something
called custom claim rules, sometimes also referred to as Client Access Policies. More specifically, these claim
rules are configured within the issuance authorization rule set which controls what accounts can receive a
token for the relying party, that being Office 365. Without going into the very specifics of how claim rules
work, you can regard them as condition-based access policies: if a specific condition is met, a user is either
allowed or denied access. Claim rules can be created manually by writing the code for the claim rule using the
appropriate claim rule language, or they can be constructed using the built-in wizard in the AD FS
Management Console.
A claim rule can contain one or more conditions. A condition consists of an incoming claim and a value that
can be used to validate that claim. For instance: email address, User Principal Name, group membership, and
so on. The administrator can then choose to allow or deny access if a value matches.
Restricting access based on the location of a client is more difficult. In this scenario, an AD FS Proxy (or Web
Application Proxy) is required. Even when AD FS Proxies are deployed, users on the internal network usually
connect directly to the AD FS servers while external users (not connected to the corporate network) connect
through the AD FS Proxy servers. Because of the difference in the connection flow, external users can now be
differentiated from internal users, for example using a new, custom, claim called X-MS-Proxy. For this to work,
however, some more configuration on the AD FS servers may be required. Older versions of AD FS require
that an administrator configure the additional claim X-MS-Proxy so that AD FS is aware that such a claim
exists. Next, the existing relying party trust with Office 365 must be updated with a new rule which will deny
(or allow) a token to be created if the claim is found in the token request.
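To give a feel for the syntax, the following issuance authorization rule is a sketch based on Microsoft's published
Client Access Policy examples: it denies a token when the request arrived through a proxy and the
insidecorporatenetwork claim does not indicate an internal connection. Test any rule like this carefully before
applying it to the Office 365 relying party trust:
exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-proxy"])
 && NOT exists([Type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", Value == "true"])
 => issue(Type = "http://schemas.microsoft.com/authorization/claims/deny", Value = "true");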
Using the X-MS-Proxy claim is not the only way to block external traffic. Although using the claim seems
perfectly valid, it does not work in scenarios where the active authentication flow is being used, such as when
Outlook connects to Exchange Online (without Modern Authentication). Even though a user might be residing
inside the corporate network, the connection to AD FS is made by Microsoft's authentication platform, which
is outside the organization's network. In this case, another claim could be configured: X-MS-Forwarded-Client-
IP. Using a regular expression which matches the organization's internal IP address ranges against the IP address
returned by the claim, a user can be denied access if the claim does not match. Keep in mind that when users

connect through a VPN, if the IP address they’re issued is on the corporate network, they’ll still be able to log
in to services where the claims rule grants access based on being on the corporate network.
Different claim rules and conditions can be used to create a set of Authorization rules that allow or deny
access. To help achieve this purpose, a Microsoft article describes how to prevent access to Office 365 for
external connections.

AD FS and Modern Authentication. Some limitations still apply when using Modern Authentication. For
instance, if you want to restrict access to Office 365 based on the client’s location, or you need to restrict
access to certain types of applications, it is worth reading through the options Microsoft documents for these scenarios.

AD FS and modern authentication


As described earlier, Modern Authentication changes the way clients authenticate with Office 365, and how
they communicate with federation servers. When you start using Modern Authentication, you might notice
that you aren't always getting a full SSO experience. Instead, clients are presented with the forms-based
authentication window, even when using a domain-joined computer from within the corporate network.
Although the Office client will first attempt to authenticate using Windows Authentication, it attempts to do
so on the WS-Trust 1.3 endpoint of the AD FS server farm, which is not enabled by default. Because that
attempt inevitably fails, Office automatically fails back to forms-based authentication. This annoyance can be
avoided easily, by enabling the WS-Trust 1.3 endpoint on the federation server farm. The quickest way to do
this, is to run the following PowerShell command on one of the federation servers. Note that before the
endpoint is available, the AD FS service on all federation servers in the farm must be restarted.
[PS] C:\> Enable-AdfsEndpoint -TargetAddressPath "/adfs/services/trust/13/windowstransport"

WARNING: PS0038: This action requires a restart of the AD FS Windows Service. If you have deployed a
federation server farm, restart the service on every server in the farm.
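You can check the state of the endpoint before and after the restart; the Enabled and Proxy columns show whether
the endpoint is active and whether it is published through the proxy:
[PS] C:\> Get-AdfsEndpoint -AddressPath "/adfs/services/trust/13/windowstransport" | Select-Object AddressPath, Enabled, Proxy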

AD FS auditing
By default, older AD FS versions don’t keep track of successful and failed authentication attempts. Yet, this
information can be extremely useful for a variety of reasons. If not for general reporting purposes, it allows
you to keep track of failed authentication and, thus, detect potentially suspicious activity. Most reporting
solutions for AD FS plug in on top of the built-in auditing capabilities. As a best practice, enabling auditing
from the start gives you a history you can work with. To enable auditing, you must first configure the AD FS
farm to log successful and failed authentication attempts. The easiest way is to use PowerShell to apply the
change to one of the AD FS servers in the farm:
[PS] C:\> Set-AdfsProperties -LogLevel $((Get-AdfsProperties).LogLevel
+ "SuccessAudits" + "FailureAudits")

To confirm that the additional events are logged, use the following command and verify that both
SuccessAudits and FailureAudits are shown:
[PS] C:\> Get-ADFSProperties | Select -ExpandProperty LogLevel

Errors
FailureAudits
Information
Verbose
SuccessAudits
Warnings

Next, edit the local security policy on the AD FS servers to allow auditing. This can either be done locally on
each of the federation servers, or through Group Policy. Using the Group Policy Management Console, or local

Policy editor, navigate to the following setting: Computer Configuration > Windows Settings > Security
Settings > Local Policies > Audit Policy. There, change the Audit object access to include Success and
Failure. Alternatively, you can also modify the local security policy on each federation server using the
following command. Note that, depending on your Group Policy configuration, these settings might be
overwritten at the next Group Policy refresh:
[PS] C:\> auditpol.exe /set /subcategory:"Application Generated" /failure:enable /success:enable
The command was successfully executed.

Once auditing has been enabled, the federation servers will start logging additional information in the
Security event log of the federation servers. Each authentication attempt (successful or failed) is accompanied
by several events. For successful authentications, Events with ID 299 and 500 are most interesting. The most
useful information for failed authentications is provided in events with ID 4625.
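As a simple illustration, the following command pulls the most recent failed authentication events from the Security
log on a federation server; it only returns results once auditing has been enabled as described above:
[PS] C:\> Get-WinEvent -FilterHashtable @{LogName='Security'; Id=4625} -MaxEvents 20 | Select-Object TimeCreated, Message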

Troubleshooting AD FS authentication
One of Murphy's laws says that when something can go wrong, it will. Although one can only hope it never
happens, the chances are that sooner or later you are faced with a problem in the AD FS infrastructure.
Depending on the problem, the solution might be simple and straightforward. For instance, a server goes
down because of a hardware issue, or because an underlying dependency failed.
However, when the federation servers are working properly, but the actual authentication fails,
troubleshooting is a lot tougher, especially when you’re using Office 365. This is mainly because you do not
have control over the entire authentication process. Large pieces of the puzzle are hosted and managed by
Microsoft, and you have no visibility into their infrastructure. As such, it is harder to identify the issue, or to
know whether the issue is due to a problem at Microsoft's end or with your federation servers.
To troubleshoot some of these issues, you might need to trace an authentication attempt to figure out what
exactly is happening to the request. For this, you will need to enable AD FS Debug Logging
first. To enable debug logging, open the Event Viewer on a federation server, click View and select Show
Analytic and Debug logs. In the crimson channel, under Applications and Services Logs, a new folder
called AD FS Tracing will appear. In that folder, there is a Debug event log. By default, this log is disabled. To
start logging, right-click the log and select Enable Log. Once the log is enabled, the federation server will start
generating a large number of mostly informational events for each authentication. Each of these events contains a
piece of the authentication puzzle and might reveal information such as the values of (some of) the claims passed
through the claims pipeline.
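If you prefer the command line, the same log can be toggled with wevtutil. This is a sketch; remember to disable the
log again when you finish troubleshooting because of the volume of events it generates:
[PS] C:\> wevtutil set-log "AD FS Tracing/Debug" /enabled:true
[PS] C:\> wevtutil set-log "AD FS Tracing/Debug" /enabled:false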

Real World. Because of the multitude of events that are generated for each authentication attempt it is
quite difficult to filter out relevant information from the noise. If you have multiple federation servers, it is
easier to (temporarily) single out a federation server for troubleshooting purposes. That way, you can force
only specific authentications to go through the server you have enabled tracing on. Another way to quickly
search for information in the AD FS Tracing log is to use the Activity ID. This is a unique ID which identifies
the authentication attempt and, by extension, all events associated with it. Typically, the Activity ID is
displayed on screen when the failure happens.

Password hash synchronization as a backup


Organizations sometimes substitute AD FS with password hash synchronization. Although many organizations
have deployed AD FS because it was the only way to achieve true single sign-on, the simplicity of password
hash synchronization makes it a suitable alternative for many organizations (not to mention that PTA now
exists and is much easier to deploy than AD FS).
Today, password hash sync is often deployed alongside AD FS as a backup mechanism in case the on-
premises AD FS infrastructure has an extended outage. The ability to switch from AD FS to Password

Synchronization is not automatic and can take up to two hours to complete. During that time, and depending
on the size of your organization, some users might be able to authenticate, others might not. During all my
testing, I've seen the process complete in under a few minutes. However, those test environments were all
limited in size. Your mileage may vary!
When AD FS is still available, switching from AD FS to password synchronization is done by issuing the
Convert-MsolDomainToStandard cmdlet while connected to Azure AD. This will update both Azure Active
Directory and the AD FS server farm. However, in case of an AD FS outage the on-premises AD FS server farm
cannot be updated, and you should use the Set-MsolDomainAuthentication cmdlet instead. This will only
update Azure AD and removes the need for AD FS to be available. When performing the switch, you do not
explicitly configure password synchronization; instead you are (temporarily) "un-federating" your domain so
that it uses Microsoft's authentication system again. If password synchronization was set up previously, it is
used automatically after the next synchronization by AAD Connect. If you have not yet set up Password
Synchronization, all your users will be assigned a new password in Office 365.
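The following commands sketch the two paths; the domain name is an example, and the password file path is just a
placeholder for wherever you want the generated passwords written:
# Planned switch while AD FS is still reachable
[PS] C:\> Convert-MsolDomainToStandard -DomainName office365itpros.com -SkipUserConversion $false -PasswordFile C:\Temp\NewPasswords.txt

# Emergency switch when AD FS is unavailable (updates Azure AD only)
[PS] C:\> Set-MsolDomainAuthentication -DomainName office365itpros.com -Authentication Managed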

Real World. As a best practice, it is recommended to configure password synchronization, even if you are
using AD FS for SSO. Imagine having to distribute new passwords to users in a large environment when
they don’t have access to email. Wouldn’t it be much easier if they could continue to use their existing
password instead?
While you can depend on different tools, it is better to make sure that your AD FS infrastructure is solid and
highly available. This ensures that users always have access to Office 365-based services, even when a single
AD FS server is experiencing issues. For instance, Microsoft Azure can be used to extend the on-premises AD
FS deployment. Instead of using a second datacenter, the on-premises network can be extended into Azure
where additional AD FS servers are deployed. This hybrid approach is ideal for organizations who do not have
access to a second datacenter but need site resiliency.

AD FS and password expiry notifications in Office 365


When passwords are set to expire, users in an on-premises environment typically receive notifications telling
them their password is about to expire or to prompt them to change the password when it expires. The same
is true for users in Office 365 that have a cloud-only account. Federated users in Office 365, however, do not
get a notification in Office 365. This can be a problem for remote users that are never in the office, or do not
use a corporate device. This is because Office 365, by default, does not know when the on-premises password
is about to expire.
Although AD FS supports providing information about password expiration to Office 365, it is not enabled by
default, and you must configure an additional Issuance Transformation rule so that the information is sent to
Office 365. Using the Claims Rule wizard, add the following Issuance Transformation rule to the Office 365
Relying Party Trust:
c1:[Type == "http://schemas.microsoft.com/ws/2012/01/passwordexpirationtime"]
=> issue(store = "_PasswordExpiryStore", types =
("http://schemas.microsoft.com/ws/2012/01/passwordexpirationtime",
"http://schemas.microsoft.com/ws/2012/01/passwordexpirationdays",
"http://schemas.microsoft.com/ws/2012/01/passwordchangeurl"), query = "{0};", param = c1.Value);

The newly-created claim will ensure Office 365 gets the following additional information:
• The exact time when the password expires
• The number of days remaining before the password expires
• The endpoint (URL) where the password can be changed
Before the latter can work, you must first enable AD FS to allow password changes. This is explained in the
next section.

Enabling password updates through AD FS
AD FS in Windows Server 2012 R2 and later versions support updating expired passwords, but by default the
feature is disabled. The ability to update an expired password is not the same as a password reset. The latter is
something that the user can do at any given time, while updating an expired password only happens when it
is expired. Resetting passwords is typically done by an administrator or the users themselves through a
password reset portal like the Azure Self-Service Password Reset portal. If you have configured password
writeback, the password can also be changed (reset) in the on-premises directory. To enable the ability to
update an expired password, you must complete the following two steps. If you do not want unregistered
devices to have the option to update passwords, you can skip the first step.
To enable the feature, you must enable the AD FS endpoint used for password updates. Do this by running
the following PowerShell command on one of the federation servers. Afterwards, you should restart the AD FS
Windows services on each of the federation servers in the farm.
[PS] C:\> Enable-AdfsEndpoint "/adfs/portal/updatepassword/"
WARNING: PS0038: This action requires a restart of the AD FS Windows Service. If you have deployed a
federation server farm, restart the service on every server in the farm.

If you have deployed Web Application Proxies, you should also run the following command before restarting
the AD FS windows service on each federation server. Once the feature is successfully enabled, users will be
prompted to change their password through the AD FS portal.
[PS] C:\> Set-AdfsEndpoint "/adfs/portal/updatepassword/" -Proxy $True

Enabling "Persistent SSO"


Those who have used Office 365 without AD FS may have noticed the Keep me signed in checkbox on the
Office 365 portal login page. By default, this functionality, called "Persistent SSO", is disabled. To make the
option appear on your AD FS forms-based login page, run the following PowerShell command from the AD FS
server:
[PS] C:\> Set-AdfsProperties -EnableKmsi:$true

When enabled, and upon the first authentication, the client receives a cookie which is valid for 24 hours. This
cookie enables the client to login across multiple browsers and sessions for as long as the cookie is valid. After
24 hours, the user must re-authenticate. Without the feature enabled, the authentication cookie is only valid
for the active session.

Chapter 12: Delve
On its introduction in 2015, Delve was one of the more interesting Office 365 applications. Some will consider
it essential because of Delve’s ability to find information stored in OneDrive for Business and SharePoint
Online sites and attachments in Exchange Online mailboxes, while others think of Delve as an overrated but
pretty search utility. Like any application, the value you extract depends on how you use the software.
Today, Delve is less interesting. Microsoft Search has got better, more tools are available to help users
organize and process information, and to a certain degree, Microsoft appears to have lost interest. However,
Delve was the first application to use the Office Graph and it is still available today, so it deserves a little
respect.

Mastering Information
The sheer quantity of data accessed by humans has exploded in the last decade. We receive more messages
than ever before, have access to more web sites and pages, and electronic documents have largely replaced
paper equivalents as the basis for most business communications. At the same time, it has not become any
easier for people working within large companies to understand what knowledge exists within the company.
A case perhaps of not being able to see the forest because all the trees got in the way.
When you think about it, an Office 365 tenant is nothing more than a huge data mart with people adding
more data to the mountain every day by creating Word documents, OneNote notebooks, Excel worksheets,
PowerPoint presentations, and other files and storing those files in SharePoint document libraries or
circulating them as attachments to other people within their company. Perhaps valuable information is
glimpsed once and then forgotten in the flood of new data generated daily. It is reasonably common to find
that people know about something but have forgotten where to find the exact information when they need it.
And project teams will work on information in splendid isolation from similar efforts undertaken by other
teams in the same company, perhaps even in the same building. The aim of Delve is to help users master the
information that already exists within their company by exposing information to people who have access to
knowledge but might not know that it exists.
Which brings us to Delve, defined by Microsoft Principal Engineer Alex Pope as “the primary profile experience
and destination for Office 365. Delve helps you contextualize, discover and refind Office 365 content about a
person.”
In other words, Delve makes sense of the data inside Office 365 tenants to make it personal and valuable to
an individual. Delve delivers on this commitment by using signals and information gathered inside the Graph.

Office Graph and Microsoft Graph


Before Delve can help users, it must be able to base whatever it surfaces on facts. The foundation of Delve lies
in the Office Graph, a massive database that uses graph structures for semantic queries with nodes, edges,
and properties to represent and store data. The Office Graph is now part of the Microsoft Graph.
Essentially, graph databases set out to achieve an understanding of how objects relate to each other. Many
graph databases are available and in use today, but we can trace the heritage of the Graph, which underpins
Delve, from work done in Facebook to build the Open Graph to store web pages as objects in a social graph.
Yammer built on this work as the Enterprise Graph, which mapped the relationships between people and
information by recording interactions such as the posts, replies, shares, and uploads which occur inside a social
network as well as the likes to show the value individuals place on these activities.

Microsoft bought Yammer in 2012. When Microsoft integrated Yammer into their operations, they looked at
how to expand the “signals” consumed by the Enterprise Graph to make sense of the actions taken by people
working with the Office applications to better understand the flow of information that occurs between
individuals. These actions include pieces of work like documents and messages, how sharing occurs between
people and groups, who has worked on what document and when, who sends messages to someone else,
access to documents in SharePoint sites, the instant messages sent by Skype for Business Online, who attends
meetings and when meetings occur, and so on. This work resulted in the Office Graph, which Microsoft
announced at the SharePoint conference in early 2014. Subsequently, Microsoft introduced the Microsoft
Graph as a unified programming interface to multiple forms of information including Office 365, Azure Active
Directory, and Windows.
The Graph holds information about Office 365 objects (like users and documents) and the relationship that
exists between them (such as Tony has shared Document A with Paul). The data is stored as nodes and
edges in a graph index (Figure 12-1). Users and documents are nodes and actions such as sharing and
updating documents are edges. Interactions can be private (for instance, you create a document but do not
share it with anyone) or shared (you then share the document with a team via OneDrive for Business). Sharing
is obviously an important signal, but so are other actions like whom someone converses or meets with on a
regular basis. The Graph captures signals for these interactions as people use Exchange Online, SharePoint
Online, and other applications. The signals form the nodes and edges and connect the dots in a way that
makes sense. That sounds simple, but it takes a huge amount of CPU power and sophisticated machine
learning algorithms to process and make any sense of the data. The data held in the Graph is then consumed
by applications using the Graph Query Language (GQL) and the Microsoft Graph API.

Figure 12-1: Connections within the Microsoft Graph (source: Microsoft)


The Graph does not assemble data drawn from across Office 365 into a single database. Instead, it stores
metadata and relationships about data that lives in other repositories, such as Azure Active Directory. The data
stores used by the Graph are:
• The Tenant Graph store: Bulk storage of data drawn from tenants optimized to allow analytics to be
calculated quickly.

• The Active Content Cache: Provides access to active nodes and edges that are used by applications
like Delve for functionality such as “trending around you” or Office 365 Groups to suggest groups to a
user that they might like to join.
• The Input Router: Responsible for notifying internal and external components of changes made to the
graph data stored for a tenant.
To keep computation close to the source data, components running within each workload perform analytic
calculations and then push the resulting data to the tenant graph. The data in the tenant graph then becomes
the input for tenant-wide calculations. For example, Exchange Online has information about the people that
users communicate with in mailbox recipient caches. This data is an important input to calculate who is
important to a user and makes up a set of known edges for the user’s graph. Exchange Online collates the
data and then pushes it to the tenant graph where it joins information gathered from SharePoint, Skype for
Business, and other workloads to build up a complete picture of a user’s activity within Office 365.
Microsoft exploits the Graph in many places within Office 365. For instance, any time you see a reference to
“trending data” inside Office 365, you can bet that the Graph is the source of authority for that information.
Because the Graph gathers signals about how people generate and share information between each other
and understands the organizational relationships that exist within a company, algorithms can figure out
whether a document, video, group, or other object might be of interest to a user. Features like Discover View
in OneDrive for Business, the way that Office 365 Groups suggests new groups a user might like to join, and
information about trending videos in the Office 365 Video Portal are all based on feeds from the Graph.
Figure 12-2 illustrates another example of the Graph in use. In this case, the Graph feeds the People Cards used to
display personal information about Office 365 accounts. Contacts are fleshed out with data to show details of how
and when the user interacted with different individuals. As you can see, the information held in the Graph also
spans external recipients and guest accounts as well as tenant users.

Figure 12-2: How People cards use Graph signals

The more interaction someone has with Office 365 applications, the more data about their activities is
available in the Graph and the easier it becomes to distinguish important work from merely interesting
activities. In the same way, the more interaction occurs between users within a tenant, specifically in terms of
sharing or common work, the easier it is to understand the paths of collaboration between teams and
individuals within the company. Someone who only stores their work on their personal PC will generate
relatively few signals for the Graph (email attachments sent to that person will generate some), so their
experience of using Delve will be much less complete than someone who uses OneDrive for Business and
SharePoint team sites to share documents with their team.

The Graph Framework


Apart from its role to collect information drawn from multiple sources across Office 365, the Graph is also a
framework for applications that want to exploit that information. In this context, the framework is called the
Microsoft Graph, a collection of multiple APIs that allow programmers to build applications based on Office
data. Each type of data, such as messages, files, contacts, and so on, is reached by a well-known endpoint.
Together, the endpoints allow applications to mix and match data from multiple Office 365 sources. A good
example is how Microsoft Teams combines its own chat service with functionality taken from SharePoint,
Planner, Skype, and OneNote. Another is how MyAnalytics makes use of the Graph to derive intelligence
about user activities to help them work smarter.
Training material about the Graph is available online. You don’t have to write code to appreciate how the
Graph works, but it is helpful to go through some of the lessons to get some background into its functionality.

Delve and the Graph


The Graph laid the foundation to collect and analyze information about how Office 365 users interact with
other people and the applications. Delve was the first Office 365 application to exploit the information held in the
Graph, using it to reveal the most relevant content to users based on the people with whom they work and the
topics that they are working on.
Microsoft sometimes refers to this as “pivoting on people”, which means that Delve is designed to take a
people-centric view of information (such as what they are working on) rather than an organization-centric
driven-from-the-top view as is often implemented in other systems. Another way of looking at the problem is
to consider questions that occur every day within large companies:
• Who has the knowledge to help me with my work?
• How can I connect to people across the company to solve the problems I am working on?
• How are documents stored and how can I access work that is of interest to me?
• What are other people working on?
• What could I work on?
Traditionally, email has been the great connector within companies. Email continues to be a very powerful
unifying influence within the networks that bind companies together, but its power is limited by the need to
make a first connection with someone to “discover” that person and to understand what they do and how a
relationship might be mutually beneficial. Once people connect, email is a fantastic way to develop and
enhance the relationship by sharing information on a one-to-one (person) or widespread (group) basis.
However, new entrants into a company must make their own connections to become effective and that can
often be a real challenge.
Delve presents information to users in two ways that help solve the problem. First, Delve calculates which
users in the Graph are most relevant to the current user. Second, Delve retrieves and presents the most
relevant content available to the user based on their interaction with those users, saving users the time and
effort of having to go and check repositories such as SharePoint team sites or the OneDrive for Business sites

of other users to find whether new and interesting information is available. Because Delve knows what is
interesting to you, it has a role in breaking down silos that occur deliberately or by accident by exposing
information to you if your access allows you to see it.
Delve appears as an app in the Office 365 app launcher, and users can access it through a web browser or one
of the Delve apps. Delve is one of the "next generation" portals built inside Office 365 that Microsoft hopes
will allow users to make better use of information. The Office 365 Video portal is another. These portals both
have a tight connection to the Graph. In some ways, the role of Delve is to be the "search and discover" facility
for Office 365. Many people who work in large organizations will say that huge amounts of information are
available to them, if only they could find what they are looking for. Delve shows users the files that they have
worked on most recently, which is a great help in keeping track of things. Two other functions help too. First,
Delve uses the Search Foundation to find information, which makes Delve the easiest way to search across
SharePoint and OneDrive sites, including those that are on-premises (the SharePoint hybrid search capability
allows the sharing of metadata from on-premises servers with online services, thus exposing those items to
Delve). Second, humans often remember the person who shared some information with them rather than the
exact details of the information itself. Delve allows you to navigate to that person’s profile and explore
what they have been working on. If that person has shared something with you, Delve will show it to you.
Delve never extracts content (like the text of a document) from the repositories where data exists. Instead,
Delve presents users with a view (the cards) onto information that might be of interest together with links to
connect to the information. Delve calculates the files shown under a user’s “Activity” by the actions taken by
that user (such as the files they store in OneDrive) and whom they share information with through access to
SharePoint libraries, as well as explicit sharing of items stored in OneDrive for Business libraries, and Exchange
Online email attachments. The people with whom an individual user works can be figured out by observing
the interactions between users, the sites they share, the email traffic between them, and the meetings they
attend. The information held by the Graph gradually decays over time, so the decisions reached based on its
contents reflect the current state rather than historical trends and interests.
Delve is available to all Office 365 tenants with enterprise plans (E1 to E5 and their academic or government
equivalent). Delve is not available to tenants that use the standalone SharePoint Online plans.

Enabling the Graph


The Graph runs in the background for all tenants to gather the data used by Delve. Before anyone can use the
features of Delve, you must enable the Office Graph for the tenant. Or rather, if you do not want to use Delve
and the Graph, you must disable it because the default state is for the Graph to collect signals for a tenant. To
change the Graph state, open the Settings section of the classic version of the SharePoint Admin Center, and
then enable or disable access to the Graph as shown in Figure 12-3. You can’t update this setting through the
modern version of the SharePoint Admin Center (yet).

Figure 12-3: Enabling access to the Graph

Delve Browser App


The user interface of the Delve browser app (Figure 12-4) is a dashboard divided into a navigation pane to the
left and a large viewing pane to the right. From top to bottom, the navigation pane has several links to
navigate Delve:
• Home: The entry point to Delve, which reveals the files that Delve believes to be of most interest to
the user based on who has been working on items of shared interest, recent documents shared with
the user, and so on. Files sent to the user as email attachments also appear here.
• Me: The idea is to give users an easy-to-use way to get back to their work or to see what others are
working on that might interest them. Several tiles are displayed:
o Go back to recent documents: Lists all the documents and files that the user has accessed
most recently, but only if the files are in SharePoint document libraries or OneDrive for
Business sites. Items exposed here include files stored in the document libraries belonging to
Office 365 Groups. Shared notebooks also show up here as do videos in the Office 365 Video
Portal that the user has watched. Initially, a small number of the most recently accessed files
are listed but clicking “See All” expands to a much larger set.
o See what people are doing: Lists documents and files that are being worked on by people
with whom the user interacts. This section is intended to highlight new information that the
user might not know about but might have an interest in seeing. You can also click on
someone’s avatar to see information about their activities.
o Update Profile: Displays personal information about the user fetched from their Office 365
profile.
o Organization: Gives a graphic view of the reporting relationship for the user within the
organization. This information comes from Azure Active Directory.
• Favorites: This view lists favorite “boards,” a navigation mechanism for Delve (we will discuss this
concept shortly) and documents tagged as favorites. Users can mark their most important documents
as favorites so that they are always nearby. Documents shown as favorites can only be in SharePoint
Online or OneDrive sites. You cannot mark a file on a local workstation, a file share, or SharePoint on-
premises as a favorite.
• If a user has a license for MyAnalytics, an “Analytics” link also appears to allow the user to access
their personal dashboard. MyAnalytics is a separate application described in the companion volume.
Logically, because Delve’s mission is to present information that is important to an individual, the dashboard
seen by one user has different information to that shown to anyone else. Delve calculates the data used to
populate each of the parts of its interface using canned queries against the Graph. Thus, the list of people

shown are those whom the user has interacted with recently based on signals such as email, meetings, or
document sharing. Photos might be missing for some of the users, which is a sign that they have not updated
their profile by uploading a photo.
Apart from the links described above, the navigation pane has two other kinds of navigation to help users to
get to data quickly. The “People” link lists other people whom Delve considers the user to work with closely
(based on the signals gathered in the Graph). Only users within the tenant show up here. Clicking on the link
for a person brings you to their Delve page and displays their profile and documents on which they have been
working. However, only documents that the user has permission to see are surfaced here. The second
navigation aid is a set of links to Delve boards. You can think of a board as a tag that you can add to
documents to form a collection of documents that have the same tag. We will discuss how to create and use
boards shortly.

Figure 12-4: The Delve browser app


Delve has no role in compliance or the retention of information. It is purely a means of displaying relevant
information to users to encourage them to use that information more productively. Compliance issues such as
data loss prevention or the preservation and retention of information depends on the mechanisms used by
the underlying applications. As items age and no activities occur for those items, they lose relevance to Delve
and are replaced by other items that might be more interesting to the user.
Over time, as SharePoint document libraries and OneDrive personal sites are populated, more information will
become available and Delve becomes more useful to the average user, especially when they need to search to
find files. The key point to remember is that Delve cannot find anything if nothing exists in the target
repositories available to the user, and it cannot reveal anything if the user does not have permission to see
content.

Delve Search
A major benefit of Delve is its ability to search Office 365 for relevant data, especially documents. Of course,
like anything else in Delve, if users do not store information in places where Delve can find it, nothing will ever
turn up. It is quite common to meet this situation at the start of Office 365 projects where the first focus is
often to move on-premises mailboxes to the cloud and nothing happens to convince users that they should
be using SharePoint Online and OneDrive to store business documents (including the document libraries used
with Office 365 Groups and Teams) instead of the file shares or personal storage.
The Search Foundation is very important to Delve because Delve depends on its content indexes to find
information. The Search Foundation began life as technology obtained through the acquisition of Fast Search
and Transfer ASA (FAST) in 2008. The Search Foundation updates the content indexes for data created
through applications like Exchange and SharePoint (the on-premises versions of these products use the same
technology) and the content indexes are available for use by any Office 365 application.
The Search Foundation builds its content indexes from the content of documents and messages. Searching
against the content indexes is efficient and usually very effective in terms of the results returned.
However, users can dramatically increase the chances of finding a specific document if they take the time to
update document properties like title, tags, and comments to include likely search terms. For instance,
including “personal dashboard” as a tag for the document file for this chapter might make it easier to find the
file. Unfortunately, users are not good at populating document properties and the names given to files are
often strange, misleading, or quixotic, so finding files can often be a challenge.
Indexing does not happen at once when a user adds a document to a library or updates an existing file, and it
can take a little while before a new item appears in Delve. Not all content that you expect might show up
within Delve, but there is usually a good reason why not. For example, if no one has accessed a document in
the last three months, it is unlikely to be an item of immediate interest to someone and therefore will not
appear. In addition, Delve only supports Word, PowerPoint, Excel, Notebooks, and PDF files as well as video
files uploaded to the Office 365 Video portal. Anything outside these boundaries does not appear in Delve,
including common types such as Microsoft Project plans.
Possibly because users find it more difficult to find documents as tenants accumulate more information in
OneDrive for Business and SharePoint Online sites, Microsoft rolled out “intelligence-powered search” in
February 2017 as part of the transition to a new version of Delve. At the time, Microsoft said that the content
indexes for Office 365 grow 10% month over month. We can interpret this to mean that more people use
OneDrive for Business and SharePoint Online sites to store documents instead of local drives. This is exactly
the kind of mindset change that you want to see if you want to maximize the use of Office 365, including
search. However, the downside is that the swelling volume of unstructured and ill-organized data can make it
even more difficult for users to find information. It is a classic example of looking for a needle in a haystack.
To solve the problem of how to find the right information among so much data, Delve combines the raw
content indexes with intelligence derived from the Microsoft Graph to generate more precise and focused
search results. Instead of returning results that match input queries, Delve filters the results to extract
information that is most relevant to users. Delve takes factors such as who created documents, the site
holding the documents, and your relationship to those people and sites into account when presenting search
results. The idea is that you should see the most relevant information first followed by less relevant content as
the search progresses.
Searches start when a user types some characters into the search box; Delve begins to evaluate the search terms and presents information as results come back. Figure 12-5 shows what you can expect to see. In
this case, we enter “Office 365” as the search term. The response highlights a mailbox, some documents, and
two boards. The Graph heavily influences the results shown here, so the documents listed are usually the most
recent matching items the user has accessed. This makes sense because in many instances, users are
interested in recent work.

Figure 12-5: Delve displays initial results


If the quick search results do not turn up the desired information, the user can go on to a complete search by
pressing the Return key or clicking the blue arrow to the right of the search box. Delve then starts a more
exhaustive search, which returns results like those shown in Figure 12-6. Again, we look for items matching
“Office 365” and Delve responds with some users who might match the search term, including members of
the author team and the mailbox used for reader feedback. If you want to limit search results to documents (a
generic term that also spans presentations, spreadsheets, and so on), you can opt for “Documents” instead of
“All result types” in the drop-down choice in the menu bar. Much the same approach is used for how Search works on the SharePoint Online home page, the difference being that SharePoint Online does not
include boards or people in its search results.
Clicking a user icon brings you to the user’s Delve page and shows you the documents they have worked on
most recently. To the right, we see a set of boards where matching documents exist. Below the users, we see a
list of documents returned by the search. Selecting a document in the list launches an online viewer to reveal
its content while clicking the location or person link to the right brings you to the storage location.
Remember, Delve only shows information to a user when that user has the permission to see it.

Figure 12-6: Delve search results
The integration of Graph into Delve search delivers more precise results than the earlier implementation.
Intelligent search does not mean that users can continue to add rubbish to Office 365 and expect Delve to
make sense of what people dump into SharePoint or OneDrive. Paying attention to document titles,
properties and tags is still critical to search precision and the ability of users to find documents quickly and
easily.

Delve User Profiles


Apart from exposing details of a user’s recent activity, the Delve browser interface (Figure 12-4) includes
information about the user’s profile, which allows anyone who navigates through Delve to the page to
discover information about the user such as their job title and contact details. The organizational structure
reported in the profile is based on the information held about managerial relationships in Azure Active
Directory while the information about other people comes from interactions with other users as recorded in
the Graph. The profile information shown here appears in other places within Office 365, such as when
someone views author information in SharePoint and OneDrive for Business document libraries.
Users can update their Delve profile through Update Profile, which exposes a set of properties defining the
contact information for the user. Some of these properties are updatable and some are only editable by an
administrator. Contact information is defined as properties in the User Profiles section of the SharePoint
management console. The Delve profile only uses a small set of the available properties. Administrators can
select which of the properties are visible through the User Profiles section of the SharePoint Admin Center,
including properties that are updatable by users, and the properties that are locked down. For example, the
organization usually assigns the number for a user’s Work Phone and the user cannot change it, so it doesn’t
make sense to update that property in the Delve profile. On the other hand, the user owns their Home Phone number, so that property is writeable. The same logic holds true for the Office and Office Location properties, where the
latter serves as a user-friendly guide to the location of someone in a building (for instance, “behind the large
plants on the 2nd floor”).
Tenants can define custom properties and make them available in user profiles. However, a critical point to
understand is that all writeable properties only update SPODS (the directory used by SharePoint Online) and
changes to these properties do not synchronize back to Azure Active Directory. The Delve profile also allows
users to enter some information about themselves (Figure 12-7). This information divides into:
• About Me: A free-text personal description.
• Projects.
• Skills and expertise.
• Schools and education.
• Interests and hobbies.
Delve does not apply any checks to any of the information input and it is entirely up to the user what they
choose to include. All the information entered here is visible to other users when they view the contact details
for the user.

Figure 12-7: Updating personal information for a user

Content cards
Delve shows information to users through content cards. You can’t control the appearance of these cards or what information they show. As you can see from the left-hand side of Figure 12-8, a content card gives some added context for the user to understand whether the object
represented by the card is interesting to them. If present, a graphic extracted from the item is used to
highlight content along with the author name and picture. If several graphics are available in an item, Delve
selects the best graphic based on resolution. The author picture is from their Delve user profile, which comes
from the photo data in their Azure Active Directory account. The name of the item used in the card comes
from information inside the file, such as a Heading 1 title used in a Word document or the title used for the
first slide in a PowerPoint deck. Information about the item's location is also provided (for instance, the name
of a SharePoint library or a OneDrive for Business site that holds a document), together with a visual sign to
show if the item is associated with any boards. In this case, we see the number 1 beside the boards icon, so
we know that the document is tagged for a single board. The “titbit” or hint at the top of the card tells the
user why Delve is showing them this information. It could be that someone has recently updated a document
or that some of your colleagues have worked on a document.

Figure 12-8: Delve content cards


Cards for files stored in SharePoint and OneDrive sites have a count of views, which increases as users access the file or as the author edits the document. At first glance, an item with a high number of
views might seem to be very popular, but it could be the case that the views accumulate because of frequent
edits by the user and no one else has ever been near the file.
The ellipsis menu for a card (in the lower left-hand corner of the card) exposes some more actions. The user
can send a link to an item, copy the link to the clipboard (the links open the item using an online app, if
available), chat about the item in a Yammer group (if the tenant uses Yammer), or see who has access to the
document. This choice takes you to the access control list controlled by the native application and shows you
to whom Delve might display the item in a document view.
Some like the card-centered user interface; others prefer a more traditional list interface. As it happens,
if you want to create your own interface you can build it on top of the GQL API. Many examples of using GQL
to create new interfaces to explore the kind of information handled by Delve are available online, including an
interesting project that uses Cortana to access Delve.

Delve Boards
Beneath the navigation links in the left-hand side of Delve’s interface, you might see entries for some
"Boards", which are an easy and convenient way for users to mark items as being of interest for some reason.
Put another way, boards allow users to assemble collections of items that belong together, no matter where
they are stored and managed within Office 365. The collection might be temporary, as in the case when a
board is used to focus attention on items of interest for a certain meeting, or have a more permanent nature,
as when a board is used to tag all the items accumulated during a long-running project. Boards also allow
users to reuse and build upon information, as in the case when you create a board for a new project and
incorporate items from other boards to serve as the basis for discussion.
Figure 12-9 illustrates a practical example of boards in use. We can see a set of boards in the left-hand
navigation pane, including those that I use to organize documents associated with projects such as my blogs,
travel arrangements, and presentations. Clicking a board reveals all the files associated with that board and so
allows a board to become a fast shortcut to the files that belong to a specific project or area of interest. In this
case, the board reveals the set of documents containing the text for blog articles. A link to a board can be
emailed (with the Send a link choice) to tell fellow workers about the information tagged in the board. When
the recipient uses the link, Delve reveals all the documents in the board that their permissions allow them to
see (some items might be restricted).

Figure 12-9: Documents listed on a Delve board


Boards allow users to gather collections of associated items together without having to seek help from the IT
department, who might otherwise be called upon to create a specific site to hold information belonging to a
project or other initiative. Items tagged with boards can come from many of the Office 365 repositories –
team sites, group libraries, or their personal OneDrive site. Items stay in place in their source repository and
can appear in multiple boards which, in effect, allows users to categorize information to reflect the changing
needs of the organization without the help of a professional knowledge manager or curator. After a user
selects a board to view information, Delve only shows them the items that they can access, even if many other
items are tagged with the board.
You can only add an item to a board if the item’s card displays a board icon. File types that support boards include Word, Excel, PowerPoint, OneNote, and PDF (Microsoft is working to expand the set of supported file
types). To add an item to a board, click the boards icon (for a card) to reveal a dialog where you can enter the
name of a board. As you enter the name, Delve shows a list of recently-used boards. However, you can input whatever name you like. Boards are created by users and there is no administrator intervention to control the
boards that are used or to create boards on behalf of a user. A board assigned to an item by a user becomes
an attribute of the item and is visible to other users. When you add an item to a board, you can click the
board name to find all the other items tagged with that board name. Users can tag an item with many
different boards to reflect the way that users categorize and organize their information.

Board curation: Delve does not include any features to allow users to manage the boards that they create
or for administrators to later fix any problems that might be introduced by users (like spelling mistakes) or
to impose a taxonomy for cards across the tenant. Microsoft said that they might address this need by
listing the boards created by a user in their profile and allow boards to be created, renamed, or removed
there. Although a common request from administrators, especially those dealing with large tenants,
Microsoft has given no sign whether it will be possible to exercise administrative control over boards.
When you add an item to a board or create a new board, Delve automatically adds you to the list of followers
for the board. Following a board is a way of gaining quick access to the contents of the board as a list of the
most recent followed boards always appears in the left-hand navigation pane. Clicking Favorites in the
navigation pane will bring you to a page where all your favorite boards and documents are listed. To find
boards that have been created previously, start typing in the Search box. Delve will list individual documents
that match your query along with boards whose name matches the query.
If you’re not interested in following a board or want to free up space for a board that you think is more
important, you can use Remove from Favorites (available when viewing the contents of a board). Conversely,
if you find a board that seems interesting, you can use Add to Favorites to start following it.
Anyone can remove an item from a board by hovering over the board name on the item’s card and clicking the X.
This removes the selected item from all boards. Removing an item from a board does not happen at once for
all users as the change takes time to ripple across the system.
Above the set of boards, you see a listing of other users (from the same tenant) that the Graph believes to be
of interest because they have had some interaction with the user, perhaps through an email attachment or through
common membership of a document library. Clicking a user’s avatar navigates to that person’s Delve
dashboard and reveals information that they have worked on that you can access. Another tile on the
dashboard lists documents that others “working around” that person own that might be of interest. Again,
you won’t see anything here unless you are allowed access to the information. The Graph acquires
information about new users from Azure Active Directory daily, so it might take a little time before a new user
shows up in this list, but they will if they share information with you.

Making Delve your Start Page


When a user signs into Office 365 via a browser, the default action is to display the Office 365 landing page
(http://portal.office.com). But we all have different working habits. Some people like to start every day in
Outlook, while others will want to see their calendar to understand the outline of the day ahead. As Delve
becomes more capable of surfacing information drawn from across Office 365, it makes sense to consider
whether Delve should be your home page. If you’d like to do this, click the cogwheel symbol in the Office 365
menu bar, select Office 365 settings, then select Delve as the Start Page, and then Save. The next sign in to
Office 365 will bring the user to the Delve Home page.

Privacy and Security


The mission of Delve is to bring pertinent information to users. This aspect of the application leads some to
worry about inadvertent information leakage. In this respect, it is critical to realize that Delve never reveals any
document or other information to a user that the user is not already entitled to see. Delve does not change
the permissions assigned to a document by its owner and enormous care is taken to ensure that access is
controlled by the security settings managed within applications. For example, you won't see an attached document unless you received that attachment in email, and you won't be able to see a document in a SharePoint library unless your Office 365 account has permission to access that library. Today, only users with Office 365 credentials for your tenant can use Delve, so there's no chance that someone outside your tenant can use Delve to gain unauthorized access to information.

The impersonation question: SharePoint Online supports the ability to write code that executes in the
context of another user’s identity (impersonation). When code that uses impersonation processes
documents in libraries, the Graph notes this fact but registers the name of the person running the code
rather than the impersonated user as the account that last processed the document. This can lead to
interesting results in Delve where someone appears to become a hyperactive user and the people who
apparently accessed documents do not show up at all.

Delve never presents information to a user if they cannot see that content. However, it is possible that people
will see information through Delve that the authors might prefer not to share. This is through no fault of the
application. Instead, it underlines the importance of setting and updating appropriate permissions on
document libraries and other sources of information so that they can be properly harvested by the Graph.
Because Delve only ever shows a user information they can see, if you visit a co-worker’s page (click on one of
the people listed in your page), the cards revealed by Delve belong to files that you can access rather than the
complete set of information available to the other user. You do not see all the documents belonging to your
co-worker nor do you see cards that Delve considers relevant for them. The search query executed by Delve
when you navigate to the other user’s page takes your credentials into account when it calculates what cards
to display.
Some unverified reports claim that Delve has revealed documents to someone when they should not have
been able to access the information. Invariably, the root cause of these incidents proves to be permissions set
on the document or the hosting site in such a way that the user can view the content. Delve always trims the
results shown in its views to remove items blocked from a user by inadequate permissions. As Delve becomes
better known within an organization, perhaps it will be a catalyst for a review of how administrators secure SharePoint Online sites through permissions, and of which accounts hold administrative access to site collections.

Real World – Administrator permissions and Delve: tenant administrators are usually administrators of
SharePoint site collections. As such, these accounts have access to all documents stored in all sites. If these
accounts are used by administrators for their day-to-day work, Delve will surface information to them
drawn from sites across the tenant. Although Delve will filter this information to only present items
deemed to be relevant to the user, a high likelihood exists that some confidential data will eventually
appear. Again, Delve is only following the rules – these accounts have access to the information through
their site collection administrator status, so it is OK to show the user any file in the site collection. For this
reason, it is best to ask administrators to only log in and use highly permissioned and privileged accounts
when they need to perform administrative operations and to use lower-permissioned personal accounts to
work with Office 365 applications.

Disabling Delve’s Ability to Show Documents Authored by a User


Some people might be uncomfortable at the prospect that Delve might reveal documents they work on to other
users. Informing users about documents, videos, and other files generated by co-workers is a major part of
what Delve does. The intention is to make information more discoverable and available within the
organization. Delve only exposes documents and other data to other users if those people can view the
information. Users receive that authority through their access to the SharePoint site holding the data or
because someone has explicitly shared a file with them. Even so, a user who works with confidential
information might be happier if they can block any possible disclosure through inadvertent exposure through
Delve. For instance, those working on a document describing a new salary structure for the company might
well prefer the existence of that document to remain a tightly guarded secret until the time comes to reveal
the new structure to employees.
To prevent Delve showing their documents to other members of the tenant, users can disable document
sharing. To do this, go to the cogwheel menu (Figure 12-10), and move the Documents slider from On (the
default) to Off. This does not disable document permissions that are in place. These permissions stay in place,
and users can access documents through normal sharing mechanisms or through their access to a document
library. All the slider does is control the ability of Delve and OneDrive for Business to show documents to
other users if algorithms running against the Graph decide that those documents are “of interest” to
those users. The set of Delve options includes MyAnalytics. You will see a different screen if your tenant does
not have the necessary licenses for this application.

Figure 12-10: Delve Sharing Options


Even after someone disables document sharing, Delve continues to make other information available to the
user, such as the organization view and people with whom they work. However, the experience and usefulness
of Delve is much reduced. For example, if the user selects one of the people listed in the navigation pane, they
will not be able to see documents authored by that person and shared with the user because that view
depends on document sharing data recorded in the Graph. Although the Graph will stop tracking document
activity for the user, any information previously recorded stays in the Graph, which means that Delve might
reveal those documents to other users until the Graph flushes the information.

In addition to users selectively disabling sharing of their documents through Delve, administrators of SharePoint sites can place a filter over the documents that a site reveals to Delve. You might have situations where complete document libraries or specific documents have content that you want to remain invisible except when accessed through the document library. As explained by Microsoft in an online post (or this post for an independent view on the topic), a site column can be created to allow users to set a managed
property called HideFromDelve to mark documents so that they never show up in Delve, including in the
author’s view of their documents. Marked documents are still indexed and discoverable by searches. They also
remain visible to applications that query the Graph.
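As a sketch of how such a column might be created with the PnP PowerShell module (the site URL and the library name "Documents" are placeholders, and the approach assumes the module is installed and you have rights to change the library), a Yes/No column named HideFromDelve can be added so that users can flag individual documents:

[PS] C:\> Connect-PnPOnline -Url https://office365itpros.sharepoint.com -Interactive
[PS] C:\> # Add a Yes/No column that users can set to hide a document from Delve
[PS] C:\> Add-PnPField -List "Documents" -DisplayName "HideFromDelve" -InternalName "HideFromDelve" -Type Boolean -AddToDefaultView

After search crawls the library and the column maps to the HideFromDelve managed property, documents marked Yes stop appearing in Delve.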
It is good to be able to control document sharing activity on a user-by-user basis, but this does not satisfy
some customers who wish to control the gathering of signals by the Graph and limit the wide range of signals collected to track how people interact with Office 365. This data can be interrogated by applications,
like Delve, via the Microsoft Graph API. Some have voiced concern that the Graph gathers too many signals
about user activity and some customers have requested Microsoft to allow organizations to control the type
of signals gathered for a tenant. A smaller and less comprehensive set of signals might lead to poorer
determinations of relevant information for applications like Delve to show to users, but the belief is that this is
an acceptable compromise to take when personal privacy is involved. Microsoft has not said whether they will
deliver a method to customize the signals gathered by the Graph for a tenant, so the question is still open.

Using Graph APIs: The Microsoft Graph API and the Graph Query Language (GQL) are available for third-
party developers to access the contents of the Graph. You can get information about developing against
the Microsoft Graph online.
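As a rough illustration, here is a sketch that assumes the Microsoft Graph PowerShell SDK is installed and that the Sites.Read.All permission has been consented to. It queries the insights API for the documents the Graph considers trending around the signed-in user, which is the same kind of signal Delve draws on:

[PS] C:\> Connect-MgGraph -Scopes "Sites.Read.All"
[PS] C:\> # Retrieve trending documents for the signed-in user and list their titles
[PS] C:\> $Trending = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/me/insights/trending"
[PS] C:\> $Trending.value | ForEach-Object { $_.resourceVisualization.title }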

Exporting User Information from Delve


The Delve feature settings include two ways for users to see and export information Delve uses to build their
personal dashboard. These are:
• Export data from Delve: Displays a web page listing the user’s favorite boards, documents, recently
viewed people, and feature settings.
• Export list of relevant documents: Displays a web page listing the set of documents that the Graph
considers most relevant to the user.
“Export” is a loose term here in that you can copy the information listed in the web page and paste it into something more
permanent, like a Word document. The information is in JSON format and is extracted from the Graph, so
some work is needed to interpret the listing. For instance, here’s an example of content included in the Export
data from Delve listing:
{ "OlsItemUrl": "https://substrate.office.com/api/beta/Users('SPO_900e73cd-dd06-4262-af34-
4b9d6c190b56@SPO_b662313f-14fc-43a2-9a7a-
d2e27f4f3478')/Files('SPO_OTAwZTczY2QtZGQwNi00MjYyLWFmMzQtNGI5ZDZjMTkwYjU2LDBiMmI0M2E3LTliYWYtNDY5Mi
05NDNhLTM0ODhkOGY5M2JkMywzNTMzMzBiOS01ZTM5LTQwMDUtYWZlYy04ODkzOGJjZjA0Nzc_01VLADIAHRANMYGTHJEJFZ6LS2
TPAA6BW7')", "DocumentUrl":
"https://office365itpros.sharepoint.com/Projects/Blog%20Posts/What%20Exchange%20administrators%20nee
d%20to%20know.docx", "DocumentID": "17594754355122", "CreatedDateTime": "2017-10-
10T11:03:37.4073947+00:00", "MetadataSource": 0 }

And here’s an example of content from the Export list of relevant documents.
{"Title": "Project Kazaa", "Address": "https://office365itpros-
my.sharepoint.com/personal/kim_akers_office365itpros_com/Documents/Learning%20from%20massive%20scale
.docx", "ReasonForTrending": "{\"ActorAadId\":\"d36b323a-32c3-4ca5-a4a5-
2f7b4fbef31c\",\"Email\":\"kim.akers@office365itpros.com\",\"Name\":\"Kim
Akers\",\"ModifiedTime\":\"2018-12-05T21:44:34+00:00\"}", "Rank": 8 },

Delve and Exchange Online
When Microsoft first made Delve available, only documents stored in SharePoint Online and OneDrive for
Business were exposed to users. The ability to see and search for documents proved the usefulness of Delve
and has been expanded over time to include other types of files like videos, but the fact remains that a great
deal of information circulates as email attachments in companies of all sizes. A fair case can be made that
attaching a file to a message was the first rudimentary form of sharing and that it is still the easiest and most
common way for users to share information with their colleagues today. Tenants using Teams probably find
that fewer attachments circulate in email because more files are shared in team channels, but for most of us,
email attachments are a quick and convenient way to share information with others, which is why Delve tries
to expose interesting information found in attachments stored in the user’s mailbox.
Delve's support for Exchange Online is based on the signals collected in the Graph. Some recent messages
that have attachments are displayed in the document view in the Home page. Not all messages with
attachments appear because Delve applies the same tests for relevancy to decide what should appear. In
effect, Delve is filtering your Inbox (for this is the only folder from which messages are selected) to figure out
what messages are most important to you taking all the signals gathered by the Graph into account. You
might be surprised at the choice made by Delve, but that's part of the delight of Delve as it might unearth
something important that you had completely overlooked as you processed the Inbox.

Figure 12-11: A Delve email card


Like any other object that appears in Delve, cards display information about email attachments. Figure 12-11
shows what you can expect to see in such a card. In this case, the card is for a PowerPoint attachment I sent to
someone for their review.
Two methods exist to access the content of the message. You can click the message link icon (lower left-hand
corner) or the message subject. In either case, Delve launches a modified version of OWA to view the message
and attachment. OWA can display any file format supported by Office Online (including Adobe PDF) in this
manner. Although Delve doesn't create a full OWA session to view a message and you can't get to a folder list
or access other messages from this point, you can perform the following actions with the selected message:
• Reply and Reply All (including reply all with a meeting request)
• Forward
• Delete
• Mark as Unread
• Print
• Flag
• Assign a retention policy
• View message details
The card for an email attachment does not offer the ability to add the item to a board. This is by design as
attachments cannot be added to boards. Although it might seem like a good idea to be able to tag an
attachment to a board, attachments exist inside user mailboxes. Attachments are therefore personal and
cannot be shared in the same way as files stored in SharePoint or OneDrive sites.
An Inbox can be quite a dynamic place and if you move a message to another folder (or the message is
moved by the Managed Folder Assistant because of a retention tag) then the data held in the Graph might
not point to its current location, which in turn causes OWA to fail when it tries to display the message and its
attachment. In the past, when people commonly moved items out of the Inbox to other folders, this might
have been more of an issue. Today, when people often leave everything in the Inbox (the “piler” syndrome), it
is not a problem.

Hybrid Delve
Delve and the Graph become increasingly useful as the number of data sources available to them grows.
Microsoft has provided the Graph with the ability to consume signals from on-premises repositories deployed
in hybrid tenants. The Hybrid Cloud Search service is available as an add-on for SharePoint 2013 and is
included in SharePoint 2016 and 2019. The service crawls on-premises content sources and transmits
metadata to SharePoint Online. The on-premises data is incorporated into the search index maintained by
SharePoint Online, which acts as the coordinating authority for searches. Users must connect to SharePoint
Online to search across on-premises and cloud resources. The metadata will also be made available to the
Graph and so be exposed in Delve. If a user wishes to access an on-premises document that surfaces
in Delve, they are redirected to the on-premises library to access the file from there.
Although hybrid connectivity already exists for Exchange, the hybrid connection is not designed to send
metadata to the Graph. In any case, it would be unreasonable to take the same approach for email
attachments. Only users with cloud-based mailboxes can see attachments in Delve that they have received.
Receiving an attachment is a very personal interaction that cannot be compared to working with documents
in a shared library. The only sharing that has taken place is between the sender and the recipient, which is why
Delve only surfaces email attachments in its personal “Me” view.
In addition, Delve only displays attachments that it considers most relevant to the user. The decision to
display one attachment instead of another is taken based on the totality of the signals held in the Graph. For
instance, an attachment from an external correspondent (even one using an on-premises mailbox in a hybrid
organization) might be totally ignored by Delve while one sent by another person is surfaced. The decision is
driven by the collection of signals held in the Graph so the attachment from the hybrid mailbox is overlooked
because no other signals exist to show its relevance while the other attachment is considered more relevant
because it can be connected to meetings, Skype for Business Online calls, and other signals about a related
project.

Chapter 13: Basic Mobile
Management
As workers become more mobile and security risks for corporate data increase, it’s important to consider how
you will manage mobile devices for your organization. Microsoft 365 tenants have a choice of solutions that
can be used for mobile device management (MDM) and mobile application management (MAM). Each of
these solutions has different features available, with different strengths and weaknesses.
Some of the considerations that come into play include which devices and operating systems will need to be
managed, and who will own those devices (BYOD vs corporate). We also need to consider whether non-
Microsoft applications such as SaaS apps or custom business apps need to be managed. Some organizations
can take a single approach to mobility management, while others need to apply different policies and
configurations to different sets of users. Specific compliance requirements are also important, as some
organizations fall under strict government or industry regulations. In this chapter, we’ll focus on the Exchange
ActiveSync protocol, which is nearly old enough to vote in US elections but is still in wide use.

Mobile Connectivity to Exchange Online


Exchange ActiveSync (EAS) is Microsoft’s protocol to allow mobile devices such as smart phones to securely
access their email, calendar, contacts, and tasks from remote networks. EAS provides customers with policy-
based controls over mobile devices and data. An administrator can block specific users or device types from
connecting, and issue remote wipe commands to erase devices that have been lost or stolen. EAS is a great
example of the mixing of MAM and MDM functionality. Some of the policy controls it offers apply only to the
email or calendar applications on the device, but support for its PIN enforcement and remote device wipe
policies are embedded into all modern versions of iOS and Android and can thus be considered as MDM
features.
EAS uses Direct Push to allow connected devices to be updated immediately when a new email message has
arrived in the user’s mailbox, rather than the device needing to poll the server at fixed intervals. Of course,
users can still choose a manual polling interval in most mobile email apps, but the convenience of instantly
receiving new email to your mobile device is what most people tend to want.
Direct Push works by having the client initiate an HTTPS connection to the server with a long timeout period of
15-30 minutes. If the mailbox has a new or changed item, the server responds to the device’s open HTTPS
request. When the 15- or 30-minute timeout elapses the device simply opens a new HTTPS request and the
process continues. This “hanging sync” process allows the device to idle its cellular or Wi-Fi connection while
waiting for a response from the server, which greatly reduces battery drain while still providing quick arrival
for mail.

Note: The current version used in Exchange Online is ActiveSync version 16.1. First introduced in 2016, this version added a few incremental enhancements over its predecessors, notably a way for administrators to issue an account-only wipe request to mobile devices, which allows for selective wipe of corporate data
while leaving the personal data on the device intact. Don’t expect to see any further updates or
expansions to the protocol going forward.
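As a hedged sketch, an administrator can issue such an account-only wipe with the Clear-MobileDevice cmdlet; the mailbox and device ID below are examples reused from later in this chapter:

[PS] C:\> # Find the device partnership and request an account-only (selective) wipe
[PS] C:\> $Device = Get-MobileDevice -Mailbox Kim.Akers | Where {$_.DeviceId -eq "ApplC39GQ8NNDTDL"}
[PS] C:\> Clear-MobileDevice -Identity $Device.Identity -AccountOnly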
EAS has the advantage of being very broadly supported by the two dominant mobile operating systems. A
user with an EAS-capable device can quickly configure the device herself to sync Exchange data. However,
since the introduction of Outlook mobile, Microsoft has essentially lost interest in EAS, partly because new
EAS features require the device manufacturers to update their OS and built-in applications to take advantage.
On the other hand, Outlook mobile has steadily been receiving new features, including integration with the
on-device OneDrive client, the ability to sync Exchange contacts to the device through Outlook, and so on. It’s
probably best to think of EAS as the lowest common denominator for mobile Exchange access (and
MDM/MAM) and prioritize supporting Outlook mobile and the other mobile Microsoft 365 applications as
you proceed with your deployment.

Configuring Mobile Devices and Applications for Exchange ActiveSync

Mobile devices are simple to configure for EAS connectivity to Exchange Online thanks to Autodiscover. If the
administrator has configured the correct DNS records for Autodiscover, the end user can simply enter their user principal name (which, ideally, should match their primary email address) and password in the mobile device’s email application, and it will configure itself with the correct settings.
If for some reason Autodiscover is not working, then a manual configuration may be required. The server
name for manual configuration is “outlook.office365.com”.
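If you want to confirm that Autodiscover is set up for a domain, a quick check of the expected CNAME record can be done from PowerShell (office365itpros.com is used purely as an example domain here); the record should resolve to autodiscover.outlook.com for Exchange Online:

[PS] C:\> Resolve-DnsName autodiscover.office365itpros.com -Type CNAME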

Understanding Device Access State for ActiveSync Clients


Every mobile device or application that connects to EAS is placed into one of five access states. These access
states are:
• Device discovery – when a mobile device connects to the server for the first time it will spend up to
14 minutes in this state while the server determines what access state should be applied.
• Allow – a device in the allow state can synchronize email, calendar, tasks, and contacts if it remains
compliant with the mobile device policy that is applied to that mailbox user.
• Block – a device can be in the block state for two reasons. The first is that a device access rule is
blocking the device based on one of the device’s characteristics, such as the make, model, or user
agent. The second reason is if the device is not compliant with the mobile device policy that is applied
to that mailbox user.
• Quarantine – like the block state, a device will be placed in quarantine if a device access rule is
configured to quarantine devices matching a specific characteristic. Another reason a device can be
quarantined is if the default access level for the organization is set to quarantine new mobile devices.
When a device is in this state, the user will see a temporary message in their mailbox indicating that
the device has been quarantined, but they will not get their normal email or calendar data until the
device gets into the allow state.
• Mailbox Upgrade – this is a temporary state when a mailbox is moved from an older version of
Exchange to a newer version. In Microsoft 365 you will likely never notice a device in this state.
The device discovery and mailbox upgrade states are both temporary states and are only applicable under
certain circumstances. Since these are not states that you can directly control through configuration and
policies, we won’t be looking any closer at them.
Device access goes through a 9-step workflow (Figure 13-1) to check the status of every attempt to connect a
mobile device or application:
1. Is the mobile device authenticated?
2. Is the user enabled for ActiveSync?
3. Does the device comply with the mobile device mailbox policy in effect for that user?
4. Does the user have a personal exemption that blocks the mobile device?
5. Does the user have a personal exemption that allows the mobile device?
6. Is the device blocked by a matching device access rule?
7. Is the device quarantined by a matching device access rule?
8. Is the device allowed by a matching device access rule?
9. Apply the default access level (allow/block/quarantine) specified in the organization settings.

Figure 13-1: The decision flow for ActiveSync device access


This workflow is important to understand because at several points through the process a decision can be
made to allow, block, or quarantine the device. Once that decision is reached, the evaluation process stops.
For example, if a user is not ActiveSync enabled then they will not be able to connect regardless of whether
their mobile device can connect. Or as another example, a user who has a personal exemption that allows
their mobile device to connect will be able to do so regardless of an organization-wide device access rule that
quarantines or blocks that device type, and regardless of the default access level configured for the
organization.
Let’s step through the stages of determining device access state in a bit more detail and explore some of the
configuration options that are available to you for controlling each stage of the process. As you can see from
the device access workflow in Figure 13-1, administrators can control ActiveSync connectivity to mailboxes in
a variety of ways.

Mobile Device Authentication


Before any of the ActiveSync policy or ABQ controls can take effect, the user's device must first authenticate.
Authentication can occur in several ways:
• The user authenticates through an application such as the native Mail app for iOS. The built-in mail
and calendar applications on iOS support Modern Authentication, as described in the Identities
chapter of the main book, so you can use them with MFA and AD FS.
• The user grants access to an application or service that will access the mailbox on behalf of the
device using OAuth for authentication. An example of this is the cloud service used to proxy
ActiveSync connections from the Outlook for iOS and Android apps, which is discussed later.
• Certificate-based authentication is used to authenticate the device. When certificate-based
authentication is used, the user's credentials saved on the device do not need to be updated when
they reset or update their password, because the certificate is used instead of the username and
password.
Certificate-based authentication is fully supported by Azure AD, but not every application and client supports it. Microsoft’s basic statement is that all EAS clients should be able to use certificate-based authentication. However, this requires that each device have a digital certificate, which means that you need to run your own Public Key Infrastructure (PKI), such as Active Directory Certificate Services or a non-Microsoft PKI product, or purchase certificates from a public certification authority (CA). The certificates need to be provisioned to users' mobile devices, often achieved
through a mobile device management solution such as Microsoft Intune. In addition, the CA that issues the
client certificates must have a publicly accessible certificate revocation list (CRL) that Azure can access to
check certificate validity. A federation server, such as AD FS, is also required when certificate-based
authentication is used for Android clients to support certificate revocation. Overall, certificate-based
authentication involves a significant cost and operational overhead, so the decision to use it shouldn't be
taken lightly.
At the most basic level you can prevent a user from authenticating to ActiveSync by disabling or removing
their user account or their mailbox. Of course, disabling a user account is not very helpful if you would like the
user to be able to continue using the account for other access, in which case you can consider implementing
one of the controls that we’ll explore next. For certificate-based authentication scenarios, you can revoke the
client certificate to prevent authentication from the mobile device.
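For example, a minimal sketch of disabling ActiveSync for a single mailbox while leaving the account and other protocols untouched (this corresponds to step 2 of the workflow shown in Figure 13-1):

[PS] C:\> Set-CASMailbox -Identity Kim.Akers -ActiveSyncEnabled $false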

Mobile Device Mailbox Policies


An Exchange Online organization is configured with one default mobile device mailbox policy when the
tenant is first created. If you make no changes at all then the default policy will apply to each mailbox user
automatically. The default policy for Exchange Online is quite permissive, as it allows most device types to
connect without even requiring any basic security controls. In the default policy mobile device passwords are
optional, no device encryption is required, and mobile devices that don’t fully support policies can
synchronize anyway. While this might ensure that the small business customers that make up most of the
service’s customer base can easily connect their mobile devices without any outside assistance, the default
policies are inadequate for any company that wants basic security for mobile devices.
An administrator can take several approaches with mobile device policies:
• A single, default policy that applies to all mobile devices.
• Multiple policies, with the most restrictive being the default policy and other less restrictive policies
being assigned on a case-by-case basis.
• Multiple policies, with the least restrictive being the default policy and other more restrictive policies
being assigned on a case-by-case basis.
Although having a single mobile device mailbox policy is a perfectly valid approach, and one that avoids any
complexity that managing multiple policies creates, demonstrating a multiple policy approach allows us to
look at some important considerations for changing the default policy.
Let’s take the example of a business that wants a default mobile device mailbox policy that has basic security
controls, and a more secure mobile device mailbox policy for the sales team who travel frequently and have a
history of losing their mobile devices in airports and taxis overseas. First, the default policy needs to be modified. In the EAC, click Mobile in the left navigation bar, select the Mobile device mailbox policies tab, and then select and edit the Default (default) mobile device mailbox policy.
“Default” is just the name Microsoft chose for the default mobile device mailbox policy and is not the setting
that makes the policy become the default assigned to new mailboxes. A tick box labelled This is the default
policy is what controls which policy Exchange uses as its default. If the default policy is named “Default” this is
unlikely to cause any confusion. If you decide to make a different policy the default, it would also be a good
idea to rename the policy named “Default” to something else to avoid confusing other administrators.
Clear the tick box for Allow mobile devices that don’t fully support these policies to synchronize, which
will prevent devices from connecting if they don’t support your policy requirements (for example if you
require device encryption but the device is not capable of encrypting its storage). Next, navigate to the
security settings and set the policy settings to match those shown in Figure 13-2. These settings provide a
basic level of mobile device security without being too inconvenient for the end user.

Figure 13-2: Editing the security details of a mobile device mailbox policy
Before you save the changes be aware that any existing mobile device users that are assigned this policy will
not be able to connect if their device does not comply with the new policy settings. Different mobile device
operating systems will have different user experiences for this situation, but most of them will display a
message to the end user that explains what change is needed to be compliant (e.g., setting a passcode on the
device).
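The same changes can be applied with PowerShell using the Set-MobileDeviceMailboxPolicy cmdlet. Here is a sketch with settings along the lines of Figure 13-2; the exact values are illustrative rather than a recommendation:

[PS] C:\> Set-MobileDeviceMailboxPolicy -Identity "Default" -AllowNonProvisionableDevices $false -PasswordEnabled $true -MinPasswordLength 4 -MaxInactivityTime "00:15:00" -RequireDeviceEncryption $true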
Next let’s look at creating a new policy. In the mobile device mailbox policies view click the [+] icon to
create a new policy. In this scenario, the sales team needs to be assigned a more secure policy than the
default policy by requiring a longer password, a shorter device lock timeout, and enabling device wipe for too
many failed sign-in attempts. After creating the new mobile device mailbox policy, you need to assign it to the
mailbox users. Navigate to the recipients list and select a mailbox user to assign the policy to, then click View
details under mobile devices in the right pane. When the Mobile Device Details window opens, click Browse
and choose the mobile device mailbox policy to assign (Figure 13-3).

Figure 13-3: Assigning the new policy to a mailbox
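The new policy can also be created with PowerShell. A sketch of what the sales team policy might look like (the values are illustrative):

[PS] C:\> New-MobileDeviceMailboxPolicy -Name "Sales Team" -PasswordEnabled $true -MinPasswordLength 6 -MaxInactivityTime "00:05:00" -MaxPasswordFailedAttempts 8 -AllowNonProvisionableDevices $false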
Mobile device mailbox policies can also be assigned using the Set-CASMailbox cmdlet.
[PS] C:\> Set-CASMailbox alex.heyne -ActiveSyncMailboxPolicy "Sales Team"

Set-CASMailbox would of course be the best way to assign policies to the members of the sales team—you
wouldn’t want to click through the steps required in EAC for more than a couple of users.
At some point you may find it necessary to change a mailbox user back to the default mobile device mailbox
policy. In that situation it’s natural to think you should just use Set-CASMailbox to assign them to the policy
named “Default” again.
[PS] C:\> Set-CASMailbox James.Ryan -ActiveSyncMailboxPolicy "Default"

Because of a quirk in the way policies are applied, this won’t work. To understand why that is the wrong action to take, compare two mailbox users: one with the default mobile device mailbox policy assigned, and one with a different policy applied.
[PS] C:\> Get-CASMailbox Kim.Akers | Format-List activesyncmailboxpolicy*

ActiveSyncMailboxPolicy : Default
ActiveSyncMailboxPolicyIsDefaulted: True

[PS] C:\> Get-CASMailbox James.Ryan | Format-List activesyncmailboxpolicy*

ActiveSyncMailboxPolicy : Sales Team
ActiveSyncMailboxPolicyIsDefaulted: False

The important detail is the ActiveSyncMailboxPolicyIsDefaulted attribute. To set a mailbox user back to the
default policy (not necessarily the policy named “Default”), use Set-CASMailbox to null the
ActiveSyncMailboxPolicy attribute. This reverts the user to the default policy and ensures they will be updated
to any different policy that is later marked as the default.
[PS] C:\> Set-CASMailbox James.Ryan -ActiveSyncMailboxPolicy $null

[PS] C:\> Get-CASMailbox James.Ryan | Format-List activesyncmailboxpolicy*

ActiveSyncMailboxPolicy : Default
ActiveSyncMailboxPolicyIsDefaulted: True

Before you change the default mobile device mailbox policy you just need to be aware that all mailboxes
where the ActiveSyncMailboxPolicyIsDefaulted property is set to True will be re-assigned to the new default
policy, and those set to False will not. To see a list of mailboxes that will not be re-assigned when the default
mailbox policy changes, you can run the following commands to discover the name of the default policy, then
filter the results of Get-CASMailbox for those that are assigned that policy but have the
ActiveSyncMailboxPolicyIsDefaulted property set to False.
[PS] C:\>$default = (Get-MobileDeviceMailboxPolicy | Where {$_.IsDefaultPolicy}).Name

[PS] C:\>Get-CASMailbox -ResultSize Unlimited | Where {$_.ActiveSyncMailboxPolicy -eq $default -and


$_.ActiveSyncMailboxPolicyIsDefaulted -eq $False}

Real World: Despite your best efforts to configure mobile device mailbox policies that suit your
organization’s security requirements, you should always be aware that mobile devices and applications can
“lie” to Microsoft 365 and report compliance with policy items such as password complexity even though
they do not meet the policy’s requirements. An example of this was the Outlook for iOS and Android
application, which launched without support for enforcing device PIN/passcode requirements. A later
update to the app added support for these policy items.

Allow/Block Exemptions
In some scenarios an organization may wish to block a specific make, model, or operating system of mobile
devices using a device access rule, but still need to allow some of those specific devices to connect on a case-by-case basis.
The unique device ID of a mobile device can be allowed or blocked for a mailbox user, and such an exemption takes effect before any device access rules are assessed. Personal allow/block exemptions can be seen in
the output of Get-CASMailbox. By default, no device IDs are allowed or blocked.
[PS] C:\> Get-CASMailbox -Identity Kim.Akers | Format-List ActiveSync*DeviceIDs

ActiveSyncAllowedDeviceIDs : {}
ActiveSyncBlockedDeviceIDs : {}

The device ID for a specific mobile device can usually be found within the device operating system, or
sometimes on a label attached to the packaging for the device. If you need to pre-allow or pre-block a device,
you will need to retrieve the device ID before it connects to the user’s mailbox. Otherwise, you can determine
the device ID of any device that is already connected by running the Get-MobileDevice cmdlet.
[PS] C:\> Get-MobileDevice -Mailbox Kim.Akers | Format-List FriendlyName, DeviceID

FriendlyName: Outlook for iOS and Android
DeviceId : 63479921AD27F5AB

FriendlyName: Black iPhone 6S
DeviceId : ApplC39GQ8NNDTDL

To allow or block a device ID use the Set-CASMailbox cmdlet. Both the ActiveSyncAllowedDeviceIDs and
ActiveSyncBlockedDeviceIDs are multi-value attributes. This makes it easy for an administrator to accidentally
overwrite existing values when they are trying to add a device ID to either the allow or block list. In this
example, a device ID is added to the allow list using Set-CASMailbox.
[PS] C:\> Set-CASMailbox -Identity Kim.Akers -ActiveSyncAllowedDeviceIDs ApplC39GQ8NNDTDL

[PS] C:\> Get-CASMailbox -Identity Kim.Akers | Format-List ActiveSyncAllowedDeviceIDs

ActiveSyncAllowedDeviceIDs: {ApplC39GQ8NNDTDL}

Another device is added to the allow list. As you can see this overwrites the existing value if the attribute is
not updated correctly.
[PS] C:\> Set-CASMailbox -Identity Kim.Akers -ActiveSyncAllowedDeviceIDs 63479921AD27F5AB

[PS] C:\> Get-CASMailbox -Identity Kim..Akers | Format-List ActiveSyncAllowedDeviceIDs

ActiveSyncAllowedDeviceIDs: {63479921AD27F5AB}

The correct method of updating the allowed or blocked device IDs lists is to use the add or remove methods.
[PS] C:\> Set-CASMailbox -Identity Kim.Akers -ActiveSyncAllowedDeviceIDs @{add='ApplC39GQ8NNDTDL'}

[PS] C:\> Get-CASMailbox -Identity Kim.Akers | Format-List ActiveSyncAllowedDeviceIDs

ActiveSyncAllowedDeviceIDs : {ApplC39GQ8NNDTDL, 63479921AD27F5AB}

Any mobile device that has been allowed or blocked by an exemption will have a DeviceAccessStateReason of
“Individual”.
[PS] C:\> Get-MobileDevice -Mailbox Kim.Akers | Format-List FriendlyName,DeviceID,*AccessState*

FriendlyName : Outlook for iOS and Android
DeviceId : 63479921AD27F5AB
DeviceAccessState : Allowed
DeviceAccessStateReason: Individual

FriendlyName : Black iPhone 6S
DeviceId : ApplC39GQ8NNDTDL
DeviceAccessState : Allowed
DeviceAccessStateReason: Individual

A personal exemption takes precedence, so a device allowed by an exemption can still connect even if a device access rule or the default organization setting is configured to block or quarantine devices. Although that may sound like a good thing, it can sometimes be a problem. For example, if you
decide to block a specific version of iOS due to a bug, and you implement a device access rule to block that
specific version, anyone with a personal exemption for their device will still be able to connect because the
personal exemption takes precedence over the device access rule.

Device Access Rules


Device access rules allow an organization to allow, block, or quarantine mobile devices based on their
characteristics such as make, model, and user agent. This flexibility means they can be deployed to suit almost
any scenario. For example, if you wanted to block all Apple iPhone devices that run older versions of the
operating system, while still allowing the latest versions to connect, you can achieve that with device access
rules.
Similarly, you could configure a default access level for the organization of block or quarantine, but then use a
device access rule to allow all iPhones. Because the device access rules take precedence over the default
access level, an iPhone user will be allowed to connect while any other device will be subject to the default
access level for the organization.
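A sketch of that configuration in PowerShell, setting the organization default to quarantine and then allowing iPhones:

[PS] C:\> # Quarantine new devices by default, then allow the iPhone device type
[PS] C:\> Set-ActiveSyncOrganizationSettings -DefaultAccessLevel Quarantine
[PS] C:\> New-ActiveSyncDeviceAccessRule -Characteristic DeviceType -QueryString "iPhone" -AccessLevel Allow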
To configure a device access rule in PowerShell, use the New-ActiveSyncDeviceAccessRule cmdlet. The most
important parameters for this cmdlet are -QueryString and -Characteristic. The characteristic can be one of:
• DeviceType.
• DeviceModel.
• DeviceOS.
• UserAgent.

If you already know the specific characteristic upon which you want to base the rule, then you can configure a
device access rule regardless of whether a device matching that characteristic has connected to the server or
not.

The query strings for device access rule characteristics are exact match only. There is no partial match or wildcard allowed for the query string. This means that if you need to block all versions of iOS below a
certain number, you will need to configure a device access rule for every single possible version number
below the minimum that you wish to apply. The situation with Android devices is even worse, due to the
enormous number of different devices that all have unique device types, models, OS or user agent strings. If
you’re not sure of the exact query string for the characteristic, then you can look at the characteristics of
devices that have already connected to the server using Get-MobileDevice.
[PS] C:\> Get-MobileDevice | Format-List DeviceType, DeviceModel, DeviceOS, DeviceUserAgent

DeviceType : Outlook
DeviceModel : Outlook for iOS and Android
DeviceOS : Outlook for iOS and Android 1.0
DeviceUserAgent: Outlook-iOS-Android/1.0

DeviceType : Android
DeviceModel : Nexus 7
DeviceOS : Android 4.3
DeviceUserAgent: Android/4.4.3-EAS-2.0

DeviceType : Outlook
DeviceModel : Outlook for iOS and Android
DeviceOS : Outlook for iOS and Android 1.0
DeviceUserAgent: Outlook-iOS-Android/1.0

DeviceType : Outlook
DeviceModel : Outlook for iOS and Android
DeviceOS : Outlook for iOS and Android 1.0
DeviceUserAgent: Outlook-iOS-Android/1.0

DeviceType : iPhone
DeviceModel : iPhone4C1
DeviceOS : iOS 9.2.1 13D15
DeviceUserAgent: Apple-iPhone4C1/1202.466
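
Because the query strings are exact matches, blocking several older OS versions means creating one rule per DeviceOS string. Here is a minimal sketch of that approach; the list of version and build strings is illustrative and should be replaced with the exact values seen in your own Get-MobileDevice output.
[PS] C:\> $BlockedVersions = @("iOS 9.2.1 13D15", "iOS 9.2 13C75", "iOS 9.1 13B143")
ForEach ($Version in $BlockedVersions) {
   # One exact-match block rule per DeviceOS string
   New-ActiveSyncDeviceAccessRule -Characteristic DeviceOS -QueryString $Version -AccessLevel Block
}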

Here is an example of a device access rule to demonstrate how they are configured. This rule will block a
device that has a “Model” of “Outlook for iOS and Android”.
PS C:\> New-ActiveSyncDeviceAccessRule -Characteristic DeviceModel -QueryString "Outlook for iOS and
Android" -AccessLevel Block

Notice that the rule has blocked two matching devices, but one is still allowed because individual exemptions
take precedence over device access rules.
[PS] C:\> Get-MobileDevice | Where {$_.DeviceModel -eq "Outlook for iOS and Android"} |
Format-List deviceaccess*

DeviceAccessState : Blocked
DeviceAccessStateReason: DeviceRule
DeviceAccessControlRule: Outlook for iOS and Android (DeviceModel)

DeviceAccessState : Blocked
DeviceAccessStateReason: DeviceRule
DeviceAccessControlRule: Outlook for iOS and Android (DeviceModel)

DeviceAccessState : Allowed
DeviceAccessStateReason: Individual
DeviceAccessControlRule:

It is a good idea to review existing mobile device associations to see which ones will not be impacted by a
device access rule you are implementing. For example, before adding the device access rule to block “Outlook
for iOS and Android” we can run the following PowerShell command to find all matching devices that are
already allowed by individual exemption.

[PS] C:\> Get-MobileDevice | Where-Object {$_.DeviceModel -eq "Outlook for iOS and Android" -and
$_.DeviceAccessStateReason -eq "Individual"}

Device access rules can apply an access level of allow, block, or quarantine. An allow rule can be used to
permit specific device types that are considered “safe” to connect when the default access level of the
organization is restrictive (i.e. when it blocks or quarantines). For example, an organization may choose to
quarantine all devices by default so that they can be reviewed on a case by case basis, but then configure a
device access rule to allow all iPhones because they are pre-approved for connection according to that
organization’s security policies. When a device access rule allows a device to connect it does not add the
device as a personal exemption for the mailbox user, therefore the device will only be allowed until the device
access rule is removed, after which they will be re-assessed according to the allow/block/quarantine workflow.
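A minimal sketch of the allow rule for that scenario, using the DeviceType value that iPhones report in the Get-MobileDevice output shown earlier (setting the default access level itself is covered later in this chapter):
[PS] C:\> New-ActiveSyncDeviceAccessRule -Characteristic DeviceType -QueryString "iPhone" -AccessLevel Allow
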
Block rules behave similarly to allow rules except that they block devices instead of allowing them. There is, however, one notable difference: a block rule continues to block a device even if that device is later modified so that it no longer matches the rule's criteria. For example, this user has an iPhone running
“iOS 9.2.1 13D15”, which is the OS version for an iPhone 6S running iOS 9.2.1.
[PS] C:\> Get-MobileDevice -Mailbox Kim.Akers | Format-List friendlyname, deviceos, deviceaccess*

FriendlyName : iPhone 6s
DeviceOS : iOS 9.2.1 13D15
DeviceAccessState : Allowed
DeviceAccessStateReason: Global
DeviceAccessControlRule:

A device access rule is added to block the device OS “iOS 9.2.1 13D15”.
PS C:\> New-ActiveSyncDeviceAccessRule -Characteristic DeviceOS -QueryString "iOS 9.2.1 13D15"
-AccessLevel Block

The device is now blocked by the device access rule. The same block would also apply to any other iPhone 6S
device that was upgraded to that exact OS version.
[PS] C:\> Get-MobileDevice -Mailbox Kim.Akers | Format-List friendlyname, deviceos, deviceaccess*

FriendlyName : iPhone 6s
DeviceOS : iOS 9.2.1 13D15
DeviceAccessState : Blocked
DeviceAccessStateReason: DeviceRule
DeviceAccessControlRule: iOS 9.2.1 13D15 (DeviceOS)

The device owner upgrades the device to a newer version of iOS. The device stays blocked if the device access
rule still exists. In effect this means that a user can’t unblock a device themselves by modifying its
characteristics, such as by upgrading the operating system. However, if the blocked DeviceOS value is one reported by an application (as Outlook for iOS and Android does) and the user installs a different email application on the device that presents itself with a different DeviceOS, then that application will not be blocked by the same rule.
If a mobile device has been blocked by a device access rule and you want to allow that device to connect
again without deleting the device access rule, then the solution is to add the device as a personal exemption
for that mailbox user. As discussed earlier this can cause issues at a later stage because once the personal
exemption is in place the device will no longer be impacted by any other device access rule you create.
In some cases, a persistent block is not the desired outcome. Sometimes a mobile device OS is found to have
a security vulnerability, and organizations choose to block that version of the device OS only but want the
devices to be able to connect again after the users upgrade to a later version. In those situations, a device
access rule with a quarantine action is more suitable.
[PS] C:\> New-ActiveSyncDeviceAccessRule -Characteristic DeviceOS -QueryString "iOS 9.2.1 13D15"
-AccessLevel Quarantine

The above example will quarantine iPhone 6S devices running iOS 9.2.1 (build 13D15) only until they upgrade to a later
version. After the OS is upgraded, the device can connect again, without having to delete the device access
rule or add a personal exemption for the device.

Default Access Level


The last step of the allow/block/quarantine workflow is the default access level of the organization. Mobile
devices that have not already been allowed, blocked or quarantined based on an earlier stage of the process
are controlled using this setting. The default access level for mobile devices in Exchange Online is set to a
value of “Allow” when the tenant is first created.
[PS] C:\> Get-ActiveSyncOrganizationSettings | Format-List DefaultAccessLevel

DefaultAccessLevel : Allow

The default access level can be set to either allow, block, or quarantine, and these settings have the same
effect as they do for device access rules. One exception is the quarantine setting, which has controls at the
organization level to add some additional text to the quarantine notification email that the mailbox user
receives. This text is useful for providing information to end users about how they should respond to their
mobile device being quarantined. For example, some organizations want the device owners to contact the
help desk by phone to request approval for the device, while others may prefer not to burden the help desk
with those calls and will instead instruct the device owner to visit an intranet page to submit a form
requesting approval.
[PS] C:\> Set-ActiveSyncOrganizationSettings -DefaultAccessLevel Quarantine
-UserMailInsert "Please visit http://intranet/mobiledevices to accept the mobile device access terms
and conditions and request device approval."
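
Once devices start to land in quarantine, you can review them with PowerShell before deciding whether to approve them. A minimal sketch (the properties selected for display are illustrative):
[PS] C:\> Get-MobileDevice | Where-Object {$_.DeviceAccessState -eq "Quarantined"} |
   Format-Table FriendlyName, DeviceId, DeviceModel, UserDisplayName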

If you are planning to change the default access level for mobile devices that are connecting to your Exchange
Online organization, you should first consider the impact that this will have on the mobile devices that are
already connecting. It would not be a wise move to suddenly block or quarantine hundreds or thousands of
mobile devices, as this will surely cause a sudden influx of help desk calls from confused and angry end users.
Any mobile device that is not being allowed, blocked, or quarantined by a personal exemption or device access
rule is controlled by the default access level. These devices will have a DeviceAccessStateReason of “Global”
when you review the output of Get-MobileDevice.
[PS] C:\> Get-MobileDevice | Where {$_.DeviceAccessStateReason -eq "Global"}

The output of that command may be difficult to interpret within a PowerShell console session, so we recommend piping it to Export-CSV, or running a more comprehensive report such as the Get-
EASDeviceReport.ps1 script, which also shows additional information such as the timestamp of the last mailbox
synchronization attempt by the device.
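For example, here is a minimal sketch that exports the devices governed by the default access level to a CSV file for later review (the output path is illustrative):
[PS] C:\> Get-MobileDevice | Where-Object {$_.DeviceAccessStateReason -eq "Global"} |
   Select-Object FriendlyName, DeviceType, DeviceModel, DeviceOS, UserDisplayName, DeviceAccessState |
   Export-CSV -Path C:\Temp\GlobalDevices.csv -NoTypeInformation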

Managing Mobile Device Associations


When a user has attempted to connect a mobile device or application to their Exchange Online mailbox with
EAS, a device association is created. Device associations exist for any device that is allowed, blocked, or
quarantined by Exchange Online. You can list the device associations for a mailbox by running the Get-
MobileDevice cmdlet.
[PS] C:\> Get-MobileDevice -Mailbox paul@office365itpros.com |
Select FriendlyName, Device*

FriendlyName                DeviceId                         DeviceImei DeviceOS
------------                --------                         ---------- --------
Outlook for iOS and Android 257B371BCF057A09                            Outlook for iOS and And
iPhone 6s                   4FQI7OF7V143F350CDPJBN9R7C                  iOS 9.2.1 13D15
PAUL-PC8                    B60E81836F754126A509240EEEF5E599            Windows 10.0.10586

An Exchange Online mailbox supports a maximum of 100 mobile device associations. For the average user
that is more than enough, but it is foreseeable that someone who frequently tests different mobile devices
might reach that limit. If 100 device associations already exist and the user attempts to connect a new device
to their mailbox, Exchange Online will first check for any existing mobile device associations that have not
synced in more than 180 days and can therefore be considered inactive. If inactive devices are found, they are
removed from the mailbox. If the attempted removal of inactive device associations reduces the count below
100, then the new device can connect. If necessary, the user can remove mobile devices from their mailbox
using the Mobile devices section of OWA Options (Figure 13-4).

Figure 13-4: Managing mobile device associations through OWA Options


You should exercise some caution when working with mobile devices through the GUI because the “Remove”
and “Wipe device” buttons are located directly next to each other. Administrators can also remove mobile
device associations for users by opening the properties of the mailbox in the Exchange admin center,
choosing Mailbox features, then clicking View details under the Mobile Devices section. Again, use caution
when removing devices so that you do not accidentally issue a remote wipe for the device instead. There is
also a limit of 20 device association removals per mailbox each month.
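Administrators can also clean up stale associations with PowerShell rather than waiting for Exchange Online to do so when the 100-device limit is reached. A minimal sketch for a single mailbox, assuming that any association which has not synchronized in 180 days can be removed (and keeping the monthly removal limit in mind):
[PS] C:\> $Cutoff = (Get-Date).AddDays(-180)
Get-MobileDevice -Mailbox Kim.Akers | Get-MobileDeviceStatistics |
   Where-Object {$_.LastSuccessSync -ne $Null -and $_.LastSuccessSync -lt $Cutoff} |
   Get-MobileDevice | ForEach-Object {
      # Removing the association does not wipe the device; it just deletes the partnership
      Remove-MobileDevice -Identity $_.Identity -Confirm:$false }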

Reporting on Remote Devices


As we have seen, the PowerShell module for Exchange Online includes several cmdlets to deal with ActiveSync
devices. It is possible to use those cmdlets to create reports about device usage. In this example, we look for
devices using the REST API to communicate with Exchange Online, which means we are interested in looking
for devices running Outlook for iOS and Android. After fetching information about the devices known to
Exchange, we fetch the latest statistics for each device. Finally, we output some information, including the
version of the OS running on the device (so that we make sure it’s not too outdated), the last time the device
successfully synchronized with the owner’s mailbox, and the mobile operator. One of the devices has no
operator listed, which means that it only uses Wi-Fi to connect.
Although this script isn’t very sophisticated, it provides the basis for more complicated reports.
[PS] C:\> $OutlookDevices = (Get-MobileDevice -RestAPI | Select FriendlyName, DeviceUserAgent,
UserDisplayName, DeviceMobileOperator, DeviceId, FirstSyncTime, DistinguishedName)
$Report = [System.Collections.Generic.List[Object]]::new()
Write-Host "Checking" $OutlookDevices.Count "Outlook mobile clients"
ForEach ($Dev in $OutlookDevices) {
   # Derive the mailbox distinguished name from the device's distinguished name
   $DN = $Dev.DistinguishedName.Split(",")[2..10] -join ","
   $Mbx = (Get-Mailbox -Identity $DN)
   # Capture license information in case you want to add it to the report
   $Licenses = ($Mbx.PersistedCapabilities -join ";")
   # Fetch the latest synchronization statistics for the device
   $DeviceInfo = (Get-MobileDeviceStatistics -Identity $Dev.DistinguishedName |
      Select LastSuccessSync, DeviceUserAgent, DeviceOS, Status, DeviceApplicationPolicyStatus)
   $ReportLine = [PSCustomObject]@{
      User            = $Mbx.DisplayName
      Device          = $Dev.FriendlyName
      UA              = $Dev.DeviceUserAgent
      DeviceOS        = $DeviceInfo.DeviceOS
      Status          = $DeviceInfo.Status
      Operator        = $Dev.DeviceMobileOperator
      FirstSync       = $Dev.FirstSyncTime
      LastSync        = $DeviceInfo.LastSuccessSync
      DeviceAppStatus = $DeviceInfo.DeviceApplicationPolicyStatus
   }
   $Report.Add($ReportLine)
}
$Report | Sort User, FirstSyncTime | Format-Table User, Device, DeviceOS, LastSync, Operator -AutoSize

User         Device          DeviceOS   LastSync            Operator
----         ------          --------   --------            --------
Jack Smith   Outlook for iOS iOS 11.2.6 20/03/2018 22:10:33 vodafone UK
Jenny James  Outlook for iOS iOS 11.2.5 20/03/2018 22:47:52
Tony Redmond Outlook for iOS iOS 10.3.3 01/10/2017 18:13:27 vodafone IE
Tony Redmond Outlook for iOS iOS 11.2.6 21/03/2018 14:47:37 vodafone IE

Remote Wipe
Eventually you will meet the situation where someone has lost their mobile device, or an employee leaves the
company under circumstances where there is a concern about corporate data being stored on the device.
ActiveSync provides the capability to perform a remote wipe of mobile devices. However, there are some
caveats with remote wipes that you need to be aware of.
For a remote wipe to be successful the device must connect to Exchange Online after the remote wipe request
has been created. Unfortunately, there are numerous ways in which a lost or stolen device may never contact
the server again:
• the device is not configured for push email, so doesn’t automatically connect to the server.
• mobile and Wi-Fi communications are disabled to prevent connections being made.
• the mobile carrier disables the SIM card.
• the user changes their password in Active Directory.
• the device is blocked by a device access rule.
Originally, an EAS-triggered remote wipe would erase the entire device. This led to many user complaints;
when Microsoft introduced version 16.1 of EAS, they added the ability for the remote wipe command to target
just the data synchronized via EAS. This “selective wipe” ability is a great example of MAM functionality: the
data in the target application can be managed (in this case, removed) without affecting other applications’
data on the device. In addition to the EAS 16.1 applications (such as the built-in Mail and Calendar apps on
iOS), Outlook mobile appears to Exchange as a device itself, unlike the native mail apps that represent the
entire hardware device. Remote wipes for Outlook mobile thus only remove the data from the application, not
the entire hardware device.

In either case, remote wipe results are unfortunately not reliably reported back to the server. No matter whether the remote wipe request was successful or not, it will often remain in a “pending” state forever, or until the wipe request is removed, so you may never know whether or when the remote wipe succeeded. Think of remote wipe commands as a best-effort attempt to remove data from a device, not a guarantee of success.

You can test the EAS capabilities of a mailbox by using the Remote Connectivity Analyzer to perform an
Exchange ActiveSync test. In the results, there’s a line called “MS-ASProtocolVersions” which lists the EAS
versions a mailbox supports. For a mailbox where EAS 16.1 has not yet been enabled, the output looks like
this.

MS-ASProtocolVersions: 2.0,2.1,2.5,12.0,12.1,14.0,14.1,16.0

For a mailbox where EAS 16.1 is enabled, the output looks like this.
MS-ASProtocolVersions: 2.0,2.1,2.5,12.0,12.1,14.0,14.1,16.0,16.1

You can also determine the EAS version in use by querying the mobile devices for a mailbox with the Get-
MobileDevice cmdlet.
PS C:\> Get-MobileDevice -Mailbox "Kim.Akers@office365itpros.com" | Select FriendlyName, DeviceType,
ClientVersion, ClientType

FriendlyName                DeviceType ClientVersion ClientType
------------                ---------- ------------- ----------
Outlook for iOS and Android Outlook    14.1          EAS
Outlook for iOS             Outlook    161           REST
Outlook for Android         Outlook    161           REST
TestActiveSyncConnectivity             12.0          EAS
iPhone 6s                   iPhone     16.1          EAS
Outlook for iOS             Outlook    161           REST
iPad mini 2                 iPad       16.1          EAS

In the example above, the iPad is connecting using the native mail app for iOS and is running iOS 10 which is
the minimum requirement for EAS 16.1 compatibility. So, with all of that in mind, let’s look at the process for
remote wipes.

Warning: EAS remote wipes for older devices (those that don’t support EAS 16.1 or are not using Outlook
mobile) are a destructive process. When a mobile device is connecting using EAS and a native mail
application such as those found on iOS and Android devices, a full remote wipe will remove all data from
the device and return it to its default factory condition. In a BYOD scenario, this could result in the
permanent loss of personal data from the device, such as photos and videos. As there are many third-party
email applications for mobile devices it is possible the applications will not all behave the way you expect
them to, so you should always proceed with caution when dealing with remote wipes.
First, the device owner can initiate their own remote wipe if they want to. This can be performed by logging
into OWA and opening the Options panel. In the Mobile devices section, all the user’s mobile devices will be
visible (Figure 13-4). However, users can only perform full device wipes, not account-only wipes. If a device
owner wants to remove only the corporate data from the personal device, and they still have the device in
their possession, then they can simply remove the accounts from the mobile apps themselves. On the other
hand, if they've lost their personal device, they might consider it worthwhile trying to perform a full device
wipe instead.
For administrators who use the EAC to perform a remote wipe, clicking the Wipe device icon will present the
option to perform an account-only wipe, or a full wipe of the data on the device. After answering Yes to the
confirmation prompt the remote wipe request is created and will have a status of “Wipe pending”. The next
time the device connects the wipe request will be completed, and the user will receive an email notification to
let them know the result.

If the device never connects the remote wipe request will remain in place with a “Wipe pending” status
forever. Administrators can initiate remote wipe requests in the EAC, and by using the Clear-MobileDevice
cmdlet. The device Identity attribute is used to identify the device to be wiped. You can retrieve the Identity
attribute of a device using Get-MobileDevice. In the output below, we see that one device is quite old (iPhone
6s) while the other uses the Outlook for iOS mobile client connected via Outlook’s REST protocol rather than
ActiveSync:
[PS] C:\> Get-MobileDevice -Mailbox Kim.Akers | Format-List FriendlyName, Identity

FriendlyName : iPhone 6s
Identity     : Kim.Akers\ExchangeActiveSyncDevices\iPhone§UVEBE4V4K17NT0V1US7QBQIK68

FriendlyName : Outlook for iOS
Identity     : Kim.Akers\ExchangeActiveSyncDevices\REST§Outlook§46439602b2562155e1d33faa36299732

When you issue the remote wipe request with Clear-MobileDevice you can also specify an email address to
receive the notification if the wipe is successful.
[PS] C:\> Clear-MobileDevice Kim.Akers\ExchangeActiveSyncDevices\iPhone§UVEBE4V4K17NT0V1US7QBQIK68
-NotificationEmailAddresses admin@office365itpros.com

To perform an account-only wipe using PowerShell, include the -AccountOnly parameter when using Clear-
MobileDevice. When using PowerShell, even if you choose to perform an account-only wipe, the cmdlet will
present the same warning that all data on the mobile device will be permanently deleted. However, if you try
to issue an account-only wipe for a mobile device associated with a mailbox that hasn't been enabled yet for
EAS 16.1, or if the device itself doesn't support EAS 16.1, you'll receive an error message to that effect. It's
then up to you, your corporate IT policies, and perhaps the device owner themselves, to decide whether the
situation warrants issuing a full device wipe instead.
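For example, a minimal sketch that issues an account-only wipe for the iPhone shown earlier (assuming both the mailbox and the device support EAS 16.1):
[PS] C:\> Clear-MobileDevice Kim.Akers\ExchangeActiveSyncDevices\iPhone§UVEBE4V4K17NT0V1US7QBQIK68 -AccountOnly -NotificationEmailAddresses admin@office365itpros.com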
When a remote wipe request is in a “Wipe pending” status, and before the device is wiped, the request can be
cancelled. Mobile devices with remote wipes pending can be identified using the Get-MobileDeviceStatistics
cmdlet.
[PS] C:\> Get-MobileDevice | Get-MobileDeviceStatistics | Where {$_.Status -eq "DeviceWipePending"}

DeviceType : iPhone
DeviceUserAgent : Apple-iPhone4C1/1204.508
DeviceWipeSentTime :
DeviceWipeRequestTime : 28/03/2015 12:58:59 PM
DeviceWipeAckTime :
DeviceFriendlyName : iPhone 6s
Identity : APCPR04A001.prod.outlook.com/Microsoft Exchange Hosted
Organizations/office365bootcamp.onmicrosoft.com/Kim.Akers/
ExchangeActiveSyncDevices/iPhone§UVEBE4V4K17NT0V1US7QBQIK68
IsRemoteWipeSupported : True
Status : DeviceWipePending
StatusNote : You'll receive an email message that lets you know when the remote device
wipe is complete.
DeviceAccessState : Allowed
DeviceAccessStateReason: Global
DeviceAccessControlRule:
LastDeviceWipeRequestor: admin@office365itpros.onmicrosoft.com

To cancel the remote wipe, use the Clear-MobileDevice cmdlet with the Cancel parameter. Again, this uses the
Identity attribute of the device to identify which mobile device to remove the remote wipe request from; however, the Identity shown in the output of Get-MobileDeviceStatistics is not the correct value to use.
Fortunately, you can pipe the output of Get-MobileDeviceStatistics back to Get-MobileDevice to see the correct
attribute.
[PS] C:\> Get-MobileDevice | Get-MobileDeviceStatistics |
Where {$_.Status -eq "DeviceWipePending"} | Get-MobileDevice | fl FriendlyName,Identity

FriendlyName : iPhone 6s
Identity     : Kim.Akers\ExchangeActiveSyncDevices\iPhone§UVEBE4V4K17NT0V1US7QBQIK68

PS C:\> Clear-MobileDevice Kim.Akers\ExchangeActiveSyncDevices\iPhone§UVEBE4V4K17NT0V1US7QBQIK68 -Cancel

After a remote wipe completes, the mobile device partnership for the mailbox user remains in place, and the
mobile device has a state of “Blocked” due to “Policy”. It will remain in this state forever, and if the user
attempts to reconnect the device the remote wipe will immediately be performed again.
[PS] C:\> Get-MobileDevice -Mailbox Kim.Akers | Format-List friendlyname,deviceos,deviceaccess*

FriendlyName : iPhone 6s
DeviceOS : iOS 8.2 12D508
DeviceAccessState : Blocked
DeviceAccessStateReason: Policy
DeviceAccessControlRule:

If you want to allow the user to reconnect that device the remote wipe request must be removed. To remove
the wipe request, you simply need to remove the mobile device partnership for the mailbox using Remove-
MobileDevice.
[PS] C:\> Remove-MobileDevice Kim.Akers\ExchangeActiveSyncDevices\iPhone§UVEBE4V4K17NT0V1US7QBQIK68

Establishing an ActiveSync Policy for Your Organization

Let’s recap the allow/block/quarantine workflow for determining ActiveSync device access states.
1. Is the mobile device authenticated?
2. Is the mailbox user enabled for ActiveSync?
3. Does the device comply with the mobile device mailbox policy in effect for that user?
4. Does the user have a personal exemption that blocks the mobile device?
5. Does the user have a personal exemption that allows the mobile device?
6. Is the device blocked by a matching device access rule?
7. Is the device quarantined by a matching device access rule?
8. Is the device allowed by a matching device access rule?
9. Apply the default access level (allow/block/quarantine) specified in the organization settings.

Remember that the steps are followed in order, and if at any stage a decision is made to allow, block or
quarantine a device, then the remaining steps of the process are not performed. When you plan your
configuration, you need to take into consideration:
• The process above.
• Whether you want to be a permissive or restrictive organization.
• How much administrative effort will be required for maintaining configurations for each step of the
process.

For example, a permissive organization that wishes to keep administrative effort to a minimum may configure:
• A default access level of Allow.
• Device access rules to block/quarantine only under exceptional circumstances, for example a known
security vulnerability with a new version of a mobile operating system.

• A single ActiveSync mailbox policy to enforce a few security settings such as device passwords and
encryption.
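A minimal sketch of such a baseline policy created with the New-MobileDeviceMailboxPolicy cmdlet (the policy name and exact settings are illustrative):
[PS] C:\> New-MobileDeviceMailboxPolicy -Name "Baseline Mobile Policy" -PasswordEnabled $true -MinPasswordLength 6 -RequireDeviceEncryption $true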

A restrictive organization that wishes to keep administrative effort to a moderate level may configure:
• A default access level of Quarantine.
• Device access rules to allow known safe makes/models/operating systems of mobile device.
• Some personal exemptions for users with devices outside of that known safe list.

While a highly restrictive organization that is willing to live with a higher administrative burden may configure:
• A default access level of block.
• Disabling ActiveSync on all new mailboxes when they are created.
• A process for enabling ActiveSync and configuring personal exemptions for specific devices on a case
by case basis.
• A variety of mobile device mailbox policies to apply different security requirements depending on the
level of access to confidential data that the user may have.
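The building blocks of that highly restrictive approach reuse cmdlets shown earlier in this chapter. A minimal sketch, using the mailbox and device identifier from the earlier examples:
[PS] C:\> Set-ActiveSyncOrganizationSettings -DefaultAccessLevel Block
[PS] C:\> Set-CASMailbox -Identity Kim.Akers -ActiveSyncEnabled $false
[PS] C:\> Set-CASMailbox -Identity Kim.Akers -ActiveSyncEnabled $true -ActiveSyncAllowedDeviceIDs @{add='ApplC39GQ8NNDTDL'}
The first command sets the default access level, the second disables ActiveSync as part of mailbox provisioning, and the third later re-enables ActiveSync and adds a personal exemption for an approved device.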

Real World: If you do nothing at all and leave the default settings in place, then you are choosing to
operate in a very permissive model in which even basic controls such as passcodes on mobile devices
are not required at all, and any user in your organization can connect any ActiveSync-capable mobile
device to their mailbox.

Microsoft 365 Basic Mobility and Security


In addition to the ActiveSync management capabilities available in Exchange Online, Microsoft 365 includes Basic Mobility and Security (BMS) to give tenants built-in mobile device management capabilities, including support for employee-owned mobile devices in a “bring your own device” (BYOD) environment. The benefits of Microsoft 365 BMS include:
• Device management – enforce security policies such as PIN/passcode requirements and prevent
jailbroken devices from accessing your corporate data.
• Conditional access – allow access to corporate data only from devices that are managed by your
company and that are compliant with your policies.
• Selective wipe – remove corporate data from employee mobile devices while leaving their personal
data intact.
Given that Exchange ActiveSync offers MDM capabilities, you might wonder why the service offers device
management features of its own. The capabilities of ActiveSync and the bundled Microsoft 365 BMS are
different, and are designed to solve different problems, but can also be used side by side in the same tenant.
ActiveSync takes a device-centric approach to mobile device management. ActiveSync is used to allow, block
or quarantine mobile devices that are accessing Exchange mailbox data such as email, contacts, and calendars.
The user-centric controls for ActiveSync extend only as far as enabling or disabling the ActiveSync protocol for
a user’s mailbox. Beyond that any allow/block/quarantine is based on device characteristics such as make,
model, or user agent, even when you are approving or blocking a device for an individual user. And even
though tenants can use ActiveSync and mobile device policies to enforce security policies such as
PIN/passcode requirements, those policies can be circumvented by the connecting device or application as we
saw with the first release of Outlook for iOS and Android.
Furthermore, if a mobile device is not connected to a mailbox using ActiveSync, then it is not subject to any
policy enforcement at all. With Microsoft 365 this means that the same device can access corporate data using Office Mobile and OneDrive for Business without any device security policies being applied. Microsoft 365
MDM closes that security gap by allowing organizations to enforce device management and security policies
on any mobile device that is accessing corporate data. In addition, Microsoft 365 BMS takes a user and group-
centric approach for targeting policies. In fact, Microsoft recommends blocking Exchange ActiveSync access
for devices that cannot be managed using Microsoft 365 Mobile Device Management.
The Microsoft 365 BMS engine is essentially a stripped-down version of Microsoft Intune, but it doesn’t
require any MEM or Intune licenses, nor does it require as much configuration and setup as the full MEM
platform. At the same time, its feature set is limited compared to its bigger sibling; Microsoft clearly hopes
that customers that start with Microsoft 365 BMS will see its value and decide to buy up to a full Endpoint
Manager environment.

Device and Application Support for Microsoft 365 Basic Mobility and Security
Microsoft 365 BMS is generally supported on relatively recent versions of Android (currently Android 5.0 and
later) and iOS (currently iOS 11.0 and later), as well as by Windows 8.1 and Windows 10; however, the level of
application support on each device platform may vary. When a user is targeted by a Microsoft 365 MDM
policy, their access to Microsoft 365 resources using Exchange ActiveSync-capable applications (such as the
Mail app on iOS), OneDrive for Business and Office mobile applications (such as Outlook, Word, and Excel)
can be controlled. Microsoft continues to work on the list of supported devices and apps for Microsoft 365
MDM, so it is a good idea to review the current list of supported devices and apps periodically.

Activation and Initial Configuration


Before you can use the Microsoft 365 BMS features you will first need to activate it in your tenant. Because the
bundled MDM leverages Microsoft Endpoint Manager back end services, you can’t activate the bundled
version if you’ve already deployed Intune in your tenant. However, you can contact Microsoft support to be
enabled for co-existence. If you enable Microsoft 365 MDM and later begin exploring Intune’s capabilities,
you will be able to activate Intune on your own.
Log in to the Microsoft 365 admin center with your tenant administrator credentials, then type “Mobile device
management” into the search bar. This is the most reliable way to find the enrollment link, although you can
directly navigate to https://admin.microsoft.com/adminportal/home#/MifoDevices if you prefer.
When you click the resulting link, you’ll see a summary screen with a few informational links, plus one labeled
Let’s get started. Click it, then wait a few hours until your tenant has been provisioned before proceeding.
You’ll get an email from the service telling you how to proceed.
Once your tenant has been provisioned, repeating the search will take you to the empty screen shown in
Figure 13-5. This is not terribly useful. However, it does contain two links that you’ll need. The first, Device
policies, allows you to set up policies to control device behavior when they sync. The second, APNs
Certificates for iOS, is what you use to configure Apple iOS push notification certificates—a requirement if
you want to manage iOS devices through Microsoft 365 MDM. However, you can’t really use either of these
yet until your tenant’s DNS domains have been properly set up for use with MDM.

Figure 13-5: Completing MDM setup

Configuring Domains for Microsoft 365 Basic Mobility and Security


To configure your domain, navigate to the domain setup section of the Microsoft 365 admin center
(accessible either from the Settings or Setup items in the left navigation bar), select your domain name, and
click the Continue setup button. When prompted, choose the appropriate option (usually that will be Set up
my online services for me, but if you’re using your own DNS server, you’ll choose I’ll manage my own DNS
records) and then click Next. When you see the Choose your online services page, ensure that you check
the Mobile Device Management for Microsoft 365 checkbox and click Next again. Go through the rest of
the domain setup process as you normally would.
Two new additional DNS records, “enterpriseregistration” and “enterpriseenrollment,” will be presented for you
to add to the public DNS zone for your domain name (Figure 13-6).

Figure 13-6: DNS records for Microsoft 365 MDM


Add the new DNS records to your public DNS zone. If they won’t immediately validate, and you’re sure they
are configured correctly, you can still proceed with the other Microsoft 365 MDM setup tasks and try to
validate the DNS records again later when DNS propagation has had time to complete.

Configure an APNs Certificate for iOS devices


Once you’ve configured your DNS domains for use with MDM, you can return to the MDM page and then
click the APNs Certificate for iOS button to start the process of requesting your APNs certificate. (Note that
you won’t see this button until your other MDM-related DNS records have been verified by the service.) This
is required because Apple device push notifications go through a service maintained by Apple, not directly
from a service or application in the cloud straight to the device. Your web browser will be redirected to a
wizard that leads you through the process of downloading a certificate signing request (CSR) for provisioning
the new APNs certificate, then requesting it from Apple and uploading the resulting certificate. After
downloading the CSR to your computer, the next wizard step will prompt you to sign in to the Apple APNs
Portal to request the certificate (Figure 13-7). An Apple ID is required for this task. Do not use an Apple ID that
is owned by an individual; instead, you should create a new one that is associated with an email address in the company. If you don’t already have a company Apple ID, take a few minutes now to create one on the Apple
web site and then return to continue the setup tasks.

Figure 13-7: Configuring Apple Certificates


After accepting the license agreement click the Browse button and select the CSR that you downloaded from
the Microsoft 365 admin center earlier, then click the Upload button.

Note: Depending on the web browser used, the upload might return a JSON formatted file that the
browser prompts you to download. You can download and save the file if you like, but this is not the
certificate file that you need.
You will receive a notification to the email address for your Apple ID when the certificate has been created. If,
after a moment or so, it seems like your web browser is stuck on the “Uploading…” page, simply refresh the
Apple portal page to see your available certificates. Click the Download button to download your new
certificate as a PEM file.
Return to the web browser tab or window where you are configuring the APNs certificate, and click Next to
continue to the page to upload your new certificate. Or, if your browser session has timed out, simply start the
APNs process again and skip past steps 1 and 2 to begin step 3 of uploading the APNs certificate. Click the
Browse button and select your PEM file to upload.
The certificate install takes several minutes to complete. If your web browser appears to hang or time out on
the upload you can log back into the Microsoft 365 admin center again, start the process of configuring the
APNs certificate, and just skip through to the step 3 again to upload the certificate you already have instead
of requesting another new certificate from Apple. If everything is in order you should see a green tick for the
Mobile Device Management settings, and you can start managing MDM policies.

Managing Device Policies in Microsoft 365 Basic Mobility and Security

To manage Microsoft 365 BMS device policies, you will use the Security and Compliance Center. In its Data
Loss Prevention section, click the Device management link, which will take you to a page with a link labeled
Device policies (which points to https://protection.office.com/devicev2). This page will be empty to start, so
you’ll need to create at least one basic policy to get started.
There are two types of controls that you can apply in a policy:
• Organization-wide default policies.
• Device policies targeted at groups of users.

Real World: Microsoft 365 Basic Mobility and Security policies are targeted at security groups. Before you
begin configuring additional policies, you should create security groups to serve as policy targets.

Configuring Organization-wide Policies
Click the Manage organization-wide device access settings link (Figure 13-8). Here you can
decide whether to allow or block unsupported devices and applications from accessing email when they are
targeted by an MDM device policy.

Figure 13-8: Deciding whether unsupported devices can access Exchange


If your organization is going to require mobile device management for all connecting devices, you can
consider configuring this to Block. However, this will cause unenrolled devices to lose connection to email and
other resources almost immediately after the user is added to a group that is targeted by device policies.
It is also quite reasonable that you would not want to immediately configure a block. Perhaps your
organization is only testing and evaluating Microsoft 365 BMS and you don’t want to disrupt existing users, or
perhaps you are allowing a grace period for users to enroll their devices in MDM before you begin blocking
unenrolled devices.
At the time of writing, the user experience on devices that are blocked is not very friendly, which may result in
an increase in support calls to your help desk. Microsoft has tried to improve that experience by sending the
user a system-generated email when they are blocked. The email explains why the device was blocked and
guides them towards installing the Company Portal app to enroll their device. Despite this, it is likely you will
need to support your users through the enrollment process when they are blocked. Even if you do want to
block unsupported devices, you might still have some scenarios where a block is not desirable. For example,
you might have a VIP user who has received an exemption for their preferred device that happens to be
unsupported. It’s natural that this kind of thing should happen, but you should also consider that VIP users
are often targeted by attackers because their devices carry the most sensitive and confidential information. In
most cases, it is more sensible to insist that all devices are included in mobile device management and to
resist the attempts of those who want to be excluded from these policies.
If you are forced to compromise, you can instead specify one or more security groups of users whose mobile
devices are excluded from access control, rather than allow everyone to bypass it.

Note: Before you can add a group to this exclusion list you first need to create the security group in
Microsoft 365. Be aware that the group picker may show no groups at first even if you have groups in your
tenant, so you might need to do a search on a keyword before any groups will appear.

Creating a Device Policy in Microsoft 365 Basic Mobility and Security


To create a device policy, click the “+ Create a policy” button in the Device Security Policies section of the
Security and Compliance Center. Next, give the policy a name and description. You may find it makes
administration easier if you name the policy to match the name of the security group it will be targeted to.
The first set of policy items will look familiar to anyone who has configured ActiveSync mobile device policies
before. They contain the most common settings that organizations worry about such as requiring a
PIN/passcode, requiring a minimum length or complexity for the PIN/passcode, inactive lock timeout, and
device encryption (Figure 13-9).

Figure 13-9: Changing settings in a new device policy


You’ll also notice the setting to Prevent jail broken or rooted devices from connecting. This setting is not
available in ActiveSync policies, only in Microsoft 365 MDM device policies, and it is a good idea to enable it
as many exploits for mobile device platforms only work against jailbroken or rooted devices. In fact, if you
choose to allow noncompliant devices to connect (controlled by the If a device doesn’t meet the requirements
above… control group), this setting will be enabled, and you won’t be able to turn it off.
You also have options for requiring management of the email profile on the device. This is available on iOS
only at this stage and is what enables selective wipe (only wiping the corporate email data, not any personal
data). If the user has an existing email profile on the device that profile will stop working and will need to be
deleted.
The final setting is to choose whether to block or allow access for devices that do not meet the requirements
of your policy (for example a PIN/passcode that isn’t long or strong enough). If the policy requires a managed
email profile and the user has not removed an existing email profile from the device that will also result in the
device being non-compliant.
Remember that anybody in the exception group you configured earlier will not be blocked by this setting.
Also keep in mind that users will be blocked for any non-compliance with your new policy, no matter how
trivial that setting may be in the overall scheme of things. It is entirely possible that devices will be blocked for
reasons you might not anticipate when you plan your MDM deployment.
If you want to get a sense of how compliant or non-compliant your users’ enrolled devices are before you
start blocking the non-compliant devices, then you should set the action to Allow access and report violation.
The next page contains a set of additional security policies that you can configure for device backups, data
leakage, and application installs. Requiring encrypted mobile device backups is a common requirement to
protect sensitive data that may be copied to users’ computers. Blocking screen capture is one method of making copying or data leakage more difficult (but not impossible). Application stores (such as the iTunes Store and the Google Play Store) can also be blocked; however, this will prevent the user from installing applications such as Office mobile apps after they’ve enrolled their device.
Next you can decide whether to apply this policy now or later. If you choose to apply it now, you will need to
select at least one security group that will be targeted by the device policy. Again, be aware that the group
picker will show no groups at first even if you have groups in your tenant; you need to do a search on a
keyword before any groups will appear.
After you finish creating the new device policy it will have a status of Turning on… for several minutes, before
it finally displays a status of On when it is ready and active.

Enrolling Devices for Microsoft 365 Basic Mobility and Security


The enrollment process for mobile devices will vary slightly depending on whether any applications on a user
device connect to Microsoft 365 resources and whether the device policies force enrollment. The prompts a
user sees on her device will vary according to whether she’s on an Android or iOS device, what OS version is
installed, and what applications she’s using.
Once you put users into a security group that is the target of a policy, they will automatically be prompted to
enroll their device in Microsoft 365 MDM, or they can enroll manually. The user experience for this process
varies somewhat also. For example, consider Rese, a user who already has Outlook mobile installed on her iOS
device. Once her user account is placed into a security group that is subject to an MDM policy, the next time
Outlook mobile attempts to log on, it will fail, and she will see an error message. When she tries to sign back
into the Outlook application, it will display a notification that she must enroll her device in management,
along with a link to download the Intune Company Portal app to enroll the device. Alternatively, Rese can
download the Intune Company Portal app from the Apple App Store (Figure 13-10) and begin the enrollment
process manually.

Figure 13-10: Intune Company Portal apps in the Apple (left) and Android (right) app stores

The exact enrollment process varies depending on the user’s mobile device operating system. Microsoft’s
documentation covers the process thoroughly. Keep in mind that from time to time, Microsoft changes the
set of supported mobile device OS versions; for example, in January 2021 they dropped support in the
Company Portal app for Android version 5 (“Lollipop”), meaning that those devices won’t get updates to the
Company Portal app and thus won’t be able to use managed applications and services.
Once the device is enrolled, it will regularly download policy or app updates that you’ve pushed through the
MEM portal and apply them. The process is intended to be seamless to end users, although this really
depends on the policies you set. For example, if you apply a conditional access policy to require multi-factor
authentication on certain networks before accessing OneDrive or SharePoint Online content, users will
certainly notice the change. Some policy changes will trigger a dialog in Microsoft’s first-party mobile
applications that tells users that the administrator has enabled data protection (and that the app consequently
must be restarted), and there is at least one bug in this process that results in the dialog sometimes showing
up in Outlook mobile even when no changes have been made. In general, though, as with Windows Group
Policy, if you are not changing policies frequently, your users shouldn’t see any obtrusive behavior caused by
device management.

Removing or Wiping Mobile Devices


There are two ways for a mobile device to be removed from Microsoft 365 MDM:
• The device is retired.
• The device is wiped.
Devices can be retired by the end user by opening the Company Portal app on their mobile device and
removing it from management. Retiring a device wipes all corporate data and any management profiles from
the device. This is the preferred way for a user who leaves the company to clean up their BYOD devices and
return them to an unmanaged state.
After opening the Company Portal app, the steps are slightly different for each mobile device platform:
• On Android select the My Devices tab, select the device, and tap the trash can icon to remove the
device.
• On iOS select the device and then click the “…” icon and then tap the Remove button to remove the
device.
Administrators can also remove devices from management by issuing a full wipe (or “factory reset”) or
selective wipe (“remove company data”) from the Devices tab of the user details page in the Microsoft 365
admin center (Figure 13-11). Note that this is not the mechanism you use to request a remote wipe through
Exchange ActiveSync.

Figure 13-11: Preparing to wipe a device assigned to a user in Microsoft 365 BMS
A full wipe is destructive, removing all data from the mobile device including the user’s personal data, and
restoring it to factory default settings. A full wipe would usually be used only in situations where a device has
been lost or stolen. On the other hand, a selective wipe only removes corporate data from the device, leaving
the user’s personal data intact. This is the preferred option for common scenarios such as a staff member
leaving the organization.

Renewing the APNs Certificate for Microsoft 365 BMS


The Apple Push Notification service (APNs) certificate that is issued by Apple will expire after 1 year. The email
address you use for the Apple account that requests the certificate will receive a notification email as the
expiry date is approaching. As mentioned earlier, you should ensure that the email address that is used for the
certificate is one that your organization owns (i.e. do not let anyone use their personal Apple account for this
purpose) and should be one that is monitored so that the expiry notifications are read and actioned. If you’re
not sure about the expiry date for your certificate, you can view it in the mobile device management section
of the Microsoft 365 admin center.
After accessing the Device management section of the Security center, click the APNs Certificate for iOS…
button (which appears below the device list), and then Next to start the process of requesting an APNs
certificate.
The wizard will step you through the process of generating a new CSR, which is then uploaded to the Apple
Push Certificates Portal. Apple will issue you the new certificate, which will be a .PEM file type. If your browser
prompts you to download a .JSON file (which may have also happened to you during initial setup of MDM)
you can ignore that file. Once you have the .PEM file, complete the wizard in the Microsoft 365 admin center.

Chapter 14: Stream Classic
Stream is in the middle of a transition from the V1 implementation based on Azure blob storage (Stream
Classic) to a new version powered by SharePoint Online and OneDrive for Business. This chapter is the content
published in the 2022 edition covering Stream Classic.

Stream Architecture
Stream has a set of core services supplemented by functionality drawn from other parts of the Microsoft 365
ecosystem. The core services are:
• Media Organization: Controls the organization of videos into channels and groups.
• Permissions: Controls who can access what video and what they can do with a video.
• Search: Indexes video content and the automated captions generated for videos.
• Live Events: Organizes live events run through Stream, including those run by Teams.
The biggest dependency Stream has is on Azure Media Services, which delivers this functionality:
• Encoding: After Stream uploads videos, it encodes the files and makes them ready for playback in
multiple formats to accommodate variable network conditions and a range of device formats. The
Encoding process also generates video thumbnails.
• Content protection: The Stream Azure-based video storage encrypts content at rest.
• Streaming: When clients connect to Stream, the streaming service figures out what format and
definition a client can accept and then streams the video to the client.
• Azure Player: The component that plays content back to a client. Content is normally played back in
a unicast (one-to-one) transmission, which can be quite demanding on a network. It is often more
efficient to handle large-scale playback using an enterprise content delivery network.
Stream is deployed as a tier C-compliant service. Apart from Azure Media Services, Stream uses the following
services:
• Azure Blob Storage: Stores the video files and transcripts (captions).
• Azure SQL: Stores video metadata and statistics such as video view counts.
• Azure Active Directory: Authenticates client connections. Stream does not currently support guest or
anonymous connections.
• Telemetry: Captures and analyzes the usage of Stream.
• Microsoft 365 Groups: Manages membership and access to content. It’s not compulsory to use
Groups if you don’t want to, but it’s convenient to do so when Groups manage access to resources
owned by other services.
The Stream portal is how users access the platform. The portal loads as quickly as possible by connecting to
the closest Microsoft 365 data center. However, backend services come from the tenant’s home region.
Pending the migration to its new platform, Stream stores video content and metadata in Azure hosted in
these data center regions:
• The United States.
• U.S. Government Community Cloud (GCC).
• Canada.
• European Union (EMEA).
• The United Kingdom.
• Asia-Pacific.
• Australia.
• India.
The plans for the deployment of the classic Stream services in additional regional data centers stalled when
Microsoft began the transition to modern Stream. SharePoint is a core Microsoft 365 workload and
SharePoint Online and OneDrive for Business are available in all Office 365 data centers. Until the transition to
modern Stream is finished, tenants might have two sets of data: videos in classic Stream stored in Azure and
new videos stored in SharePoint Online or OneDrive for Business.
If your tenant is in a country-level region, Stream uses the nearest geographic region to store classic data. You
can discover this location by clicking the question mark (?) icon in the Stream menu bar and then selecting the
About Microsoft Stream link. Stream then reports the location for its classic data (Figure 14-1). In this case,
the European Union (EMEA) data center region hosts Stream.

Figure 14-1: Where a tenant’s Stream data is located

Stream Licensing
Stream is included in all enterprise plans (including DoD, GCC, and GCC High), the education plans, and the
front-line worker plans. It is also included in the Business and Business Premium plans. See this page for more
licensing information. Stream functionality available to Office 365 E3 and E5 tenants includes advanced
features like speech-to-text and closed captions, transcript and caption generation, and searches.
The Stream tile appears in the app launcher unless you take action to remove it. To stop people from using
Stream, you can:
1. Disable Microsoft Stream for the tenant via the Azure portal.
2. Selectively remove the license for Stream from users by editing their account settings in the Microsoft
365 admin center (select the Microsoft Stream for Office 365 service plan in SKUs such as Office 365 E3
or E5) or PowerShell.
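If you go the PowerShell route, here is a minimal sketch using the Microsoft Graph PowerShell SDK. It assumes an Office 365 E3 license (SKU part number ENTERPRISEPACK) whose Stream service plan name begins with STREAM_O365; adjust the SKU and user for your tenant.
[PS] C:\> $Sku = Get-MgSubscribedSku | Where-Object {$_.SkuPartNumber -eq "ENTERPRISEPACK"}
$StreamPlan = $Sku.ServicePlans | Where-Object {$_.ServicePlanName -like "STREAM_O365*"}
# Reassign the license with the Stream service plan disabled (DisabledPlans replaces the existing set)
Set-MgUserLicense -UserId Kim.Akers@office365itpros.com -AddLicenses @(
   @{SkuId = $Sku.SkuId; DisabledPlans = @($StreamPlan.ServicePlanId)}) -RemoveLicenses @()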
Although Microsoft published roadmap item 27728 covering the ability to allow videos to be embedded in
websites and viewed without licenses, the current situation is that anyone who views or uploads Stream
content must have a license, including access to Teams meeting recordings stored in Stream. The need for
external access is addressed in the transition to Stream 2.0 when all video content is stored in OneDrive for
Business and SharePoint Online. At that point, the normal external sharing mechanism will extend to Stream
videos.

Stream Clients
Stream supports video upload and viewing through a wide range of browsers, including Microsoft Edge and
the current versions of Chrome, Brave, and Safari. The client is localized into many different languages. Stream
mobile apps are also available for iOS and Android.

Stream and Other Microsoft 365 Apps
Like many other Microsoft 365 apps, Stream uses components from across the Microsoft Cloud and
contributes functionality to other apps. Examples of how Stream integrates with other applications and
services include:
• Stream uses Microsoft 365 Groups to manage access to videos.
• A Stream channel can be added as a channel in Teams and videos from the channel can be played
back in Teams. You can also paste the URL for a Stream video in a channel conversation or personal
chat for readers to access by clicking the URL.
• Recordings of Teams meetings and Live Events are in OneDrive for Business or SharePoint Online.
However, Stream features are available for these videos.
• Stream videos can be played back in Yammer. You can paste the URL for a Stream video into a
Yammer message and readers can then play the content back inline.
• The SharePoint web part for Stream can highlight a video, channel, or list of videos (from all of
Stream) on a page.
• A link to a Stream video can be inserted into OneNote or the embedded code generated by the Share
option inserted into a Sway.
• Forms can collect feedback through quizzes, polls, and surveys for videos through the Interactivity tab
for a video.
Other apps can use Stream’s implementation of oEmbed to display videos or channels. Unlike the older Office
365 Video Service, Stream has no dependency on SharePoint Online. Apart from anything else, this reduces
the amount of SharePoint storage consumed by videos.
Sometimes gaps exist between apps and Stream. For example, you can’t use the standard sharing controls to
share a video with someone inside or outside the tenant. Instead, you must download a copy of the video to
OneDrive for Business and share it there. These gaps are likely to be closed over time.
Another gap is that guest members cannot access video content posted to Teams, Outlook Groups, or Planner
because Stream does not currently support Azure B2B collaboration.

Sharing Stream Links


In most cases, you need to know the link to a Stream video or channel to be able to embed it into another
app. Stream makes this easy by including a Share button for each video and a Share option in the […] menu
for a channel (Figure 14-2). The options are to generate:
• A URL for the video for pasting into a document or other file.
• An email to selected addressees. The email contains the link to the video. The message comes from a
“no-reply” Stream address, so if you want a copy, remember to add your email address to the
message.
• The code for an embedded iFrame for insertion into a web page to show the video.

Figure 14-2: Stream generates a sharable link

Network Demand
Like any video application, Stream benefits from good network performance between servers and clients.
Microsoft publishes articles to help administrators understand the characteristics of network performance for
Stream, including Video delivery and network overview and How to scale video delivery for Stream.

Stream User Functionality


Like any video portal, the user functionality of Stream divides into two major sections:
• Uploading content.
• Watching content.
Apart from people with frontline licenses, any licensed user can upload videos to Stream. To accommodate
uploads, Office 365 enterprise tenants are assigned 500 GB of video storage plus 0.5 GB of extra storage for
every licensed user (except frontline users). All uploaded videos count against the overall quota for the tenant.
To discover how much of the assigned Stream quota your tenant has consumed, go to Stream admin settings
and access the Usage details option.
The exact size of a video file depends on its format, quality, and length. As a guide, expect to use
approximately 7.5 MB per minute of 1080p MP4 video with smaller amounts consumed for lower-quality
video (the storage required per minute doesn’t vary for Teams meeting recordings saved to OneDrive for
Business). Stream counts the original file size of the uploaded video against the quota and doesn’t take other
factors such as the size of transcoded videos and caption files into account.
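As a rough, back-of-the-envelope illustration of what those numbers mean (the user count below is an invented example):
[PS] C:\> $LicensedUsers = 500                             # Example figure: non-frontline licensed users
$QuotaGB = 500 + (0.5 * $LicensedUsers)                    # 500 GB base plus 0.5 GB per licensed user
$MinutesOf1080p = ($QuotaGB * 1024) / 7.5                  # Approx. 7.5 MB per minute of 1080p MP4
Write-Host ("Stream quota: {0} GB, or roughly {1:N0} minutes of 1080p video" -f $QuotaGB, $MinutesOf1080p)
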
No matter how someone accesses Stream content, Stream won’t show them a video unless their account has
the right to view it. The default permission assigned to a newly-uploaded video makes it available to any
other user in the tenant. Recordings of Teams channel meetings are available to team members while
recordings of Teams personal calls are available to the participants. You can adjust access by editing video
settings to restrict access to different people. Be careful with groups and channels because it’s easy to create
access conflicts. For instance, if you embed a channel controlled by a group into a channel in a team owned
by another group, team members won’t be able to access the videos in the channel.
The quota assigned in classic Stream will not transition to modern Stream. Instead, storage used by
videos will be charged against the SharePoint Online (for videos owned by Microsoft 365 Groups) and
OneDrive for Business (for personal videos) storage quotas. This should not impact tenants because most
videos stored in Stream are Teams meeting recordings. These videos will be stored in the OneDrive for
Business account of the person who initiates the recording. OneDrive for Business offers 1 TB of storage for
Microsoft 365 business accounts and “unlimited storage” for enterprise accounts.
User Settings
The user options for Stream are minimal. Click the cogwheel icon in the Stream navigation bar and select My
settings from the menu. You can now manage:
• Email notifications. The default is On to receive email notifications, such as when Stream has
completed processing an uploaded video.
• Language and region settings. If these values are not set, Stream defaults to English. You can select
a language and regional format for Stream to use.
You can also view the tenant policy for video uploads if defined in the Stream admin settings.

Uploading a New Video


To upload a video, click the upload link in the Stream navigation bar and then drag one or more video files (in
the supported file formats – audio-only files are not supported) to the marked box on the page. Alternatively,
select files using File Explorer (you can also upload videos to Stream using the mobile app). After uploading a
video, Stream processes it to create the necessary online content. If the language for the video is set to one of
the supported languages and its format is either MP4, MOV, or WMV, Stream generates automatic captions
and transcripts using speech recognition technology. If you prefer not to have Stream generate automatic
captions, you can disable the option and upload a manual caption file afterward. If a video is
auto-captioned, you can’t replace the captions with manual captions later.

Figure 14-3: Uploading multiple videos to Stream


When automatic captioning is used, Stream concatenates the captions to form a transcript to support
searching for words and phrases within videos. The captions and transcript will be ready after the video is
published (Microsoft says that processing can take between one and two times the video’s duration).

While Stream uploads the video, you can add the name of the video, a description, its language (for closed
captioning), and select a thumbnail (when available). In Figure 14-3, Stream is uploading two videos. As soon
as each upload completes, you can publish the video to make it visible, but users won’t be able to access the
content until Stream completes its background processing. You’ll receive an email notification when Stream
completes processing uploaded videos.
Apart from supplying details for the new video, its owner can also set permissions and options. Permissions
control who can see a video. If you don’t do anything to change the permissions, Stream makes the video
available to anyone in the tenant by publishing it in a companywide channel (you can change the default
permissions for new videos through the Stream admin settings). A channel is a way of organizing content and
is not a way to assign permissions. You can control access more precisely by putting the video in channels
belonging to one or more groups (see below). Remember that a video can exist within multiple channels,
some of which are accessible publicly and some confined to members of a group.
Options for a video include whether to allow users to post comments, apply noise suppression, and generate
or upload a caption file. Before publishing a video, it’s always a good idea to review the content and consider
editing the transcript to correct any obvious errors and trimming any extraneous seconds from the start or
end of the video.

Other Ways to Get Video Content into Stream


Other ways are available to get video content in supported formats into Stream, including:
• With the Stream mobile app.
• Making a screen capture video with Stream.
• Recording a Teams meeting.
• Running a Live Event with Teams or Yammer.

In all cases, once the video is uploaded, Stream processes it to create the playback files, captions, and
transcript.

Updating Video Settings


When Stream completes processing a file, it sends an email to the person who uploaded the video to make
them aware that the content is now fully available in the portal. The user can access the video through the My
videos link in the My content menu in the Stream navigation bar, where the default sort order is upload date,
so the latest videos appear first (Figure 14-4).
Apart from giving quick access to uploaded video, this screen allows an owner to edit the settings of a video,
such as changing the permissions to remove a video from a companywide channel to more restricted access
controlled by a group. You can see what videos are available for general access (green icon with double heads
in the View column) and those with limited access (shown as a single head icon), plus the number of views
and likes for each video (this data is not dynamic).

Figure 14-4: Listing My Videos
You can’t attach a file to a video to give watchers extra information about a topic. However, you can add a
clickable link to a file stored in OneDrive for Business or SharePoint Online in the video description. A video
owner can also add one or more Microsoft Forms to a video (using the Interactivity tab) to ask watchers to
take a survey or quiz. Each form is linked (via its URL) to a point in the video’s timeline. When that point is
reached, Stream pauses playback and displays the form to collect information. For more information, see the
discussion about Forms in the companion volume.

Video Playback
Once videos are organized with the right titles and descriptions and access is adjusted, viewers can watch
them. Figure 14-5 shows some of the major features of Stream like closed captioning, the
automatic (and searchable) transcript, user comments (which are part of the data stored by Stream), and
trending videos. If you are the owner of a video (the person who uploads it to Stream), you can edit the
transcript to correct minor errors (via the pencil icon). Editing the transcript does not change the captions
Stream generates for a video.
Users can add videos that they don’t have time to watch to their watchlist and return to those videos through
the My watchlist link on the My content menu in the Stream navigation bar. They can also use Share to
generate a URL link that can point to a specific time in a video.

Figure 14-5: Stream plays a video with automatic captioning and (editable) transcript

Transcripts
The transcript for a Stream video is composed of the captions generated when Stream processes a video, so
it’s not like a traditional transcript separated into the different spoken contributions from participants in a
conversation. Instead, a Stream transcript is divided into sections corresponding to times in the video. Unless
you extract captions to an external file, you can’t combine sections of the transcript to create a more coherent
rendition of a conversation or to indicate who spoke. For instance, a section often combines words spoken by
multiple people in a conversation, but you can’t separate the different contributions, probably because this
would break the connection with the captions. Also, you can’t underline or otherwise highlight terms. What
you can do is edit a specific section to correct the text in that section, and that’s usually enough to create an
accurate transcript. Finally, you can’t extract or print a complete transcript from Stream. If you want to capture
some information from the transcript, you must copy and paste individual transcript segments into a file.
Once in the file, you can edit and format the text as you wish, but you have lost any connection with the
video.
Stream used to generate transcripts automatically for new videos after their upload. In addition, from
September 2021, Stream began to remove the automatically generated transcripts for older videos that have
not been watched or updated in the last 3 months (U.S.) or 6 months (other regions). These steps are intended to
reduce the amount of data Microsoft must migrate to the Stream 2.0 platform. Stream regenerates the
transcript if a user views the content. A video owner can regenerate the transcript for a video at any time by
editing its details to uncheck and then recheck the setting that controls the generation of automatic
captions. Stream will not remove transcripts for videos in active use (people are viewing the video) or when
owners have edited a transcript or uploaded a manual transcript. Video content stored in Stream is unaffected
by this move.

Searching for Videos
Stream supports the ability to search against video titles and descriptions. Previously, Stream supported
searching against the contents of video transcripts, but Microsoft removed this useful facility in 2021. Figure
14-6 shows the search interface. The lack of more in-depth search capabilities underlines the necessity to give
videos good titles and descriptions. Stream videos and transcripts do not currently show up in the results
returned by Microsoft 365 content searches.

Figure 14-6: Stream finds videos by searching titles and descriptions

Screen Capture
It’s often useful to record how to do something in an application or, less positively, how to reproduce a bug.
Stream’s screen capture feature can record short videos of up to 15 minutes to capture screen output. Screen
recording works with Chrome, Edge, and Brave browsers on Windows and macOS workstations. Safari on
macOS is not supported and audio capture is only available on Windows. You can’t disable the screen capture
feature. It is available to any user with a Stream license.
The steps to record a screen capture are:
• Select Record screen or video from the Stream create menu.
• Make sure that the correct camera and microphone are chosen. Screen capture from the camera is
only available when the complete screen is recorded.
• Select the screen area for capture (complete screen, window/application, or browser tab).
• Start and stop the recording to capture the desired material.
• Review the capture and upload it to Stream for processing.
• Make any needed adjustments in Stream such as trimming the video, changing permissions, or
updating metadata before publishing as normal.
Stream doesn’t always create automatic transcripts for screen capture videos. This feature depends on the
language set for the video, so if you don’t see a transcript created, update the video details to make sure that
the correct language is set. Stream notices the change of language and creates a transcript if the language is
supported for transcript generation.

Trimming Videos
Often videos have some extraneous material at the start and finish. This is especially true of recordings of
Teams meetings or self-produced videos (like screen captures) where some time is consumed before the
content starts to make sure that everything is ready, check microphones, and so on. To improve the quality of
the video, you can remove content from the start or end of the file using the trim function in the […] menu.
When you trim a video, Stream loads a timeline of the content. You can then move two “trim points” by
dragging them to where you want the video to start and end. When you’re happy that the content is right,
click Apply. After you confirm that the trim should proceed, Stream processes the file in the background to
permanently remove content up to the start trim point and from the end trim point. There’s no recovery from
this operation; once Stream completes processing the video, a new file trimmed to the selected points
replaces the old video and the trimmed content is irrecoverable. For this reason, it is wise to download a copy
of a video before making any changes. Anyone with owner access (for instance, because they are a member of
a group with owner access to the video) can trim a video. Stream doesn’t lock a video to stop multiple people
from selecting trim points, but it does lock the video while applying trims in the background. In other words, two people can’t
trim a video at the same time.
Depending on the load on the service and the length of the video, trimming can take from a few minutes to
an hour or so to finish, including the generation of new captions/transcript and timeline. When the new video
is available, Stream highlights the fact that the file was updated. If you’ve linked a form to a video, you’ll need
to check and possibly redo the work after trimming.

Replacing Videos
Sometimes videos need a little more post-production work than is possible with a simple trim from the start
and end. Video owners and Stream admins can download a video, process it with a video editor like
TechSmith Camtasia, and replace it with an updated version (or overwrite a video with a completely new file).
Replacing a video keeps its link, which means that the new file will play anywhere the link is used (like in a
Teams channel tab or web page). In addition, many of the video attributes are retained:
• Video details, like name, description, and language.
• Permissions.
• Comments, likes, and view history.
However, because the content of the video changes, anything based on the content is not retained. This
includes:
• The original transcript is replaced by a new transcript generated after the new file is uploaded to
Stream.
• New thumbnails are also generated to replace those generated from the original video.
• If Forms are linked to the original video, the links are removed (because they depend on specific
timelines in the video) and you’ll have to reinsert the Forms at the appropriate times in the
replacement video.
• The people timeline generated for the original video is no longer available. As Stream no longer
generates people timelines for videos, one will not be created for the replacement video.
To replace a video:
1. If you plan to edit the original video, download it from Stream and do whatever processing is needed.
Otherwise, locate the replacement video file. Even if you plan to replace the video with a brand-new
file, it’s a good idea to download the original as there’s no way to restore a video in Stream after the
replacement happens unless you have a copy of the original file.
2. Select the video you want to replace in Stream and choose the Replace video option from the […]
menu.
3. Choose the video file to replace the original and click Replace.
4. Stream uploads the replacement video and swaps it for the original. The normal processing for a new
video occurs to generate the automatic transcript. After a short delay, the replacement video will be
available. As with the upload of a new video, the video owner receives an email
notification when the replacement video is in place.
Stream logs a StreamInvokeVideoUpload audit record when the replacement video is uploaded and a
StreamEditVideo record when the replacement is made. Unfortunately, the file name of the replacement video
is not captured.

Noise Suppression
Noise suppression is an option Stream can apply to new videos as they are uploaded to the portal or for
existing videos. Noise suppression isolates speech from background noise in the audio feed to make it clearer
and more distinct. Team owners or Stream administrators can update the details of existing videos to enable
noise suppression. When noise suppression is enabled, viewers have the option to keep noise suppression on
or disable it during video playback.
Most videos qualify to be processed for noise suppression. The criteria include:
• The video is two hours or shorter, and no larger than 3 GB.
• An audio track is available, but not when multiple audio tracks in different languages exist in a video.
• The video is not a recording of a Teams meeting. This is because noise suppression is automatically
done when Teams meetings are recorded.

Noise suppression is not available for recordings of Live Events. You can’t disable noise suppression on a
tenant-wide level.

Stream Mobile Apps


Stream supports highly usable mobile apps for iOS (Figure 14-7) and Android. Users can browse channels and
groups to find and play videos, add videos to watchlists, like a video or make comments about its content,
edit, delete, and share their videos, and download videos for offline access. Users can choose to minimize data
consumption when streaming videos with standard definition or use more data for higher fidelity. The
automatic transcript (captions) is supported on mobile clients. This page has the latest details about the
Stream mobile apps.
Being able to record and upload videos on mobile devices is an especially popular feature. When recording on
a mobile device, the Stream mobile app supports these features:
• Swap between the available cameras on the device. The default camera selected for a new video is
rear-facing, but you’ll probably want to use the front-facing camera for in-person shots.
• Record multiple clips before uploading the video to Stream. When all the clips are recorded, you can
drag and drop them into the order you want the clips to appear in the video.
• Include photos from the device in a clip. For example, you could take a picture of a new product and
record a commentary for the picture.
• Annotate (draw), add emojis or apply filters to a clip.
• Trim clips by removing content from the front or end of a video.
When the creator uploads the video, Stream processes it and prepares the content for publication. Before the
upload, the creator should make sure to assign the language to a video to allow Stream to generate an
automatic transcript. Users can also upload videos from the device’s camera roll or a cloud storage app like
OneDrive.
Other mobile applications can be used to create videos, many of which offer functionality over and above
what’s available in the Stream mobile app. The advantage of the Stream app is its integration with Microsoft
365, which makes it easy to upload and share videos. See this page for more information about recording and
editing Stream videos on Android and iOS devices.

Figure 14-7: Watching a video on Stream for iOS

Stream Administration
An important point about Stream is that the platform treats all content as belonging to the tenant. Users
might be the owners of videos, but if their accounts are removed, the videos remain in Stream and
be accessible to anyone with permission to view the content. Because this is the case, global tenant and
Stream administrators can manage any video stored in Stream. To access Stream administration, select Admin
settings from the Stream cogwheel menu (only administrators see this link). Figure 14-8 shows the Stream
admin screen.

Figure 14-8: Stream Admin settings
Briefly, the Stream admin settings are:
• Administrators: Global tenant administrators can always change privacy settings, ownership of a
video or channel, and other settings. You can define a list of users to assume the role of Stream
administrators. Because Stream was designed to be able to function without Microsoft 365, this is not
a Microsoft 365 or Azure AD administrative role.
• Spotlight videos: Select up to four videos to highlight in the slideshow at the top of the Stream
home page and arrange the order in which the videos appear (Figure 14-8). If you need to add
another video, you must first remove one of the existing videos from the list.
• Live Events: Define the set of users allowed to create live events. The Live Now tab lists currently
active events while the Upcoming tab lists events scheduled in the future.
• Company policies: Define a URL to a page where users can read the company policy governing the
uploading of videos to Stream. This link appears in User settings. You can also set a switch to require
users to accept the company policy before they can upload a video (including screen captures). If the
company policy changes, you can reset the setting to force users to accept the new policy.
• Usage details: Shows the amount of storage currently used for video storage and the remaining
quota. You can also choose that Stream notifies the administrators when the storage reaches 85% of
the assigned quota.
• Recycle bin: Recover videos deleted by Stream users (see the section below). Administrators can also
remove videos individually or in bulk to free up the storage available for Stream.
• Groups: Groups are managed through the Microsoft 365 Admin Center.
• Support: Submit a support ticket to Microsoft.
• Comments: By default, users can post comments to videos. You can disable this feature here.
• Content creation: You can:
o Restrict the set of users allowed to upload new videos. If you block an account here, they
cannot upload recordings of Teams meetings.
o Restrict the ability of users to create companywide channels. Companywide channels must
have a unique name within the tenant.
o Define the default viewing permission assigned to new videos, choosing between everyone in
the company or specific groups and people.
• Third-party eCDN provider: You can sign up with an enterprise content delivery network such as
Hive Streaming or Ramp to optimize video traffic within your network. If you do, you must update
Stream so that it knows about the eCDN. Stream doesn’t support the Azure CDN.

Data Privacy
The Data Privacy section of Stream administration includes:
• Manage user data generates a report about a selected user’s activity within Stream. The HTML-
format report lists all the videos uploaded by the user as well as the channels and groups they created,
the groups they belong to, the comments they posted, and the notifications Stream sent to them.
Reports remain online for 30 days after creation.
• Manage deleted users allows an administrator to remove or update references to deleted users that
might appear in Stream. For instance, when you remove a Microsoft 365 account, that user’s object in
Azure AD is removed (after 30 days), but references to the account persist in Stream and appear
beside videos, channels, and comments created by the user. After an account is permanently removed
from Azure AD, you can:
o Replace its name wherever it appears in Stream with a generic name like “Removed user.”
o Remove all references to the account and its email address, perhaps to ensure that personal
data is removed in response to a GDPR right to be forgotten request.
Stream won’t return anything for a deleted user if they have not been active.

Switching to Stream Admin Mode


A Stream administrator can manage content on behalf of other users, including being able to change
permissions on videos, update metadata, trim videos, or edit or remove comments for a video. They do this by
entering admin mode, enabled through a slider in the Stream menu bar (Figure 14-9).

Figure 14-9: The Stream admin mode slider

Deleted Videos and the Stream Recycle Bin


The Stream Recycle Bin is available to both users and administrators. When someone deletes a video, it is
removed from the groups and channels where it is published and moved into the recycle bin. Users can access
deleted videos by clicking Recycle bin in the My Content menu bar. They can restore a deleted video at any
time during the 30 days that Stream keeps videos in the recycle bin. After the retention period expires, a
background process permanently removes the video and it is irrecoverable. Restoring a video moves it back
to the groups and channels where it was previously published with the same permissions that it had in those
locations. If they wish, an owner can delete a video from the recycle bin, in which case the video is also
irrecoverable.

Stream administrators can also recover videos for users through the Recycle bin option in Admin settings.
This option allows administrators to see and restore videos deleted by any user in the tenant or permanently
remove videos from the recycle bin.

Stream and Compliance


Stream generates audit records and ingests them into the audit log for its operations. It does not apply other
aspects of Microsoft 365 data governance and compliance to its content, such as sensitivity labels and
retention policies. In addition, Microsoft Search does not index video content and therefore you will not find
Stream content with content searches. The only indication that a video of interest might exist is when a
content search finds the email notification sent by Stream to a user when it completes the processing of an
uploaded video.
Because Stream logs audit records, we can search the audit log to see what users are doing. You can search
audit data with the audit log search feature in the Microsoft Purview Compliance portal or using the
PowerShell Search-UnifiedAuditLog cmdlet. The auditing chapter in the main book explains how to search
using either method, but for illustration purposes, the code below shows how to search the audit log and
return the count of videos uploaded by individual users.
[PS] C:\> $StartDate = (Get-Date).AddDays(-90); $EndDate = (Get-Date) # Maximum audit log search range for E3 users
$Records = (Search-UnifiedAuditLog -Operations StreamInvokeVideoUpload -StartDate $StartDate -EndDate $EndDate -ResultSize 2000)
If ($Records.Count -eq 0) {
   Write-Host "No audit records for Stream video uploads found." }
Else {
   Write-Host "Processing" $Records.Count "audit records..."
   $Report = [System.Collections.Generic.List[Object]]::new() # Create list to hold report lines
   # Scan each audit record to extract information
   ForEach ($Rec in $Records) {
      $AuditData = ConvertFrom-Json $Rec.AuditData
      $ReportLine = [PSCustomObject] @{
         TimeStamp = Get-Date($AuditData.CreationTime) -format g
         User      = $AuditData.UserId
         Action    = $AuditData.Operation
         VideoURL  = $AuditData.ResourceURL
         VideoName = $AuditData.ResourceTitle }
      $Report.Add($ReportLine) } }
$Report | Sort-Object {$_.TimeStamp -as [DateTime]} -Unique -Descending | Out-GridView

$Report | Group User | Sort Count -Descending | Format-Table Name, Count

Name Count
---- -----
Jane.Nix@office365itpros.com 74
James.Ryan@office365itpros.com 22
John.Hubbard@office365itpros.com 1
James.Joyce@office365itpros.com 1
Ben.Owens@office365itpros.com 1

Other events are available to track actions such as video uploads, deletions, or even when people like a video.
For instance, if you replace StreamInvokeVideoUpload (upload a video event) with StreamInvokeVideoView
(view a video event) in the Search-UnifiedAuditLog command, you’ll create a report of Stream view events to
know who’s looking at videos and what the most popular videos are. To see a summary, you change the
Group-Object command slightly to:
[PS] C:\> $Report | Group-Object -Property VideoName | Sort-Object Count -Descending | Format-Table Name, Count

To find a more comprehensive set of audit records for Stream operations over the last ninety days, you could
do this:
[PS] C:\> $StartDate = (Get-Date).AddDays(-90); $EndDate = (Get-Date)
$Operations = @('StreamEditGroup', 'StreamDeleteGroup', 'StreamInvokeVideoView', 'StreamInvokeVideoSetLink', 'StreamEditUserSettings', 'StreamEditVideo', 'StreamDeleteVideo', 'StreamInvokeVideoUpload')
$Records = (Search-UnifiedAuditLog -Operations $Operations -StartDate $StartDate -EndDate $EndDate -ResultSize 2000)

And then process each record to extract information from the AuditData field.
Yet another example of how to use Stream audit data is to look for deleted video events over the last 30 days.
These events should correspond to videos waiting in the recycle bin. After 30 days, Stream removes the
videos. The audit events can be used to create a report to check whether any of the deleted videos should be
kept. A script, downloadable from GitHub, shows how to create the report and email it to a
recipient.
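The core of such a report boils down to a couple of lines (a sketch only; the GitHub script adds formatting and the email step):
[PS] C:\> $Records = Search-UnifiedAuditLog -Operations StreamDeleteVideo -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) -ResultSize 2000
# Each record should correspond to a video still waiting in the recycle bin; summarize by user
$Records | ForEach-Object {ConvertFrom-Json $_.AuditData} | Group-Object UserId | Sort-Object Count -Descending | Format-Table Name, Count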

Microsoft 365 Groups and Stream

Figure 14-10: Stream displays videos for a group


Given the way Groups acts as an identity and membership service for applications, it is natural to consider
using Groups to organize access to videos. Stream does not use Groups in the same way as other applications
like Planner do. Stream treats all videos as belonging to the tenant and managed by their owners. A group
does not own Stream resources in the way that a SharePoint Online site belongs to a group. Because groups
are only linked to videos, you can delete a group in Stream without affecting any of the videos associated
with the group. However, the other resources belonging to the group, such as the group mailbox, team, plan,
and site, are removed when you delete a group from Stream.

Instead of using Groups as a resource manager, Stream uses Groups as a convenient method to organize and
publish videos with links connecting videos to groups. This allows Stream to treat each group as a type of
mini-portal, meaning that when users select a group, Stream shows a highlight page with trending and new
videos linked to the group (Figure 14-10). Group owners and Stream administrators can change the
highlighted videos through the cogwheel menu on the page. Other group settings such as the display name
or description can be updated through the same menu. Stream doesn’t currently support sensitivity labels for
either group settings or individual videos, nor does it support retention labels for channels or videos.
Within a group, videos can be further divided into channels, which make it easier for users to find the
information they want. Following a channel puts a link to it in the “Followed channels” list available from
the My content link in the Stream navigation bar.

Public and Private Groups


As in other applications, the access type of a group can be public or private. Anyone can access videos
associated with public groups. The Stream URL for the group can be published to let people have quick access
to the group. The URL is in the form shown below, where GUID is the identifier for the group object in Azure
AD:
https://web.microsoftstream.com/group/33b07753-efc6-47f5-90b5-13bef01e25a6
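If you know the group, you can construct this URL yourself. Here’s a small sketch with the Microsoft Graph PowerShell SDK (the group display name is a placeholder):
[PS] C:\> Connect-MgGraph -Scopes "Group.Read.All"
$Group = Get-MgGroup -Filter "displayName eq 'Project Hydra'"    # Placeholder group name
"https://web.microsoftstream.com/group/{0}" -f $Group.Id
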
Only group members can view content belonging to private groups. As you can see in Figure 14-10, the
group is public, so its videos can show up as trending on the Stream home page and users can search, find,
and view the videos.

Listing Available Groups


Selecting Groups from the My content menu in the Stream navigation bar lists the groups that the user
belongs to (Figure 14-11). Not all these groups will have videos associated with them, so you can use the
drop-down filter to sort the groups to show those that own the most videos first. The list of groups includes all
groups, including those hidden from Exchange clients and Exchange address lists. However, the list is not
dynamic, and the need for background synchronization means that it takes about 15 minutes after someone
joins a group before the group shows up in Stream.

Figure 14-11: Stream lists the groups available to a user

Edit Group Settings


To edit the settings of an existing group, select the group from the set shown by Groups, click the ellipsis
menu in the top right-hand corner of the group card, and select Edit. For Microsoft 365 Groups, you can then
update the group’s display name, description, classification, access type (public or private), and the switch
controlling whether members can upload videos and create channels. For Stream groups (usually created
during the migration from the older Office 365 Video platform), you can update the name, description, and
access type.
The same menu includes the choice to delete a group. Like in other applications, if you delete a group, Azure
AD puts the group into a soft-deleted state for 30 days. During this period, an administrator can recover the
group. Once the period elapses, Azure AD permanently removes the group and associated resources, such as
the SharePoint team site, and the group becomes irrecoverable. However, the removal of a Microsoft 365
group only unlinks videos from the group. The videos remain in Stream and can be assigned to new groups or
owners.
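If a group is deleted by mistake, an administrator can restore it during the soft-deleted period with the Microsoft Graph PowerShell SDK. The sketch below assumes the cmdlet names available in recent versions of the SDK and uses a placeholder group name:
[PS] C:\> Connect-MgGraph -Scopes "Group.ReadWrite.All"
# List soft-deleted groups and restore the one we want (display name is a placeholder)
$Deleted = Get-MgDirectoryDeletedItemAsGroup -All | Where-Object {$_.DisplayName -eq "Project Hydra"}
Restore-MgDirectoryDeletedItem -DirectoryObjectId $Deleted.Id
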
Group membership can be updated by selecting a group and then Membership from the navigation bar. You
can change a member to become an owner or vice versa, remove a member, or add new members.
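Because these are regular Microsoft 365 Groups, membership can also be managed outside Stream, for instance with the Microsoft Graph PowerShell SDK (the group and user below are placeholders):
[PS] C:\> Connect-MgGraph -Scopes "GroupMember.ReadWrite.All", "User.Read.All"
$Group = Get-MgGroup -Filter "displayName eq 'Project Hydra'"        # Placeholder group
$User = Get-MgUser -UserId Jane.Nix@office365itpros.com              # Placeholder user
New-MgGroupMember -GroupId $Group.Id -DirectoryObjectId $User.Id     # Add the user as a member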

Creating new Groups and Group Channels


If a user can create new groups (by policy or because the tenant allows unrestricted creation of new groups),
Stream users can create new Microsoft 365 groups through the Groups section or by choosing Group from
the Create menu. Currently, you can’t create new groups that aren’t connected to a Microsoft 365 group.

Figure 14-12 shows the screen to collect settings for the new group. If a group naming policy is in force, it is
applied to the group display name when the group is created. The Allow all members to contribute setting
controls who can upload videos to the group and create or remove channels. This is a setting specific to
Stream and does not affect how the group works. If users cannot upload content, they are restricted to view-
only access to videos. After creating the group, you can add members.

Figure 14-12: Creating a new Microsoft 365 group with Stream


As we know, channels are how Stream organizes content. You can have company-wide channels and group
channels. Company-wide channels do not belong to any specific group. These channels hold content that you
want to make available to everyone within the tenant. The videos in these channels must allow company-wide
viewing. A group channel belongs to a specific group and inherits the membership of that group.
A channel must have a unique name within the group. Unless you edit group settings to turn off Allow all
members to contribute, group members can create new channels in a group. The same switch controls the
ability of members to upload videos, so you might want to leave it turned on in the knowledge that members
can then create channels. In this situation, some coaching might be necessary to prevent the creation of a
sprawl of channels.

Accessing Videos for a Group


When you select a group from the Groups list, you access a mini-portal within Stream. You can now upload
new content to the group or watch whatever content is available in the group. If you upload a video to a
group, Stream sets its permissions to allow access to group members, and the ability of other users outside
the group to see the video is set by whether the group is public or private. If you want to make a video
uploaded to a private group available to everyone, check the Allow everyone in your company to view this
video box. You can change the permissions for a video to have it appear in multiple groups and multiple
channels. An uploaded video is always associated with a single group, which is where you delete the video if
you want to remove it from the tenant. For the other groups, use the Remove from group option to remove
the video from the list of videos exposed in the group.
