
The Pester Book

Adam Bertram
This book is for sale at http://leanpub.com/pesterbook

This version was published on 2020-09-20

This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing
process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and
many iterations to get reader feedback, pivot until you have the right book and build traction once
you do.

© 2017 - 2020 Adam Bertram


Tweet This Book!
Please help Adam Bertram by spreading the word about this book on Twitter!
The suggested hashtag for this book is #PesterBook.
Find out what other people are saying about the book by clicking on this link to search for this
hashtag on Twitter:
#PesterBook
Contents

About This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i

About the Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii

Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii

Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi

Code Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Testing as Institutional Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Testing Drives Better Modularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Tests as Functional Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Tests in Automated Build Pipelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Where We’re Not Going . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x

Part I - Pester Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Block Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Describe Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Context Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
It Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Before and After Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

TestDrive and TestRegistry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7


Using TestDrive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Using TestRegistry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

Assertions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Using the Should Assertion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Understanding Should Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Testing Collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Other Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Testing Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Adding Custom Assertion Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Because . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Creating Tests for Different Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Using the TestCases Parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Using the Test Name Token . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

Mocking Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Mock Walkthrough . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Mock Assertions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

Code Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Why Code Coverage? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Using Pester to Measure Code Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Getting Granular with Code Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

Gherkin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Step Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Running Gherkin Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

Part 2: Using Pester . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

Controlling Test Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55


Pending and Skipped States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Forcing Pass/Fail States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

Working with Pester Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59


nUnit XML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Returning Failed Tests as Exit Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

Tagging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

Introduction to Tagging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Tagging Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

Modules and Dot-Sourced Script Gotchas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67


Scripts with Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

Advanced Mocking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Mocking Specific Object Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Mock Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Module-Level Mocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Parameter Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

Mocking the Unmockable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88


.NET Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Creating Wrapper Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
“Overwriting” .NET Methods with a ScriptMethod . . . . . . . . . . . . . . . . . . . . . . . . 90
Using “Stubbed” Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Asserting Mocks “Transitively” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

Improving Code Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100


Step #1: Getting a Baseline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Step #2: Build the Test Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Step #3: Create the Mocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Step #4: Know the “Before” Code Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Step #5: Build all Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Step #6: Re-check Code Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

Infrastructure Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114


Analyzing the Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Getting Testing Dependencies in Place . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Prototyping the Infrastructure Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Assessing the Current Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Writing the Infrastructure Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Operational Validation Framework (OVF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Troubleshooting and Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130



Sending Variables to the Console with Write-Host . . . . . . . . . . . . . . . . . . . . . . . . . 130


Using Debugger Breakpoints During Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

Test Walkthrough #1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138


Analyzing the Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Refactoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Deciding What to Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Find the Shortest Code Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Creating Test(s) for the Shortest Code Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Building Tests for Other Code Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153

Test Walkthrough #2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154


Understand the Subject . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Designing Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Writing the Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
ImportVariables Parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
ImportModules Parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
RunspaceTimeout Parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Parameter parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161

Part 3: Hands-On Design and Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162

Building Testable Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163


Modularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Limit Function Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Input Via Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Managing Output Properly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173

Test Design Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174


Ensuring Code “State” Doesn’t Change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
The Input/Execution/Output Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Choosing When to Create Describe Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Naming Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181

Part 4: Pester Cookbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182

Recipe: Testing for Live Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183

Recipe: Testing for Installed Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185



Recipe: Ensuring all Functions in a Module Have Tests . . . . . . . . . . . . . . . . . . . . . . . 187

Recipe: Testing External Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190


The Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Testing Exit Codes from Standalone EXE files . . . . . . . . . . . . . . . . . . . . . . . . . . . 191

Recipe: Testing Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193

Recipe: Testing Remote Script Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196

Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Video Courses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Articles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200

Release Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201


About This Book
This is the ‘Forever Edition’ on LeanPub¹. That means the book is published as it’s written
and may see periodic updates. At this time, the book is 100% complete, but that is not to say
it will never receive updates again. Any updates will be sporadic, if they happen at all.

If you purchased this book, thank you. Know that writing a book like this takes hundreds of hours of
effort, during which I’m not making any other income. Your purchase price is important to keeping
a roof over my family’s heads and food on my table. I’m not rich–this income is important to us.
Please treat your copy of the book as your own personal copy–it isn’t to be uploaded anywhere, and
you aren’t meant to give copies to other people. I’ve made sure to provide a DRM-free file (excepting
any DRM added by a bookseller other than LeanPub) so that you can use your copy any way that’s
convenient for you. I appreciate your respecting my rights and not making unauthorized copies of
this work.
If you got this book for free from someplace, know that you are making it difficult for me to write
books. When I can’t make even a small amount of money from my books, I’m encouraged to stop
writing them. If you find this book useful, I would greatly appreciate you purchasing a copy from
LeanPub.com or another bookseller. When you do, you’ll be letting me know that books like this are
useful to you, and that you want people like me to continue creating them.

– Adam

This book is copyrighted (c)2019 by Adam Bertram, and all rights are reserved. This book is not open
source, nor is it licensed under a Creative Commons license. This book is not free, and the author
reserves all rights.
¹http://leanpub.com
About the Author
Adam Bertram is a 20-year veteran of IT and experienced online business professional. He’s an
entrepreneur, IT influencer, Microsoft MVP, blogger at adamtheautomator.com², trainer and content
marketing writer for multiple technology companies. Adam is also the founder of the popular IT
career development platform TechSnips. Catch up on Adam’s articles at adamtheautomator.com,
connect on LinkedIn³ or follow him on Twitter⁴.
²https://adamtheautomator.com/blog
³https://LinkedIn.com/in/AdamBertram
⁴https://twitter.com/adbertram
Foreword
Years ago, when I started writing PowerShell, I felt a spark. It was one of those sparks that get you
excited to go to work. It was a spark that activated my inner geek and gave me an excellent sense
of accomplishment after I had automated something. That spark has since turned into a fire which
has energized practically my entire career. PowerShell is now a big part of my work life.
This is what Pester feels like. That’s big!
Pester is a product that’s gotten me probably too excited sometimes and made my eyes light up
with satisfaction when I see all of that lovely green output scrolling down the screen to confirm the
success of my code.
As I’ve worked with Pester over the years, I’ve always taken a TDD-hybrid approach with it. TDD
is writing failing tests first and then only writing the code that makes the tests pass. In a previous
life, I was a Senior Automation Engineer using PowerShell, which, in a roundabout way, is the
same thing as a PowerShell developer. I worked on code that provisioned test environments for my
developers as well as code for projects for clients.
Before Pester, I would write thousands of lines of code, run it, fix it, run it, fix it, run it ad infinitum.
It “worked” for the most part until some unknown bug crept in or someone else on my team made a
change I wasn’t expecting, and it would break. I’d then have to dive in and spend time researching,
fixing, and sometimes breaking other things in the process. At the time, I thought it was just how
you wrote PowerShell code. I was wrong.
I thought that testing—and thus Pester—was a waste of time. Why would I want to essentially
write the same code again just to confirm the code I’d written in the first place? To me, it was a huge time
drain for little value. That is until I decided to buckle down and force myself not to write a single
line of code that wasn’t covered by a unit test.
This is my story.
One of the modules that I owned on my team was called EnvironmentManifest. This module was a
home-grown PowerShell module that discovered all of the applications in a particular environment
and built XML manifests from them. It gathered the versions through DLL file versions, database
queries, web service calls, and enumerating software installed on various servers. It then took all
of this information and built a manifest from it. With this module, the user could compare these
manifests over time to see the differences between different environments such as test, prod, etc. It
was the most complicated module I’d ever built.
It worked, but it was soon turning into a popular service that more people began using. I would
receive lots of feature requests and bug reports and be expected to fix them. Every time a change
request came in, I would cringe. Why? Because it was as fragile as Kanye West’s ego. It was like
playing whack-a-mole every time I made a change. I’d change something, and it’d break somewhere
else.
It was painful to maintain.
After dealing with this for many months, I decided to take the plunge and essentially rewrite it. I
knew it would take weeks but, luckily, I have an understanding boss, and with this module now
handling part of the SOX auditing process, it was critical. I had built a lot of tests before this, but
this time I was going to get serious and write unit tests for everything. There wouldn’t be a piece of
this code that wasn’t covered under a test!
So I began.
At first, it was frustrating as hell. I kept thinking to myself that this was such a waste of time and
that it was taking me so long to get things done. But I continued…thirty or so tests in, my mindset
began to change. Things started to click for me. That spark began to light. I can’t put my finger on
the exact moment, but at some point, I realized how nice it was to run a script to give a pass/fail.
I tend to be an unorganized coder and, previously, I always had to get my mind on a new task as I
switched around.
“Now, where was I?”
“Did I leave this code in a workable state?”
“What was this function supposed to output again?”
These questions didn’t go through my head anymore. As I switched, I would get into the habit of
running the test suite and immediately seeing the state of my code. There was no more racking
my already tired brain trying to figure out where I was. The tests would tell me! It was like taking
automation to the next level.
A couple of weeks in, I realized that coding this way was changing me. It was changing how I wrote
code, changing my perception of “taking too long” and changing how I felt about my code. It was
indeed a mind shift.
This mind shift made me realize that even though it takes significantly more time to get things
done, the confidence you have in your code increases dramatically. By building tests as you go
along, you find bugs that you would never have caught by chance. You discover these bugs because
writing tests forces you to understand your code at another level. You’re compelled to analyze every
decision you’ve made. Sometimes you realize a coding decision wasn’t the best one to make in the
first place.
Writing tests as I code has allowed me to be lazy. Maybe this is a bad thing, but I now have a binary
yes/no definition of when my code is “working.” I can do something else and not have to keep
thoughts in the back of my head about that first task while I’m working on the next. I can forget the
groove I was in previously, and when I come back to the job, I run the test suite, see what fails, and
start there.
By far, the biggest takeaway for me was the increased confidence and ability to switch tasks quickly.
Previously, I thought I had to keep a mental bookmark of where I left off on each project and each
task. I didn’t want to start all over again. I was now free to forget this mental bookmark and rely on
my tests instead. I can’t say how freeing that feeling is.
Since you’ve purchased this book, you know the importance of testing. You’ve realized there might
be something to this whole Pester thing. I’m glad you did! I hope that, once you’ve consumed this
book’s Pester teachings, you’ll write better code too, be more confident in that code, and won’t be
afraid to make changes at any time. You’ve got your tests now to help out.
Happy testing!
-Adam Bertram
Feedback
I’d love your feedback. Found a typo? Discovered a code bug? Have a content suggestion? Wish I’d
answered a particular question? Let me know.

1. Please have a chapter name, heading reference, and a brief snippet of text for me to reference.
I can’t easily use page numbers, because my source documents don’t have any.
2. Go to the LeanPub website and use their email link⁵ to let me know.

Thank you for taking the time to help make this book better!
⁵https://leanpub.com/pesterbook/email_author/new
Code Samples
Many chapters in this book include lengthy code listings, and I’d like to make it easy for you to get
those. Visit https://github.com/adbertram/pesterbookcode to find the sample code.
The code formatting in this book only allows for about 60-odd characters per line. I’ve tried my best
to keep my code within that limit, although sometimes you may see some awkward formatting as
a result.
For example:

Invoke-CimMethod -ComputerName $computer `
    -MethodName Change `
    -Query "SELECT * FROM Win32_Service WHERE Name = '$ServiceNa\
me'" `
Introduction
This book’s code was developed and tested with Pester 4.7.3. Older (or newer) versions may have
different capabilities and features.
In the world of software development, there are a few broad types of testing. The first is integration
testing, which, very roughly speaking, is where you test a particular chunk of code by compiling
your entire project and running the whole thing. A more granular approach is unit testing. Unit testing
is where you test a particular chunk of code all by itself, feeding it various inputs and checking its
outputs, in a standalone fashion. Unit testing tests code in isolation, while integration testing tests how that
code works once deployed.
The general theory is something like this:

1. You perform unit tests as you code each unit of work, like a module or function. You try to
make each chunk of code work as correctly as possible.
2. You then perform integration tests on the entire project. This can help catch problems that
might only surface when different functions start to talk to each other. For example, if you
forgot to use a last name with an apostrophe like “O’Shea” when testing the “Add a Customer”
module, you might pass your unit tests, but the integration test might fail when testing
“O’Shea.”
3. If you made any fixes during integration testing, you run regression tests, which are basically
another round of unit testing to see if you introduced any new problems when making your
fixes.
4. On some occasions, you may choose to write acceptance tests as well. These test the desired
outcome of your script. For example, when creating an Active Directory user, the desired
outcome isn’t necessarily to create the actual user. It is just a step needed to get an employee,
contractor, or whoever authenticated to the domain. An acceptance test for this feature might
include consideration of how intuitive the process is, how quickly the app completes the task,
and whether the UI matches the client’s branding requirements.

Pester is the tool to use when performing any of these tests in the PowerShell world and, as you’ll
see in the coming chapters, to test infrastructure (no code needed).

Testing as Institutional Memory


Automated testing creates gates. It helps you not make the same mistake twice. Take that “O’Shea”
example, where the presence of an apostrophe somehow breaks the code (a common occurrence in
poorly-written database code). You might not catch that bug initially, but if you get into the habit
of coding a test for every bug you find, then you’ll never forget to test for each one again. Instead of
having to remember about all the weird little things, specific to your environment or situation, that
you have to accommodate in your code, the tests remember for you. Should you leave the project,
you leave a legacy of those memories, encoded as repeatable tests.

Testing Drives Better Modularization


Once you begin building tests, you’ll soon discover many of your own bad coding practices. This is
another hidden benefit of automated testing. Automated tests are more manageable
when your chunks of code are tightly scoped, and each does only one specific thing. Not incidentally,
tightly scoped chunks of code are also easier to write, easier to debug, easier to spread
around a team, easier to re-use and easier to maintain.
Breaking your code into tightly scoped units of work makes unit testing more reliable.

Tests as Functional Specifications


There’s a whole software development theory called Test Driven Development, or TDD, which posits,
more or less, that you write your tests first. The tests serve as a kind of code-based functional
specification for what the hunk of code is supposed to do, and developers can run unit tests as they
go, to make sure they’re getting the code right. I’m oversimplifying a lot here, but we’ll dig in a
bit as I go through Pester’s paces. Because tests focus on inputs and outputs, they can indeed serve
as a kind of “in this situation, this is what should happen” functional specification that can drive
development.

Tests in Automated Build Pipelines


In an ideal world of Continuous Delivery and/or Continuous Integration, automated testing plays an
important role. Imagine checking in a PowerShell module to a source code repository. That triggers
the spin-up of a virtual machine which is loaded with your code and your Pester tests. Your tests
run against your code, and if the tests all pass, the repo publishes your module to a gallery, where
anyone can download and use it. Does that guarantee your code is bug-free? Well, no. But, if you
do find a bug, you code up a new test to catch it, and you’ll never have that particular bug again.
Over time, you get closer and closer to the rainbow’s end of perfect code.

Where We’re Not Going


Before I dive in, it’s essential to set the scope for this book. We’re going to focus on Pester itself—the
tool. As a necessary part of that, we’re going to be scratching the surface of unit, integration, and
infrastructure testing methodologies. But, and here’s an important point, this isn’t a book about those
methodologies. So while we’re going to try and make some key recommendations, this book isn’t
going to make you a unit/infrastructure/integration methodology expert. There are tons of books and
other resources on methodologies (you’ll encounter acronyms like TDD, DRY, DAMP, and others)
and techniques, and I encourage you to read up on the subject if you’re interested. We’re going to
focus as tightly as possible on Pester itself, mechanically, without getting deep into how it fits into
one approach or another.

Resources
You will find all external resources for this book in the pesterbookcode GitHub repository⁶.

⁶https://github.com/adbertram/pesterbookcode
Part I - Pester Concepts
In this part of our book, we’re going to focus on the concepts. We’ll be covering each component
of Pester from a basic, 101-level perspective. Each chapter is broken down by an important area of
Pester. This part will introduce you to Pester and all that comes with it to prepare you for the next
part, where you’ll then take that knowledge and begin applying it to practical examples.
Block Introduction
Pester’s domain-specific language (DSL) is heavily reliant on PowerShell scriptblocks. Scriptblocks
allow Pester to segment code, control how it’s run and more. Scriptblocks are what will enable
Pester to do most of the unit-testing magic it does! To understand Pester, you must first understand
the various kinds of scriptblocks (blocks) that Pester uses to execute code.
In Pester’s DSL, you’ll see six types of blocks:

• Describe
• Context
• It
• BeforeEach/BeforeAll
• AfterEach/AfterAll
• Mocks (Refer to the Mocking Introduction chapter)

Each of these blocks provides Pester the ability to control the order in which code is executed when
a test runs, control scope to ensure one block’s activities don’t conflict with another or to allow a
child scope to inherit the scope of its parent.

Describe Blocks
The highest level in your test file is a describe block. A describe block is a logical grouping of tests.
Each test file can have one or more describe blocks, and they are the containers for almost everything
else; nearly everything in a test happens inside a describe block.
Like all of the other blocks covered in this chapter, a describe block is a PowerShell scriptblock that
contains all of the code necessary to carry out a Pester test.
The describe block, at its simplest, consists of a name and a scriptblock indicated by the opening and
closing curly braces.

describe 'Stop-MailDaemon' {
    ## Stuff goes in here
}

Describe blocks also separate scope, which is critical to understand when performing unit tests.
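
As a quick sketch of that scope separation (the variable and block names here are hypothetical), a variable defined inside one describe block is not visible inside another:

describe 'first block' {
    ## This variable only exists inside this describe block
    $computerName = 'SRV1'

    it 'can read the variable defined in the same describe block' {
        $computerName | should -Be 'SRV1'
    }
}

describe 'second block' {
    it 'cannot see variables from the first describe block' {
        $computerName | should -BeNullOrEmpty
    }
}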

Context Blocks
The next level down from a describe block is the context block. The context block resides inside of a
describe block and provides a logical grouping of it blocks, which we’ll cover next. A context block,
like a describe block, has a name and a scriptblock definition that contains the code inside of the context
block. Context blocks are optional so, unlike describe blocks, they don’t have to be used to build tests.

describe -Tag 'Linux' 'Stop-MailDaemon' {

    ## Describe-specific code in here

    context 'When the server is down' {
        ## Context-specific code in here
    }
}

Context blocks not only allow you to organize tests better but they also allow you to define separate
scopes which you’ll learn a lot more about in the Mocking Introduction chapter.

It Blocks
It blocks are the next step down from the optional context block. It blocks can live in a describe block
or a context block. It blocks assert an expectation for your code. They contain the code that checks
the actual state against the expected state. It blocks are what most people are referring to when they
talk about a Pester test.
It blocks have a name and a scriptblock definition. The name of an it block is up to you, but it’s best
practice to give it a descriptive name of the state it’s testing. Be sure to check out the Test Design
Practices chapter for some guidance there.
For example, if you’re testing whether the code throws an exception, a good it block name would
be “throws an exception” or, if the test within checks to see if a script is executed, the name could
be “the script runs.” Keep your names simple and to the point.

describe -Tag 'Linux' 'Stop-MailDaemon' {

    it 'the script runs' {
        ## Code to compare the state to test with the real state
    }

    context 'When the server is down' {
        it 'throws an exception' {
            ## Code to compare the state to test with the real state
        }
    }
}

An it block (test) can have one of five possible results:

• [+] Passed: The test ran, and the expectation was met.
• [-] Failed: The test ran and the expectation was not met.
• [?] Inconclusive: The test ran but it did not pass nor did it fail.
• [!] Skipped: The test was not run because it was put in a skipped state.
• [?] Pending: The test was not run because it was empty or is pending.

We cover the skipped, pending and inconclusive states in depth in the Controlling Test
Results chapter.

Before and After Blocks


By default, Pester runs code from top to bottom. However, there may be times when you need code
to run before any it blocks, right after them, or in between. In this case, you would use one or more BeforeAll,
BeforeEach, AfterAll, or AfterEach blocks. These blocks run arbitrary code that executes at certain
times during the test run.
These blocks can contain whatever arbitrary code you need. They may set up some logging, create
a database connection, or whatever else you need. They can appear in a describe or a context block.
These blocks are dot-sourced into the context or describe block in which they appear, meaning they can
also do things like define variables, which would then be “visible” to any it blocks in the same
describe or context. It doesn’t matter where within the describe or context these blocks appear,
nor does it matter in which order they appear. However, for readability, it’s customary to put
BeforeAll/BeforeEach blocks at the top of the describe or context block, and to put AfterAll/AfterEach
blocks at the bottom.

describe -Tag 'Linux' 'Stop-MailDaemon' {

    BeforeAll {
        ## Code in here
    }
    BeforeEach {
        ## Code in here
    }

    it 'the script runs' {
        ## Code to compare the state to test with the real state
    }

    context 'When the server is down' {
        it 'throws an exception' {
            ## Code to compare the state to test with the real state
        }
    }

    AfterEach {
        ## Code in here
    }
    AfterAll {
        ## Code in here
    }
}

The names are self-explanatory.

• BeforeAll runs before any it block runs.
• AfterAll runs after all it blocks have run.
• BeforeEach runs before each it block runs—meaning, if you have five it blocks, BeforeEach
runs five times.
• AfterEach runs after each it block runs.

If you define BeforeEach/AfterEach in a describe block and also within a child context block, the
ones in the describe block run first, followed by the ones in the context block. For example:

describe -Tag 'Linux' 'Stop-MailDaemon' {

    BeforeEach {
        ## Runs first
    }

    it 'the script runs' {
        ## Code to compare the state to test with the real state
    }

    context 'When the server is down' {

        BeforeEach {
            ## Runs second
        }
        it 'throws an exception' {
            ## Runs third
        }
    }
}

Have you ever used try/catch/finally blocks in PowerShell? This is what Pester uses under the covers.
Pester places the Before* code before all others. Then, once the it block or describe block has
completed execution, Pester inserts the code contained in the AfterEach or AfterAll blocks into a finally
block for execution.
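
Conceptually, and glossing over a lot of Pester’s internals, you can picture each it block being wrapped in something like the following rough sketch (this is illustrative, not Pester’s actual source):

try {
    ## BeforeAll code runs once before the first it block,
    ## then BeforeEach code runs before every it block
    ## ...followed by the body of the it block itself
} finally {
    ## AfterEach (and, after the last test, AfterAll) code runs here,
    ## even if the it block's code throws a terminating error
}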

Summary
This chapter covered the major types of blocks that Pester uses, and some of the parameters Pester
provides to control how these blocks are executed. You’ll be building blocks like crazy
when working in Pester, so it’s critical to understand how each block works and what’s possible.
This chapter was just an introduction to the various types of blocks you’ll see in Pester. We didn’t
cover them all, but we did hit the major ones you’ll be working with extensively in this book. If you
see an opening and closing curly brace set in Pester, chances are you’re creating a scriptblock, which
comes with all the characteristics we covered in this chapter, like modularization and scope separation.
TestDrive and TestRegistry
Code often has to work with the file system or, if you’re on Windows, the Windows registry.
Because it’s not always a great idea to muck around with the real file system or registry when
you’re testing, Pester provides a virtual file system and registry in the form of PowerShell PS
drives called TestDrive and TestRegistry.
These PS drives are a way you can get and set file and registry information in a virtual environment
that does not make changes to the real file system or registry. These two features are nice to have if
you have a lot of code that writes file and registry information.
TestDrive and TestRegistry look and feel like a real file system and Windows registry but, unlike a
real file system or registry, Pester removes them after each test run.
Think of these features as a mocked file system and Windows registry.

Using TestDrive
TestDrive is used to test code that reads or modifies a file system. TestDrive is a useful feature if you’d
rather not create an actual mock, and it probably makes for a more accurate test when manipulating files
because you’re creating, reading, and modifying real files rather than creating mocks in the code
itself.
There are two ways to reference TestDrive and TestRegistry: either through a PS drive or a variable.

Referencing TestDrive as a PS Drive


TestDrive can be referenced using a PS drive. If you’re not familiar with PS drives, a PS drive is
typically a volume on a file system represented by a drive letter like C. To access these drives,
they are referenced precisely the same as you would a typical file system drive using the format
<Label>:\<File/FolderName>.

To demonstrate using TestDrive, let’s start with an example. Let’s say you’ve got a function that
takes a file as an input and modifies it in some manner. The function below accepts a file path as
a parameter and when run, gets the contents of that file, replaces a string and then overwrites the
original file contents with the replaced string version.

function Set-File {
    param(
        [string]$FilePath
    )

    $fileContents = Get-Content -Path $FilePath -Raw
    $replacement = $fileContents -replace 'foo','bar'
    Set-Content -Path $FilePath -Value $replacement -NoNewLine
}

Running this function requires a text file to exist somewhere ($FilePath). You’re building a unit
test here, so you don’t want to create a test file somewhere on the filesystem. Instead, you’ll use
TestDrive. To use TestDrive, you’ll first create a dummy file that represents what you expect to be
passed to the Set-File function. Once the dummy file is created, call the Set-File function and
then assert that it’s as expected as shown below.

describe 'Set-File changes the file provided' {

    $testFilePath = 'TestDrive:\testFile.txt'
    Add-Content -Path $testFilePath -Value 'foo bar foo bar' -NoNewLine
    Set-File -FilePath $testFilePath

    it 'replaces foo with bar' {
        (Get-Content -Path $testFilePath -Raw) | should -Be 'bar bar bar bar'
    }
}

Notice that you’re creating a file on TestDrive and passing that to Set-File.
If you run this test, you’ll find that it passes, yet no file exists on the file system of your test machine.
This is the beauty of TestDrive: you can test file manipulation without depending
on an actual file system drive to be in place before running the test. It also saves you the work of
cleaning up your test files after the test is complete.

Referencing TestDrive as a Variable


Most of the time you reference TestDrive as a PS drive. However, there are some edge cases where
this is not possible, such as when working with .NET objects (specifically System.Xml.XmlDocument
objects). These objects have trouble being read from the TestDrive PS drive. Luckily, there’s an
alternative method that comes from the way TestDrive is invoked. Under the covers, the TestDrive PS
drive is just a folder in a temp directory on the test machine, and the path of that folder is represented
by a $TestDrive variable which particular .NET objects understand better.
To use this method, replace TestDrive: references with $TestDrive. Below is the same
example as above, but using $TestDrive.

describe 'Set-File changes the file provided' {

    $testFilePath = "$TestDrive\testFile.txt"
    Add-Content -Path $testFilePath -Value 'foo bar foo bar' -NoNewLine
    Set-File -FilePath $testFilePath

    it 'replaces foo with bar' {
        (Get-Content -Path $testFilePath -Raw) | should -Be 'bar bar bar bar'
    }
}

TestDrive Gotchas
There are some behaviors of TestDrive, though, of which you’ll need to be aware. Even though
TestDrive seems simple enough, there are some situations you may find yourself in that don’t make
sense.
Describe Block Scoping
The most important concept to understand is that TestDrive is ephemeral, and that it is scoped
to each describe block. If you create a file on the TestDrive in one describe block, that file is not
available in another describe block.
Below is an example where you’re using TestDrive to create a file in one describe block and reference
it in another. You’ll find that this test passes because TestDrive:\testFile.txt does not exist in the
second describe block.

describe 'Set-File changes the file provided' {

    ## Create the test file in this describe block
    $testFilePath = 'TestDrive:\testFile.txt'
    Add-Content -Path $testFilePath -Value 'foo bar foo bar' -NoNewLine
    Set-File -FilePath $testFilePath

    it 'replaces foo with bar' {
        (Get-Content -Path $testFilePath -Raw) | should -Be 'bar bar bar bar'
    }
}

describe 'check to see if the test file still exists' {

    it 'test file does not exist' {
        Test-Path -Path 'TestDrive:\testFile.txt' | should -BeFalse
    }
}

Describing Set-File changes the file provided
  [+] replaces foo with bar 33ms

Describing check to see if the test file still exists
  [+] test file does not exist 51ms

Context Block File Operations


The behavior is a little different when context blocks are involved. Each context block can be thought
of as having a child scope of the parent describe block. This scoping means that any files created in
the root of a describe block are available to each context block, but files created in a context block
are not available to any other context block in that describe block.
Separating file operations in context blocks like this is useful when you need to separate certain file
operations within a single function. To demonstrate this, let’s take the Set-File example function
above and have the function first read the file and then replace a different bit of text depending on
the contents of the file.

function Set-File {
    param(
        [string]$FilePath
    )

    $fileContents = Get-Content -Path $FilePath -Raw

    if ($fileContents -match 'foo') {
        $replaceWith = 'bar'
    } else {
        $replaceWith = 'baz'
    }

    $replacement = $fileContents -replace 'foo', $replaceWith

    Set-Content -Path $FilePath -Value $replacement -NoNewLine
}

You now have two scenarios: when the file has the word foo in it and when the file does not. You need to
write tests for both of these scenarios but, since doing so requires two different files, you’ve chosen
to separate the tests into two different context blocks.

describe 'Set-File changes the file provided' {

    $testFilePath = 'TestDrive:\testFile.txt'

    context 'when the file contains the string foo' {

        Add-Content -Path $testFilePath -Value 'foo bar foo bar' -NoNewLine
        Set-File -FilePath $testFilePath

        it 'replaces foo with bar' {
            (Get-Content -Path $testFilePath -Raw) | should -Be 'bar bar bar bar'
        }
    }

    context 'when the file does not contain the string foo' {

        Add-Content -Path $testFilePath -Value 'hi baz hi baz' -NoNewLine
        Set-File -FilePath $testFilePath

        it 'replaces foo with baz' {
            (Get-Content -Path $testFilePath -Raw) | should -Be 'hi baz hi baz'
        }
    }
}

This test passes. It would have failed if you were referencing the same file at TestDrive:\testFile.txt
in both context blocks because Add-Content would have appended text to the file and its contents
would have been foo bar foo barhi baz hi baz.
However, you can reference that same file in either context block if you create the file in the describe
block’s scope but, as you’ll see, it makes the tests useless at this point because you’re not testing both
scenarios. You’re just testing the scenario of foo being in the file.

describe 'Set-File changes the file provided' {

    $testFilePath = 'TestDrive:\testFile.txt'
    Add-Content -Path $testFilePath -Value 'foo bar foo bar' -NoNewLine

    context 'when the file contains the string foo' {

        Set-File -FilePath $testFilePath

        it 'replaces foo with bar' {
            (Get-Content -Path $testFilePath -Raw) | should -be 'bar bar bar bar'
        }
    }

    context 'when the file does not contain the string foo' {

        Set-File -FilePath $testFilePath

        it 'replaces foo with baz' {
            (Get-Content -Path $testFilePath -Raw) | should -be 'hi baz hi baz'
        }
    }
}

Notice how the second test now fails because both context blocks are referencing the same file.

Describing Set-File changes the file provided

  Context when the file contains the string foo
    [+] replaces foo with bar 4ms

  Context when the file does not contain the string foo
    [-] replaces foo with baz 7ms
      Expected strings to be the same, but they were different.
      Expected length: 13
      Actual length: 15
      Strings differ at index 0.
      Expected: 'hi baz hi baz'
      But was: 'bar bar bar bar'
      -----------^
      17: (Get-Content -Path $testFilePath -Raw) | should -be 'hi baz hi baz'

Using TestRegistry
Like the TestDrive feature, Pester also has a way to work with a virtual registry called
TestRegistry. This feature works the same way that TestDrive does but as a Windows registry instead
of a file system.
Perhaps you have a function called Get-InstallPath like below that queries a registry key.

function Get-InstallPath($path, $key) {
    Get-ItemProperty -Path $path -Name $key | Select-Object -ExpandProperty $key
}

You want to build a test for this to ensure that when you call Get-InstallPath, it queries the expected
registry key. To test this, you could use TestRegistry to create a virtual registry key at the expected
path, call the function passing in that virtual registry key path and key name and then test to ensure
the function queried the expected key as shown below.

describe 'Get-InstallPath' {

    New-Item -Path TestRegistry:\ -Name TestLocation
    New-ItemProperty -Path 'TestRegistry:\TestLocation' -Name 'InstallPath' `
        -Value 'C:\Program Files\MyApplication'

    It 'reads the install path from the registry' {
        Get-InstallPath -Path 'TestRegistry:\TestLocation' -Key 'InstallPath' |
            Should -Be 'C:\Program Files\MyApplication'
    }
}

The TestRegistry feature also adheres to the same describe and context block scoping that TestDrive
does.
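
For example, here’s a minimal sketch (with a made-up key name) showing that a key created in one describe block isn’t available in another, just like files on TestDrive:

describe 'creates a registry key in this describe block' {
    New-Item -Path 'TestRegistry:\' -Name 'TestLocation'

    it 'can see the key it just created' {
        Test-Path -Path 'TestRegistry:\TestLocation' | should -BeTrue
    }
}

describe 'a second describe block gets a fresh TestRegistry' {
    it 'cannot see the key created in the first describe block' {
        Test-Path -Path 'TestRegistry:\TestLocation' | should -BeFalse
    }
}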

Summary
In this chapter, we covered the TestDrive and TestRegistry PS drives. You learned that if you have
code that reads or modifies the file system or Windows registry, these are two significant features
to perform your tests with.
Assertions
Assertions are a core component of Pester, as they are for any other unit testing framework.
Assertions let you assert that something expected happened. Assertions can be just about anything
like ensuring a function only allows a particular parameter type, demonstrating that something inside
of the code executed as it should have, or verifying that the code returned something specific.
Assertions are an integral component of Pester because they provide the method to compare what
should be with what is, whether you’re testing that a function outputs a string, that a script runs
a specific function, or that a parameter on a command in a module only accepts a particular set of
values. You assert these scenarios against expected output.
Assertions come in the form of things like the should assertion, which you will focus on in this
chapter, and the Assert-MockCalled and Assert-VerifiableMocks commands, among others. You’ll have to
wait until the Mocking chapter for the mock assertions, though.

Using the Should Assertion


The should keyword is used within it blocks to compare an object against an expected object. An
object can be anything from the kind of object a function returns, a file or registry key at a specific path,
an exception thrown or any number of other results.
The should assertion is always used inside of an it block. The item being tested is typically piped
to should via the pipeline which then uses an operator to compare the object being passed to the
should assertion with the expected end state.
The should assertion is a Pester concept that’s exactly like a common if/then construct.
For example, let’s say you’d like to ensure that one plus one equals two without Pester. You could
test this with a simple if/then construct, as shown below. This example returns $true if the result is
as expected or $false if it is not.

$result = 1 + 1
if ($result -eq 2) {
    $true
} else {
    $false
}

Let’s now assert that this is the case using the should assertion. To do that, you’d build a describe
block with a single it block.

describe 'Stupid-simple, unnecessary math test' {

    it '1+1 should be 2' {
        1 + 1 | should -Be 2
    }
}

This returns:

Describing Stupid-simple, unnecessary math test


  [+] 1+1 should be 2 13ms

This example is the most basic use of the should assertion. You perform an action, capture the result,
and then test that result against an expected item.

Understanding Should Operators


In the example above, you used a parameter to the should assertion or what’s technically known
within Pester as an operator. An operator is what defines what is tested. It is the “if” part of the
if/then construct.
There are many types of operators available with the should assertion.
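
If you’d like to see the full list on your own machine, recent Pester 4 releases also ship a Get-ShouldOperator command that enumerates every operator should understands (the exact list varies by Pester version):

## Lists the names and aliases of all registered should operators
Get-ShouldOperator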

Testing Simple Objects


Operators can loosely be categorized by the type of test they perform. The least complicated and most
straightforward are the tests that compare single, scalar items like strings, integers, and booleans.
BeNullOrEmpty
The most straightforward test I can think of is testing whether or not the output from a command
or a variable has a value assigned to it. The test is general and could be more granular, but if you
only care whether something exists, use the BeNullOrEmpty operator.
The BeNullOrEmpty operator tests whether the output the should keyword receives is an empty
string or a null value.
For example, if you believe a variable should contain some value, you can test that using
BeNullOrEmpty. Notice also that you’re using the Not keyword. This keyword can be used in conjunction with
any other should operator to test the negative.

$variable = 'somevalue'
$variable | should -Not -BeNullOrEmpty

Be and BeExactly
A common operator is be. The be operator compares scalar items like strings or integers. This is an
operator that you’ll probably be using a lot. The be operator takes a single item as input, like a string
or integer.
Below is an example of using the be operator to test whether the string word equals the string Word.
The below tests will succeed.
The be operator is not case-sensitive.

"word" | Should -Be "Word"


"word" | Should -Not -Be "phrase"

The BeExactly operator works the same way but is case-sensitive for string comparisons, meaning
that if you compare word to WORD, for example, the test will fail while the same comparison with the be operator will succeed.

"word" | Should -BeExactly "WORD"

Remember that you should only use Be and BeExactly to test scalar items. Even though
these assertions might “work” using arrays, string collections, hashtables, etc, they aren’t
intended to test these kinds of types.

BeLess and BeGreater


When specifically testing integers, you have a few operators at your disposal in the form of
BeLessThan, BeLessOrEqual, BeGreaterThan and BeGreaterOrEqual. As you might expect, they test
precisely what you’re thinking.
For example, if you somehow forgot that the number one is greater than the number zero, there’s a
test to remind us.

1 | Should -BeGreaterThan 0

Likewise, you can confirm that one is indeed greater than zero because if you use the Not keyword
in this example, or test whether one is less than zero, the tests below will fail.

1 | Should -Not -BeGreaterThan 0


1 | Should -BeLessThan 0

Perhaps you need not only to test whether a number is greater than or less than, but you also need
to test equality as well. For that, you can use BeLessOrEqual or BeGreaterOrEqual.

0 | Should -BeGreaterOrEqual 0
1 | Should -BeLessOrEqual 1

BeLike, BeLikeExactly
If you’ve ever used the like or clike PowerShell operators, you’ll be right at home with BeLike and
BeLikeExactly operators. These operators test wildcard scenarios.
For example, if you’d like to test whether the letter o is in the word word, use the BeLike operator.

'word' | Should -BeLike '*o*'

Or perhaps you need to perform a case-sensitive wildcard match. Use BeLikeExactly.

'WORD' | should -BeLikeExactly '*O*'

Using the BeLike and BeLikeExactly operators allows you to compare strings using the PowerShell
like and clike operators the Pester way!

Match, MatchExactly
Along the same vein as the BeLike and BeLikeExactly operators, Match and MatchExactly follow
the same premise. These two operators use regular expressions (regex) to match strings. The Match
and MatchExactly operators use the PowerShell operators match and cmatch.
Using the example above, perhaps you want to test that w is the first letter in word. Using the Match
operator, you can easily do this.

'word' | Should -Match '^w'

However, this test would also succeed if you were to use Word instead; Match is not case-sensitive. You
can be even more explicit about what's matched by using MatchExactly, which makes the test below fail.

'Word' | Should -MatchExactly '^w'

BeTrue and BeFalse


Finally, to round out the Be* single item tests, you have BeTrue and BeFalse. These operators let
you test boolean values.

$true | should -BeTrue


$false | should -BeFalse
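These operators are handy for commands that return booleans. A small sketch, assuming a Windows
system where the Windows directory exists:

Test-Path -Path $env:windir | Should -BeTrue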

We’ve covered all of the single item operators. Let’s now jump into testing groups of items or
collections.

Testing Collections
We can also perform tests around collections like arrays. First, let's check out how you can
test to see if an item is in an array. To do that, you have two operators at your disposal.

Contain
The contain operator uses the PowerShell operator contains to test if an array contains a particular
item. For example, perhaps you have an array with three elements inside (red, yellow and green)
and you know that green should be in the array.

@('red','yellow','green') | should -Contain 'green'

BeIn
Similar to the Contain operator, BeIn also tests whether an item is in an array. The BeIn and Contain
operators are functionally equivalent. The only difference is that the item being tested is on
the left, and the array is on the right.

'green' | should -BeIn @('red','yellow','green')

HaveCount
For the last collection operator, you have HaveCount. HaveCount checks that the number of items in
a collection matches an expected count. It's similar to doing something like this in
PowerShell.

@('red','yellow','green').Count -eq 3

In Pester, the above code would be expressed like:

@('red','yellow','green') | should -HaveCount 3
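In a real test, you might combine HaveCount and Contain to check both the size and the membership of
a command's output. Below is a rough sketch using a hypothetical Get-Color function that returns the
three colors above:

describe 'Get-Color' {
    it 'returns the three expected colors' {
        $colors = Get-Color
        $colors | should -HaveCount 3
        $colors | should -Contain 'green'
    }
}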

We’re now finished with testing collections. Let’s get a little more advanced and start building some
tests with objects.

Other Tests
As you’ve seen, Pester can test for strings, integers, or even collections of items, but these are simple
data structures. PowerShell is all about objects, and likewise, Pester needs to be capable of testing
rich objects.
As of now, there is only one object-specific operator.

BeOfType
All objects have a type derived from a class. There are times when you’ll need to ensure an object
is of a specific type. Instead of having to call the Get-Member command or invoke the GetType()
method on an object, Pester provides an easy way to test to see if an object is of a specific type
called BeOfType.
The BeOfType operator has a single argument which lets you provide a specific .NET type to test for.
For example, perhaps you have a variable that you’re expecting to be a string. You can test this by
specifying System.String as the type to test.

$variable = 'test'
$variable | Should -BeOfType 'System.String'

Above, you use the full dot notation of the .NET namespace and class. However, the BeOfType
operator also works with type accelerators, specified either as the string representation of the type
to test or by enclosing the type in square brackets.

'test' | Should -BeOfType 'string'


[pscustomobject]@{Property = 'foo'} | Should -BeOfType [pscustomobject]

HaveParameter
Perhaps you’ve built out an extensive suite of functions in a module. You have all of the functions
hand-crafted with the best parameters you can think of and want to ensure they stay that way. You
can use Pester’s HaveParameter operator to test them.
The HaveParameter operator lets you specify the parameter name as well as attributes that the
parameter should have. Using the HaveParameter operator, you can test parameter attributes
like the default parameter value, its type, and whether it's mandatory or not.
Let’s create an advanced function to work with some of these options.

function Get-Thing {
[CmdletBinding()]
param(
[Parameter(Mandatory)]
[string]$MyParam
)
}

Now that you have a function to work with, let's build some tests around it. You want to ensure
the parameter name stays the same, the type stays as a string and you want to ensure the parameter
is always mandatory. Below is an example of how to do that.

Get-Command -Name 'Get-Thing' | Should -HaveParameter 'MyParam' -Type 'string' -Mandatory

When piping content to the should keyword using the HaveParameter operator, you must first use
Get-Command to send a command information object so should understands how to process the input.

We can also test for a default value on a parameter. Perhaps you no longer need the MyParam
parameter to be mandatory, and instead, you define a default value like the below example.

function Get-Thing {
[CmdletBinding()]
param(
[Parameter()]
[string]$MyParam = 'defaultvalue'
)
}

You can test for this default value on the MyParam parameter using the HaveParameter operator's
DefaultValue argument.

Get-Command -Name 'Get-Thing' | Should -HaveParameter 'MyParam' -DefaultValue 'defaultvalue'

Gotcha: HaveParameter with Parameter Sets


In the above examples, you were not using parameter sets, and everything works great. However, be
aware that using HaveParameter with parameter sets can be a little challenging. We’re not going to
cover this in depth, but know that testing parameters in parameter sets is possible, just not with the
HaveParameter operator itself.
Instead, you’ll need to parse the command yourself. As an example, you’ve changed up the example
function to include a parameter set and to make the MyParam parameter mandatory as shown below.

function Get-Thing {
[CmdletBinding()]
param(
[Parameter(ParameterSetName = 'MyParamSet',Mandatory)]
[string]$MyParam
)
}

You can now build a test for this but, as you can see below, it's a lot less pretty. It works nonetheless.

$commandToTest = 'Get-Thing'
$paramToTest = 'MyParam'
$parameterSetName = 'MyParamSet'
$cmd = (Get-Command $commandToTest).Parameters[$paramToTest].Attributes
$parameter = $cmd | Where-Object {
$_.TypeId.Name -eq "ParameterAttribute" -and
$_.ParameterSetName -eq $parameterSetName
}
$parameter.Mandatory | should -Be $true
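If you want to keep that check inside a test suite, it can live in an ordinary it block. Below is a
minimal sketch, assuming the Get-Thing function above has already been dot sourced:

describe 'Get-Thing parameters' {
    it 'MyParam is mandatory in the MyParamSet parameter set' {
        $attributes = (Get-Command 'Get-Thing').Parameters['MyParam'].Attributes
        $parameter = $attributes | Where-Object {
            $_.TypeId.Name -eq 'ParameterAttribute' -and
            $_.ParameterSetName -eq 'MyParamSet'
        }
        $parameter.Mandatory | should -BeTrue
    }
}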

FileContentMatch
Throughout this chapter, we’ve been focusing on unit testing or just testing code. Pester is an
excellent unit-testing framework but is also just as good at testing environmental things like files
on disk, registry keys, services, and a lot more.
Pester has three should operators that let you test the contents of a text file in the form of
FileContentMatch, FileContentMatchExactly and FileContentMatchMultiline. All of these operators
read a non-binary file and use regular expressions to look for at least one match of the string inside
the file. This includes files that you create in your TestDrive.
If you’re looking for a string inside a file and don’t care about case sensitivity, use the FileContent-
Match operator. This should operator takes a string input representing the path to the text file being
tested as shown below. The test below will pass.

Add-Content -Path 'C:\foo.txt' -Value 'this is a test'


'C:\foo.txt' | Should -FileContentMatch 'test'

If you’re concerned about case-sensitivity, you can use the FileContentMatchExactly operator which
works the same way as shown below.

Add-Content -Path 'C:\foo.txt' -Value 'this is a test'


'C:\foo.txt' | Should -FileContentMatchExactly 'test'
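Since these operators read whatever path you hand them, they also work against files created in the
TestDrive covered earlier, which keeps the test from touching the real C: drive. A quick sketch:

describe 'reading a file with FileContentMatch' {
    it 'finds the expected string in the file' {
        Add-Content -Path 'TestDrive:\foo.txt' -Value 'this is a test'
        'TestDrive:\foo.txt' | Should -FileContentMatch 'test'
    }
}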

FileContentMatchMultiline
If you have a more complex pattern that needs to match over multiple lines, then you will need to
use the FileContentMatchMultiline operator. This can be important when the order of the contents
in the file matters. If you were updating an ini file, for example, you would want to test that your
changes happened in a specific section of the file.

$data = @"
[database]
server=127.0.0.1
"@

Add-Content -Path 'C:\foo.txt' -Value $data

$pattern = "\[database\]$([System.Environment]::NewLine)server"

'C:\foo.txt' | Should -FileContentMatchMultiline $Pattern

Exist
We’ve tested the contents of a file but what if you’re not sure the file is even there? In that case, you
need to check if the file exists. In PowerShell, you’d use the Test-Path command. In Pester-speak,
you can use the Exist should operator. When a path to a file (or folder) is passed to it, it will test if
that file (or folder) exists or not.
To demonstrate, you can pass a path to a file or folder to should using the Exist operator. The
examples below will fail.

'C:\FileDoesNotExist.txt' | Should -Exist


'C:\FolderDoesNotExist' | Should -Exist

Watch out when testing a path containing square brackets ([ or ]). To get Exist to return
an accurate result, you must escape the brackets with backticks, e.g., 'C:\file`[`].txt' |
Should -Exist.

Testing Errors
Sometimes you want to make sure your code throws an error. For example, feeding invalid input
parameters, performing an illegal operation, or some other situation will generate an error in the
real world, and you want to make sure that works in your tests. To test whether code throws an
exception via a hard-terminating (exception) error, Pester has the Throw operator.
The Throw operator behaves like all of the other operators in this chapter by passing some input to
should in the same scope. But when PowerShell throws an exception in the same scope the test is
running in, Pester may think the test code itself is throwing an exception. To remediate this, you
must put the code being tested in a scriptblock and pass the whole scriptblock to should.
In the below example, you’re explicitly throwing an exception with the error message of total fail.
In this example, the message could be anything, and the test would still pass.

{ throw "total fail" } | Should -Throw

The Throw operator can get more specific too. An exception, after all, is just another object to test
with various properties. Using the Throw operator you can not only check whether code throws
an exception, but you can also test the exception’s message, error ID and type as well. If the above
example succeeds, the below example will fail because it’s getting more specific. We’re testing the
error message.

{ throw "total fail" } | Should -Throw 'minor bump'

We can also test the exception’s FullyQualifiedErrorId using the ErrorId parameter. Pay special
attention to the FullyQualifiedErrorId that PowerShell gives you. If someone just uses throw with
a message, that message becomes the ErrorId. The ErrorId generated by cmdlets will often be
different from the error message that you see.

{ throw "total fail" } | Should -Throw -ErrorId 'total fail'

$nullPathErrorId = 'ParameterArgumentValidationErrorNullNotAllowed,Microsoft.PowerShell.Commands.TestPathCommand'
{ Test-Path $null } | Should -Throw -ErrorId $nullPathErrorId

Finally, you can use the ExceptionType parameter to test for the actual type of the exception. This is
generally preferred if you have to handle exceptions generated from binary cmdlets or .NET. The
ExceptionType parameter takes an actual type as its value. If you do not wrap the type in parentheses
as in the next example, PowerShell will pass it as a string, and you'll end up with a type
conversion error.

{ throw ([System.NotImplementedException]::new()) } | Should -Throw -ExceptionType ([System.NotImplementedException])

Pay attention to soft and hard-terminating (exception) errors when using the Throw
operator. Just because you see red text doesn’t necessarily mean PowerShell threw an
exception that will be caught by the Throw operator. If you want to test all types of errors,
you can force all errors to be exceptions by setting $ErrorActionPreference to Stop or
setting the parameter ErrorAction’s value to Stop on most commands.

Adding Custom Assertion Operators


We already have lots of assertions to work with but there are times when that is not enough. You
may have a special case that you need to test for frequently. Instead of writing the same test over
and over, you would create a custom assertion. The simple way to add a custom assertion is to create
a function that takes pipeline input and then throws an exception when the results do not match the
expected value. Here is a simple assertion that validates that an array or value contains green.
This method isn’t related to Pester, however. This method is a PowerShell function you can build to
behave like Pester.

function Test-HaveGreen {
[CmdletBinding()]
param(
[Parameter(ValueFromPipeline)]
$ActualValue
)
process {
if ($ActualValue -notcontains 'green') {
throw "Expected to find green but did not find it"
}
}
}

Once you have the function built, you can then pass in your collection of colors for validation.

'red','yellow','green' | Test-HaveGreen

Custom Should Assertions


We can take this idea one step further and add an actual custom should assertion for Pester that
you can use just like all the others that are built into Pester. The value parameter needs to be called
$ActualValue for this to work with Pester. You have to add a $Negate switch as a parameter so that
the user can call it with the -not operator. The function will also need to return an object instead of
throwing an exception. Once you have those pieces in place, you will need to register the assertion
with Pester. This next example shows all those pieces in place.

function HaveGreen {
[Cmdletbinding()]
param(
[Parameter(ValueFromPipeline)]
$ActualValue,

[Parameter()]
[bool]$Negate
)
process
{
$message = if ($Negate) {
'We expected not to find green but we did find it'
} else {
'We expected to find green but did not find it'
}
$result = [pscustomobject]@{
Succeeded = $Negate
FailureMessage = $message
}
if ($ActualValue -contains 'green') {
$result.Succeeded = -not $negate
}
$result
}
}

Pester has a command called Add-AssertionOperator that will register the new function with Pester
and make it available as a should assertion. Registering the custom assertion with Pester makes it
much more natural to work with.

PS> Add-AssertionOperator -Name 'HaveGreen' -Test $Function:HaveGreen

We can now use this custom assertion operator in an it block.

it 'should have green' {


'red','yellow','green' | Should -HaveGreen
}
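Because the function accepts a Negate value, the registered assertion also works with -Not:

it 'should not have green' {
    'red','yellow' | Should -Not -HaveGreen
}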

Because
We covered a lot of operators that are more than capable of failing tests if necessary, but the output
you received wasn’t always the most intuitive. Behind every test is some business logic reason the
test should pass that Pester has no idea about. Wouldn’t it be nice to provide the reason why a test is
expected to succeed rather than just seeing another red line scroll past when it fails? You’re in luck!
Pester has a Because parameter (call it an operator if you like) for exactly that.
When any test fails, you can use the Because parameter to indicate why it failed. This
message will then be displayed in the test results. The Because parameter can be used on
any test, but below is a good example.

describe 'my maths test' {


it '1 should obviously be 2' {
1 | should -be 2 -Because "I don't know maths"
}
}

When this test is run, and it fails, you’ll see the reason why it failed in the output, as shown below.

Describing my maths test
[-] 1 should obviously be 2 12ms
Expected 2, because I don't know maths, but got 1.

Summary
Assertions are a critical concept to understand in Pester. Assertions are where the most important
part of the test happens; the comparison of the state to be tested and an expected state. In this chapter,
we covered should assertions. Although should assertions are extremely common, they aren’t the
only kind of assertions. You'll see in both the Mocking Introduction and Advanced Mocking chapters that
a new type of assertion is available to you in that domain.
Test Cases
The whole purpose of a test is to confirm whether the code under test works as expected, or changes
an element of the environment in an expected way. What separates a good test from a great test is
coverage. It’s about writing a test that covers every likely scenario that users will throw at that piece
of code. Any test at all is good, but if you’ve got a test or test suite that covers every single possible
way that the code can run, and you confirm the output, you can have high confidence that the code
does what you intend it to do. Test cases in Pester are a great way to write test suites that cover
every possible way your code can run.
Test cases let you define a set of parameters to pass into an it block when you run the test based on
several different scenarios or situations. Scenario is an excellent synonym for test case.
A Pester test case is simply one or more hashtables of parameters passed to a test (it block) at once.
Pester then passes those parameters to the test one at a time running that test for every case.
Let’s say you have a function with a single parameter, ComputerName. This function has a logic path
in it that differentiates between computer names that end in SRV and those that do not. In other
words, the function behaves one way for some computer names and another way for others. Altering
a computer name also alters the code’s execution path, and therefore produces different output —
notice below that the code returns a modified string based on the value of ComputerName.

function Set-Computer {
param($ComputerName)

if ($ComputerName -like '*SRV') {


## Do something because it's a server
'Did that thing to the server'
} else {
## Do something else because it's probably a client
'Did that thing to the client'
}
}

PS> Set-Computer -ComputerName BIGSRV1


Did that thing to the server
PS> Set-Computer -ComputerName SMALLPC
Did that thing to the client

Creating Tests for Different Parameters


To set up a basic unit test for this function you need to have at least two It blocks: one to pass a
computer name ending in SRV, and one to pass a name ending in something else. You’d then use a
Should command to assert what the function returns.

describe 'Set-Computer' {
    it 'when a server name is passed, it returns the right string' {
        Set-Computer -ComputerName 'MYSRV' | should -Be 'Did that thing to the server'
    }

    it 'when anything other than server name is passed, it returns the right string' {
        Set-Computer -ComputerName 'MYCLIENT' | should -Be 'Did that thing to the client'
    }
}

This works, but it probably doesn’t test enough scenarios.

• What if someone tries to use a space in ComputerName?


• What if someone tries to use a special character in ComputerName?
• What if SRV is in the middle of ComputerName somewhere?

See where you’re going with this? There are a lot of possibilities, but to include each of these test
scenarios forces you to create an it block for each one. When there are only a few possibilities to test, this
isn’t a big deal; but if you’ve got a super-critical function, you’re going to want to test as many
scenarios as possible. You need to redesign this approach to be more scalable. Luckily you can, using
Pester test cases.

Using the TestCases Parameter


The It command in Pester has two parameters you use all the time (although you typically use
them positionally and not by name) called name and test.

it -name 'does that thing' -test { <# Code to test that thing #> }

The it command also has a much less commonly used parameter called TestCases that accepts an
array of hashtables.

it -name 'does that thing' -test { <# Code to test that thing #> } -TestCases @()

Each hashtable in that array represents a set of parameters for the function or script under test.
When the It block executes, it passes each hashtable inside of that array, one at a time, to the test
scriptblock if the items inside of that hashtable are defined as parameters inside the test scriptblock.
To demonstrate, let’s build a test case for the Set-Computer function. To do this, you first create an
array with a single hashtable inside that represents the parameters to pass into the test scriptblock.

$testCases = @( @{ ComputerName = 'FOOSRV' } )

Next, you add the TestCases parameter to an It block using the array of the single hashtable you have
just defined. Since you now want to use the parameters inside the test case, you replace the static
MYSRV reference as used in the last section with the ComputerName variable inside the hashtable.

it 'when a server name is passed, it returns the right string' -TestCases $testCases {
    Set-Computer -ComputerName $ComputerName | should -Be 'Did that thing to the server'
}

But wait. This approach won’t work, because the test scriptblock does not know what $ComputerName
is yet. To allow the keys in each of the hashtables to be passed into the it block, you must first define
a param block that includes each of the keys in the hashtable.

it 'when a server name is passed, it returns the right string' -TestCases $testCases {
    param($ComputerName)

    Set-Computer -ComputerName $ComputerName | should -Be 'Did that thing to the server'
}

Once this is done and the test is run, the It block executes Set-Computer and passes the expected
computer name of FOOSRV to it.
This behavior can seem a bit tricky at first until you realize that Pester is being super-simplistic.
Because the hashtable’s key was ComputerName, Pester looks for a ComputerName parameter inside
the script block. Simply because the hashtable key and the variable are spelled the same, Pester
knows what to do.
With this structure built, it’s now possible to add sets of parameters as hashtables to the $testCases
array, which Pester automatically passes to the It block. This approach makes managing these test
scenarios much easier.
You can now easily add each scenario to an array and reuse the same It block rather than creating
additional ones.

describe 'Set-Computer' {

    $testCases = @(
        @{ ComputerName = 'FOOSRV' }
        @{ ComputerName = 'FOO SRV' }
        @{ ComputerName = 'FOO*SRV' }
        @{ ComputerName = 'FOOSRVSRV' }
    )

    it 'when a server name is passed, it returns the right string' -TestCases $testCases {
        param($ComputerName)

        Set-Computer -ComputerName $ComputerName | should -Be 'Did that thing to the server'
    }
}

When run, the output will look like the below example:

Describing Set-Computer
[+] when a server name is passed, it returns the right string 8ms
[+] when a server name is passed, it returns the right string 2ms
[+] when a server name is passed, it returns the right string 2ms
[+] when a server name is passed, it returns the right string 3ms

Using the Test Name Token


When the above test runs, you see that even though a different parameter set is being passed to the
it block, the name of the test is the same for every run. The resulting output makes it difficult to
determine which test case failed. To remedy that situation, Pester lets you use one of the keys in
each hashtable to designate the name of the test. You can then add a placeholder to the test name
that Pester replaces at runtime with that key's value. That way, the Pester output will be
more meaningful, and you'll know what passed and what failed.
Using the example above, you’ve got four different test cases to use against the unit test. You define
a test name for each of these by adding a key to each hashtable called TestName, but it can be called
whatever you want.

$testCases = @(
    @{ ComputerName = 'FOOSRV'; TestName = 'SRV at the end' }
    @{ ComputerName = 'FOO SRV'; TestName = 'space in computer name' }
    @{ ComputerName = 'FOO*SRV'; TestName = 'asterisk in computer name' }
    @{ ComputerName = 'FOOSRVSRV'; TestName = 'two iterations of "SRV" in computer name' }
)

Once this is done, add the placeholder to the test name, enclosing it with < and >.

it 'when a server name is passed, it returns the right string: <TestName>' -TestCases $testCases {
    param($ComputerName)

    Set-Computer -ComputerName $ComputerName | should -Be 'Did that thing to the server'
}

Now when this test runs, it will replace <TestName> with the value of TestName in each hashtable,
giving you a clear picture of the status of each test case.

Describing Set-Computer
[+] when a server name is passed, it returns the right string: SRV at the end 4ms
[+] when a server name is passed, it returns the right string: space in computer name 2ms
[+] when a server name is passed, it returns the right string: asterisk in computer name 2ms
[+] when a server name is passed, it returns the right string: two iterations of "SRV" in computer name 14ms

Summary
This chapter’s purpose was to give you a clear idea of how the test case structure and syntax looks
in Pester. You’ll be using this a lot more in the chapters on code coverage, infrastructure validation,
and more. We’ll also provide some guidance on managing test cases, which can—in very complex
code—become a documentation nightmare pretty quickly if you’re not careful.
Mocking Introduction
Mocks are one of the most challenging concepts most Pester beginners struggle with. In a nutshell,
mocks are a unit testing concept that allows you to "replace" the code being executed with other
code. Mocks force PowerShell to do something you define instead of running the code it would
have. Mocks allow you to, in a sense, "neuter" other code to force it to run your temporary shim
code vs. running its actual code.
A mock in Pester is a simple construct. It only has four parameters (all of which we'll cover later) with
only two of those being required. However, how the mock behaves is the part most beginners get
confused about.
Within a script, function, or module you are testing, you typically have various command references.
Perhaps your code calls Get-Content to read a file, New-AdUser to create an Active Directory user,
or even Where-Object to filter a collection. Whatever the case, it calls various commands several
times. One aspect of writing useful tests for your PowerShell code is controlling that code. Being in
control of code execution means being able to execute only certain portions of the code to minimize
outside influence from other parts of the code. This has many benefits, as you’ll see throughout this
book.
At its most basic, a mock uses two required parameters. You need to define the command you're
mocking using the CommandName parameter and the code to run when the mocked command is
executed, in the form of a scriptblock, using the MockWith parameter.
A mock is defined at the describe block level. If you were creating a mock for a command called
Get-Employee, it would look something like the below example:

describe 'Tests something' {


mock -CommandName 'Get-Employee' -MockWith {
## Code to run in here in place of Get-Employee's actual code
}
}

Typically though, you will not see the parameter names themselves. Instead, you rely on positional
parameters that allow you to define the values for the parameters, like the below example.

describe 'Tests something' {


mock 'Get-Employee' {
## Code to run in here in place of Get-Employee's actual code
}
}

We could cover what mocks are further, but it’s much easier to learn by example.

Mock Walkthrough
Let’s say you have a function called Get-Employee and a CSV file at C:Employees.csv that has
information about new employees in each row.
To keep this demonstration simple, the CSV file looks only has one new employee in it as shown
below.

FirstName,LastName,UserName
Adam,Bertram,abertram

This Get-Employee function queries a CSV file with employees in each row. You have this function
in a script called MyActiveDirectoryFunctions.ps1.

function Get-Employee {
Import-Csv -Path C:\Employees.csv
}

Now that we’ve got the function built, you now need to develop a test that requires mocking.
Mocks are used to mainly create a context where code can be executed on any system and in
any environment regardless of the configuration. For example, Get-Employee will fail to run if the
C:Employees.csv file doesn’t exist on the computer where you run this test. This is a dependency
you need to account for by mocking.
Let’s build the test file; it’ll be called UserProvisioning.Tests.ps1 and create a single describe block
for the function. You’ll also create a single it block which tests to ensure the Get-Employee function
returns the expected users.

## Dot source my script to make the function available
. .\MyActiveDirectoryFunctions.ps1

describe 'Get-Employee' {
    it 'returns all expected users' {
        $users = Get-Employee
        $users.FirstName | should -Be 'Adam'
        $users.Lastname | should -Be 'Bertram'
        $users.UserName | should -Be 'abertram'
    }
}

What would happen now if you’d run this test and didn’t include any mocks on a system that doesn’t
have the CSV file on it? It would fail. This isn’t a good test if it fails right out of the gate! Remember
that we’re testing the Get-Employee function itself not the fact that the CSV file is there or not.

Describing Get-Employee
[-] returns all expected users 17ms
FileNotFoundException: Could not find file 'C:\Employees.csv'.
at Get-Employee, C:\MyActiveDirectoryFunctions.ps1: line 2

This test is worthless. It did not check that the function returned all expected users because it didn’t
even get the chance! Instead, it failed before it was able to perform the test because an external
dependency was not met. This is where mocking can come to the rescue. By creating a mock for the
Import-Csv command, you can essentially force Import-Csv to return some fake employees. This
way, you altogether remove the dependency on the file existing and, at the same time, can think up
all kinds of different configurations in which the C:\Employees.csv file may exist. You can then
build tests around each of these scenarios.
You’ll now create a mock for the Import-Csv command and execute the test again.

describe 'Get-Employee' {
mock 'Import-Csv' {
[pscustomobject]@{
FirstName = 'Adam'
LastName = 'Bertram'
UserName = 'abertram'
}
}
it 'returns all expected users' {
$users = Get-Employee
$users.FirstName | should -Be 'Adam'
$users.Lastname | should -Be 'Bertram'
$users.UserName | should -Be 'abertram'


}
}

Describing Get-Employee
[+] returns all expected users 42ms

It passed! The CSV file was never created yet you were still able to confirm that Get-Employee uses
Import-Csv to read a CSV file and that Get-Employee returns an object with properties of FirstName,
LastName and UserName.

When you mocked Import-Csv, you mimicked every property of the object it would have returned.
Since Import-Csv returns objects with properties dictated by each CSV row and, in this case,
each CSV row only had three properties (FirstName, LastName and UserName), you were returning
an object that looked exactly like what Import-Csv would have returned.
This was a simple example of using a mock. You were able to perform a test on the Get-Employee
function by mocking the dependency on the CSV file and on the Import-Csv command itself.

Mock Assertions
In the above walkthrough, you mocked the Import-Csv command to force that command reference
to return an object of our choosing. This is just one use-case of a mock. There may also be times
when you want to ensure certain commands are run or certain parameters of commands are used
when your code executes. You need to assert that command ran. You need to assert your mock was
called.
There are two ways that Pester can perform mock assertions, using the Assert-VerifiableMocks
and the Assert-MockCalled commands. Both commands test whether a command was called in a
particular manner, but each is best used in different contexts.
Using the example from the walkthrough with the Get-Employee function, when that function was
called, you assumed that it was reading C:\Employees.csv, but how do you know for sure? You
don't, since you didn't write a test. You simply mocked Import-Csv to force that command reference
to return an object of our choosing. What if someone maliciously or inadvertently changes the path
to the CSV? You should write a test to ensure the path always remains the same.
We'll use the same test you used earlier to ensure C:\Employees.csv is read every time the Get-Employee
function runs, as shown below. However, this time, instead of just removing the dependency on the
Import-Csv command and thus on the C:\Employees.csv file, we're going to write a test to ensure
Import-Csv was called using the C:\Employees.csv file as its Path parameter value.

describe 'Get-Employee' {
mock 'Import-Csv' {
[pscustomobject]@{
FirstName = 'Adam'
LastName = 'Bertram'
UserName = 'abertram'
}
}
it 'returns all expected users' {
$users = Get-Employee
$users.FirstName | should -Be 'Adam'
$users.Lastname | should -Be 'Bertram'
$users.UserName | should -Be 'abertram'
}
}

Assert-MockCalled
The Assert-MockCalled command asserts that a single mock was called during a
test run. You need to add an it block to the describe block that ensures Import-Csv reads the
C:\Employees.csv file. Before you go that far though, you at least know that Import-Csv should run.
The Assert-MockCalled command can be called anywhere in an it block and has only one
mandatory parameter: CommandName. CommandName is the name of the mocked command that you’re
asserting was called. Let’s now assert that the Import-Csv command was called at all by adding an
it block to the describe block.

it 'runs the Import-Csv command' {


Assert-MockCalled -CommandName 'Import-Csv'
}

Run the describe block and both tests should pass now.

Describing Get-Employee
[+] returns all expected users 25ms
[+] runs the Import-Csv command 101ms

The test was successful, but what exactly did this test? By default, Assert-MockCalled tests whether
a mock was called one or more times. It doesn’t care if it was called once or 100 times. When writing
tests, it’s best to be as accurate as possible. Since you know that if this function is executing as it
should, Import-Csv should only be called once; no more, no less. Let’s improve this assertion with
the Times parameter.

The Times parameter tells Assert-MockCalled to only pass the test if the mock was called the value
of Times or more. To demonstrate how the Times parameter works, we’re going to do just that by
forcing the test to fail by asserting that Import-Csv was called twice when, in fact, it should only be
called once.

it 'runs the Import-Csv command' {


Assert-MockCalled -CommandName 'Import-Csv' -Times 2
}

You’ll run the test again and notice the output below.

Describing Get-Employee
[+] returns all expected users 32ms
[-] runs the Import-Csv command 52ms
Expected Import-Csv to be called at least 2 times but was called 1 times
17: Assert-MockCalled -CommandName 'Import-Csv' -Times 2

The test failed and Pester told us why. We expected Import-Csv to be called twice, but it was only
called once. To get this test to pass, you’d just have to change the Times value to 1, and it’d work.
If you need to ensure a mock was called exactly X times, the method we discussed in this section
will not work. By default, Assert-MockCalled always passes if the mock was called X or more times.
Use the Exactly switch parameter on Assert-MockCalled to ensure the mock was called a specific
number of times making the it block example look like below:

it 'runs the Import-Csv command' {


Assert-MockCalled -CommandName 'Import-Csv' -Times 2 -Exactly
}

If you’re testing modules, Assert-MockCalled has a ModuleName parameter. If you’re attempting to


verify a mock that was created with a ModuleName parameter, your Assert-MockCalled must specify
the same ModuleName parameter and value (unless using InModuleScope).

Scopes
Assert-MockCalled also has a Scope parameter. This specifies the scope in which you want to check
for mocks. By default, Assert-MockCalled looks in the current context block (if there is one) or the
current describe block (if there’s no context). But you can specify -Scope 'Describe', for example,
to force the command to check the entire describe block, even if you’re in a context. Valid values are
it, context, and describe.
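For example, to assert against every call recorded anywhere in the describe block rather than just the
current context, you could write something like this sketch:

it 'ran Import-Csv somewhere in this describe block' {
    Assert-MockCalled -CommandName 'Import-Csv' -Scope 'Describe'
}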

Asserting Mocks by Parameters Used: ParameterFilter


Now that you have a good understanding of how to use Assert-MockCalled to test whether a mock
has been called, let’s get more granular and examine not only whether the mock has been called
at all but also if it was called with the expected parameters and parameter values. Asserting that a
mock has been called using specific parameters is done by using the ParameterFilter parameter.
In the example, you not only know that Get-Process should be called twice, but that it should
be called twice using the Name parameter and with values of foo and bar. To make the test more
precise, let’s add the ParameterFilter to Assert-MockCalled. The ParameterFilter parameter
works exactly as it did when you learned how to apply it to mocks directly. However, this
time, the parameter filter isn't dictating whether the mock is invoked at all; it's a condition that defines
the outcome of the assertion.

describe 'Test-Process' {
mock 'Get-Process'
## Run the function Test-Process

it 'should test for each process inside of the text file' {


$assertParams = @{
CommandName = 'Get-Process'
Times = 2
ParameterFilter = { $Name -in @('foo','bar') }
}
Assert-MockCalled @assertParams
}
}

In the example above, you’re using a parameter filter which ensures that the Name parameter value
passed to Get-Process is either foo or bar. Since you're testing two values at once, you've updated
the Times parameter to 2 as well. This ensures that Get-Process is called twice with only foo or bar
as values for the Name parameter. If it were called with any other value for Name, the test would fail.
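Tying this back to the earlier Get-Employee walkthrough, the same idea lets you assert that Import-Csv
was called with the expected Path value; a sketch:

it 'reads the C:\Employees.csv file' {
    Assert-MockCalled -CommandName 'Import-Csv' -Times 1 -Exactly -ParameterFilter { $Path -eq 'C:\Employees.csv' }
}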

Limiting Mock Assertions to a Single Instance: ExclusiveFilter


When using the ParameterFilter parameter on Assert-MockCalled, you're asserting that the
command was invoked with certain parameters. But what if you also want to ensure the command was
never called with parameters that don't match the conditions in ParameterFilter? If you need to assert
that a mock was called only with a particular set of parameters and never with any others, you can use
the ExclusiveFilter parameter. This parameter is a stricter form of ParameterFilter.
Using ExclusiveFilter prevents you from having to create two Assert-MockCalled references:
one accepting the matching calls and one negating every other call.

Assert-MockCalled SomeCommand -Times 1 -ParameterFilter { $something -eq $true }


Assert-MockCalled SomeCommand -Times 0 -ParameterFilter { $something -ne $true }

For example, perhaps you have a simple function that calls Get-Content. For whatever reason, you
just want to ensure that Get-Content is called once using C:\ShouldCall.txt and never called with
any other Path value.

## Function
function Do-Something {
    param($Path)
    Get-Content -Path $Path
    Get-Content -Path 'C:\SomeOtherPath.txt'
}

## Test
describe 'Do-Something' {
    mock 'Get-Content'

    it 'should not try to read the C:\ShouldNotCall.txt file' {

        Do-Something -Path 'C:\ShouldCall.txt'

        Assert-MockCalled -CommandName 'Get-Content' -ExclusiveFilter { $Path -eq 'C:\ShouldCall.txt' }
    }
}

When this test is invoked, you’ll see that the test fails as shown below.

Describing Do-Something
[-] should not try to read the C:\ShouldNotCall.txt file 68ms
Expected Get-Content to only be called with with parameters matching the specified filter, but 1 non-matching calls were made at <ScriptBlock>, : line 16
16: Assert-MockCalled -CommandName 'Get-Content' -ExclusiveFilter { $Path -eq 'C:\ShouldCall.txt' }

Notice above that Get-Content was called using the correct Path parameter value, but the function also
called Get-Content with another Path value. Since you are using the ExclusiveFilter parameter, Pester
noticed this and failed the test. If you were using ParameterFilter, that test would have succeeded.

Assert-VerifiableMocks
Another method to assert that a mock was called is to use the Assert-VerifiableMocks command.
This command is similar to Assert-MockCalled but uses an approach at asserting called mocks that
is like “tagging.” To use this command, you must “tag” each mock that’s created with a Verifiable
parameter on the mock declaration. This tells Pester to keep track of when this mock is used because
you intend to perform some assertions later on.
Assert-VerifiableMocks is typically used as a shortcut to verify multiple mocks at once. With one
command, you can assert all of the mocks you created instead of using Assert-MockCalled and
defining a mock assertion for each mock. Unfortunately though, Assert-VerifiableMocks isn't quite
as extensive as Assert-MockCalled because it doesn't let you specify the number of times a mock
should be called or accept a parameter filter of its own. Any parameter filtering has to be applied to the
mock itself instead of at the mock assertion.
Because Assert-VerifiableMocks doesn’t have a Times parameter, you can’t get as granular as you
can when using Assert-MockCalled, but you can still use a parameter filter applied to the mock
itself, as you can see below.
In this example, you’ve “tagged” Get-Process as needing to be verified when Assert-VerifiableMocks
is called, and applied a parameter filter to ensure the mock is not used unless the Name value is either
foo or bar. This is the same functionality as applying the parameter filter to the mock assertion with
Assert-MockCalled.

describe 'Test-Process' {
mock 'Get-Process' -Verifiable -ParameterFilter { $Name -in @('foo','bar')}

## Run the function Test-Process


it 'should test for each process inside of the text file' {
Assert-VerifiableMocks
}
}

Pester provides all of the mock assertions you need to test just about any scenario you’re after.

Summary
Mocks are by far the hardest concept in Pester to wrap your head around but are critical to creating
unit tests. Review the chapter until the principles sink in. I can't stress enough how valuable
this knowledge is. This chapter was just a warmup to get you familiar with mocking essentials. We
cover a lot more about mocking in the Advanced Mocking chapter.
Code Coverage
Sometimes it can be tough to figure out if you’ve tested everything, especially with long, complex
code. Pester’s code coverage features help you figure that out by showing the amount of code that
actually executed in a given test suite.
Unlike what you may expect of code coverage, it is not a measure of how much of the code has
some kind of test associated with it. That is, when Pester runs a test suite for a script, it’s completely
possible to show 100% code coverage yet have no tests at all! As long as all of the script was executed
inside of the test somewhere, and the code was allowed to complete, that’s 100% code coverage!

Why Code Coverage?


Code coverage is a measure of how much of your code your tests actually execute. That said, 100%
coverage isn’t necessarily your goal every time. You might hit 90% coverage, look at the missing
10%, and decide that’s good enough. The level of coverage is completely relative and up to you. If
the risk of not writing a test for a particular scenario isn't worth the time to eke out that extra 10%
then 90% is just fine.
There is definitely a trade-off, because you don’t want to get into situations of having to write
enormously complex test suites just to handle an edge case that you can’t even be sure will happen
in the real world. You have to apply some intelligence and analysis to the subject — it’s not all
about unlocking the “100% Coverage Achievement” for your code.
Think of code coverage as a way of double-checking yourself. It is an automated means for Pester
to make sure you didn't forget to write tests for certain pieces of code.

Using Pester to Measure Code Coverage


To run a code coverage test, you will use the CodeCoverage parameter on the Invoke-Pester
command and specify the path to your test file and the path to the PowerShell script you’ll be
running those tests against.
For common scenarios, you would initiate code coverage reporting using the following syntax. This
is assuming MyScript.Tests.ps1 is the test suite designed for the code in MyScript.ps1.

PS> Invoke-Pester -Script .\MyScript.Tests.ps1 -CodeCoverage .\MyScript.ps1
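If the code under test spans more than one file, the CodeCoverage parameter also accepts an array of
paths. A sketch with made-up file names (MyHelperFunctions.ps1 is only for illustration):

PS> Invoke-Pester -Script .\MyScript.Tests.ps1 -CodeCoverage .\MyScript.ps1, .\MyHelperFunctions.ps1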

Let’s start with an example of how to measure code coverage with Pester. Perhaps you have a script
that contains a few different functions. What the functions actually do is irrelevant for our purposes
here.

1  function Do-Thing {
2      Write-Output 'I did the thing'
3  }
4
5  function Get-Thing {
6      Write-Output 'I got the thing'
7  }
8
9  function Set-Thing {
10     Write-Output 'I set the thing'
11 }

You’d now like to build a set of tests for this script. The test script might look something like the
below, and would be located in the same directory as DoStuff.ps1.

## Dot source the script
. "$PSScriptRoot\DoStuff.ps1"

describe 'Get-Thing' {
    it 'should return "I got the thing"' {
        Get-Thing | should -Be 'I got the thing'
    }
}

Notice that you’ve created a test only for the Get-Thing function when two other functions, Do-Thing
and Set-Thing, exist in the same script. You haven’t created tests for these functions yet. Those two
functions each contain a single command that simply executes the Write-Output command.
If you run this test and also include the script being tested, you can see that it knows that the lines
2 and 10 did not execute.

PS> Invoke-Pester C:\DoStuff-GetThing.Tests.ps1 -CodeCoverage C:\DoStuff.ps1

Describing Get-Thing
[+] should return "I got the thing" 632ms

Tests completed in 632ms


Passed: 1 Failed: 0 Skipped: 0 Pending: 0 Inconclusive: 0

Code coverage report: Covered 33.33 % of 3 analyzed commands in 1 file.

Missed commands:

File Function Line Command


---- -------- ---- -------
DoStuff.ps1 Do-Thing 2 Write-Output 'I did the thing'
DoStuff.ps1 Set-Thing 10 Write-Output 'I set the thing'

Now notice what happens when you create a test for each of the other functions, but don’t include
any assertions. This is essentially just executing the functions without actually comparing the result
to an expected output. In other words, this really isn't a "test" at all!

## Dot source the script
. "$PSScriptRoot\DoStuff.ps1"

describe 'Get-Thing' {
    it 'should return "I got the thing"' {
        Get-Thing | should -Be 'I got the thing'
    }
}

describe 'Do-Thing' {
    it 'should return "I did the thing"' {
        Do-Thing

        ## Notice no should assertion here. Whoops!
    }
}

describe 'Set-Thing' {
    it 'should return "I set the thing"' {
        Set-Thing

        ## Notice no should assertion here. Whoops!
    }
}

Each test shows success and the code coverage report shows 100%, but in actuality, the only thing
really tested was the output of Get-Thing.

PS> Invoke-Pester C:\DoStuffAll.Tests.ps1 -CodeCoverage C:\DoStuff.ps1

Describing Get-Thing
[+] should return "I got the thing" 259ms

Describing Do-Thing
[+] should return "I did the thing" 176ms

Describing Set-Thing
[+] should return "I set the thing" 124ms

Tests completed in 560ms


Passed: 3 Failed: 0 Skipped: 0 Pending: 0 Inconclusive: 0

Code coverage report: Covered 100.00 % of 3 analyzed commands in 1 file.



This is a great example that shows you that code coverage does not mean code tested but rather
code executed.

Getting Granular with Code Coverage


Not only can you compare an entire script against a test suite to generate a code coverage report,
it’s also possible to compare elements as small as a single function, or even a single line. Instead of
specifying a script for the CodeCoverage parameter, Pester lets you specify a hashtable indicating a
script path, along with a function name or a start and an end line.

Scoping Code Coverage to a Function


Using the example above, let’s say you’re aware that you’ve only written a test for the Get-Thing
function, but you’d still like to run a code coverage report. You’d like to limit Pester to just the code in
the Get-Thing function. To limit Pester, you’ll specify a hash table for the CodeCoverage parameter,
containing keys for both the script path and the function to which you’d like to limit the coverage
report.

PS> Invoke-Pester C:\DoStuff-GetThing.Tests.ps1 -CodeCoverage @{ Path = 'C:\DoStuff.ps1'; Function = 'Get-Thing' }

Describing Get-Thing
[+] should return "I got the thing" 125ms

Tests completed in 125ms


Passed: 1 Failed: 0 Skipped: 0 Pending: 0 Inconclusive: 0

Code coverage report: Covered 100.00 % of 1 analyzed command in 1 file.



You can see that you achieved 100% now instead of 33.33%. This is because you have scoped Pester
down to look only at the particular function for which you’ve built a test.

Scoping Code Coverage to a Start/End Line


As with a function, Pester lets you limit the scope of code coverage by specifying a start and an
end line. This is useful when you want to check coverage of a specific section of a script. Instead of specifying the Get-Thing function
below, you’re specifying the lines in the script that contain the entire function. Notice that the result
is the same.

PS> Invoke-Pester C:\DoStuff-GetThing.Tests.ps1 -CodeCoverage @{ Path = 'C:\DoStuff.ps1'; StartLine = 5; EndLine = 7 }

Describing Get-Thing
[+] should return "I got the thing" 125ms

Tests completed in 125ms


Passed: 1 Failed: 0 Skipped: 0 Pending: 0 Inconclusive: 0

Code coverage report: Covered 100.00 % of 1 analyzed command in 1 file.

Summary
Code coverage is a great feature to help you notice gaps in test coverage. It keeps us humans in check
by ensuring our code has all of the appropriate tests built. Even though the code coverage feature
doesn’t actually represent test coverage, it’s still a valuable tool to use when ensuring your code is
completely tested.
Gherkin
Built right into Pester itself is a completely different language for designing and structuring tests
known as Gherkin⁷. In this chapter, you'll be introduced to the Gherkin language,
how to create tests with Gherkin, and how to integrate familiar Pester concepts with Gherkin.
So far, you’ve solely used Pester’s DSL to define tests. You’ve learned, built and executed tests solely
with Pester and how it thinks a test should be written. But Pester is not the only test language
around. In fact, software developers have been using dozens of different languages for decades. For
PowerShell, Pester was the only game in town. That is until Gherkin support came along in Pester.
Gherkin is a business readable, domain specific language⁸. It’s decouples the business logic from the
actual code being tested. It can be used with any development language, including PowerShell and
Pester.
Pester was originally built for unit tests, but it can also be adapted to integration, acceptance and
infrastructure testing. Gherkin is the opposite. The Gherkin language seems to be more suited for
infrastructure testing rather than unit testing. You’ll get more in tune with this as you go along.

Files
When using Pester, you’re used to creating lots of Tests.ps1 PowerShell scripts. Everything that
makes up a test is included in these files, from the test specifications themselves such as describe,
context and it blocks to all the ways you can assert conditions like the should assertion. This usually
works, but perhaps you’d like to separate out specifications like “in this scenario, do this thing” or
“when this condition is true, test this…”. In Pester, these types of scenarios are defined inside of the
Tests.ps1 file under a context block. Gherkin support in Pester approaches the task differently.
Instead of grouping everything a test needs in a single file, Gherkin requires two files: a .feature and
a .Steps.ps1 file. The feature file consists of the business logic (read: no actual code here). A feature
file just contains plain-text descriptions of how the test will be structured. The actual assertions for
the tests that compare actual vs. expected state are in the steps file. It's recommended to keep both
files in the same directory as the script or module you'll be testing against.

Features
The feature file has a single feature at the top. A feature is something that the software should do.
Think of a Gherkin feature like a Pester it block. A feature is a description of what the code should
do, along with some optional data about it.
⁷https://github.com/cucumber/cucumber/wiki/Gherkin
⁸http://martinfowler.com/bliki/BusinessReadableDSL.html

A feature is represented in the feature file with the word Feature followed by a colon and then a
short description of what’s being tested. For example, you have a script that adds the Web-Server
Windows feature to a remote server called InstallWebServer.ps1. This script is only supposed to work
when SQL server is not installed. It looks like this:

param($ComputerName)

if (-not (Get-Service -ComputerName $ComputerName -Name MSSQLServer -ErrorAction Ignore)) {
    Invoke-Command -ComputerName $ComputerName -ScriptBlock { Add-WindowsFeature -Name 'Web-Server' } -ErrorAction Ignore
}

To perform some tests for this script, you create a file called InstallWebServer.feature in the same
folder as the script, with the same base name. In this file, you create a Gherkin feature that looks something
like this:

Feature: Enables the Web-Server Windows feature

Below the feature declaration is where you can add optional information about the action being
tested, such as a longer description of the action, various business rules that govern the scope, etc.
Any additional information would be placed under the feature declaration:

Feature: Enables the Web-Server Windows feature


Computer will not have SQL server installed

Notice in the example above that the additional information lines are indented. This is required by
Gherkin. It doesn’t matter if you use spaces or tabs. The lines and other entities in the feature file
must follow the same indentation pattern.

Scenarios
Once you’ve got the feature defined, it’s time to create one or more scenarios. A scenario is similar
to a Pester context block. It describes a predictable situation in which the code may be executed.
Think of a scenario as starting with the when word. Scenarios like when the server is offline, when
the server is online, when the network is down, etc. are great examples. You’ll find later that a when
keyword is available to you. Don't confuse the when keyword with the scenario itself.
Scenarios appear in the feature file underneath the feature. Using the example above, you add a
couple of scenarios to the feature. Each scenario has an empty line separating it from the feature
and other scenarios.

Feature: Enables the Web-Server Windows feature


Computer will not have SQL server installed

Scenario: The remote computer isn't supported

Scenario: The remote computer is supported

Steps
Once you’ve got your feature and scenario(s) defined, it’s time to add some steps. Creating a step
with Gherkin in Pester consists of two separate steps (no pun intended):

1. Define the step in the feature file


2. Define and create the step code in the corresponding Steps.ps1 file

Gherkin steps are like detailed tests of the scenario. Each step is a statement that defines a subset of
the bigger scenario. Steps are similar to it blocks that exist inside of a Pester context block.
In the feature file, each step must be defined underneath the scenario that it applies to, but cannot
be free-form text the way the feature and scenarios are. A step definition always includes one of the
keywords:

• Given (Puts the system into a known state)


• When (Describes the key action the user performs)
• Then (Observe the outcome)
• But (Exclude)
• And (Add)

These keywords are required because, as you'll see later, they represent PowerShell commands that
will be executed inside of the steps script. The keywords probably make intuitive sense to you. For
example, using the scenarios defined above, you could break each one down into individual tests by
adding steps to it.

Feature: Enables the Web-Server Windows feature

    Computer will not have SQL server installed

Scenario: The remote computer isn't supported

    Given the remote computer is running SQL Server
    And the unsupported computer does not have the web server feature installed
    When the feature is attempted to be added to the unsupported computer
    Then no change to the computer happens

Scenario: The remote computer is supported

    Given the remote computer is not running SQL Server
    And the supported computer does not have the web server feature installed
    When the feature is attempted to be added
    Then the web server feature is added

Think of the Given keyword as a Pester context block and an it block combined. The When
keyword can be thought of as similar to a Pester describe block, while the Then, And, and But
keywords are more like the assertions found in a Pester it block.

Step Definitions
Once each step has been defined in the feature file, it's time to bring in some PowerShell
and Pester syntax and "link" up some code execution to each of the steps defined in the feature file.
You do this by creating step definitions in a Steps.ps1 script that uses the same name as the feature
file; in this case, the steps script will be called InstallWebServer.Steps.ps1.
The steps script is where you plug in the code to perform the tests. Each step definition in this
script starts with the same keyword used in the feature file, followed by a string matching
the exact step name given in the feature file.

Always ensure that the step defined in the feature file has the same name as in the steps
file. Otherwise, the feature file will be read, but Pester will not be able to find the corresponding
set of steps.

Opening up the InstallWebServer.Steps.ps1 file, you build the steps defined under each scenario in
the feature file.

Given 'the remote computer is running SQL Server' {
}

And 'the unsupported computer does not have the web server feature installed' {
}

When 'the feature is attempted to be added to the unsupported computer' {
}

Then 'no change to the computer happens' {
}

Given 'the remote computer is not running SQL Server' {
}

And 'the supported computer does not have the web server feature installed' {
}

When 'the feature is attempted to be added' {
}

Then 'the web server feature is added' {
}

Running Gherkin Tests


At this point, you could execute the test and see what happens. You use Invoke-Gherkin while in the
same folder. Invoke-Gherkin finds all the feature and steps files in the current folder automatically.

PS> Invoke-Gherkin

Testing all features in 'C:\tests'

Feature: Enables the Web-Server Windows feature
    Computer will not have SQL server installed

  Scenario: The remote computer isn't supported
    [+] Given the remote computer is running SQL Server 0ms
    [+] And the unsupported computer does not have the web server feature installed 0ms
    [+] When the feature is attempted to be added to the unsupported computer 0ms
    [+] Then no change to the computer happens 0ms

  Scenario: The remote computer is supported
    [+] Given the remote computer is not running SQL Server 0ms
    [+] And the supported computer does not have the web server feature installed 0ms
    [+] When the feature is attempted to be added 0ms
    [+] Then the web server feature is added 0ms

Testing completed in 1ms
Scenarios Passed: 2 Failed: 0 Steps Passed: 6 Failed: 0 Skipped: 0 Pending: 0 Inconclusive: 0

You can see that you’ve got a lot of false positives because you haven’t even added any code in the
steps. Watch out for this! Let’s add some code to the steps with some Pester should assertions to
perform the tests.

Given 'the remote computer is running SQL Server' {
    Get-Service -ComputerName 'labsql' -Name MSSQLServer -ErrorAction Ignore | should -Not -BeNullOrEmpty
}

And 'the unsupported computer does not have the web server feature installed' {
    (Invoke-Command -ComputerName 'labsql' -ScriptBlock { Get-WindowsFeature -Name 'Web-Server' }).Installed | should -Be $false
}

When 'the feature is attempted to be added to the unsupported computer' {
    & "$PSScriptRoot\InstallWebServer.ps1" -ComputerName 'labsql'
}

Then 'no change to the computer happens' {
    (Invoke-Command -ComputerName 'labsql' -ScriptBlock { Get-WindowsFeature -Name 'Web-Server' }).Installed | should -Be $false
}

Given 'the remote computer is not running SQL Server' {
    Get-Service -ComputerName 'DC' -Name MSSQLServer -ErrorAction Ignore | should -Be $null
}

And 'the supported computer does not have the web server feature installed' {
    (Invoke-Command -ComputerName 'DC' -ScriptBlock { Get-WindowsFeature -Name 'Web-Server' }).Installed | should -Be $false
}

When 'the feature is attempted to be added' {
    & "$PSScriptRoot\InstallWebServer.ps1" -ComputerName 'DC'
}

Then 'the web server feature is added' {
    (Invoke-Command -ComputerName 'DC' -ScriptBlock { Get-WindowsFeature -Name 'Web-Server' }).Installed | should -Be $true
}

Now run Invoke-Gherkin again and see how the tests fare.

PS> Invoke-Gherkin

Testing all features in 'C:\tests'

Feature: Enables the Web-Server Windows feature
    Computer will not have SQL server installed

  Scenario: The remote computer isn't supported
    [+] Given the remote computer is running SQL Server 288ms
    [+] And the unsupported computer does not have the web server feature installed 1.54s
    [+] When the feature is attempted to be added to the unsupported computer 138ms
    [+] Then no change to the computer happens 977ms

  Scenario: The remote computer is supported
    [+] Given the remote computer is not running SQL Server 29ms
    [+] And the supported computer does not have the web server feature installed 923ms
    [+] When the feature is attempted to be added 1.14s
    [+] Then the web server feature is added 1.13s

Testing completed in 6.16s
Scenarios Passed: 0 Failed: 2 Steps Passed: 5 Failed: 3 Skipped: 0 Pending: 0 Inconclusive: 0

Summary
You can see that Gherkin is similar to Pester but does have its differences. Using Gherkin with the
Pester PowerShell module, you’re able to write your tests in a popular testing language. This allows
you to share your tests with others who also use Gherkin!
Part 2: Using Pester
In Part II, we’re going to get down to business. By now, you’ve graduated from the 101-level
information and learned what Pester is and how to use it in a basic manner. It’s now time to get
down to applying that knowledge to real-world use cases.
This part will be mostly hands-on demonstrations of how to use Pester in various scenarios. We’ll
cover how to use Pester to solve real problems, go over best practices and show you how to go from
a Pester newbie to a Pester rockstar!
Controlling Test Results
When most people think of a test, they think of a pass or a fail. Granted, this is generally the correct
assumption even in software testing, but it doesn't have to be. There are other "in between" states
that a test can be in that aren't so black and white. For example, what if the outcome
of a test depends on another test, and that test hasn't run yet? The dependent test is in a limbo state.
What about when a test can't decide whether to pass or fail? It sounds odd, but it does happen!
In this chapter, you're going to cover these "interim" states and see when they are necessary and
in what context they might be useful.

Pending and Skipped States


An it block generally returns a pass or fail state, but there are times when you may not want the
test to run at all. In this case, you have three options: you can set the test to a skipped,
pending, or inconclusive state. Each of these states skips over the test in the same way, but when
the test is run, the output will indicate which state the test is in.
An it block state can be set to pending or skipped using the Skip and Pending parameters. These
two parameters are mutually exclusive; an it block cannot be both skipped and pending at once.
Using the Pending state is common when you're scaffolding out tests to come back to later; the
it block is pending some other action before it can be finished. The Skip parameter is
more generic in that it simply skips the test without implying why. An it block can be skipped for
a plethora of reasons.
Below is an example of using the Skip and Pending parameters. Notice that you pass the Skip or
Pending parameter directly to each it block.

describe 'My tests' {
    it 'displays the number 1' {
        1 | should -Be 1
    }

    it -Skip 'Checks the database' {
        Test-DBConnect | should -BeOfType System.Data.SqlClient.SqlConnection
    }

    it -Pending "Isn't done yet" { }
}

When run with Invoke-Pester, your output will look like the below example output.

PS> Invoke-Pester -Path MyTests.Tests.ps1

Executing script Y:\Downloads\MyTests.Tests.ps1

Describing My tests
  [+] displays the number 1 5ms
  [!] Checks the database, is skipped 0ms
  [?] Isn't done yet, is pending 0ms

Notice that even though both the Checks the database and Isn't done yet it blocks were skipped
over, the output is different, giving you an indication of the state each it block is in.

Another related state is inconclusive. The inconclusive state performs the same function as the
skipped and pending states but is yet another way to "tag" the reason a test should be skipped over.
However, unlike skipped and pending, the inconclusive state cannot be set by a parameter and must
be set within the it block using the Set-ItResult command. Using the Set-ItResult -Inconclusive
command or the legacy Set-TestInconclusive command will put an it block into the inconclusive
state as shown below.
describe 'My tests' {
    it 'should be in limbo' {
        Set-ItResult -Inconclusive
    }
}

This test will return:

Describing My tests
  [?] should be in limbo, is inconclusive 12ms

Pester allows you to skip over tests in many different ways. How you choose to skip over these tests
is up to you.
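
If you prefer to make the skip decision inside the test body itself, Set-ItResult can help there too. Below is a minimal sketch, assuming your Pester version's Set-ItResult also supports the Skipped and Because parameters; the database server name and the Test-DBConnect helper are hypothetical stand-ins.

describe 'My tests' {
    it 'checks the database' {
        ## Skip at runtime when the (hypothetical) database server can't be reached
        if (-not (Test-Connection -ComputerName 'dbserver01' -Count 1 -Quiet)) {
            Set-ItResult -Skipped -Because 'the database server could not be reached'
        }
        Test-DBConnect | should -BeOfType System.Data.SqlClient.SqlConnection
    }
}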

Setting Pending and Skipped States Dynamically


Typically, it blocks are set to skipped or pending by statically defining them as such in the test file
itself. However, it's also possible to set these states dynamically. This technique comes in handy when
tests have dependencies that must be met before running.

For example, PowerShell Core has automatic variables that define what operating system family
it is running on called $IsMacOs, $IsLinux and $IsWindows. These variables return True or False
depending on what operating system PowerShell Core is running on. Knowing this information,
you can build logic into the tests to only run tests if PowerShell is running on a particular operating
system.
One example of how to set pending and skipped states dynamically is to define it block parameters
in a describe block's BeforeAll block. Using splatting, you're able to save parameters for a command
in a hashtable and pass those parameters to the command later.
Below you can see an example where you define the Skip parameter in the BeforeAll block only
if PowerShell is not running on a Windows computer. Then, when the it block is executed, it uses the
parameters you defined earlier.

describe 'should only run on Windows' {
    BeforeAll {
        $itParams = @{}
        if (-not $IsWindows) {
            $itParams.Skip = $true
        }
    }

    context 'Block A' {
        it @itParams 'This only runs on Windows' {
            $true | should -Be $true
        }
    }
}

When you run this test on a MacOS or Linux computer, you’d see output like the below example.
You can see that it shows an exclamation point (!) indicating that it was skipped.

Describing Should only run on Windows

Context Block A
[!] This only runs on Windows, is skipped 0ms

Pending, skipped and inconclusive states are useful when scaffolding out soon-to-be-written tests,
temporarily disabling tests while troubleshooting or, as you’ve just seen above, helpful in accounting
for test dependencies.
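
As a variation on the example above, you can apply the same splatting technique to any dependency check, not just the operating system. Below is a minimal sketch that skips a test when a command it relies on isn't available; the Resolve-DnsName check and the 'dc.lab.local' name are hypothetical stand-ins for whatever your test actually depends on.

describe 'DNS checks' {
    BeforeAll {
        $itParams = @{}
        ## Skip the test if the command the test depends on isn't present on this system
        if (-not (Get-Command -Name 'Resolve-DnsName' -ErrorAction Ignore)) {
            $itParams.Skip = $true
        }
    }

    it @itParams 'resolves the domain controller name' {
        Resolve-DnsName -Name 'dc.lab.local' | should -Not -BeNullOrEmpty
    }
}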

Forcing Pass/Fail States


By default, Pester allows pending, skipped and inconclusive states, but what if you want to get
draconian about it and mandate a pass/fail situation? In that case, you can use the Strict parameter
on Invoke-Pester. The Strict parameter forces a pass or fail by ignoring the Skip, Pending and
Inconclusive states and failing those tests instead.

Below you’re using the MyTests.Tests.ps1 script you created above. You can see that the previously
pending and skipped tests have now failed. Using the Strict parameter is a heavy-handed approach
at ensuring no tests are forgotten about or looked over.

PS> Invoke-Pester -Path MyTests.Tests.ps1 -Strict

Executing script C:\MyTests.Tests.ps1

Describing My tests
  [+] displays the number 1 2ms
  [-] Checks the database 0ms
    Exception: The test failed because the test was executed in Strict mode and the result 'Skipped' was translated to Failed.
  [-] Isn't done yet 0ms
    Exception: The test failed because the test was executed in Strict mode and the result 'Pending' was translated to Failed.

In short, using the Strict parameter with Invoke-Pester guarantees that every test in a run ends in a
definitive pass or fail, so nothing slips through unnoticed.

Summary
We just covered all of the interim states that a Pester test (it block) can be in: pending, skipped, or
inconclusive. Even though each state skips the test in the same way, being able to distinguish
the reasons is helpful to many testers.
Working with Pester Output
Up to this point, we’ve been running Pester tests by either copying and pasting the describe blocks
directly into the console or, more commonly, using the Invoke-Pester command. You would then
marvel at all the beautiful green output you see indicating all of the passing tests. This kind of output
is useful if you’re running tests on your own, in front of your computer; but doesn’t necessarily work
too well when you’re automating tests or need to get tests results into another system. For this, you
need something better than simple host output.

nUnit XML
Pester can return test results not only via the console as red/green output but also as XML; more
specifically, nUnit XML⁹. nUnit XML is a structured data format that's popular with many build
systems. Most modern build systems like Microsoft Team Foundation Server (TFS), AppVeyor,
Jenkins (with a plugin) and TeamCity natively understand the nUnit XML format. A wide range
of testing frameworks use this format to represent test results, not just Pester. Once test results are
stored in this format, you can upload them to the build system and display them on a web page as
shown below.

Example output shown on AppVeyor

If you need to send Pester output somewhere other than the console, you can force it to return
output as nUnit XML. To do so, you use two additional parameters when you invoke Invoke-Pester:
OutputFormat and OutputFile. OutputFormat represents the format of the output to write to the file;
for nUnit results, that value is NUnitXml. OutputFile represents the path to the XML file that will be
written.
The below example shows providing the path to the test files using the Path parameter and
sending the results of those tests to a file called C:\TestResults.xml.

⁹https://github.com/nunit/docs/wiki/NUnit-Project-XML-Format

Invoke-Pester -Path 'C:\My.Tests.ps1' -OutputFormat NUnitXml -OutputFile C:\TestResults.xml

When this command runs, the default output to the console still shows. This is expected. However,
you'll see that Invoke-Pester has created a C:\TestResults.xml file representing your test results as
well. If you don't want to see the test results via the console, you can always suppress them by using
the Show parameter on Invoke-Pester with an argument of None.
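
If you ever want to peek at the results file from PowerShell rather than a build system, a minimal sketch like the one below works; the attribute names are assumed from the NUnit 2.5 schema that Pester writes, and the file path comes from the example above.

## Load the nUnit XML results file produced by Invoke-Pester
[xml]$results = Get-Content -Path 'C:\TestResults.xml'

## The root 'test-results' element carries summary attributes such as total and failures
$results.'test-results'.total
$results.'test-results'.failures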

nUnit XML Structure


When you look at the XML file that's created, you'll immediately notice some common attributes.
The XML file shown below was created from the following test:

describe 'MyScript' {
    it 'does awesome stuff' {
        $false | should -Be $false
    }

    it 'just does cool stuff, I guess then' {
        $false | should -Be $true
    }
}

Even though the results below aren't what you're used to seeing, keep in mind that when
returning test results in this format, you're not expected to read the XML file by itself. The
majority of the time, a build system or other application that understands nUnit XML consumes
the file and produces a lovely web page, as you saw earlier in the chapter.

Example nUnit XML



Returning Failed Tests as Exit Codes


Another way that Pester can generate results besides through the console is through exit codes.
By using the EnableExit parameter on Invoke-Pester, you can fire up a PowerShell session with
powershell.exe, run the tests and then immediately exit out of the session. This is helpful when you
automate tests in a build, for example, and you need a way to tell your build system that the build
failed because it failed one or more tests.
Using the simple example above, we know one test failed. You can run Invoke-Pester with the
EnableExit parameter to show that it failed a single test. This technique relies on the exit code of
the PowerShell process itself so if you’re already in a PowerShell console, you’ll need to invoke
another powershell.exe process to see what exit code it returned.

PS> powershell.exe -NoProfile -NonInteractive -Command "Invoke-Pester -Path C:\MyScript.Tests.ps1 -EnableExit"
PS> $LastExitCode
1

This behavior is handy when running tests unattended and you need to make some decisions down
the line if any tests in the test suite failed.
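
For example, a build step might act on that exit code directly. Here's a minimal sketch; the failure handling is hypothetical and will differ per build system.

## Run the tests in a child PowerShell process so the exit code reflects the number of failed tests
powershell.exe -NoProfile -NonInteractive -Command "Invoke-Pester -Path C:\MyScript.Tests.ps1 -EnableExit"

## Fail the build if any tests failed
if ($LASTEXITCODE -gt 0) {
    throw "$LASTEXITCODE test(s) failed. Aborting the build."
}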

Summary
When taking tests to the next level by expanding test-writing to an entire team, dealing with
thousands of tests, or just needing a better way to visualize test results, returning results to
the console just isn't going to work. Especially when you use Pester in a build pipeline, being able to
determine the number of failed tests with exit codes and to return those results in a standard XML
format are great features that Pester provides.
Tagging
Pester has a concept called tags. Tags apply to describe blocks and serve a variety of purposes.
When designing tests, using tags can be a beneficial way to call certain groups of tests at once, or
to exclude specific tests, or to change the test in other ways.

Introduction to Tagging
Perhaps you have lots of describe blocks scattered across a single test script. Since they are already
in a single test script, there’s already some level of organization. However, tags allow for a different
level of categorization than just grouping tests inside of a single script.
Without tags, when tests are in a single test script, you’ve only got two options for calling them.
You can either run them all by simply executing the Invoke-Pester command and passing it the
script path (Invoke-Pester -Path 'C:\Tests.ps1'), or you could run a single test at a time in
that script using the TestName parameter on the Invoke-Pester command (Invoke-Pester -Path
'C:\Tests.ps1' -TestName 'foo'). You have no way to run a subset of tests or to exclude sets of
tests.
For example, say you have four describe blocks in a single test script that represent four functions.
Each function is built around an object called Thing, and the test script is saved as Thing.Tests.ps1.
You're working with a single object and multiple functions that perform various actions on that
object.

describe 'New-Thing' {
}

describe 'Set-Thing' {
}

describe 'Get-ThingAttribute' {
}

describe 'Get-Thing' {
}

You’ve already got these scripts in a single test file which lets us invoke all the tests at once. But, in
some situations, you’d rather not do that. Instead, you only want to invoke the tests that may modify
things on the system, or those that are less intrusive and just read things. All kinds of reasons exist
to group tests together.
For example, you can assign a tag to each describe block representing the action of each of those
functions. Below, you’ve decided to use the tag Modifications to indicate a test for a function that
changes things and ReadOnly for a test for a function that reads things.

describe 'New-Thing' -Tag 'Modification' {
}

describe 'Set-Thing' -Tag 'Modification' {
}

describe 'Get-ThingAttribute' -Tag 'ReadOnly' {
}

describe 'Get-Thing' -Tag 'ReadOnly' {
}

Once you have the describe blocks tagged, you can now use the Tags parameter on the Invoke-Pester
command to run only those tests for each type of function as shown below.

PS> Invoke-Pester -Path C:\Thing.Tests.ps1 -Tags 'Modification'
PS> Invoke-Pester -Path C:\Thing.Tests.ps1 -Tags 'ReadOnly'

The reverse action also applies to tagged describe blocks without modifying the tags themselves
in any way. The only difference is how you call the Invoke-Pester command. Instead of using the
Tags parameter, you’ll use the ExcludeTag parameter to prevent specific tests from running.

PS> Invoke-Pester -Path C:\Thing.Tests.ps1 -ExcludeTag 'Modification'

The Tag parameter on describe supports multiple tags as well, so perhaps you have another set of
tags you'd like to apply to these functions too, such as Database or Server. Simply add another tag
delimited by a comma, as shown below.

describe 'Get-Thing' -Tag 'ReadOnly','Database' {

Tagging Strategies
Because tagging can serve a multitude of purposes, let’s cover some examples of how others are
using tags. I hope this section will give you some good examples of how you can use tags in your
tests and help you come up with some tagging strategies of your own.
To use the tags in a systematic and structured way, I recommend first defining all of the tags you
will be using. Setting tags upfront is especially crucial if you’re working on a team. Store available
tags in a text file, spreadsheet, database, or some other place and stick to only using those tags.
There’s no built-in way to manage tags in Pester so you’ll have to build your own simple storage
and retrieval system.
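
As a rough idea of what such a home-grown system could look like, here's a minimal sketch that reads an approved tag list from a text file and warns about any unapproved tags found in a test script. The file names are hypothetical, and the regular expression only catches simple, single-quoted -Tag values, so treat it as a starting point rather than a complete solution.

## Read the team's approved tags from a simple text file (one tag per line)
$approvedTags = Get-Content -Path 'C:\Tests\ApprovedTags.txt'

## Pull every single-quoted -Tag value out of a test script
$testContent = Get-Content -Path 'C:\Tests\Thing.Tests.ps1' -Raw
$usedTags = [regex]::Matches($testContent, "-Tag\s+'([^']+)'") | ForEach-Object { $_.Groups[1].Value }

## Warn about any tag that isn't on the approved list
$usedTags | Where-Object { $_ -notin $approvedTags } | ForEach-Object {
    Write-Warning "Tag '$_' is not in the approved tag list."
}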

General Strategies
Although some roles call for specific tagging strategies, many share a standard set of tags that can
apply to just about any technology position.
Some examples of general tagging strategies are:

• By Status:
– Disabled
• By Action:
– Read
– Modify
– Remove/Create
• By Operating System:
– Windows 7
– Windows Server 2019
– Ubuntu Linux
– MacOS
• By Time:
– NightlyBuild
– AdHoc

For DevOps
Perhaps you’re on a software development team or in charge of automating builds and software
releases. You have a continuous integration/continuous deployment (CI/CD) pipeline set up, and
you’re using Pester tests to perform checks against test, QA, or production environments when the
software is deployed. In a situation like this, you can define tags by a particular stack.
Maybe the software you’re deploying is a web application that allows users to log into an application
with a backend database of some kind. Some useful tags might be:

• By Stack:
– Database
– LoadBalancer
– WebServer
• By Feature:
– NewUser
– FeatureX
– FeatureY
• By Release:
– v51
– v4
• By Deployment Stage:
– Dev
– UA
– QA
– Prod

For Sysadmins
Maybe you’re a system administrator and are using Pester to ensure your infrastructure is at a
consistent state, no changes are being made without your consent. In this case, I’d argue for the
point of a configuration management tool, but Pester tests are better than nothing!
Some example tags might be:

• By Rights:
– RequiresAdminPrivileges
– RequiresDomainAdmin
• By Server Type:
– ActiveDirectory
– FileServer
• By Server Component:
– CPU
– Memory
• By Cloud:
– Azure
– AWS
– Google Cloud

For DBAs
For database administrators, Pester tests can provide significant value too. Since a Pester test is just
a PowerShell script, you can write a test to check for anything you can read with PowerShell. For a
DBA, that could be:

• By Database Vendor:
– MicrosoftSQLServer
– MySQL
– PostGres
• By Database Metrics:
– PagingSpeed
– ReadSpeed
– Size

The options are limitless when it comes to coming up with tags to use. Remember to first come up
with a pre-defined set of tags you and your team decide to use ahead of time, then assign the
tags to your describe blocks and see what works for you.

Summary
Tagging is a simple but time-saving element of Pester. If you assign tags in an organized fashion
and track when specific tagged tests should be run, this tactic will save you and your team a ton of
time over the long run. Tagging becomes more critical as you continue to create more tests.
Modules and Dot-Sourced Script
Gotchas
It’s inevitable that you’re going to write tests that depend on something else, whether that be a
module or just another script with a bunch of functions. If so, there are a couple of critical caveats
to think about. Consider loading a module or dot-sourcing a script.
Remember that if you don’t explicitly load or dot source the code, but merely run a command from it,
PowerShell will attempt to load the module implicitly from a path in $env:PSModulePath or implicitly
find functions already loaded in the current session. That could be a problem.
For example, you may have a module located in C:Program FilesWindowsPowerShellModules that’s
the current production version of a module, but actually, be testing a new module which is stored
in a different location. PowerShell’s do-stuff-for-me attitude is excellent, but if you’re not careful, it
will bite you.

Scripts with Functions


One way to bring custom functions into a PowerShell session is to use dot sourcing. Dot sourcing
allows you to define functions in a PS1 script and load all of the functions in that PS1 script into the
current session all at once. You do this by using a dot followed by the path of the PS1 script.

. C:\PowerShellScriptWithFunctions.ps1

You’ll see the dot sourcing method if you run the New-Fixture Pester command. When you create a
test scaffold with the New-Fixture command, it will generate a code snippet like the below example.

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path) -replace '\.Tests\.', '.'
. "$here\$sut"

The above code snippet is created by New-Fixture in the test file because, when the test is run,
it must load all of the functions inside of the script it's testing. The way that New-Fixture does this
isn't intuitive to the Pester newcomer, so let's demonstrate a more straightforward example of how dot
sourcing works when running Pester tests.
Let's say you have a function in the C:\PowerShellScriptWithFunctions.ps1 script called Get-Thing
like the below example.

function Get-Thing {
    param()
    Write-Output "Hello, I am version 1 of this function!"
}

I then create a test file at C:\PowerShellScriptWithFunctions.Tests.ps1 that looks like the below
example.

describe 'Get-Thing' {
    it 'returns something' {
        Get-Thing | should -Be 'Hello, I am version 1 of this function!'
    }
}

I then invoke the test file with Invoke-Pester and expect it to work.

Executing script .\PowerShellScriptWithFunctions.Tests.ps1

Describing Get-Thing
  [-] returns something 247ms
    CommandNotFoundException: The term 'Get-Thing' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
    at <ScriptBlock>, C:\PowerShellScriptWithFunctions.Tests.ps1: line 3
Tests completed in 1.52s
Tests Passed: 0, Failed: 1, Skipped: 0, Pending: 0, Inconclusive: 0

It didn’t work because Pester doesn’t know about the Get-Thing function in the C:PowerShellScriptWithFunctions.ps1
script. You need to dot source this script above the describe block to bring it into the session
using "$PSScriptRoot\PowerShellScriptWithFunctions.ps1". You’re using the $PSScriptRoot au-
tomatic variable here to represent the directory where the test file is located.
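
Here's a minimal sketch of what the updated test file looks like with the dot-source line added at the top (paths taken from the example above):

. "$PSScriptRoot\PowerShellScriptWithFunctions.ps1"

describe 'Get-Thing' {
    it 'returns something' {
        Get-Thing | should -Be 'Hello, I am version 1 of this function!'
    }
}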
The test will now pass. But what would happen if you try a different version of the Get-Thing
function and copy and paste that function directly into the console?

Executing script .\PowerShellScriptWithFunctions.Tests.ps1

Describing Get-Thing
  [-] returns something 228ms
    Expected strings to be the same, but they were different.
    String lengths are both 39.
    Strings differ at index 20.
    Expected: 'Hello, I am version 1 of this function!'
    But was:  'Hello, I am version 2 of this function!'
    ---------------------------------^
    5:         Get-Thing | should -Be 'Hello, I am version 1 of this function!'
    at <ScriptBlock>, C:\PowerShellScriptWithFunctions.Tests.ps1: line 5

The test now fails because Pester is using the version you copied and pasted directly into the
console. This demonstration should encourage you to always dot source the scripts containing the
functions you're testing as the first thing in the test script, ensuring the latest version of each function
is loaded. If you have the same function already loaded, dot sourcing will overwrite it with the
version of the function defined in the PS1 script.

Modules
Some developers speak about writing tests for modules but that terminology isn’t entirely accurate.
In Pester, you don’t write tests for modules; you technically write tests for the functions inside of
those modules. Executing tests for functions inside of modules follows the same general workflow
as dot sourcing.

• Bring the module’s function into the session


• Run the functions to run tests against them

Due to PowerShell’s module auto-loading, if you have a module inside of a folder in the PSModulePath
environment variable, Pester will automatically know about this function, and no other actions need
to be taken. However, this can be dangerous!
If you rely on PowerShell’s auto-loading feature, you can quickly get in a situation where you’re
not testing a version of the module you think you are.
Let’s say you’ve got the Get-Thing function now inside of the C:Program FilesWindowsPower-
ShellModulesThingThing.psm1 folder. PowerShell knows about this module location, so it knows
about the function. The tests will pass as-is with only the describe block.
However, let’s say you’re working on that module and you change the Get-Thing function to output
Hello, I am version 2 of this function!. Everything still works. Your changes worked, OK.
But how could that be? The test is still checking for the output of Hello, I am version 1 of this
Modules and Dot-Sourced Script Gotchas 70

function!. The reason is that Pester isn’t loading the latest version from disk. It’s still relying on the
function loaded into memory.
To remedy this situation, you need to force a new module import every time the tests run to ensure
you’re using the latest version.

Importing Modules
PowerShell has two ways of importing modules: by name or by file path. When you import a module
by name, the module must be in one of the folders defined in the PSModulePath environment variable.
Take a look at the folders on your system by running $env:PSModulePath.
When a module is explicitly imported (Import-Module -Name FOO), PowerShell will first check to see
if the module is already imported. If not, it will import it. This is important to realize when it comes
to testing modules. By default, the Import-Module command will not import the latest version of the
module if an older version is already imported. This action can be dangerous, leading you to think
you’re testing the newest version of the module when you’re not. Instead, at a minimum, always use
the Force parameter on the Import-Module command to force PowerShell to load the latest version
from disk.
Our test script will then look like this:

Import-Module -Name Thing -Force

describe 'Get-Thing' {
    it 'returns something' {
        Get-Thing | should -Be 'Hello, I am version 1 of this function!'
    }
}

Never use the Import-Module command in your test script without the Force parameter. At a minimum,
use this method to load modules for Pester tests.
There are two downsides to this method though:

• The module isn’t always unloaded and reloaded properly


• You’re restricted to only importing modules in the PSModulePath environment variable

To be 100% sure you’re always loading the latest version of the module, you always explicitly remove
the module and then import it using the path to the file rather than the name. This approach requires
a little more effort but is less prone to error.
Instead of adding Import-Module -Name Thing -Force to the test file, you’d add Import-Module
-Name 'C:\Program Files\WindowsPowerShell\Modules\Thing' -Force instead. Why? Because this
Modules and Dot-Sourced Script Gotchas 71

method is being more intentional about what you want to do and allows us to run tests on modules
located anywhere.
When you write tests, it’s crucial to be as explicit as possible. It’s important not to assume anything
and not to release control over any aspect of the testing process to defaults. When importing modules
by name, you’re doing just that. You’re assuming that a module exists in one of the folders in
the PSModulePath environment variable. Don’t do this. Instead, always use the second method of
pointing directly to the module you intend to test. This way you’re pointing directly to the module.
To take things one step further, you can also ensure all versions of that module are first unloaded
and then import the module by the path of the module as the example below.

$moduleName = 'Thing'
Get-Module -Name $moduleName -All | Remove-Module -Force
$module = Get-Module -Name 'Thing' -ListAvailable
Import-Module -Name $module.Path -Force -ErrorAction Stop

You’re calling Get-Module you’re using the All parameter in the example above. This is important.
This parameter tells Get-Module to discover all versions of a module. You’re using the name of the
module here rather than the file path because, when using the All parameter with Get-Module, you
cannot specify the file path itself.
If Get-Module finds any modules, it passes them to Remove-Module. Using the Force parameter, there
eliminates any problems with module dependencies and will remove the module from the session
even if another module depends on the module being removed.
Another vital parameter to point out is ErrorAction on Import-Module. Include ErrorAction here to
override any global $ErrorActionPreference values being set somewhere else and not continuing
if a problem arises. You need to make sure the module loads correctly before proceeding.

Summary
Writing Pester tests for functions defined in PS1 scripts or modules is a common occurrence that
works well most of the time. However, as you’ve seen in this chapter, it’s essential to ensure you’re
testing what you think you’re testing.
Advanced Mocking
Mocking is a deep topic in Pester with many different angles to approach it from. In the Mocking
Introduction chapter, you covered the basics of mocking, but there's so much more to learn. In this
chapter, you're going to dive deep into mocking and cover many use cases and nuances.

Mocking Specific Object Types


When mocking commands that return an object, it’s common to not worry about the type of object
that’s returned. If a script only references properties on a returned object, the mock doesn’t need to
return that same object type the original command returns.
For example, if you have a script that calls Get-Item to grab a file and then references the Name
property on that System.IO.FileInfo object, Pester doesn’t care if the mock for Get-Item returns a
System.IO.FileInfo object or not. You can easily get by with returning a pscustomobject as shown
below. Notice that you only care about the value of the Name property.

$file = Get-Item -Path C:\file.txt
$file.Name

describe 'finds the right file' {
    mock 'Get-Item' {
        [pscustomobject]@{
            Name = 'filename'
        }
    }

    $file = & ScriptToTest.ps1

    it 'finds the file' {
        $file.Name | should -Be 'filename'
    }
}

There are times, however, when a mocked function needs to return a particular object type. This
scenario typically happens when you're using an object one command outputs as the input for
another command or when the input command explicitly requires a particular type for a parameter.
In this case, there are a couple of ways to create mocks that return a specific object type: using the
PSTypeName property and the Pester function New-MockObject.

Using PSTypeName
One way to make a mocked command return a particular object type is to include the PSTypeName
property as a property of a custom object. This method is useful when you need to mock a
command that outputs custom objects.
You can see below that you've made the mock for the Get-Item command more realistic by returning
an object that reports the same System.IO.FileInfo type name that the real Get-Item command would
have returned.

mock 'Get-Item' {
    [pscustomobject]@{
        PSTypeName = 'System.IO.FileInfo'
        Name = 'filename'
    }
}

I can prove this works by piping this pscustomobject to the Get-Member command and inspecting
the type it sees.

PS> $obj = [pscustomobject]@{
>>> PSTypeName = 'System.IO.FileInfo'
>>> Name = 'filename'
>>> }
PS> $obj | Get-Member

   TypeName: System.IO.FileInfo

<SNIP>

Using New-MockObject
Another way to return specific types in the mocks is to use the New-MockObject command in Pester.
This command can generate “mocked” objects of just about any type, unlike the previous method.
New-MockObject uses advanced .NET capabilities to create "fake" or "serialized" versions of objects.
What that means is that it will work in many more situations than the PSTypeName method alone.

New-MockObject has a single parameter called Type, which, as you may have guessed, is the
object type you’d like to mock. It requires the full namespace and type, which means that
instead of specifying an object like FileInfo, you must include the .NET namespace as well:
System.IO.FileInfo.

describe 'testing' {
    it 'testing' {
        mock 'Get-Item' {
            New-MockObject -Type 'System.IO.FileInfo'
        }

        Get-Item -Path 'doesnotexist' | should -BeOfType 'System.IO.FileInfo'
    }
}

Mock Scope
It’s important to realize that a mock will apply to any command within the describe or context block
in which the mock exists. This concent is powerful but not straightforward. For example, suppose
a simple text file, C:\nonesuch.txt, contains just one line as shown below. {linenos=off} PowerShell
hugs

You also have a script called foo.ps1 that contains a function called Read-Stuff that reads that file
with the Get-Content command as shown below.

function Read-Stuff {
    Get-Content c:\nonesuch.txt
}

And finally, you have a test file called foo.tests.ps1 to test the foo.ps1 script that looks like below:

describe 'stuff' {
    mock Get-Content { 'smoochies' }

    it 'hugs' {
        Read-Stuff | should -Be 'hugs'
    }
}

This test will fail because the Get-Content mock was created in the scope of the describe block, which
is a parent of the it block. Due to scope inheritance, the Get-Content mock applies at the it block
level. Watch out for this!
Now, let's change this up a bit. Using the same file and script, change foo.tests.ps1 to look like below.

describe 'stuff' {
    mock Get-Content { 'smoochies' }

    context 'new context' {
        mock Get-Content { 'fist bump' }

        it 'bumps' {
            Read-Stuff | should -Be 'fist bump'
        }
    }
}

This time the test passes because the mock for Get-Content defined in the context block overrode
the mock defined for the same command in the describe block parent scope. Mocks defined closer
to the actual test always override the parent mocks.

describe 'stuff' {
    mock Get-Content { 'smoochies' }

    context 'new context' {
        mock Get-Content { 'fist bump' }

        it 'slaps' {
            mock Get-Content { 'slap' }

            Read-Stuff | should -Be 'slap'
        }

        it 'bumps' {
            Read-Stuff | should -Be 'fist bump'
        }
    }
}

There are now two tests above called slaps and bumps. What happens to each? The slaps test succeeds
because the mock defined in the it block overrides the mocks defined in the context and describe
blocks. The bumps test fails because it blocks share a scope; it blocks do not run in their own scope,
so the slap mock is still in effect.

Module-Level Mocks
Modules present a unique challenge because each module is its own scope.

Consider an example where you have a module, Bar.psm1, which contains two functions, Get-Something
and Set-Something. The Get-Something function is a private function meaning it is only available
within the module itself.

function Get-Something {
    param()
}

function Set-Something {
    param()

    $something = Get-Something
    $something
}

Export-ModuleMember 'Set-Something'

You begin writing tests for the Set-Something function and want to mock Get-Something to test the
output of Set-Something. Building a simple test below might generally do the trick if all functions
were exported from the module.

Import-Module Bar

describe 'Set-Something' {
    mock 'Get-Something' {
        'foo'
    }

    it 'Set-Something should return foo' {
        Set-Something | should -Be 'foo'
    }
}

However, you’ll receive this error message below because Pester cannot find the command Get-Something
to mock because it’s running outside of the module scope.

Describing Set-Something
  [-] Error occurred in Describe block 0ms
    CommandNotFoundException: Could not find Command Get-Something
    at Validate-Command, C:\Program Files\WindowsPowerShell\Modules\Pester\4.7.3\Functions\Mock.ps1: line 845
    at Mock, C:\Program Files\WindowsPowerShell\Modules\Pester\4.7.3\Functions\Mock.ps1: line 170
    at DescribeImpl, C:\Program Files\WindowsPowerShell\Modules\Pester\4.7.3\Functions\Describe.ps1: line 199

This situation is a problem because not exporting module commands is a common practice.

The ModuleName Parameter


One solution that Pester provides for this scenario is the ModuleName parameter on the mock
command. Defining what module the mocked command is a part of allows Pester to look in
the private scope of the module and find the function. Changing up the tests to look like the below
example will work as expected.

describe 'Set-Something' {
    mock 'Get-Something' -ModuleName Bar {
        'foo'
    }

    it 'Set-Something should return foo' {
        Set-Something | should -Be 'foo'
    }
}

This solution only works for that command inside of the Bar module.

InModuleScope
An alternative approach to mocking private module functions is to use the InModuleScope command.
This command is essentially a shortcut to defining the ModuleName parameter on every mock.
Everything inside of the block defined for InModuleScope will run inside of the private module scope.
You no longer have to use the ModuleName parameter on all of your mocks; Pester just assumes it
now.
To use InModuleScope, define the module you'd like to run within and include the rest of your tests
in its block as shown below.

InModuleScope 'Bar' {
    describe 'Set-Something' {
        mock 'Get-Something' {
            'foo'
        }

        it 'Set-Something should return foo' {
            Set-Something | should -Be 'foo'
        }
    }
}

By enclosing the describe block in the InModuleScope scriptblock, you're telling Pester to assume
everything inside the scriptblock is going to run in the module scope. Thus you do not need to use
ModuleName; Pester automatically assumes it.

Passing Runtime Parameters to Module Tests


There are times when you need to pass parameters to a test file when invoking the Invoke-Pester
command. For this purpose, Invoke-Pester has a parameter called Script that provides an alternative
way to execute tests: pre-defined keys in a hashtable (Path and Parameters) represent the test script
file and the parameters to pass to it. The parameters in this hashtable are then passed to the test
script and made available within the tests themselves.
Below is an example of how this behavior looks. You can see that the value of $Foo is available in
the test script.

param(
    [Parameter()]
    [string]$Foo
)

if (-not $Foo) {
    ## do something
} else {
    ## do something else
}

The Thing.Tests.ps1 test script can then be invoked with Invoke-Pester using the Script parameter
with the required Path and Parameters keys inside of the hashtable as shown below.

Invoke-Pester -Script @{ 'Path' = 'C:\Thing.Tests.ps1'; 'Parameters' = @{ Foo = 'foovaluehere' }}

This behavior is perfectly acceptable and causes no problems. However, under certain circumstances,
a problem arises when you’re testing a module with some private functions and have to use the
InModuleScope command.

Since tests inside of the InModuleScope scriptblock are running in a different scope, the tests aren’t
able to see the value of the parameters being passed to the test script. It’s impossible to pass a
parameter directly into the test script and reference that variable inside of the InModuleScope block.
We can prove this using the test script above, this time adding an InModuleScope block and
attempting to access the value of the $Foo variable there.

param(
    [Parameter()]
    [string]$Foo
)

if (-not $Foo) {
    ## do something
} else {
    ## do something else
}

Import-Module Bar

describe 'tests not in module scope' {
    it 'should see the param Foo' {
        $Foo | should -Not -BeNullOrEmpty
    }
}

InModuleScope 'Bar' {
    describe 'tests in module scope' {
        it 'should not see param Foo' {
            $Foo | should -Not -BeNullOrEmpty
        }
    }
}

You’ll then invoke this test script and see what happens. Notice below that the second test failed
because $Foo wasn’t available in that scope.

PS> Invoke-Pester -Script @{ 'Path' = 'C:\Thing.Tests.ps1'; 'Parameters' = @{ Foo = 'foovaluehere' }}
Executing all tests in 'C:\Thing.Tests.ps1'

Executing script C:\Thing.Tests.ps1

Describing tests not in module scope
  [+] should see the param Foo 8ms

Describing tests in module scope
  [-] should not see param Foo 8ms
    Expected a value, but got $null or empty.
    23:             $Foo | should -Not -BeNullOrEmpty
    at <ScriptBlock>, C:\Thing.Tests.ps1: line 23
Tests completed in 327ms
Tests Passed: 1, Failed: 1, Skipped: 0, Pending: 0, Inconclusive: 0

To make the parameter value available inside the InModuleScope block, you have to assign it to a
variable in the global scope. To do that, catch the parameter value, assign it to a global variable, and
then use that global variable inside the InModuleScope block.

param(
    [Parameter()]
    [string]$Foo
)

if (-not $Foo) {
    ## do something
} else {
    ## do something else
}

Import-Module Bar

$global:Foo = $Foo

describe 'tests not in module scope' {
    it 'should see the param Foo' {
        $global:Foo | should -Not -BeNullOrEmpty
    }
}

InModuleScope 'Bar' {
    describe 'tests in module scope' {
        it 'should not see param Foo' {
            $global:Foo | should -Not -BeNullOrEmpty
        }
    }
}

If you rerun the tests, you'll see that they both pass. To my knowledge, this is the only way to
pass variables in the local scope to the module scope. Technically, it's not good practice to use
the global scope. It can cause various conflicts where you least expect them and turn code into
an unmanageable mess. Whenever you have to use the global scope, ensure you only use it
temporarily. Although it's entirely optional, you can capture all global variables before you set
one and, when you're done, clean up after yourself. Below is an example of that.

## Capture all global variables before
$globalVarsBefore = (Get-Variable -Scope Global).Name

## Set the global variable and do some stuff with it here

## Capture all of the global variables after
$globalVarsAfter = (Get-Variable -Scope Global).Name

## Remove any new global variables added
$globalVarsAfter | where { $_ -notin $globalVarsBefore } | foreach {
    Remove-Variable -Name $_ -Scope Global
}

You can see there are lots of gotchas when working with modules. Always pay attention to what
scope your tests are running in when troubleshooting. It’ll make your life a lot easier!

Parameter Filters
You learned about parameter filters on mocks in the Mocking Introduction chapter, but you didn't
learn everything about them. Once you become comfortable using parameter filters, you might run
into a few gotchas and advanced techniques that are important to be aware of.

Mocking Parameter Aliases


When a mock is defined with a parameter filter, the variables inside of the parameter filter typically
map identically to the actual parameter names.

For example, if you reference the Get-Content command in a function and use the Path
parameter of that command, you can create a mock with a parameter filter to ensure that mock
is called with that particular parameter, as shown below. This example will pass.

function Do-Something {
    param()

    Get-Content -Path 'C:\File.txt' -Tail 100
}

describe 'Do-Something' {
    mock 'Get-Content' -ParameterFilter { $Tail -eq 100 }

    it 'finds the last 100 lines' {
        Do-Something
        Assert-MockCalled -CommandName 'Get-Content'
    }
}

However, many PowerShell commands have parameter aliases that can be used instead of the actual
parameter names. Using the example above, Get-Content's Tail parameter has an alias called Last.
You can use parameter aliases in parameter filters, which saves you from having to write your tests
with the actual parameter names when your code uses aliases. It can get confusing quickly!
To demonstrate, replace $Tail with $Last in the parameter filter and you will see the test still passes.

describe 'Do-Something' {
    mock 'Get-Content' -ParameterFilter { $Last -eq 100 }

    it 'finds the last 100 lines' {
        Do-Something
        Assert-MockCalled -CommandName 'Get-Content'
    }
}

Using $PSBoundParameters
Many tutorials (this book included) explain that Pester "converts" each parameter inside a parameter
filter to a variable so that you can run comparisons. Most of the time this works great, but
it's essential to know that the variables in a parameter filter's scriptblock are not just variables
representing the mocked command's parameter values; there are many other variables available
inside of the parameter filter scriptblock. To prove this, use Get-Variable | Out-String in a parameter
filter and notice the result.

describe 'Do-Something' {
    mock 'Get-Content' -ParameterFilter { Write-Host (Get-Variable | Out-String) }

    it 'finds the last 100 lines' {
        Do-Something
        Assert-MockCalled -CommandName 'Get-Content'
    }
}

Sometimes, when you have a parameter with the same name as, say, a global variable, you'll run
into conflicting values and unexpected results.
To ensure that the variables used in a parameter filter’s scriptblock will always be those parameters,
use $PSBoundParameters. In a parameter filter, $PSBoundParameters only represents the values
passed to that command, eliminating any potential for variable or scope overlap problems.
Below is a side by side comparison of creating a mock for a function called SomeFunction with a
parameter called Name. Either way will usually work, but it’s safer to use $PSBoundParameters.

mock 'SomeFunction' {
    $null
} -ParameterFilter { $Name -eq 'foo' }

mock 'SomeFunction' {
    $null
} -ParameterFilter { $PSBoundParameters.Name -eq 'foo' }

The difference is minimal, but using $PSBoundParameters.Name instead of $Name prevents the chance
of any variable conflict and ensures that the values you compare against will always be the values
passed to the expected parameter.

Parameter Filter “Inheritance”


Parameter filters let you get granular when working with mocks, and that extra layer of functionality
leads to a more sophisticated test. More sophistication, though, sometimes leads to additional
complexity. What I call parameter filter "inheritance" is one of those niche edge cases you might
run up against.

Parameter filter “inheritance” isn’t a technical term, but it fits. “Inheritance” comes into play when
you create a test with at least one mock set up at the describe block level and at least one mock at the
context block level. This situation leads to Pester having to figure out which of the mocks you want
to use. Without parameter filters, the mock defined closest to the command execution is always the
one used. But when parameter filters come into play, things behave a little differently.
Let’s say you have a couple of functions that you created as a shortcut to finding a SQL Server service
pack’s installer location somewhere on your network. The first function is Find-SqlServicePack. It
is the main function that calls the helper function Find-SqlServerServicePackInstaller:

function Find-SqlServerServicePackInstaller {
    param($Version)

    if ($Version -eq 2012) {
        [pscustomobject]@{
            Name = 'installername2012'
        }
    } elseif ($Version -eq 2014) {
        [pscustomobject]@{
            Name = 'installername2014'
        }
    }
}

function Find-SqlServerServicePack {
    param($Version)

    $installer = Find-SqlServerServicePackInstaller -Version $Version

    ## Do some other stuff

    $installer
}

I can now call Find-SqlServerServicePack, passing a value to the Version parameter as shown
below.

PS> Find-SqlServerServicePack -Version 2012

I want to create a Pester test for the Find-SqlServerServicePack function to ensure that it returns
the path you expect from Find-SqlServerServicePackInstaller. To make that happen, you create
a test that looks like this:

describe 'Find-SqlServerServicePack' {
    mock 'Find-SqlServerServicePackInstaller' {
        @{ Name = 'installername2012' }
    } -ParameterFilter { $Version -eq '2012' }

    mock 'Find-SqlServerServicePackInstaller' {
        @{ Name = 'installername2014' }
    } -ParameterFilter { $Version -eq '2014' }

    it 'finds the expected installer name for SQL Server 2012' {
        $result = Find-SqlServerServicePack -Version 2012
        $result.Name | should -Be 'installername2012'
    }

    it 'finds the expected installer name for SQL Server 2014' {
        $result = Find-SqlServerServicePack -Version 2014
        $result.Name | should -Be 'installername2014'
    }
}

In this test, you create two mocks for Find-SqlServerServicePackInstaller, using a unique parameter
filter on each. One mock replaces a call for version 2012, and the other replaces a call for version 2014.
Each mock returns a unique installer name which you then assert was returned in each of the it blocks.
This test works great!
However, you forgot a test. You'd like to ensure that the Find-SqlServerServicePack function
returns nothing if Find-SqlServerServicePackInstaller returns nothing. To test this scenario, you
have to create another mock for Find-SqlServerServicePackInstaller and make it return nothing.
Creating a mock that returns null involves specifying an empty scriptblock on the mock.
No big deal. Also, since you don't care what version is passed to the mock, you don't create a
parameter filter on the mock. After all, you want it to match anything. And you create a context
block to ensure this mock does not conflict with the others you created with parameter filters.
describe 'Find-SqlServicePack' {
    mock 'Find-SqlServerServicePackInstaller' {
        @{ Name = 'installername2012' }
    } -ParameterFilter { $Version -eq '2012' }

    mock 'Find-SqlServerServicePackInstaller' {
        @{ Name = 'installername2014' }
    } -ParameterFilter { $Version -eq '2014' }

    it 'finds the expected installer name for SQL Server 2012' {
        $result = Find-SqlServerServicePack -Version 2012
        $result.Name | should -Be 'installername2012'
    }

    it 'finds the expected installer name for SQL Server 2014' {
        $result = Find-SqlServerServicePack -Version 2014
        $result.Name | should -Be 'installername2014'
    }

    context 'when the SQL installer cannot be found' {
        mock 'Find-SqlServerServicePackInstaller'

        it 'should return nothing' {
            Find-SqlServerServicePack -Version 2014 | should -BeNullOrEmpty
            Find-SqlServerServicePack -Version 2012 | should -BeNullOrEmpty
        }
    }
}

You would expect the mock created in the when the SQL installer cannot be found context block
to supersede the other mocks, thus returning nothing and making the assertion pass. But when you
run the test, this is what you get:

Describing Find-SqlServicePack
  [+] finds the expected installer name for SQL Server 2012 71ms
  [+] finds the expected installer name for SQL Server 2014 45ms

  Context when the SQL installer cannot be found
    [-] should return nothing 71ms
      Expected: value to be empty but it was {System.Collections.Hashtable}
      at <ScriptBlock>, : line 50
      50: Find-SqlServicePack -Version 2014 | should -Benullorempty

What gives? It’s ignoring the mock you created in the context block! It seems like it’s being ignored
because the mocks defined in the parent describe block are interfering. To fix this, you could re-
design your test, but it’s already perfect except for this one situation. Instead, create a parameter
filter on this mock to always match. What better way to do that than just checking to ensure $true
is always $true?

context 'when the SQL installer cannot be found' {

    mock 'Find-SqlServerServicePackInstaller' {} -ParameterFilter { $true }

    it 'should return nothing' {
        Find-SqlServerServicePack -Version 2014 | should -BeNullOrEmpty
        Find-SqlServerServicePack -Version 2012 | should -BeNullOrEmpty
    }
}

Rerun the test, and you will notice it passes and works as expected.
Parameter filters can complicate things dramatically, especially when you’re working with a lot of
mocks. The key is to keep “inheritance” to a minimum and, if necessary, use a “fake” parameter filter
to ensure the mock closest to the command is invoked.

Summary
This was quite the whirlwind chapter! Mocks are a topic that can leave your head spinning. If you
haven't digested everything in this chapter and truly want to understand mocking, go back and
slowly work through each example scenario. This book is full of examples for good reason: they are
the best way to understand a subject, especially mocks!
Coming from experience, advanced mocking can take a bit to get your head around, but if you keep
at it, you’ll eventually begin to see the patterns of what works and what doesn’t. Hopefully, through
all of the gotchas I’ve put forth in this chapter, your learning experience will be a lot smoother than
mine!
Mocking the Unmockable
So far, you’ve learned what mocks are and how they are used in Pester. You know the basics of
mocking: create a mock for a function call and return some predetermined output. But in the real
world, it isn’t always that simple. Sometimes you’re forced to think outside the box and implement
code in your tests that isn’t necessarily “standard.” You’re compelled to come up with new ways, far
from the documentation, that solve the immediate problems until, one day, the functionality you
need is added to Pester.
This chapter is about times when you really need to mock something, but are unable to refactor the
code to satisfy Pester’s rules for mocking.

.NET Methods
As you’ve seen, Pester loves PowerShell commands. Pester assumes that if you’re writing Pow-
erShell, you’re going to use functions, cmdlets, and other commands as core components of your
scripts. As a result, Pester’s mocking capability is purely focused on commands.
When you’re writing PowerShell, it’s usually a good idea to separate the code into as many small
commands as possible. It’s just good practice. But sometimes, you may run across instances that do
not fit that mold. For whatever reason, sometimes code has to contain one or more calls to .NET
methods.
When someone comes to me with code containing a .NET method call, I typically recommend
refactoring the code to use a function instead. Calling a method on a .NET object may be a perfectly
suitable way to read the contents of a text file in C#, but it isn't how you do it in
PowerShell. PowerShell has its own commands for that sort of thing. Why not use Get-Content?
That's why it's there!
However, there are times when you must use some kind of .NET method, and PowerShell doesn’t
offer a good alternative. In this case, you’ve got two options:

• Wrap the .NET method call in a helper function, essentially turning the method into a
PowerShell command
• “Replace” the method by using a ScriptMethod which is attached to an object

The decision of which approach to use comes down to one condition: do you need to assert
the mock being created? In other words, do you need to ensure that the mock ran, or did not run,
as part of your tests? Pester does not have a way to assert a mock created by using option #2 (the
ScriptMethod approach); however, it does have that ability for option #1.

If you simply need to script over the code that contains a .NET method call, using a ScriptMethod
works, but does not allow assertions of any kind.
Why might you need to “script over” a .NET method call but not assert it? You might have a time
when you need to allow other code to call a .NET method and get a specific result back, but you
might not actually care whether the code calls the method at all. In that case, the ScriptMethod
approach can work well. It lets you spit out a known, predictable result, which is one of the reasons
you mock something in the first place.
Let’s take a real-world example of a common task that’s performed in PowerShell, for which
the shell doesn’t have a native command. Sometimes it’s necessary to read a password that’s
part of a System.Management.Automation.PSCredential object. This can be done by using the
GetNetworkCredential() method and then reading the Password property returned from that as
shown below.

$Credential = Get-Credential
$Credential.GetNetworkCredential().Password

There’s currently no cmdlet or “PowerShell way” to invoke this method. What if you’d like to test
what happens when GetNetworkCredential() returns a password of 123456? You can’t do this in
Pester, since there’s no built-in way to mock a .NET method.

Creating Wrapper Functions


A wrapper function is a function in PowerShell that intentionally abstracts the code being executed
inside of it. It is a function that always calls another command or routine to accomplish some task.
In this case, a wrapper function only exists to call the .NET method.
Let’s first try to wrap this method call in a function. You can do that by creating a super-simple
function that accepts a Credential parameter as shown below.

function Get-Password {
param(
[pscredential]$Credential
)
$Credential.GetNetworkCredential().Password
}

You can now call Get-Password in the main script instead of calling the GetNetworkCredential()
method directly.
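Here's a minimal sketch of what that might look like; everything around Get-Password (the variable names and the Write-Verbose message) is made up for illustration:

## Hypothetical "main" script code. The .NET call now lives behind Get-Password,
## which Pester can mock like any other command.
$Credential = Get-Credential
$plainTextPassword = Get-Password -Credential $Credential
Write-Verbose "Retrieved a $($plainTextPassword.Length)-character password"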
Now that this method call is in a function, you can create a mock for that function. You can then
perform all of the tasks on this mock that you're used to, including asserting that the mock was called.

mock 'Get-Password' { '123456' }

You’re now able to add parameter filters to these mocks, and perform assertions on them as usual.
Below, you assert that Get-Password was called when the credential passed to it uses the username
of abertram.

mock 'Get-Password' { 'GregShieldsIsMyHero.HesSoDreamy' }


$assmParams = @{
CommandName = 'Get-Password'
ParameterFilter = { $Credential.UserName -eq 'abertram' }
}
Assert-MockCalled @assmParams

“Overwriting” .NET Methods with a ScriptMethod


As you saw earlier, wrapping a .NET method call in a helper function is an excellent way to refactor
code to be more testable, and it's my recommended way to "mock" .NET methods. But sometimes
that isn't possible. When it isn't, you can always fall back to "overwriting" the method with
your own, using the Add-Member cmdlet. However, you're not able to assert anything using
this technique. That's why "mock" belongs in quotes: you're not actually mocking
anything in the normal Pester sense. You're really just overwriting a method on an object with a
method of your own.
This technique works thanks to PowerShell’s ETS, or Extensible Type System. Basically, the ETS
lets you add script-based properties and methods to existing .NET objects. The additions aren’t
permanent; they only affect a given instance of the class, not the underlying class definition itself.
And the extension only exists for the current PowerShell session.
You see the ETS in action all the time in PowerShell, adding properties like Name to Service objects
so you can use that instead of the less-consistent ServiceName property with which Service objects
are born.
But there’s a trick to the ETS: if you try to add a member, like a ScriptProperty or ScriptMethod, that
already exists, you will “overwrite” the existing member with your new one. The original member
still exists, it’s just hidden in PowerShell for the moment, temporarily replaced by whatever you
specified.
Using the previous example of the GetNetworkCredential() method, let’s say that you don’t
necessarily want to test different passwords via mocking. Instead, you just want your script to
finish completely without throwing an exception when that GetNetworkCredential() method is
called. You can do this by using the Add-Member cmdlet to add a ScriptMethod. To demonstrate this,
you’ll create a credential using Get-Credential, and see how the GetNetworkCredential() method
looks:

PS> $Credential | Get-Member

   TypeName: System.Management.Automation.PSCredential

Name                 MemberType Definition
----                 ---------- ----------
Equals               Method     bool Equals(System.Object obj)
GetHashCode          Method     int GetHashCode()
GetNetworkCredential Method     System.Net.NetworkCredential GetNetworkCredential()
GetObjectData        Method     void GetObjectData(System.Runtime.Serialization.SerializationInfo info, System.Runti...
GetType              Method     type GetType()
ToString             Method     string ToString()
Password             Property   securestring Password {get;}
UserName             Property   string UserName {get;}

You can see the GetNetworkCredential() method and see that it returns a System.Net.NetworkCredential
object. You can also see the object returned by GetNetworkCredential() is indeed of the type
System.Net.NetworkCredential as seen in the example below.

PS> $Credential.GetNetworkCredential() | Get-Member

   TypeName: System.Net.NetworkCredential

Name           MemberType Definition
----           ---------- ----------
Equals         Method     bool Equals(System.Object obj)
GetCredential  Method     System.Net.NetworkCredential GetCredential(uri uri, string authType), System.Net.NetworkCr...
GetHashCode    Method     int GetHashCode()
GetType        Method     type GetType()
ToString       Method     string ToString()
Domain         Property   string Domain {get;set;}
Password       Property   string Password {get;set;}
SecurePassword Property   securestring SecurePassword {get;set;}
UserName       Property   string UserName {get;set;}

Let’s now “overwrite” this method with your own script method.

$addMemberParams = @{
Type = 'ScriptMethod'
Name = 'GetNetworkCredential'
Value = { 'ourcustomoutput' }
Force = $true
}
$Credential | Add-Member @addMemberParams

Let’s see how the credential looks now:

PS> $Credential | Get-Member

   TypeName: System.Management.Automation.PSCredential

Name                 MemberType   Definition
----                 ----------   ----------
Equals               Method       bool Equals(System.Object obj)
GetHashCode          Method       int GetHashCode()
GetObjectData        Method       void GetObjectData(System.Runtime.Serialization.SerializationInfo info, System.Run...
GetType              Method       type GetType()
ToString             Method       string ToString()
Password             Property     securestring Password {get;}
UserName             Property     string UserName {get;}
GetNetworkCredential ScriptMethod System.Object GetNetworkCredential();

Note that GetNetworkCredential() is a ScriptMethod now and returns an object type of System.Object.
You’ve “replaced” it in this particular object instance. Since the GetNetworkCredential() method is
now under your control, invoking the example that calls GetNetworkCredential() will return what
you’ve specified, rather than what would normally come out.
Now that you’ve got that method under control, you can use this technique to manipulate the output
of a function. Perhaps you have a function that creates a credential called New-Credential, as shown
below. It returns a System.Management.Automation.PSCredential object as you’d expect.

function New-Credential {
param(
$UserName,
$Password
)
$password = ConvertTo-SecureString $Password -AsPlainText -Force
New-Object System.Management.Automation.PSCredential ($UserName, $password)
}

Now suppose the function call below is in a script you're writing tests for.

New-Credential -UserName 'FOO' -Password 'foo'

Using this method replacement in your test script, you can allow your script to execute because the
credential will still return a Password property, but a Password property you've "mocked".

describe 'some test name here' {

    mock 'New-Credential' {
        $password = ConvertTo-SecureString 'replaced' -AsPlainText -Force
        $cred = New-Object System.Management.Automation.PSCredential ('userhere', $password)

        $addMemberParams = @{
            Type  = 'ScriptMethod'
            Name  = 'GetNetworkCredential'
            Value = { @{ Password = 'ourcustomoutput' } }
            Force = $true
        }
        $cred | Add-Member @addMemberParams

        ## Return the modified credential so the code under test receives it
        $cred
    }

    it 'does that thing with a Credential object' {
        ## Invoke the script here.
    }
}

This method is cruder than simply wrapping the method call in a helper function. I suggest trying
first to create a wrapper function, and as a last resort using this “overwriting” tactic.

Using “Stubbed” Commands


When Pester mocks a command, it essentially replaces the original command with a user-defined
one. Why couldn't you do the same thing on your own and not use a Pester mock at all? You
can, but only as a last resort. Why would you do this in the first place?

• When the command to be mocked does not exist on the testing machine
• When Pester cannot mock the command

There are times during testing when you may have to forego Pester’s mocking feature. One of the
main reasons is when you find yourself in a situation where modules that will be on a target system
are not on the system where you perform the testing. This scenario is typical in build pipelines
where a build server runs tests but the actual code runs on production systems. Due to Pester’s
architecture, if you attempt to mock a command that’s not available (either not in a module or not in
scope somehow), it will not work. You’ll receive an error that looks like: CommandNotFoundException:
Could not find Command FunctionDoesNotExist.

The best approach is to get the modules onto that system. It’s important to replicate the test and
production systems as closely as possible. However, I realize edge cases do exist; and rather than
just throwing up your hands, there is a solution: “trick” Pester and replace the original commands
with “stubbed” commands.
Stubbed commands are essentially placeholder functions with the same names as the real commands
but that do not contain any actual code. Stubbed commands are just shells and exist only to trick
Pester into thinking the real commands exist on the testing system. When creating these stubbed
commands, you create a module containing all of the commands that you intend to mock.
Let’s say you’ve got a custom module that contains a few different functions. This module’s functions
are called from scripts on many different users’ systems. As part of your deployment process, that
module is either copied to each system it’s being used on, or the functions are exposed via implicit
remoting. Regardless, the real functions are always available on each end user’s system.
However, you may have a build server or multiple development systems that, for whatever reason,
do not have the real modules installed. This is not the best behavior, but who am I to judge? You’d
like to run tests for this module on these systems that will never have the real module installed. In
that case, you need to trick Pester into thinking those real modules are available.
To demonstrate, let’s say you have a module called AcmeModule with functions that look like the
examples below. In the Do-ThisThing function, you have a reference to Check-ThatThing, which
dictates whether or not something is done.

function Do-ThisThing {
    param()

    if (Check-ThatThing) {
        Write-Output 'I did this thing'
    } else {
        Write-Output 'I did not do this thing'
    }
}

function Check-ThatThing {
    param()

    if (Test-Path -Path 'C:\ThatFile.txt') {
        $true
    } else {
        $false
    }
}

Next, write a set of Pester tests for the Do-ThisThing function, creating a describe block for it and
an it block for each kind of output. To control the output, you have to control what Check-ThatThing
returns, because whether it returns $true or $false dictates what Do-ThisThing returns. So you need
to mock the Check-ThatThing function.

describe 'Do-ThisThing' {
mock 'Check-ThatThing' {
$true
}

it 'returns "I did this thing" if it was supposed to' {


Do-ThisThing | should -Be 'I did this thing'
}

mock 'Check-ThatThing' {
$false
}

it 'returns "I did not do this thing" if it was not supposed to' {
Do-ThisThing | should -Be 'I did not do this thing'
}
}

When the AcmeModule module is installed on the test system, it works as expected.

PS> Get-Command -Module AcmeModule

CommandType Name            Version Source
----------- ----            ------- ------
Function    Check-ThatThing 0.0     AcmeModule
Function    Do-ThisThing    0.0     AcmeModule

PS> Invoke-Pester 'C:\Program Files\WindowsPowerShell\Modules\AcmeModule.Tests.ps1'

Describing Do-ThisThing
[+] returns "I did this thing" if it was supposed to 364ms
[+] returns "I did not do this thing" if it was not supposed to 89ms

But, when the module does not exist, you run into that Could not find Command error message:

Describing Do-ThisThing
[-] Error occurred in Describe block 696ms
CommandNotFoundException: Could not find Command Check-ThatThing at Validate-Comma\
nd,
C:\Program Files\WindowsPowerShell\Modules\Pester\3.4.3\Functions\Mock.ps1: line 8\
01 at Mock,
C:\Program Files\WindowsPowerShell\Modules\Pester\3.4.3\Functions\Mock.ps1: line 1\
68 at <ScriptBlock>, <No file>: line 3

To remedy this, let’s create a module with “stubbed” commands containing only those functions
that you intend to mock. These commands have the same names as the real ones. You’ll still call it
AcmeModule with the same function names, but you’ll replace the “real” code inside with a simple
throw statement that lets me know you ran the “stubbed” functions.

function Check-ThatThing {
    param()

    throw 'Check-ThatThing'
}

You then add this fake module to the module path. PowerShell sees the same function that you intend
to mock, but the code inside is a simple throw statement.
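One way to get the stub onto the module path might look like the sketch below; the folder path is only an assumption, and any folder listed in $env:PSModulePath will do:

## Save the stub as a module named AcmeModule somewhere PowerShell can discover it.
$stubRoot = 'C:\Program Files\WindowsPowerShell\Modules\AcmeModule'
$null = New-Item -Path $stubRoot -ItemType Directory -Force

Set-Content -Path (Join-Path $stubRoot 'AcmeModule.psm1') -Value @'
function Check-ThatThing {
    param()

    throw 'Check-ThatThing'
}
'@

Import-Module -Name AcmeModule -Force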

PS> Get-Command -Module AcmeModule

CommandType Name            Version Source
----------- ----            ------- ------
Function    Check-ThatThing 0.0     AcmeModule

PS> Get-Content function:Check-ThatThing

    param()

    throw 'Check-ThatThing'

The test described above will now complete as expected.


It’s important to recognize why this technique works. It’s about how mocking works, in general.
When Pester creates a mock, it’s just reading that command, gathering up all of its parameters
and then ignoring everything inside of it. It doesn’t matter what’s inside at this point because once a
mock is created, Pester doesn’t care. Pester just has to find that command somewhere on the system.
Command stubbing is an infrequently-used technique to get around those times when you need to
mock a command, and it’s not available on the system where the tests will run.

Asserting Mocks “Transitively”


This technique is not for the faint of heart and is one that I am not proud to share. However, there
are times, as you'll see, when it is necessary. This method involves asserting a mock's
input downstream to verify that a command upstream received the correct input. That may sound
confusing, so let's go through a real-world example.
Get-Credential is a command that interactively prompts for input. Interactivity and automated
testing don’t get along, so this becomes a problem. Due to how Get-Credential is built, Pester has
no way to mock this command. When mocked, it still prompts for input. What if you need to assert
that Get-Credential was called? You have to get creative.
Suppose you have a couple of functions that look like this:

function Do-Thing {
    param($Username)
}

function foo {
    $cred = Get-Credential

    Do-Thing -Username $cred.Username
}

You need to confirm that Get-Credential was called, so you'll create a stubbed command making
Get-Credential return a PSCredential object as it normally would.

function Get-Credential {
    $cred = New-MockObject -Type 'System.Management.Automation.PSCredential'
    $cred | Add-Member -MemberType ScriptMethod -Name 'GetNetworkCredential' -Force -Value { [pscustomobject]@{ Password = 'pwhere' } }
    $cred | Add-Member -MemberType NoteProperty -Name 'UserName' -Force -Value 'getcreduser' -PassThru
}

This is a start, but you now need to assert that Get-Credential was invoked. As-is, there's no way to do
this directly. Instead, you can pay close attention to what the stubbed Get-Credential returns and assert
that the value it returns is later passed to another command.
Notice above that Get-Credential returns getcreduser as the UserName property. This is important:
it's a unique value that's only returned by this stub. Since you know what Get-Credential
returns, you can verify that a downstream command received that value through a parameter filter.
Here is the solution:

describe 'foo' {
    ## Create the mock here to assert later
    mock 'Do-Thing'

    it 'should prompt user for credential' {

        ## Create a stubbed command with a specific property
        function Get-Credential {
            $cred = New-MockObject -Type 'System.Management.Automation.PSCredential'
            $cred | Add-Member -MemberType NoteProperty -Name 'UserName' -Force -Value 'getcreduser' -PassThru
        }

        ## Execute the function
        foo

        ## Assert the command DOWNSTREAM to confirm that Get-Credential was called.
        ## If Do-Thing is not called with that username, then we know Get-Credential was not
        ## called. If it was not, there would be no credential at all!
        $assMParams = @{
            CommandName     = 'Do-Thing'
            Times           = 1
            Exactly         = $true
            Scope           = 'It'
            ParameterFilter = {
                $PSBoundParameters.Username -eq 'getcreduser'
            }
        }

        Assert-MockCalled @assMParams
    }
}

Using this option is a last resort because it’s not intuitive and will take a little bit to decipher.
Sometimes, though, it’s necessary.

Summary
.NET methods are perhaps the trickiest beasts to mock, although I hope I've given you some good
ideas on how to do so. The whole point of PowerShell is to wrap an admin-friendlier, more
consistent layer around all that .NET. When you run across something in .NET for which a
PowerShell version doesn't yet exist, why not create that wrapper yourself? Doing so makes the
.NET code easier to mock in Pester and gives you a more PowerShell-ish way of accessing that bit of .NET
functionality.
Improving Code Coverage
You got a glimpse of what the term code coverage means in the Code Coverage chapter in Part I.
Let’s now dig deeper into that concept and focus on how you can improve that code coverage.
Before you get too far in this chapter, let me address the elephant in the room that many developers
ponder over. If you’re not at 100% code coverage, should you worry? No. It is unrealistic to have a
goal of 100% code coverage. Sure, it’d be nice to see that “perfect” score when your code coverage
tests run, but it’s simply not possible to have tests that cover all situations your code may be run
under. With that out of the way, let’s continue.
In this chapter, we’ll look at how to improve code coverage through examples and talking through
some of the decisions that will be necessary to make when setting out on this journey. We’ll do this
by starting with a real function called Get-MachineInfo available in the book’s GitHub repository¹⁰.
This function queries a remote Windows computer and returns information such and disk space,
operating system, memory, CPU, and more. The before and after tests are available here.
To make this example as real as possible, you’re also starting with a set of tests. When improving
code coverage, it’s common to have a set of tests already, to begin with. Some code coverage has
already been met. You’re improving it! It’s worth noting that these tests are typical of what someone
might write when they’re just getting started in Pester. These are very much focused on what the
function does. There’s nothing wrong with that, but, as we’ll see, you can do a lot more. Examining
code coverage is an excellent place to start figuring out what more you can do.

Step #1: Getting a Baseline


You’ve downloaded the Get-MachineInfo.ps1 and Get-MachineInfo-start.Tests.ps1 files from GitHub
and have saved them to the C:. To get a baseline, let’s first see how much code is run when the tests
are executed by using the CodeCoverage parameter on Invoke-Pester. Remember that code coverage
doesn’t mean how much code is tested but rather how much code is executed as your tests are run.

¹⁰https://github.com/adbertram/pesterbookcode

PS> Invoke-Pester -Path C:\Get-MachineInfo.Tests.ps1 -CodeCoverage C:\Get-MachineInfo.ps1
Executing all tests in 'C:\Get-MachineInfo.Tests.ps1'

Executing script C:\Get-MachineInfo.Tests.ps1


Describing Get-MachineInfo
[+] should return object over CIM 1.71s
[+] should not allow WMI as protocol 79ms
[+] should write error log 2.56s
Tests completed in 4.92s
Tests Passed: 3, Failed: 0, Skipped: 0, Pending: 0, Inconclusive: 0

Code coverage report:


Covered 80.00% of 65 analyzed Commands in 1 File.
Missed commands:
File Class Function Line Command
---- ----- -------- ---- -------
Get-MachineInfo.ps1 Get-MachineInfo 59 $option = New-CimSessionOption -Proto\
col Dcom
Get-MachineInfo.ps1 Get-MachineInfo 135 if ($Protocol -eq 'Dcom') {...
Get-MachineInfo.ps1 Get-MachineInfo 136 $newprotocol = 'Wsman'
Get-MachineInfo.ps1 Get-MachineInfo 138 $newprotocol = 'Dcom'
Get-MachineInfo.ps1 Get-MachineInfo 141 Write-Verbose "Trying again with $new\
protocol"
Get-MachineInfo.ps1 Get-MachineInfo 142 $params = @{...
Get-MachineInfo.ps1 Get-MachineInfo 143 'ComputerName' = $Computer
Get-MachineInfo.ps1 Get-MachineInfo 144 'Protocol' = $newprotocol
Get-MachineInfo.ps1 Get-MachineInfo 145 'ProtocolFallback' = $False
Get-MachineInfo.ps1 Get-MachineInfo 148 if ($PSBoundParameters.ContainsKey('L\
ogFailuresToPath')){...
Get-MachineInfo.ps1 Get-MachineInfo 149 $params += @{'LogFailuresToPath' =$Lo\
gFailuresToPath }
Get-MachineInfo.ps1 Get-MachineInfo 149 'LogFailuresToPath' = $LogFailuresToP\
ath
Get-MachineInfo.ps1 Get-MachineInfo 152 Get-MachineInfo @params

On the surface, 80% code coverage seems pretty good, but remember: code coverage does not mean test
coverage. Code coverage is just a measure of the amount of code that was executed when the tests
ran. You could technically invoke Get-MachineInfo using every parameter to match all conditions
in the function and achieve 100% code coverage without a single real test.
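Here's a hedged sketch of that idea: an it block that merely runs the function and asserts nothing still executes code, so the coverage number climbs even though nothing is verified.

describe 'Get-MachineInfo' {
    it 'pads the coverage number without testing anything' {
        ## No should assertion anywhere; this only exercises code paths.
        Get-MachineInfo -ComputerName FOO -Protocol Dcom
    }
}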
Rather than focusing on code coverage, you’re going to break this function down piece by piece and
analyze both how you could write it better and how to build better tests (which will increase code
coverage naturally).

The first step when creating new tests or refactoring existing tests is to eyeball what's not testable.
In Pester, you can't easily test .NET methods, so it's best to wrap those in their own helper functions.
Luckily, there are none here, which means the function looks fine as-is and no significant refactoring
is needed.

Step #2: Build the Test Framework


The first step in improving code coverage is to take in the whole script and scaffold out the tests
you believe should be written. By scaffolding out a rough framework of describe and it blocks, you
can begin to piece together what the code is supposed to do.
Because you’re not focusing on any order at this point, create a single describe block and a whole
bunch of it blocks. This is to allow you to lay out all the tests that need to happen. You might separate
these into multiple context blocks later or not; it all depends on what you come across as you begin
filling in the test code in each of the it blocks.

## Dot source the script to bring the function into scope.


$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path) -replace '.Tests.', '.'
. "$here\$sut"

describe 'Get-MachineInfo' {

    it 'allows multiple computer names to be passed to it' {}

    it 'only allows the strings "Wsman" and "Dcom" to be used for the Protocol parameter' {}

    it 'should create a DCOM CIM session when the DCOM protocol is specified' {}

    it 'should create a WSMAN CIM session when the WSMAN protocol is specified' {}

    it 'should create a CIM session for each computer provided' {}

    it 'should query the Win32_ComputerSystem CIM class on each computer provided' {}

    it 'should query the Win32_LogicalDisk CIM class on each computer provided' {}

    it 'should query the Win32_Processor CIM class on each computer provided' {}

    it 'should only return the first instance of the Win32_LogicalDisk CIM class on each computer provided' {}

    it 'when an exception is thrown when querying a computer, and ProtocolFallBack is used, and WSMAN is used as the Protocol, it should call itself using the DCOM protocol' {}

    it 'when an exception is thrown when querying a computer, and ProtocolFallBack is used, and DCOM is used as the Protocol, it should call itself using the WSMAN protocol' {}

    it 'when an exception is thrown when querying a computer, and ProtocolFallBack and LogFailuresToPath are used, it should call itself using the LogFailuresToPath parameter' {}

    it 'when an exception is thrown when querying a computer and the parameters ProtocolFallBack and LogFailuresToPath are used, it writes the computer name to a file' {}

    it 'should return a single pscustomobject for each computer provided' {}

    it 'should return a pscustomobject with expected property names for each computer provided' {}

    it 'when an exception is thrown when querying a computer, it should write a warning to the console for each computer' {}
}

This test suite is much larger than the previous one. Even though code coverage was 80% when
running the first set of tests for this function, it's evident that a lot of test cases were
missed; far more than the remaining 20% of the code would suggest. And this isn't even all of the tests
you could create! This is just a set of tests to get started, created by following the various code paths
of the function from top to bottom.

Step #3: Create the Mocks


Once you’ve scaffolded out some basic tests, it’s time to start mocking. Since you only have a single
describe block in this test suite (since you only have one function), you’ve added mocks at the
describe block level. These mocks were created by looking through the code at all of the external
command references. Once I found all of those, I then inspected what those commands output and
mocked that object and any properties referenced on that output object later in the code.
If the call to those external commands in Get-MachineInfo does not assign the output of the function
being mocked to a variable, you have the mock return nothing. Otherwise, you have the mock return
the bare minimum of properties to be used later on in the code.

mock 'New-CimSessionOption' {
    New-MockObject -Type 'Microsoft.Management.Infrastructure.Options.WSManSessionOptions'
}

mock 'New-CimSession' {
    New-MockObject -Type 'Microsoft.Management.Infrastructure.CimSession'
}

mock 'Get-CimInstance' {
    @{
        Manufacturer              = 'manhere'
        Model                     = 'modelhere'
        NumberOfProcessors        = 'numberofprocshere'
        NumberOfLogicalProcessors = 'numlogprocshere'
        TotalPhysicalMemory       = '1000'
    }
} -ParameterFilter { $PSBoundParameters.ClassName -eq 'Win32_ComputerSystem' }

mock 'Get-CimInstance' {
    @{
        Freespace               = 'freespacehere'
        SystemDrive             = 'C:'
        Version                 = 'ver'
        ServicePackMajorVersion = 'spver'
        BuildNumber             = 'buildnum'
    }
} -ParameterFilter { $PSBoundParameters.ClassName -eq 'Win32_OperatingSystem' }

mock 'Get-CimInstance' {
    @{
        Freespace = 'freespacehere'
    }
} -ParameterFilter { $PSBoundParameters.ClassName -eq 'Win32_LogicalDisk' }

mock 'Get-CimInstance' {
    @{ AddressWidth = 'Addrwidth' }
    @{ AddressWidth = 'Addrwidth2' }
} -ParameterFilter { $PSBoundParameters.ClassName -eq 'Win32_Processor' }

mock 'Remove-CimSession'
mock 'Out-File'
mock 'Write-Verbose'

Step #4: Know the “Before” Code Coverage


If you run the code coverage report now, it shows 0% because you don’t have any invocations of the
Get-MachineInfo function inside of the test script. You have a describe block defined with a bunch
of mocks and some blank it blocks in it.
Now that you’ve got all of the it blocks created, and all of the mocks defined, let’s begin to chip
away at this challenge. You’ll fill in some code for the should create a DCOM CIM session when
the DCOM protocol is specified test as shown below.

it 'should create a DCOM CIM session when the DCOM protocol is specified' {
    $result = Get-MachineInfo -ComputerName FOO -Protocol Dcom

    $assMParams = @{
        CommandName     = 'New-CimSessionOption'
        Times           = 1
        Exactly         = $true
        Scope           = 'It'
        ParameterFilter = { $Protocol -eq 'Dcom' }
    }

    Assert-MockCalled @assMParams
}

Run Invoke-Pester again using the CodeCoverage parameter. The should create a DCOM CIM
session when the DCOM protocol is specified test should pass now, and the code coverage report
should look like below:

Code coverage report:


Covered 32.31% of 65 analyzed Commands in 1 File.
Missed commands:

File Class Function Line Command


---- ----- -------- ---- -------
Get-MachineInfo.ps1 Get-MachineInfo 61 $option = New-CimSessionOption -Proto\
col Wsman
Get-MachineInfo.ps1 Get-MachineInfo 90 $drive_params = @{...
Get-MachineInfo.ps1 Get-MachineInfo 91 'ClassName' = 'Win32_LogicalDisk'
Get-MachineInfo.ps1 Get-MachineInfo 92 'Filter' = "DeviceId='$sysdrive'"
Get-MachineInfo.ps1 Get-MachineInfo 93 'CimSession' = $session
Get-MachineInfo.ps1 Get-MachineInfo 96 $drive = Get-CimInstance @drive_params
Get-MachineInfo.ps1 Get-MachineInfo 98 $proc_params = @{...
Get-MachineInfo.ps1 Get-MachineInfo 99 'ClassName' = 'Win32_Processor'
Get-MachineInfo.ps1 Get-MachineInfo 100 'CimSession' = $session
Get-MachineInfo.ps1 Get-MachineInfo 103 $proc = Get-CimInstance @proc_params \
| Select-Object -first 1
Get-MachineInfo.ps1 Get-MachineInfo 103 $proc = Get-CimInstance @proc_params \
| Select-Object -first 1
Get-MachineInfo.ps1 Get-MachineInfo 105 Write-Verbose "Closing session to $co\
mputer"
Get-MachineInfo.ps1 Get-MachineInfo 107 $session
Get-MachineInfo.ps1 Get-MachineInfo 107 Remove-CimSession
Get-MachineInfo.ps1 Get-MachineInfo 109 Write-Verbose "Outputting for $comput\
er"
Get-MachineInfo.ps1 Get-MachineInfo 110 $obj = [pscustomobject]@{...
Get-MachineInfo.ps1 Get-MachineInfo 111 'ComputerName' = $computer
Get-MachineInfo.ps1 Get-MachineInfo 112 'OSVersion' = $os.Version
Get-MachineInfo.ps1 Get-MachineInfo 113 'SPVersion' = $os.ServicePackMajorVer\
sion
Get-MachineInfo.ps1 Get-MachineInfo 114 'OSBuild' = $os.BuildNumber
Get-MachineInfo.ps1 Get-MachineInfo 115 'Manufacturer' = $cs.Manufacturer
Get-MachineInfo.ps1 Get-MachineInfo 116 'Model' = $cs.Model
Get-MachineInfo.ps1 Get-MachineInfo 117 'Procs' = $cs.NumberOfProcessors
Get-MachineInfo.ps1 Get-MachineInfo 118 'Cores' = $cs.NumberOfLogicalProcessors
Get-MachineInfo.ps1 Get-MachineInfo 119 'RAM' = ($cs.TotalPhysicalMemory / 1G\
B)
Get-MachineInfo.ps1 Get-MachineInfo 119 $cs.TotalPhysicalMemory / 1GB
Get-MachineInfo.ps1 Get-MachineInfo 120 'Arch' = $proc.AddressWidth
Get-MachineInfo.ps1 Get-MachineInfo 121 'SysDriveFreeSpace' = $drive.FreeSpace
Get-MachineInfo.ps1 Get-MachineInfo 124 Write-Output $obj
Get-MachineInfo.ps1 Get-MachineInfo 135 if ($Protocol -eq 'Dcom') {...
Get-MachineInfo.ps1 Get-MachineInfo 136 $newprotocol = 'Wsman'
Get-MachineInfo.ps1 Get-MachineInfo 138 $newprotocol = 'Dcom'
Get-MachineInfo.ps1 Get-MachineInfo 141 Write-Verbose "Trying again with $new\
protocol"
Get-MachineInfo.ps1 Get-MachineInfo 142 $params = @{...
Get-MachineInfo.ps1 Get-MachineInfo 143 'ComputerName' = $Computer
Get-MachineInfo.ps1 Get-MachineInfo 144 'Protocol' = $newprotocol
Get-MachineInfo.ps1 Get-MachineInfo 145 'ProtocolFallback' = $False
Get-MachineInfo.ps1 Get-MachineInfo 148 if ($PSBoundParameters.ContainsKey('L\
ogFailuresToPath')){...
Get-MachineInfo.ps1 Get-MachineInfo 149 $params += @{'LogFailuresToPath' =$Lo\
gFailuresToPath }
Get-MachineInfo.ps1 Get-MachineInfo 149 'LogFailuresToPath' = $LogFailuresToP\
ath
Get-MachineInfo.ps1 Get-MachineInfo 152 Get-MachineInfo @params
Get-MachineInfo.ps1 Get-MachineInfo 158 Write-Verbose "Logging to $LogFailure\
sToPath"
Get-MachineInfo.ps1 Get-MachineInfo 159 $computer
Get-MachineInfo.ps1 Get-MachineInfo 159 Out-File $LogFailuresToPath -Append

By adding just a single test for a tiny piece of code, you’ve already achieved 32.31% code coverage.
Also, notice that it says you missed line 61 ($option = New-CimSessionOption -Protocol Wsman),
but it says nothing about line 59 ($option = New-CimSessionOption -Protocol Dcom). It doesn't
mention line 59 because that's the line covered by the test you just built. You created a mock that
forced that particular line to execute and not the other one, which limited this test to the single
condition where Dcom is passed to the Protocol parameter.

Step #5: Build all Tests


Since you’re well beyond Pester 101 at this point, you’ll now fill in all of the assertions for each of
the it blocks. The complete test script is available in the book’s GitHub repo¹¹. Once this is done,
¹¹https://github.com/adbertram/pesterbookcode

Once that's done, you should have 100% code coverage. You'll then run Invoke-Pester with the
CodeCoverage parameter one more time to test the tests and confirm that all of the code in the
function is executed.

Step #6: Re-check Code Coverage


The tests did change slightly from the original template; they were reorganized using the techniques
in this book to work as required. Now run Invoke-Pester again with the CodeCoverage parameter
and see what the code coverage report looks like.

Code coverage report:


Covered 95.38% of 65 analyzed Commands in 1 File.
Missed commands:

File Class Function Line Command


---- ----- -------- ---- -------
Get-MachineInfo.ps1 Get-MachineInfo 137 $newprotocol = 'Wsman'
Get-MachineInfo.ps1 Get-MachineInfo 150 $params += @{'LogFailuresToPath' =$Lo\
gFailuresToPath }
Get-MachineInfo.ps1 Get-MachineInfo 150 'LogFailuresToPath' = $LogFailuresToP\
ath

I’ve been using Pester for a long time and I can’t even get full code coverage on the first try! I created
all of these tests on intuition alone, and I thought that I’d have 100% code coverage, but I guess not!
Let’s take a look to see if I care about not getting 100% code coverage.
Look over the code coverage output above. Line 137 suggests the code never assigned
Wsman to the $newProtocol variable. Looking at the code for Get-MachineInfo, this happened either
because:

• you didn't use the ProtocolFallback parameter or the Protocol parameter value when calling Get-MachineInfo (line 135)
• if the Protocol parameter was used, it wasn't equal to Dcom, as checked on line 136
• the function did not throw an exception and end up in the catch block on line 127 at all

You know that the ProtocolFallback parameter was used multiple times in the tests, so it can't be that,
and there were multiple times when the Dcom value was used for the Protocol parameter. Perhaps
the function never threw an exception and never got into the catch block? That can't be the case
either, because you have passing tests for code inside of the catch block, which includes all of the
tests in the when the function calls itself context block.
To double-check, add a small Write-Host line right above that line and rerun the test to see if you
get anything back on the console.
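Something like this works as a throwaway debugging aid; the surrounding lines come from the coverage report above, and the Write-Host text is arbitrary:

if ($Protocol -eq 'Dcom') {
    Write-Host '>>> reached the protocol fallback branch' -ForegroundColor Yellow
    $newprotocol = 'Wsman'
} else {
    $newprotocol = 'Dcom'
}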

After further investigation, it looks like you had an issue of mock overlap. Every time an exception
was thrown, either ProtocolFallback wasn’t used or $Protocol wasn’t equal to Dcom.
You’ll see in the When the function calls itself context block in the test script on lines 163
and 177 that the Get-MachineInfo function is being mocked with two different parameter filters {
$Protocol -eq 'DCOM' } and { $Protocol -eq 'WSMAN' }. Recall from the mocking chapters that
when defining mocks in it blocks, all it blocks share the same mock scope. In this instance, this was
a problem because the second mock on line 177 was overriding the mock defined on 163. This meant
that Get-MachineInfo was never being called with the Protocol parameter set to DCOM after all when
an exception was thrown in the function.
To remedy this situation, you can separate the tests into context blocks. Mocks defined in a
context block live in their own scope, so mocks in different context blocks can't override one
another. You'll wrap the it blocks in context blocks to create those separate mock scopes.
The result is a single parent context block holding all of the tests that need to run when an exception
is thrown, with child context blocks inside it to separate the mock scopes, as shown below.

context 'When the function throws an exception' {

    mock 'New-CimSession' { throw }

    context 'when an exception is thrown when querying a computer, and ProtocolFallBack is used, and WSMAN is used as the Protocol' {

        mock 'Get-MachineInfo' { } -ParameterFilter { $Protocol -eq 'DCOM' }

        it 'when an exception is thrown when querying a computer, and ProtocolFallBack and LogFailuresToPath are used, it should call itself using the DCOM protocol' {

            $result = Get-MachineInfo -ComputerName FOO -Protocol WSMAN -ProtocolFallBack

            $assMParams = @{
                CommandName = 'Get-MachineInfo'
                Times       = 1
                Exactly     = $true
                Scope       = 'It'
            }
            Assert-MockCalled @assMParams
        }
    }

    context 'when an exception is thrown when querying a computer, and ProtocolFallBack is used, and DCOM is used as the Protocol' {

        mock 'Get-MachineInfo' { } -ParameterFilter { $Protocol -eq 'WSMAN' }

        it 'when an exception is thrown when querying a computer, and ProtocolFallBack is used, and DCOM is used as the Protocol, it should call itself using the WSMAN protocol' {

            $result = Get-MachineInfo -ComputerName FOO -Protocol DCOM -ProtocolFallBack

            $assMParams = @{
                CommandName = 'Get-MachineInfo'
                Times       = 1
                Exactly     = $true
                Scope       = 'It'
            }
            Assert-MockCalled @assMParams
        }
    }

    context 'when an exception is thrown when querying a computer, and ProtocolFallBack and LogFailuresToPath are used' {

        mock 'Get-MachineInfo' { } -ParameterFilter { $LogFailuresToPath -eq 'C:\Path' }

        it 'when an exception is thrown when querying a computer, and ProtocolFallBack and LogFailuresToPath are used, it should call itself using the LogFailuresToPath parameter' {

            $result = Get-MachineInfo -ComputerName FOO -ProtocolFallBack -LogFailuresToPath 'C:\Path'

            $assMParams = @{
                CommandName = 'Get-MachineInfo'
                Times       = 1
                Exactly     = $true
                Scope       = 'It'
            }
            Assert-MockCalled @assMParams
        }

        it 'when an exception is thrown when querying a computer, and ProtocolFallBack is not used, and LogFailuresToPath is used, it writes the computer name to a file' {
            $result = Get-MachineInfo -ComputerName FOO -LogFailuresToPath 'C:\Path'
            $assMParams = @{
                CommandName     = 'Out-File'
                Times           = 1
                Exactly         = $true
                Scope           = 'It'
                ParameterFilter = {
                    $PSBoundParameters.FilePath -eq 'C:\Path' -and
                    $PSBoundParameters.InputObject -eq 'FOO'
                }
            }
            Assert-MockCalled @assMParams
        }
    }
}

We’ll now rerun a code coverage report, and this time, line 137 isn’t a problem.

Code coverage report:


Covered 96.92% of 65 analyzed Commands in 1 File.
Missed commands:
File Class Function Line Command
---- ----- -------- ---- -------
Get-MachineInfo.ps1 Get-MachineInfo 150 $params += @{'LogFailuresToPath' =$Lo\
gFailuresToPath }
Get-MachineInfo.ps1 Get-MachineInfo 150 'LogFailuresToPath' = $LogFailuresToP\
ath

Moving on to the next instance at line 150, it looks like the LogFailuresToPath key isn't being added
to the $params hashtable. Looking at the code on lines 149-151, this only happens when you
use the LogFailuresToPath parameter on the Get-MachineInfo function.

if ($PSBoundParameters.ContainsKey('LogFailuresToPath')){
$params += @{'LogFailuresToPath'=$LogFailuresToPath}
}

In this case, you don’t care. You’re testing the outcome of when the LogFailuresToPath pa-
rameter is used anyway in multiple tests e.g. $result = Get-MachineInfo -ComputerName FOO
-ProtocolFallBack -LogFailuresToPath 'C:\Path'. You don’t care about this one, so there is no
need to fix it. You’re thrilled with the 96.92% code coverage test.

Summary
Congrats! You made it through another real-life example chapter! I hope this real example of
improving code coverage allowed more of these methods and principles to sink in.
As you've seen in this chapter, the process of improving code coverage is an iterative one. The
subject can get complicated quickly, and to understand this topic, you need to immerse yourself in the
problem. Improving code coverage, troubleshooting mocking, and the other techniques you've learned
in this book cannot be acquired by reading alone. You must get your hands dirty and do them.
Improving code coverage may take a while, but you can be sure that when you're done, your code
is rock solid. Also, by using code coverage as a guide, you can be sure that you have addressed every
code path, thus allowing you to trust your code more.
Infrastructure Testing
Because Pester is a unit testing framework, some people in IT operations fail to see its benefit. They
see references to mocking, testing function output, and asserting that exceptions were thrown, and
automatically assume it doesn’t apply to them. They believe Pester is only about testing code. I
would argue that unit testing still is essential to an IT admin who writes scripts; but beyond my
unmistakable scorn for “no unit testing,” there’s another kind of testing that’s possible with Pester:
infrastructure validation, or infrastructure testing.
Infrastructure testing is a relatively new concept in the testing world. Since this whole DevOps
movement has taken hold, a lot of IT operations people have been trying to figure out ways to adapt
traditional developer concepts to infrastructure; thus, infrastructure testing was born. Infrastructure
testing is a combination of the more conventional software-developer-centric integration and
acceptance testing.
Infrastructure testing isn’t about testing code; it’s about testing what the code changes in the
environment. Some would even argue that no code is needed at all!
Infrastructure testing is just putting a term around a practice you’ve been doing for years. Instead
of only manually eyeballing the results of what your script should have changed in an environment,
infrastructure testing takes the steps that you went through in your brain and converts them to code.
Once those validation steps are in code, you can automate testing and inherit all of the goodness
that comes with automation.
If you’re using code to change your infrastructure in any way, are you sure it works? I don’t mean, “It
didn’t throw an error message and fill up the console with red text.” I mean, are you 100% confident
that it did the thing you thought it should?

• Did it create that Active Directory user?
• Did it change that registry key value? (see the sketch after this list)
• Did it restart that virtual machine?
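For example, an infrastructure test for the registry question above might look like this rough sketch; the key path, value name, and expected data are all made up:

describe 'registry change made by the deployment script' {
    it 'sets the expected registry value' {
        $value = Get-ItemProperty -Path 'HKLM:\SOFTWARE\MyApp' -Name 'Setting'
        $value.Setting | should -Be 'ExpectedData'
    }
}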

Scripts can be complicated and can contain lots of different ways the code can flow. One simple
mistake in a single if construct could skip over entire sections of your script without you realizing it,
for example. It’s essential to have an overseer to ensure that the infrastructure changes you intended
to introduce are as expected. This overseer position is where Pester—and infrastructure testing—
come in.
Pester is a tool that lets you write code that not only ensures the code is written correctly but
also that what the code does is what you expected. How's that done? That's what this chapter is all
about.

Let’s start with a simple example of how to leverage infrastructure testing in Pester.
Perhaps you’ve got an Active Directory (AD) sync script that takes input from some other source, like
a CSV file or a database, and populates that information into AD. As part of that script, it looks for an
existing user account matching a first name and last name in the CSV file and, if it doesn’t find that
account, creates that user. Your script might look something like below. Call it Invoke-AdSync.ps1.

## Grab the employee records from a CSV file
$employees = Import-Csv -Path C:\employees.csv

## Start reading each employee record
foreach ($employee in $employees) {
    ## Check to see if the user account already exists
    $adUserParams = @{ Filter = "givenName -eq '$($employee.FirstName)' -and surName -eq '$($employee.LastName)'" }
    if (-not (Get-ADUser @adUserParams)) {
        ## If not, create a new user account <FirstName><LastName>
        $newAdUserParams = @{
            Name       = "$($employee.FirstName)$($employee.LastName)"
            GivenName  = $employee.FirstName
            SurName    = $employee.LastName
            Department = $employee.Department
            Title      = $employee.Title
        }
        New-ADUser @newAdUserParams
    }
}

Let’s say the CSV ends up looking something like this, with four employees that should -Be added
to AD if not already present:

"FirstName","LastName","Department","Title"
"Adam","Bertram","Executive Office","CEO"
"Donnie","Baker","Janitorial Services","Janitorial Services Manager"
"Bob","Woodruff","Janitorial Services","Custodian"
"Jeff","Hicks","Accounting","Accountant"

We run the script and see that the user accounts were created. Great! The script worked. Not so fast.
How do you know the user accounts weren’t already in AD? Since you didn’t have any logging set
up in the script, there’s no way to tell if the script created a given user account, or just skipped over
that one because it already existed.
Also, did you remember to check for each of the attributes on each account, or did you forget one?
If you’re like all other humans, chances are you’re not going to remember to check every attribute

manually that was intended to be set on each of the user accounts. You’re assuming that the script
worked, and assuming can be dangerous. You need a failsafe in place to ensure the script does what
you expect. Let’s begin building a set of infrastructure tests for this scenario.

Analyzing the Code


Just as in a unit test, before writing a single line of test code, it’s essential first to analyze what
the script does and document it somewhere in a workflow or pseudocode. The complexity of this
analysis task varies greatly depending on how much stuff the script does. For this script, the analysis
is relatively straightforward.

1. Read each row in a CSV file
2. Check to see if the employee in each CSV row exists in Active Directory
3. If not, create the AD user account using the attributes in the CSV row

Let’s first set up a small test environment to test this script. To do that, you’ll need the actual CSV
file you’re going to use in production or one that looks “similar” to it. You’re also going to ensure
that the machine your tests run on is on a domain-joined computer and that the user context the
tests run under has rights to read and create new users; these are infrastructure dependencies and
are an essential concept.

Getting Testing Dependencies in Place


First, you need to ensure that the same CSV file you’ll use in production (or one just like it) is the
one you used for testing. For file input like this, you’ve got two options:

• Use Pester’s TestDrive provider and TESTDRIVE: PSDrive


• Make an actual copy of the CSV file

If you choose to use Pester’s TestDrive provider, you’ll need to generate the CSV file programmat-
ically and then use that in your tests. This process can sometimes be burdensome when all that’s
required is merely reading a file. For this reason, you typically make a copy of the actual file that
will be used in production and use that in your test suite.
Let’s say the CSV file you’ll be using for this script in production is located at FILESERVERSha-
reEmployees.csv. Since it’s useful both to use an accurate representation of the actual “thing” during
testing and to keep your production artifacts separate from your test artifacts, you’ll copy this CSV
to a location where all your scripts and tests are created. That way you’re using real production
data, but only a copy of it, so you’re not impacting production. You can do something similar for
databases and other storage.
You’ll create a folder called TestArtifacts to hold a copy of the CSV file. Creating this “artifact
repository” is a great start to building a central repository to keep all of the files that your tests
will need when run.

Copy-Item -Path \\FILESERVER\Share\Employees.csv -Destination 'C:\TestArtifacts'

Now that a production copy of Employees.csv is in the C:\TestArtifacts folder, you can reference it
using that path in all of your tests.
The next step is ensuring that the computer that executes the tests is joined to the same domain in
which the script should work and that the script runs under a user with the appropriate permissions.
I'm not going to cover how to do this; setting up virtual machines and fiddling with AD isn't in
the scope of this book. These skills do come in handy when building infrastructure tests because,
for example, in this situation you could set up a standalone Active Directory domain using VMs so
that you can test entirely outside of production.

Prototyping the Infrastructure Tests


Now that all of your dependencies are in place, you need to figure out how to read and validate
whatever it is you expect the script to do. In this case, you’re creating AD users, so you need to
know how to query them. To do that, you’ll use the Get-ADUser cmdlet, which is part of the Active
Directory module in the Remote Server Administration Tools¹² software package.
To read an AD user, using your original script as a guide, you can come up with a line that will allow
you to query for a user account.

$user = Get-ADUser -Filter "givenName -eq 'Adam' -and surname -eq 'Bertram'"

Now that you’ve got this syntax in your back pocket, you need to explore the properties of the object
returned. You’re doing this because you’re not just creating an AD user in the script. You’re creating
an AD user with a particular first name, last name, department, and title; you need to test for those
attributes.
You’ve already implicitly “tested” the first name and last name by querying for the AD user using the
first name (givenName) and last name (surname); you’re mostly confirming those properties already
exist. Ae now need some code to read the department and title for the user account.
Doing a little more investigation with the Get-Member command, you find that Get-ADUser returns an
object with both the department and title, but only if you use the Properties parameter as shown below.

$user = Get-ADUser -Filter "givenName -eq 'Adam' -and surname -eq 'Bertram'" -Properties 'Department','Title'

When looking at the value of the $user variable, you now can see the Department and Title
properties, as shown below.

¹²https://www.microsoft.com/en-us/download/details.aspx?id=45520

Department        : Executive Office
DistinguishedName : CN=AdamBertram,CN=Users,DC=mylab,DC=local
Enabled           : True
GivenName         : Adam
Name              : Bertram
ObjectClass       : user
ObjectGUID        : 55b20cbf-c5a3-4dea-a15a-040499fe5f6c
SamAccountName    : AdamBertram
SID               : S-1-5-21-4117810001-3432493942-696130396-3123
Surname           : Bertram
Title             : CEO
UserPrincipalName :

Assessing the Current Environment


A test always compares a current state against an expected state. If you can't figure out what the
current environment's state is, you'll never be able to write a good test to ensure it ends up in the
expected state. When writing infrastructure tests, it's important to first "capture" the current
environment to confirm it isn't already in the expected state. You've come up with the code above to
"capture" the present state by figuring out how to query AD users. You should now confirm what the
current state looks like in AD for this specific example.
In the CSV file, you have four employee records. Let's see if they exist and, if they do, whether those
accounts have the attributes the script intends to set for each of them.

## Read all of the employee records


$employees = Import-Csv -Path C:\TestArtifacts\Employees.csv

## Define all of the AD attributes the script will attempt to set


## Create a hashtable to "map" each CSV employee record property with the AD attribute name
$csvToAdAttributeMap = @{
'FirstName' = 'givenName'
'LastName' = 'surName'
'Department' = 'department'
'Title' = 'title'
}

## Read each employee record


foreach ($employee in $employees) {
    ## Check to see if the AD user exists
    $getAdUserParams = @{
        'Filter'     = "givenName -eq '$($employee.FirstName)' -and surname -eq '$($employee.LastName)'"
        'Properties' = 'Department','Title'
    }
    if ($user = Get-ADUser @getAdUserParams) {
        Write-Host "The employee $($employee.FirstName) $($employee.LastName)'s AD account already exists."
        ## Check each of the attributes of that user against the employee record in the CSV
        $csvToAdAttributeMap.GetEnumerator() | ForEach-Object {
            if ($user.($_.Value) -eq $employee.($_.Key)) {
                Write-Host "--- : AD attribute $($_.Value) is the same."
            }
        }
    }
}

You can see that just by forcing yourself to go through this initial step, you're building a lot of the
code you can then reuse inside of your infrastructure test.
When you run this code, you can now get an idea of what the current state looks like.

The employee Adam Bertram's AD account already exists.


--- : AD attribute title is the same.
--- : AD attribute department is the same.
--- : AD attribute givenName is the same.
--- : AD attribute surName is the same.
The employee Donnie Baker's AD account already exists.
The employee Bob Woodruff's AD account already exists.
The employee Jeff Hicks's AD account already exists.

We now know that the Adam Bertram AD user account already exists even before the script runs.
Why is this helpful? It’s helpful because after the script runs and you run the set of infrastructure
tests and the Adam Bertram AD user account comes back as created with all of its AD attributes
defined correctly, you know the script didn’t do this. The user account was set like that in the first
place.
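
If you want a test run to prove that the script actually created an account, one option is to reset the
state first by removing any pre-existing accounts for the employees in the CSV. Below is a rough sketch
of that cleanup, reusing the $employees variable from above; only run something like this in a dedicated
test domain.

## WARNING: only appropriate in a dedicated test domain.
foreach ($employee in $employees) {
    $filter = "givenName -eq '$($employee.FirstName)' -and surname -eq '$($employee.LastName)'"
    if ($user = Get-ADUser -Filter $filter) {
        ## Remove the pre-existing account so the sync script has to create it.
        $user | Remove-ADUser -Confirm:$false
    }
}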

Writing the Infrastructure Tests


You now know how to retrieve the elements that your script creates or changes. At this point, you can
take the code you've been playing around with and build some infrastructure tests. To do this,
ask yourself a single question: What changes is this script supposed to effect in your environment,
given the input provided to it? In your case, the script will create one or more AD users, given
a CSV file of employee records.
Creating this test is a lot easier now that you already have the code generated above. You now need
to scaffold out a Pester describe and one or more it blocks and a few should assertions.

describe 'AD CSV Sync script' {

    ## Run the AD sync script
    & './Invoke-AdSync.ps1'

    ## Read all of the employee records
    $employees = Import-Csv -Path C:\TestArtifacts\Employees.csv

    ## Define all of the AD attributes the script will attempt to set
    ## Create a hashtable to "map" each CSV employee record property with the AD attribute name
    $csvToAdAttributeMap = @{
        'FirstName'  = 'givenName'
        'LastName'   = 'surName'
        'Department' = 'department'
        'Title'      = 'title'
    }

    it 'creates a user account for each employee in the CSV file' {
        foreach ($employee in $employees) {
            $getAdUserParams = @{
                'Filter'     = "givenName -eq '$($employee.FirstName)' -and surname -eq '$($employee.LastName)'"
                'Properties' = 'Department','Title'
            }
            Get-ADUser @getAdUserParams | should -Not -BeNullOrEmpty
        }
    }

    it 'sets all expected AD attributes for each employee' {
        foreach ($employee in $employees) {
            $getAdUserParams = @{
                'Filter'     = "givenName -eq '$($employee.FirstName)' -and surname -eq '$($employee.LastName)'"
                'Properties' = 'Department','Title'
            }
            if ($user = Get-ADUser @getAdUserParams) {
                $csvToAdAttributeMap.GetEnumerator() | ForEach-Object {
                    $user.($_.Value) | should -Be $employee.($_.Key)
                }
            }
        }
    }
}

Below is an example of what the output looks like:

Describing AD CSV Sync script


[+] creates a user account for each employee in the CSV file 151ms
[-] sets all expected AD attributes for each employee 145ms
Expected strings to be the same, but they were different.
Expected length: 3 Actual length: 5 Strings differ at index 0.
Expected: 'CEO'
But was: 'Janitor'
-----------^
33: $user.($_.Value) | should -Be $employee.\
($_.Key)

Notice that the Invoke-AdSync.ps1 script was invoked inside of the describe block. When you
execute the code you are testing, and then do all the tests, you are engaging in a testing pattern called
Arrange, Act, Assert. This pattern refers to executing your code and then performing all validation
tests (assertions) afterward. This method works well in infrastructure tests because it keeps you from
having to worry about all the setup and teardown necessary to get your test environment in a state
to perform testing in the first place.
Arrange is when you get all prerequisites and inputs lined up and ready to go. Act is when
you run the code to test. Assert is when you make assertions about what the code should have
accomplished. In the example above, your Arrange action isn’t complete—you’re missing more than
a few dependencies, which you’ll address in the next section.
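
Mapped onto the test script above, the three phases look roughly like the commented skeleton below (a
sketch only; the assertion code itself is elided).

describe 'AD CSV Sync script' {

    ## Arrange: line up prerequisites and inputs (CSV records, attribute map, and so on)
    $employees = Import-Csv -Path C:\TestArtifacts\Employees.csv

    ## Act: run the code under test once, up front
    & './Invoke-AdSync.ps1'

    ## Assert: each it block only reads state and compares it against expectations
    it 'creates a user account for each employee in the CSV file' {
        ## ...assertions only; no further changes to the environment...
    }
}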
Also, note that this is a very different pattern from the unit tests you’ve covered so far. Infrastructure
tests assume that any necessary unit tests already passed. Notice that you didn’t validate the CSV
input? A unit test would have done that. Unit tests ensure that your code runs correctly; this
infrastructure test ensures the code had the desired outcome.

Dependencies
You may be thinking that your job is done. You understand infrastructure tests, and they’re working
great… until something goes wrong. Perhaps it’s a problem with an Active Directory domain

controller you depend on, or you accidentally run the test with a user who doesn’t have the right
permissions, or you move the test CSV file someplace else. The tests then blow up.
At this point, you become painfully aware that infrastructure tests are different from unit tests.
Unit tests tend to be self-contained (indeed, the whole point of mocks is to help them be more
self-contained), but infrastructure tests involve the real world. It’s now explicitly clear what you’ve
implicitly known all along: you must have a domain, a CSV file, and a user account with the right
permissions in place before these tests will execute correctly. There are underlying dependencies
that must be in place before your infrastructure tests will work.
Unlike unit tests, infrastructure tests "touch" the environment. They either require that some existing
piece of infrastructure is in place, or they need a platform on which to create new things or modify
existing ones. In the example above, you assumed that:

• A CSV file with the expected rows existed in an expected place
• A functioning Active Directory domain existed
• Successful authentication to the AD domain controller was possible
• The test user had read rights to all AD users

Although it may sound like a considerable time-drain to define dependencies that you think you
already know exist, it's still important to recognize them. Granted, if you're running this test as a
domain administrator, on a domain-joined computer, with a CSV you just created, it's highly likely
everything is going to work just fine. But if this is your first infrastructure test and you keep up with
the practice, your tests will eventually become much more complicated. At that point, you'll need some
way to check for dependencies and be aware that, for a test to run successfully, a few pieces have
to be put in place ahead of time.
How can you cohesively manage these dependencies so that they don’t get in the way of your
testing? You have a few options.
There are products out there that answer just this question. A popular one is Test Kitchen¹³, from
the folks at Chef. Test Kitchen is an open source product that lets you bring up and tear down a
multitude of different types of virtual machines at will to create test environments for your code.

Building Dependency Discovery Code


The first step in dealing with these test dependencies is defining them. You've already done that
above in the walkthrough. Next, you must write code to discover and verify those dependencies.
Just as you did when building your infrastructure tests, if you don't know how to check for a
dependency, you're not going to be able to handle it. To demonstrate this, let's first figure out the
code necessary to discover the state your environment must be in before the tests can execute
successfully.
¹³https://kitchen.ci/

Using the dependencies in the bulleted list defined earlier, the first check is to ensure that
a CSV file is in place and has the expected rows. Notice the term expected here. When you use the
word “expected,” that’s a clear indicator of something that needs some investigation. What does
“expected” mean? You must explicitly define these expectations.
In this case, you’re going to state your expectation as “that the CSV has a particular set of columns.”
Before your test runs, you need to confirm both that the CSV file exists at all in the expected location,
and that it has the expected column headers.
Here’s how you might confirm the state of this dependency:

## Does the CSV file exist?


Test-Path -Path C:\TestArtifacts\Employees.csv

## What are the headers on the CSV file?


$expectedHeaders = 'FirstName','LastName','Department','Title'
$headers = (Get-Content -Path C:\TestArtifacts\Employees.csv | Select-Object -First 1).Split(',')
## Compare-Object only returns output when the two sets differ.
if (Compare-Object $expectedHeaders $headers) {
    $false
} else {
    $true
}

Next, you'll tackle Active Directory. You've defined a few dependencies around Active Directory
that you can test in one shot. What happens when you run Get-ADUser? A lot happens under the
covers that you're not aware of. If it returns any user at all, you can be sure that:

• The domain is "working"
• The user running it can successfully authenticate to the domain
• The user running it can read a user account

The certainties above take care of all of the Active Directory dependency requirements in one shot.
That makes it super-simple: you’ll just run Get-ADUser against a built-in domain account that you
know has to be there, and you’ll be done with these requirements.

if (-not (Get-ADUser -Identity 'S-1-5-21-4117810001-3432493942-696130396-500' -ErrorAction Ignore)) {
    $false
} else {
    $true
}

Notice that you’re using the security identifier (SID) and not the name. The name can change, but
the SID cannot, and you need an identifier that never changes.
If Get-ADUser returns a user account instead of throwing an error, you know you have a functional
domain, and you know you can read user accounts from it. If this “test” fails, then your dependencies
aren’t in place. You don’t need a more granular test to cover “domain exists, but you don’t have
permissions”; remember, you’re not testing the domain here, you’re merely checking to see that the
prerequisites are in place so that you can run the tests.
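
If you'd rather not hard-code the SID, you can derive the built-in Administrator account's SID from
the domain SID, since its relative identifier (RID) is always 500. A small sketch, assuming the
ActiveDirectory module is loaded:

## The built-in Administrator account always has a RID of 500.
$builtInAdminSid = "$((Get-ADDomain).DomainSID.Value)-500"
if (-not (Get-ADUser -Identity $builtInAdminSid -ErrorAction Ignore)) {
    $false
} else {
    $true
}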

Adding Dependency Checking to Tests


Once you've figured out how to determine the state of the dependencies, it's time to implement
those checks in your tests. Because you're checking the state an environment must be in for tests
to execute successfully, you must perform these checks before anything else.
You can include these checks either in your tests file above all of your Pester describe blocks or
inside those blocks. Below is an example of adding the dependency-checking code inside of a describe
block. The checks have also been changed to throw exceptions rather than return the boolean values
$true and $false, ensuring the test script stops when a dependency isn't found.

describe 'AD CSV Sync script' {

    ## Does the CSV file exist?
    if (-not (Test-Path -Path C:\TestArtifacts\Employees.csv)) {
        throw 'The CSV file was not found.'
    }

    ## What are the headers on the CSV file?
    $expectedHeaders = 'FirstName','LastName','Department','Title'
    $headers = (Get-Content -Path C:\TestArtifacts\Employees.csv | Select-Object -First 1).Split(',')
    if (Compare-Object $expectedHeaders $headers) {
        throw 'The CSV file headers are not as expected.'
    }

    if (-not (Get-ADUser -Identity 'S-1-5-21-4117810001-3432493942-696130396-500' -ErrorAction Ignore)) {
        throw 'Cannot query Active Directory.'
    }
}

Which location you choose governs how you build your test scripts. For example, if you’ve got a
massive test script with a bunch of different tests, each with various dependencies, it makes no sense
to put all the checks before any tests run. If even a single dependency check failed, that would abort

all other tests. However, if you tend to keep your test scripts small, then it might be better to place
those checks before everything.
In this example, since you have a small test script, you've chosen to implement these checks in the
describe block. If at some point you add more tests, you can relocate the dependency checking
then.
At the end of the day, “dependency checking” has nothing to do with Pester. You’re just injecting
your own PowerShell code into a Tests.ps1 script and running it before the tests run. Because of this,
you have a lot of flexibility to implement this in any way you choose.
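
For completeness, here's a sketch of the other placement option: running the same checks at the top of
the Tests.ps1 file, before any describe block is declared.

## Dependency checks at the top of the .Tests.ps1 file, before any describe blocks.
if (-not (Test-Path -Path C:\TestArtifacts\Employees.csv)) {
    throw 'The CSV file was not found.'
}
if (-not (Get-ADUser -Identity 'S-1-5-21-4117810001-3432493942-696130396-500' -ErrorAction Ignore)) {
    throw 'Cannot query Active Directory.'
}

describe 'AD CSV Sync script' {
    ## ...tests...
}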

Creating a Dependency Structure


Instead of a single if construct, why not create some structured way to define each of these
dependencies? One way to do that is to “store” each of these conditions/dependencies as distinct
entities. I understand that this approach might be a little hard to grasp, so let’s start with an example.
In the example above, you’ve got three conditions that each check for a particular dependency. Each
dependency check consists of a piece of code. Since you’re in PowerShell, you can store these pieces
in script blocks that return $true or $false depending on whether that particular dependency passed
requirements.

$dependencies = @(
{ Test-Path -Path $employeeCsvLocation }
{ [bool](Get-ADUser -Identity 'S-1-5-21-4117810001-3432493942-696130396-500') }
)

You could then execute each of these dependency checks in the tests file by adding code inside of the
describe block before any of the other code executes.

$dependencies = @(
{ Test-Path -Path $employeeCsvLocation }
{ [bool](Get-ADUser -Identity 'S-1-5-21-4117810001-3432493942-696130396-500') }
)

foreach ($dep in $dependencies) {


if (-not (& $dep)) {
throw 'A dependency check failed. Halting all tests.'
}
}

You've got a great start, but you can make it better. Currently, when a dependency isn't met,
PowerShell just returns an error message that says, "A dependency check failed. Halting all tests."
There's no indication of which dependency check failed. That would be nice to know so that you
can get the right dependency in place.
Let’s refactor the solution a little to associate a label with each dependency check, so you can then
see what dependency failed:


$dependencies = @(
    @{
        Label = "CSV file at $employeeCsvLocation exists"
        Test  = { Test-Path -Path $employeeCsvLocation }
    }
    @{
        Label = "The $(whoami) user can read AD user objects"
        Test  = { [bool](Get-ADUser -Identity 'S-1-5-21-4117810001-3432493942-696130396-500') }
    }
)

foreach ($dep in $dependencies) {
    if (-not (& $dep.Test)) {
        throw "The check: $($dep.Label) failed. Halting all tests."
    }
}

Now if a dependency check fails, it will return the label for the dependency, making your
troubleshooting much more manageable.

Two Ways to Treat Dependencies


You've now got a framework in place to recognize dependencies and to do something if they don't
exist. The next decision you have to make is what to do when a dependency is missing. Broadly, there
are two ways to handle test dependencies:

• Require their presence, and stop tests if they aren't present
• Do not require their presence, and build them on the fly if needed

In the example above, you used the first method. If the dependency checks did not complete
successfully, the script threw an exception and stopped all tests. However, it doesn’t have to be
this way.
Due to the highly virtualized nature of today’s IT environments, it’s possible to bring up just about
any dependency necessary from scratch, which is where Chef’s Test Kitchen product comes in.
Imagine being able to check for a dependency like an entire domain, and if that domain doesn’t
exist, create it! This way, you know your tests will complete successfully unless the prerequisites to
the prerequisites don’t exist! At some point, you must assume infrastructure components exist on
the virtual host or that there’s a network in place to connect them all.
Because infrastructure provisioning is an entirely different topic that could fill multiple books by
itself, this book isn't going to cover how to create dependencies. The intention, however, is to plant
that seed and show you that it is, in fact, possible. To leave you with a teaser: using the
framework you've created for verifying dependencies, one way would be to add another key to each
dependency's hashtable containing the code required to provision that dependency.

$dependencies = @(
    @{
        Label  = "CSV file at $employeeCsvLocation exists"
        Test   = { Test-Path -Path $employeeCsvLocation }
        Action = { <# Invoke a script, run a function, whatever #> }
    }
    @{
        Label  = "The $(whoami) user can read AD user objects"
        Test   = { [bool](Get-ADUser -Identity 'S-1-5-21-4117810001-3432493942-696130396-500') }
        Action = { <# Invoke a script, run a function, whatever #> }
    }
)

Imagine now invoking the scriptblock defined as each Action step when one of the dependency
checks fails. This type of thinking can transform the way you handle infrastructure dependencies for
tests.
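
A minimal sketch of what that could look like, reusing the $dependencies array defined above:

foreach ($dep in $dependencies) {
    if (-not (& $dep.Test)) {
        Write-Warning "Dependency '$($dep.Label)' not met. Attempting to provision it..."
        ## Run the provisioning code attached to this dependency, then re-check it.
        & $dep.Action
        if (-not (& $dep.Test)) {
            throw "Could not provision dependency: $($dep.Label)"
        }
    }
}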

Operational Validation Framework (OVF)


Throughout this chapter, you've built PowerShell code to test various infrastructure components.
Building your own system comes in handy because it's completely customizable, but sometimes
it's not necessary. If you need to build infrastructure tests with Pester but don't necessarily want to
build a solution from scratch, you could always use the Operational Validation Framework (OVF)
PowerShell module¹⁴. OVF is a module that uses Pester to provide a DSL specifically meant for
infrastructure testing.
You can download and install the OVF module using Install-Module -Name OperationValidation.
OVF is built around PowerShell modules that provision infrastructure. If you’re still relying on
individual PS1 scripts, you might have a hard time implementing OVF.
OVF relies on a specific folder structure inside of your module. For example, if you had a module
called WebServer that you used to provision web servers, the OVF folder structure would look like
below:

WebServer\
WebServer.psd1
Diagnostics\
Simple\
services.tests.ps1
Comprehensive\

Once the folder structure is built under your module, you can then begin adding Pester tests inside
of the Simple and Comprehensive folders.
Your Pester tests need to adhere to a specific style, with parameters defined in the test script that
then get passed to the tests. Below is an example services.tests.ps1 that represents a set of tests to
ensure the Eventlog Windows service is running.

param(
$Services = @(
'Eventlog'
)
)

describe 'Operating System' {


context 'Service Availability' {
$Services | ForEach-Object {
it "[$_] should -Be running" {
(Get-Service $_).Status | Should -Be running
}
}
}
}
¹⁴https://github.com/PowerShell/Operation-Validation-Framework

Once you’ve got the Pester test script built, OVF makes it easy to find all of the tests defined in that
test script and then run all of them.

PS> $tests = Get-OperationValidation -ModuleName WebServer


PS> $tests | Invoke-OperationValidation

Module : WebServer
FileName : C:\Program Files\WindowsPowerShell\Modules\WebServer\Diagnostics\Simple\\
services.tests.ps1
ShortName : services.tests.ps1
Name : Operating System:Service Availability:[Eventlog] should be running
Result : Passed
Error :
RawResult : @{ErrorRecord=; ParameterizedSuiteName=; Describe=Operating System; Para\
meters=System.Collections.Specialized.OrderedDictionary; Passed=True; Sho\
w=None; FailureMessage=; Time=00:00:00.0093848; Name=[Eventlog] should be running; R\
esult=Passed; Context=Service Availability; StackTrace=}

Notice that you didn't have to run Invoke-Pester to execute the tests. All of the code necessary to
invoke the infrastructure tests is wrapped up in the OVF module.
The OVF module is a handy PowerShell module that leverages Pester to make a specific use case of
Pester (infrastructure tests) easier to invoke.

Summary
Even though Pester wasn't specifically designed for infrastructure testing, it can still get the job
done. Since Pester is built with PowerShell, and PowerShell is flexible enough to work with all kinds
of infrastructure components, it only makes sense that Pester can do it as well.
I've personally used Pester both to test infrastructure code and as a rudimentary testing platform, and I
can wholeheartedly recommend it.
Troubleshooting and Debugging
So you’ve managed to spend hours creating that perfect test suite. Invoke-Pester runs, and all the
tests fail. That’s fine and expected; tests fail sometimes. But when you know you’ve fixed the bug
in the code and your tests still fail or even succeed when they shouldn’t, you’ve got a problem with
your tests.
Troubleshooting and debugging code can be a frustrating experience, but it’s compounded when
debugging tests for that code. Tests are supposed to find bugs in the code for us. They’re not meant
to require debugging themselves!
A Pester test is, after all, just PowerShell, and tests will not always work as you might expect. Tests
are still only code, and they require upkeep just as your code does. Fortunately, as you progress in
your Pester skills, you’ll find yourself doing less troubleshooting and more doing. Understanding
how Pester works is essential to know how to troubleshoot your tests. If you’ve made it this far in
the book, I’ll assume that the Pester training wheels are off and you can get down to it.
This chapter is not about Pester test design, syntax, or use cases. Instead, you’re going to focus on
some everyday situations you may find yourself in when wondering, “Why in the %G^#@@ doesn’t
this test work?!?” You’ll explore various scenarios that you may run across in the wild and how to
narrow down the problem.

Sending Variables to the Console with Write-Host


One of the most common examples of when it's necessary to send variable values to the console
is when you're asserting a mock with a parameter filter. A parameter filter used with a mock
declaration only invokes the mock when the parameters match what's in the filter. Likewise, when
asserting a mock using a command like Assert-MockCalled, a parameter filter limits the assertion
to when the command is called with particular parameters.
Let's say you have a simple function where you're getting the current status of a Windows service
but first checking to see if the computer is online. If the computer is offline (Test-Connection returns
$false), the function throws an exception.

function Get-MyService {
param(
$ComputerName,
$Name
)

if (-not (Test-Connection -ComputerName $ComputerName -Quiet -Count 1)) {


throw "The computer [$ComputerName] is offline."
}

Get-Service -ComputerName $ComputerName -Name $Name


}

You'll build a test to ensure that the value you pass to the ComputerName parameter is the
one being passed to Test-Connection. To do that, you can create a standard test for this function with a
mock for Test-Connection, and assert that the mock is being passed the expected computer name using
Assert-MockCalled, as shown below.

describe 'Get-MyService' {
mock 'Test-Connection' { $true }
mock 'Get-Service'

it 'ping the expected computer name' {


$null = Get-MyService -ComputerName 'FOO' -Name 'srv1'

$assMParams = @{
CommandName = 'Test-Connection'
Times = 1
Exactly = $true
Scope = 'It'
ParameterFilter = { $ComputerName -eq 'F00' }
}
Assert-MockCalled @assMParams
}
}

You run the test and immediately see that it fails.



Describing Get-MyService
[-] ping the expected computer name 710ms
Expected Test-Connection to be called 1 times exactly but was called 0 times
15: Assert-MockCalled @assMParams

On the surface, the function and the test look right. You’re passing the value of FOO as the
ComputerName parameter and then asserting that the Test-Connection mock was invoked using a
ComputerName parameter of FOO.

To get some information as to why the mock isn't working, you need to simplify things a bit by
removing the ParameterFilter parameter from the Assert-MockCalled command. This asserts
a less specific situation and passes if Test-Connection was called at all, regardless of what value
is passed for the ComputerName parameter.
The Assert-MockCalled invocation now looks like this:

$assMParams = @{
CommandName = 'Test-Connection'
Times = 1
Exactly = $true
Scope = 'It'
}
Assert-MockCalled @assMParams

Once you remove the parameter filter, you’ll see that the test succeeds. This means that Test-Connection
is being called, although you’re unsure with what parameters. Now that you know that the code is
getting to the mocked command, you can add the parameter filter back, but this time you need to
figure out what the value of the ComputerName variable is inside the parameter filter.
Since the ParameterFilter parameter value is just a PowerShell scriptblock, indicated
by the curly braces, you can insert whatever code you want into it. In this instance, you can add a
Write-Host reference that will show you what the value of $ComputerName is at the time Pester
compares it with the expected ComputerName parameter value.

describe 'Get-MyService' {
mock 'Test-Connection' { $true }
mock 'Get-Service'

it 'ping the expected computer name' {


$null = Get-MyService -ComputerName 'FOO' -Name 'srv1'

$assMParams = @{
CommandName = 'Test-Connection'
Times = 1

Exactly = $true
Scope = 'It'
ParameterFilter = {
Write-Host "The computer name is $($ComputerName)"
$ComputerName -eq 'F00'
}
}

Assert-MockCalled @assMParams
}
}

Once you have the Write-Host reference now in the ParameterFilter scriptblock, you can rerun
the test which should return the actual value being passed to the ComputerName parameter on the
Test-Connection command.

Describing Get-MyService
The computer name is FOO

[-] ping the expected computer name 163ms


Expected Test-Connection to be called 1 times exactly but was called 0 times at <Scr\
iptBlock>, : line 25
25: Assert-MockCalled @assMParams

Notice in the example above the line The computer name is FOO. This line is from the Write-Host
reference output. You can now see the value being passed to the ComputerName parameter on
Test-Connection that’s failing the test when the ParameterFilter is used.

Looking at the ParameterFilter on Assert-MockCalled, you can see that you've mistyped the letter
O as a zero. Changing $ComputerName -eq 'F00' to $ComputerName -eq 'FOO' and rerunning the
test should now result in a pass, as expected.

Using Debugger Breakpoints During Tests


PowerShell has an excellent debugger for tracking down an assortment of problems with scripts.
Since a Pester test is essentially just a script, the same concepts can be applied while you run
tests. One great feature of PowerShell's debugger is breakpoints. If you've never used
breakpoints before, I highly encourage you to check out this article on how to debug scripts with
the PowerShell ISE¹⁵. This section assumes you already know how to use breakpoints in a non-Pester
situation.
¹⁵https://msdn.microsoft.com/en-us/powershell/scripting/core-powershell/ise/how-to-debug-scripts-in-windows-powershell-ise

A Pester test consists of three informal phases: setup (mocking, test prep, etc.), execution (invoking
the script to run) and assertions (comparing the expected state with the actual state). One way to
troubleshoot problems in the execution phase is by using breakpoints. Let’s jump into an example.
Perhaps you’ve got a function that pings a computer, checks a service and returns a CSV file of the
results. That function might look something like this:

function Find-Service {
param(
$ComputerName,
$CsvFilePath
)
if (-not (Test-Connection -ComputerName $ComputerName -Quiet -Count 1)) {
throw "The computer [$ComputerName] is not available."
}
Get-Service -ComputerName $ComputerName -Name 'srv1' | Export-Csv -Path $CsvFilePath
}

You then create a small Pester test, and as one of its assertions, you'd like to ensure that this
function queries the service name you expect. As part of the Pester test suite for this function,
you could assert that Get-Service is being passed the expected service name, as
shown below.

describe 'Find-Service' {
mock 'Test-Connection' { $true }
mock 'Get-Service' {
[pscustomobject]@{
Name = 'srv1'
}

[pscustomobject]@{
Name = 'srv2'
}

[pscustomobject]@{
Name = 'srv3'
}
}

mock 'Export-Csv'

it 'should query the expected service' {



Find-Service -ComputerName DOESNOTMATTER -CsvFilePath TestDrive:\services.csv

$assMParams = @{
CommandName = 'Get-Service'
Times = 1
Exactly = $true
Scope = 'It'
ParameterFilter = { $Name -eq 'srvl' }
}
Assert-MockCalled @assMParams
}
}

However, when you run this test, it will fail because the mock assertion fails.

Describing Find-Service
[-] should query the expected service 258ms
Expected Get-Service to be called 1 times exactly but was called 0 times at <Scr\
iptBlock>, : line 31
31: Assert-MockCalled @assMParams

Looking over the ParameterFilter value on the Assert-MockCalled reference and the call to
Get-Service in the function, everything looks fine.

You need to see what service name is being passed to the Get-Service command. To see the state of
variables when a particular command executes, you can use a breakpoint. More specifically, you can
trigger the breakpoint when the Get-Service command is invoked. This stops the script at that
point and lets you poke around to see what is going on.
To create a breakpoint that stops test execution when a command is invoked (Get-Service in this
case), run Set-PSBreakpoint -Command Get-Service in the console before you invoke the test.
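
A short sketch of that workflow from the console (the test file name here is just an example):

## Break whenever Get-Service is invoked, then run the tests.
Set-PSBreakpoint -Command Get-Service
Invoke-Pester -Path .\Find-Service.Tests.ps1

## When you're finished debugging, clean up the breakpoint.
Get-PSBreakpoint | Remove-PSBreakpoint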

Let’s now run the test and see what happens.

Describing Find-Service
Entering debug mode. Use h or ? for help.

Hit Command breakpoint on 'Get-Service'

At line:38 char:26
+ dynamicparam { Get-MockDynamicParameter -CmdletName 'Get- ...
+ ~
[DBG]: PS C:\>> c

Hit Command breakpoint on 'Get-Service'

At line:40 char:13+ {
+ ~
[DBG]: PS C:\>>

You can see above that PowerShell hit a breakpoint and stopped, but the code looks a little odd. This
stuff about dynamicparam doesn't make much sense right now. You'll see this if you've created
a mock for the same command that you've set the breakpoint on. This behavior is normal: when a
mock is created, Pester runs the command and "replaces" it. This is Pester doing what it's
supposed to do.
Since you don't care about the inner workings of Pester, continue running the script
by typing c at the [DBG] prompt. You can then see that PowerShell hit the
breakpoint again, and this time it's paused at the point where Get-Service is invoked in the function.
Now that the code is stopped at the Get-Service invocation, you can see what the value of the $Name
variable is by simply checking the value as shown below.

[DBG]: PS C:\>> $Name


srv1

We can now confirm that the value of $Name at the time Get-Service is invoked is srv1. The value
of $Name still looks good, so let’s dig deeper. To be 100% sure, let’s copy out the exact service name
of srv1 in the Assert-MockCalled assertion and compare it with the value of $Name. You can do that
now because you’ve got the script waiting on us right at the point when the Get-Service command
is invoked.

[DBG]: PS C:\>> 'srvl' -in $Name


False
[DBG]: PS C:\>> 'srv1' -in $Name
True

This situation, unfortunately, looks like another classic “Duuuh!” moment! That “l” sure looks like a
1.
Being able to stop the script at the moment the command is executed lets us compare the
Assert-MockCalled command’s ParameterFilter value to the one being used in the function.
Without a breakpoint, you would have had no way to do this.

Summary
As you build more tests with Pester, you’ll inevitably come up with many different ways of
troubleshooting and debugging tests out of necessity. All of the techniques you’ve covered in this

chapter did not come from documentation or a technical manual; they came from my real-world
experience. I’m positive there are countless more. Troubleshooting and debugging is a “loose” topic
that rarely follows a straight line.
Pay attention to the techniques you use and try to figure out what works best for you.
Test Walkthrough #1
One of the biggest problems I have with many books is that even though they explain concepts well,
there’s never an opportunity given to run through complete walkthroughs of processes. In this book,
we’re going to change that.
In this chapter, the first walkthrough chapter, I'm going to take you on a journey through my mind
as I set out to build tests for an existing PowerShell function. You'll see, in order, how my mind
goes from no tests to a fully fleshed-out test suite.
You'll be using a PowerShell function called Enable-IISRemoteManagement. This function installs
a Windows feature, sets a registry value, and modifies a Windows service. The function has already been
built, and it's up to you to create the tests for it. I've saved a copy of it at
C:\Enable-IISRemoteManagement.ps1.
This function may look familiar because you also used it as an example in the Test Design Practices
chapter.

function Enable-IISRemoteManagement {
[CmdletBinding()]
param (
[Parameter(Mandatory)]
[string]$ComputerName
)

## Verify that the IIS Management Service Windows feature is enabled.


$getFeatureParams = @{
ComputerName = $ComputerName
Name = 'Web-Mgmt-Service'
}

if ((Get-WindowsFeature @getFeatureParams).Installed) {
# Enable Remote Management via a Registry key.
$null = Invoke-Command -ComputerName $ComputerName -ScriptBlock {
$setItemParams = @{
Path = 'HKLM:\SOFTWARE\Microsoft\WebManagement\Server'
Name = 'EnableRemoteManagement'
Value = '1'
}
Set-ItemProperty @setItemParams
}

## Set the IIS Remote Management service to start automatically.


$setParams = @{
ComputerName = $ComputerName
Name = 'WMSvc'
StartupType = 'Automatic'
}
Set-Service @setParams

## Start the IIS Remote Management service.


Get-Service -ComputerName $ComputerName -Name 'WMSvc' | Start-Service
} else {
throw 'IIS Management Service Windows feature is not enabled'
}
}

Analyzing the Code


Before any code is written, you must first truly understand what the code does, at a high level. At
this point, you need bullet points. After reviewing this function, you should be able to discover that
the function performs four tasks:

• Checks to see if a Windows feature is installed on a remote computer
• If so, sets a Registry key value on a remote computer
  – Sets a Windows service startup type on a remote computer to Automatic
  – Starts a Windows service on a remote computer
• If a terminating error is thrown in the process, it will throw an exception with a custom message

Congratulations! You’ve just completed step #1 of writing Pester tests. But you may be asking
yourself, “But wait, I haven’t written any tests yet? How can this possibly be related to testing?!?”
This is true. I have not mentioned anything about testing in this section, yet. But I have performed
the first critical step: outlining what the code is supposed to do.
You may be surprised how many people cannot tell me what a piece of code does when I ask them.
This doesn’t mean, “It enables IIS remote management.” Anyone could see that just by reading the
function name. We’re talking about outlining what it does in detail, which is what I just did.
There’s nothing fancy in this “analysis” step. You just read through the code and noted the main
actions it’s supposed to do. It’s as simple as that. You’ll be surprised how often you’ll come back to
this simple outline as you progress.

Refactoring
It’s essential first to review the code to ensure it’s testable. It’s time to refactor. Rather than going
deep into the dos and don’ts of refactoring, I encourage you now to take a look at the Building
Testable Code chapter. This chapter goes into detail of how you can refactor code to make it more
testable.
Assuming you've read and understood that chapter, the resulting function looks like the one below.
This is the same function we came up with in the Test Design Practices chapter after refactoring
the code based on the information provided there.

function Enable-IISRemoteManagement {
[CmdletBinding()]
param (
[Parameter(Mandatory)]
[ValidateScript({
if (-not (Test-Connection -ComputerName $_ -Quiet -Count 1)) {
throw "The computer [$_] could not be reached."
} else {
$true
}
})]
[string]$ComputerName
)

## Verify that the IIS Management Service Windows feature is installed.


if (Test-InstalledWindowsFeature -ComputerName $ComputerName -Name 'Web-Mgmt-Service') {
# Enable Remote Management via a Registry key.
$remoteKeyParams = @{
ComputerName = $ComputerName
Path = 'HKLM:\SOFTWARE\Microsoft\WebManagement\Server'
Name = 'EnableRemoteManagement'
Value = '1'
}
Set-RemoteRegistryValue @remoteKeyParams

# Set the IIS Remote Management service to start automatically.


$setParams = @{
ComputerName = $ComputerName
Name = 'WMSvc'
StartupType = 'Automatic'
}

Set-Service @setParams

# Start the IIS Remote Management service.


Get-Service -ComputerName $ComputerName -Name 'WMSvc' | Start-Service
} else {
throw 'IIS Management Service Windows feature is not installed.'
}
}

function Test-InstalledWindowsFeature {
param(
$ComputerName,
$Name
)

$getFeatureParams = @{
ComputerName = $ComputerName
Name = $Name
ErrorAction = 'Ignore'
}

if ($feature = Get-WindowsFeature @getFeatureParams) {


if ($feature.Installed) {
$true
} else {
$false
}
} else {
$false
}
}

function Set-RemoteRegistryValue {
param(
$ComputerName,
$Path,
$Name,
$Value
)

$null = Invoke-Command -ComputerName $ComputerName -ScriptBlock {


Set-ItemProperty -Path $using:Path -Name $using:Name -Value $using:Value
}
}

Deciding What to Test


Once you’ve got the code in its ready-to-test state, you need to figure out what to test.
Since the code can represent so many different scenarios, there’s no way you can provide a specific
formula that will apply to all situations. However, you can give a set of universal guidelines to follow
when designing any Pester test. Refer to the Test Design Practices chapter for more information on
this step.
The first task is to break the function down into its "flow" components. I define a function's "flow" as
what goes in, what code executes inside the function and how, and what (if anything) the function
returns when it's done. To define the flow explicitly in the describe blocks, I like to create context
blocks for each stage.

$here = Split-Path -Parent $MyInvocation.MyCommand.Path


$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace('.Tests.', '.')
. "$here\$sut"

describe 'Enable-IISRemoteManagement' {
context 'Input' { }
context 'Execution' { }
context 'Output' { }
}

This gives a visual reminder that the function has multiple stages and that each stage should have
some test (or tests) associated with it. This approach is by no means required, especially if you need
to use contexts differently, but it is a good approach to consider. At this stage, you want to get the
describe blocks’ structure built.

Find the Shortest Code Path


Once you set up the describe block, it’s time to look at the code and determine the shortest code
path. A code path is a series of statements that execute when you run code. In this example, there
are four different code paths.

• When Test-Connection returns $false and $ComputerName is between 1 and 15 characters, it throws a specific exception.
• When Test-Connection returns $true but $ComputerName has more than 15 characters, it throws a specific exception.
• When Test-InstalledWindowsFeature returns $true, it performs some stuff on the remote computer.
• When Test-InstalledWindowsFeature returns $false, it throws a specific exception.

Code paths are based on conditional logic. In this example, a code path branches off in the parameter
validation attribute code, and inside the function using If constructs. If constructs always create at
least two code paths.
You’re looking for the shortest code path. Where is it? Which path can the code take that requires
the fewest number of “branches”? The answer is the if construct inside of the parameter validation
attribute. Here, the function executes Test-Connection, which returns $false, and throws an
exception. No other code is executed. This is a great point to begin creating unit tests.

Creating Test(s) for the Shortest Code Path


Once you identify the shortest code path, you then need to build a scenario that will ensure
the code follows this path every time the function runs. Since a particular command’s output
(Test-Connection) dictates if the code throws an exception and ends, or if the code continues, you
can build a mock for Test-Connection to ensure it always follows the shortest code path (returning
$false and throwing an exception). Building the mock lets you run a basic test with all of the
dependencies removed, thus limiting the test to the actual function’s code.
In other words, you’re taking Test-Connection out of the loop by “faking” it to do what you want.
Since the code path only executes a single command, you need to mock Test-Connection to return
$false, so the code throws an exception, and the function ends. Then create the first test to ensure
that the test followed the code path and to see whether an exception was indeed thrown.

describe 'Enable-IISRemoteManagement' {
context 'Input' {
it 'when the computer is offline, it will throw an exception' {
mock 'Test-Connection' { $false }

{ Enable-IISRemoteManagement -ComputerName 'IAMOFFLINE' } | should -Throw 'could not be reached'
}
}

context 'Execution' { }

context 'Output' { }
}

When this test runs, it should be successful and will be fast since it’s following the shortest possible
code path (and isn’t running the real Test-Connection). Congratulations! You’ve built your first
well-thought-out unit test!

Building Tests for Other Code Paths


At this point, begin making tests for all the other code paths. The next “branch” is when
Test-Connection returns $true. It then runs Test-InstalledWindowsFeature, which defines the
next “branch.” As soon as that next “branch” is hit, that’s a good time to add another test.
Let’s take a look at the paths the code can take if Test-InstalledWindowsFeature returns $false,
since that’s the shortest. If the command returns $false, the code throws another exception. Let’s
create a mocking situation to replicate this code path.
Since you’re now executing code inside the function, and will be testing that the code outputs
something, add it to the Output context. Remember that these contexts are entirely optional. They
exist for visual cues.

describe 'Enable-IISRemoteManagement' {
context 'Input' {
it 'when the computer being passed is offline, it will throw an exception' {
mock 'Test-Connection' { $false }

{ Enable-IISRemoteManagement -ComputerName 'IAMOFFLINE' } | should -Throw 'could not be reached'
}
}

context 'Execution' { }

context 'Output' {
it 'when the computer does not have the Web-Mgmt-Service feature installed, it will throw an exception' {
mock 'Test-Connection' { $true }
mock 'Test-InstalledWindowsFeature' { $false }

{ Enable-IISRemoteManagement -ComputerName 'IAMONLINE' } | should -Throw 'Windows feature is not installed'
}
}
}

You’ve now defined another code path and forced the function to throw the other exception you’re
expecting. Let’s keep going.

You've just run a test to confirm what the function returns when Test-InstalledWindowsFeature returns
$false. Let's now build some tests to verify what you expect to happen when Test-InstalledWindowsFeature
returns $true. When that happens, the function performs three different tasks. Since you’re building
unit tests, you aren’t necessarily concerned that it performed those tasks. You’re worried that the
right parameters are passed to the commands that perform the tasks. After all, you’re not testing
those underlying commands; you are only testing the code inside this function.
Next, the function calls the commands Set-RemoteRegistryValue, Set-Service, Get-Service, and
Start-Service. You need to create mocks for these commands to ensure that they don’t try to run.
Again, those commands aren’t your problem. You need to make sure you’re calling them correctly.
Since these commands don’t return anything to test—that is, they produce no output—place mocks
and it blocks under the Execution context block, for no other reason than our own organizational
mindset.
When creating the mocks, also include no parameters other than Name, since that’s all you’re using
in the code. You don’t need to mock the entire functionality of these commands; you only need to
mock the scenarios the code is attempting to use.
The Execution context block would then look like this:

context 'Execution' {
mock 'Test-Connection' { $true }
mock 'Test-InstalledWindowsFeature' { $true }
mock 'Set-RemoteRegistryValue'
mock 'Set-Service'
mock 'Start-Service'
mock 'Get-Service'
}

Now you need to add some actual tests. Since you’re now under the Execution block, you shouldn’t
be testing to see if the function returns anything (if it did, that’d be the Output section). You assert
that the functions you expect to be called are called and that they were called with the appropriate
parameters.
First, let’s ensure they were called at all. You’ll then get more specific to ensure they were called
correctly.
First build a test to ensure Set-RemoteRegistryValue (or rather, its mock) was executed. This
can be done two ways: by using the Assert-MockCalled command for each mock, or by using
the more general Assert-VerifiableMocks command. When building a test that needs to ensure
multiple commands are executed, adding the Verifiable parameter to each mock and calling the
Assert-VerifiableMocks command will work. However, you will be building out specific tests for
each command. Because of this requirement, use the more granular Assert-MockCalled command
to narrow the focus to only one mock at a time.
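
For contrast, the Verifiable approach would look roughly like the sketch below. (The book refers to the
command as Assert-VerifiableMocks; newer Pester versions name it Assert-VerifiableMock.) You won't use
this approach here, but it's handy when you only care that a set of mocks was called at all.

context 'Execution' {
    mock 'Test-Connection' { $true }
    mock 'Test-InstalledWindowsFeature' { $true }
    mock 'Set-RemoteRegistryValue' -Verifiable
    mock 'Set-Service' -Verifiable
    mock 'Start-Service'
    mock 'Get-Service'

    it 'calls the expected commands' {
        $null = Enable-IISRemoteManagement -ComputerName 'SOMETHING'
        ## Asserts that every mock marked -Verifiable was invoked at least once.
        Assert-VerifiableMock
    }
}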
Let’s first assert that Set-RemoteRegistryValue is called.

context 'Execution' {
mock 'Test-Connection' { $true }
mock 'Test-InstalledWindowsFeature' { $true }
mock 'Set-RemoteRegistryValue'
mock 'Set-Service'
mock 'Start-Service'
mock 'Get-Service'

it 'when the Web-Mgmt-Service feature is installed, it changes the EnableRemoteManagement reg value to 1' {
$null = Enable-IISRemoteManagement -ComputerName 'SOMETHING'
$assMParams = @{
CommandName = 'Set-RemoteRegistryValue'
Times = 1
Exactly = $true
}
Assert-MockCalled @assMParams
}
}

You made no changes to the Set-RemoteRegistryValue mock, and used the Assert-MockCalled
command to assert that mock was called. You used the optional parameters Times and Exactly
here, to be more accurate on how many times the mock was expected to be called.
By default, Assert-MockCalled assumes success if the command is executed one or more times.
Always try to be as specific as possible, and the Times and Exactly parameters here ensure that the
command is called only once.
Also note that you gave the it block an appropriate name, although this test is not complete yet. As
is, it just ensures that Set-RemoteRegistryValue ran. The test has no clue as to what registry key
path or value was passed to it yet.
Now do the same thing for the other three functions, and create the remaining tests.

context 'Execution' {
mock 'Test-Connection' { $true }
mock 'Test-InstalledWindowsFeature' { $true }
mock 'Set-RemoteRegistryValue'
mock 'Set-Service'
mock 'Start-Service'
mock 'Get-Service'

it 'when the Web-Mgmt-Service feature is already installed, it changes the EnableRemoteManagement reg value to 1' {

$null = Enable-IISRemoteManagement -ComputerName 'SOMETHING'


$assMParams = @{
CommandName = 'Set-RemoteRegistryValue'
Times = 1
Exactly = $true
}
Assert-MockCalled @assMParams
}

it 'when the Web-Mgmt-Service feature is already installed, it changes the WMSvc service startup type to Automatic' {
$null = Enable-IISRemoteManagement -ComputerName 'SOMETHING'
$assMParams = @{
CommandName = 'Set-Service'
Times = 1
Exactly = $true
}
Assert-MockCalled @assMParams
}

it 'when the Web-Mgmt-Service feature is already installed, it starts the WMSvc service' {
$null = Enable-IISRemoteManagement -ComputerName 'SOMETHING'
$assMParams = @{
CommandName = 'Get-Service'
Times = 1
Exactly = $true
}
Assert-MockCalled @assMParams

$assMParams.CommandName = 'Start-Service'
Assert-MockCalled @assMParams
}
}

Let’s run it:



Context Input
[+] when the computer being passed is offline, it will throw an exception 206ms \
Context Execution
[+] when the Web-Mgmt-Service feature is already installed, it attempts to chang\
e the EnableRemoteManagement reg value to 1 301ms
[-] when the Web-Mgmt-Service feature is already installed, it attempts to chang\
e the WMSvc service startup type to Automatic 248ms
Expected Set-Service to be called 1 times exactly but was called 2 times
51: Assert-MockCalled @assMParams at <ScriptBlock>, <No file>: line 51
[-] when the Web-Mgmt-Service feature is already installed, it attempts to start\
the WMSvc service 228ms
Expected Get-Service to be called 1 times exactly but was called 3 times
63: Assert-MockCalled @assMParams at <ScriptBlock>, <No file>: line 63
Context Output
[+] when the computer already has the Web-Mgmt-Service feature installed, it wil\
l throw an exception 347ms

Our test failed! Pester said that Set-Service was called two times and Get-Service was called three
times. How is that possible?
The reason is scope, specifically the scope of the mocks created. By default, the Assert-MockCalled
command asserts that a mock was called within the entire describe block or the current context
block, which is what you have here. Inside the Execution context block you now have three it
blocks, each executing Enable-IISRemoteManagement. Every time Enable-IISRemoteManagement runs, it
calls each of the mocks and increments the counter. Assert-MockCalled is counting all of those
instances and not just the particular case you need to test inside of the it block.
You need to narrow the scope by using the Scope parameter on the Assert-MockCalled command.
This tells Assert-MockCalled to count only the times the mock was called inside the current it block,
rather than the times it was called inside the current describe or context block.
Each of the calls to Assert-MockCalled should get changed to look like this:

$assMParams = @{
CommandName = 'Set-Service'
Times = 1
Scope = 'It'
Exactly = $true
}

Let’s try again:



Describing Enable-IISRemoteManagement Context Input


[+] when the computer being passed is offline, it will throw an exception 186ms

Context Execution
[+] when the Web-Mgmt-Service feature is already installed, it attempts to c\
hange the EnableRemoteManagement reg value to 1 293ms
[+] when the Web-Mgmt-Service feature is already installed, it attempts to c\
hange the WMSvc service startup type to Automatic 99ms
[-] when the Web-Mgmt-Service feature is already installed, it attempts to s\
tart the WMSvc service 111ms
Expected Start-Service to be called 1 times exactly but was called 0 tim\
es
69: Assert-MockCalled @assMParams at <ScriptBlock>, <No file>: line 69
Context Output
[+] when the computer already has the Web-Mgmt-Service feature installed, it\
will throw an exception 245ms

It still failed! But at least it's a different error this time. It's now telling you that the Start-Service
command wasn't called, even though you can see that Get-Service sends its results to Start-Service.
Oh, but wait, you've created a mock for Get-Service. You should check its output. It looks like
you've got Get-Service returning nothing, whereas the "real" cmdlet indeed returns something. So
the problem is that Start-Service is receiving no input, and that's the reason for the failure.
Let’s change Get-Service’s mocked output so that Start-Service will receive something, and you
can then assert that Start-Service was called.
You’ll first need to figure out the type of object that Get-Service usually returns. To do that, use
Get-Member.

PS> Get-Service | Get-Member

TypeName: System.ServiceProcess.ServiceController
<SNIP>

Now use the New-MockObject command to ensure Get-Service returns a single object of that type,
letting Start-Service bind that object over the pipeline.

mock 'Get-Service' {
New-MockObject -Type 'System.ServiceProcess.ServiceController'
}

Let’s rerun it all:



Describing Enable-IISRemoteManagement
Context Input
[+] when the computer being passed is offline, it will throw an exception 34\
9ms
Context Execution
[+] when the Web-Mgmt-Service feature is already installed, it attempts to c\
hange the EnableRemoteManagement reg value to 1 761ms
[+] when the Web-Mgmt-Service feature is already installed, it attempts to c\
hange the WMSvc service startup type to Automatic 373ms
[+] when the Web-Mgmt-Service feature is already installed, it attempts to s\
tart the WMSvc service 124ms
Context Output
[+] when the computer already has the Web-Mgmt-Service feature installed, it\
will throw an exception 579ms

All tests pass now. Yay!


But you're not done yet. The final task is ensuring that each mocked command was called with the correct
parameters. All you've done to this point is make sure the commands ran. You haven't made
sure they ran correctly. This applies a deeper level of granularity to the assertions, one
I recommend using every chance you get.
Asserting mocks by parameters passed is done using the ParameterFilter parameter, either on the
mock itself or on the Assert-MockCalled command. If you add it to the mock itself, the mock will
not be invoked unless the command is called with a particular set of parameters. If you use the
wrong parameters, the mock isn’t called, and Assert-MockCalled won’t count it.
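
For comparison, applying the filter to the mock declaration itself would look something like this
sketch. The mock then only answers calls where ComputerName is 'SOMETHING'; a call with any other value
falls through to the real command (or another mock, if one exists).

## ParameterFilter on the mock declaration instead of on Assert-MockCalled.
mock 'Set-RemoteRegistryValue' -ParameterFilter { $ComputerName -eq 'SOMETHING' }

it 'when the feature is installed, it sets the remote registry value' {
    $null = Enable-IISRemoteManagement -ComputerName 'SOMETHING'
    ## No filter needed here; only the filtered mock could have answered the call.
    Assert-MockCalled 'Set-RemoteRegistryValue' -Times 1 -Exactly -Scope 'It'
}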
If you use the ParameterFilter parameter on the Assert-MockCalled command, the mock itself
would always be invoked, but the assertion would only count if specific parameters were passed to
the function. There are many reasons to perform both methods, but for this demonstration, you’re
just going to use the ParameterFilter parameter on the Assert-MockCalled command.
We start with the Set-RemoteRegistryValue mock. When you look at the code, the only parameter whose
value can vary on this call is ComputerName. The other parameters are static.
Since they are static and cannot be changed by the function itself, no tests are needed for them.
Instead, you need to ensure that the argument passed to the ComputerName parameter is
as expected.
The value of ComputerName provided to the primary function as a parameter should be
the same one that is passed to Set-RemoteRegistryValue. So you need to add ParameterFilter to
Assert-MockCalled. Note that ParameterFilter accepts a scriptblock, and inside of that scriptblock
is the condition.

it 'when the Web-Mgmt-Service feature is already installed, it attempts to change the EnableRemoteManagement reg value to 1' {
$null = Enable-IISRemoteManagement -ComputerName 'SOMETHING'

$assMParams = @{
CommandName = 'Set-RemoteRegistryValue'
Times = 1
Scope = 'It'
Exactly = $true
ParameterFilter = { $ComputerName -eq 'SOMETHING' }
}
Assert-MockCalled @assMParams
}

Each parameter passed to Set-RemoteRegistryValue can be referenced as a variable inside the
ParameterFilter script block, the way ComputerName is referenced above. In this example, you’ve used
the fake computer name (SOMETHING) you’re passing to Enable-IISRemoteManagement and then ensured
that it is the same name passed to the ComputerName parameter of Set-RemoteRegistryValue.
You can take this same approach with the other functions as well.

context 'Execution' {
    mock 'Test-Connection' { $true }
    mock 'Test-InstalledWindowsFeature' { $true }
    mock 'Set-RemoteRegistryValue'
    mock 'Set-Service'
    mock 'Start-Service'
    mock 'Get-Service' { New-MockObject -Type 'System.ServiceProcess.ServiceController' }

    it 'when the Web-Mgmt-Service feature is already installed, it attempts to change the EnableRemoteManagement reg value to 1' {
        $null = Enable-IISRemoteManagement -ComputerName 'SOMETHING'
        $assMParams = @{
            CommandName = 'Set-RemoteRegistryValue'
            Times = 1
            Scope = 'It'
            Exactly = $true
            ParameterFilter = { $ComputerName -eq 'SOMETHING' }
        }
        Assert-MockCalled @assMParams
    }

    it 'when the Web-Mgmt-Service feature is already installed, it attempts to change the WMSvc service startup type to Automatic' {
        $null = Enable-IISRemoteManagement -ComputerName 'SOMETHING'
        $assMParams = @{
            CommandName = 'Set-Service'
            Times = 1
            Scope = 'It'
            Exactly = $true
            ParameterFilter = { $ComputerName -eq 'SOMETHING' }
        }
        Assert-MockCalled @assMParams
    }

    it 'when the Web-Mgmt-Service feature is already installed, it attempts to start the WMSvc service' {
        $null = Enable-IISRemoteManagement -ComputerName 'SOMETHING'
        $assMParams = @{
            CommandName = 'Get-Service'
            Times = 1
            Scope = 'It'
            Exactly = $true
        }
        Assert-MockCalled @assMParams -ParameterFilter { $ComputerName -eq 'SOMETHING' }

        $assMParams.CommandName = 'Start-Service'
        Assert-MockCalled @assMParams
    }
}

The only change is the Get-Service assertion. It changed because you couldn’t include the
ParameterFilter parameter in the splatted arguments to Assert-MockCalled, since you’re sharing
those arguments with the Start-Service assertion.
Also, notice that you’re not applying ParameterFilter to the Start-Service command mock.
Although possible, you chose not to because, as the code is now, there’s no way for Start-Service to
accept any input other than what Get-Service passes to it over the pipeline. Since you’re controlling
what Get-Service returns, it’s safe to assume that if Start-Service is called, then it’s being called
with the appropriate input.

Summary
This chapter has been a long-form walkthrough of how I approach testing. You’ll notice that
it requires a systematic, step-by-step approach: understanding what the code does and then
breaking that code down one task at a time to build a complete test suite.
Test Walkthrough #2
In this second walkthrough chapter, you’ll be building tests for a publicly available
function called Invoke-Parallel. You can download a copy of this function via the book’s GitHub
repo¹⁶.

Understand the Subject


The first step before writing a single test is to understand the subject. You must
understand what the code is doing at a deep level. You’d be surprised at the number of people who
jump straight into coding and then wonder why their tests are incomplete.
To decide what to test, you have to understand the code’s purpose and the way it works and start to
anticipate how it might go wrong.
Let’s pause for an important note: start to anticipate how it might go wrong. Sometimes people get
too hung up trying to figure out every little thing that they might want to test, and they end up stuck
in an endless mental loop. Your first pass at a test doesn’t need to incorporate every possibility; you’ll
be able to edit the test later and add to it, change it, or even remove items from it. That’s fine. Just
start and complete a “good enough” set of tests first.
The author of the Invoke-Parallel function you’re using as an example in this chapter helpfully
provides detailed, comment-based help, which makes understanding it a lot easier. Pay attention to
the help content, if any exists. Read the help content in depth and
understand the intent behind the code.
Also, if you’re writing tests for a function, look at the function parameters. Generally speaking,
consider test cases for various inputs to those parameters. That does not mean, however, writing
a test for every possible combination of parameters. Consider the Stop-Service cmdlet that comes
with PowerShell. This command has a Name, PassThru, and a few other parameters. You’ll want to
test cases that pass a single value to the Name parameter, and cases that pass multiple values to Name.
You’d also want to test the PassThru parameter’s functionality, but that doesn’t mean you need to
test PassThru against both a single service name and multiple service names passed to the Name
parameter, because the functionality of Name and PassThru isn’t tightly coupled. They
don’t affect each other, so they can be tested independently.
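To make that concrete, the decoupling might translate into a handful of independent cases rather than a full matrix of combinations. A rough, placeholder-only sketch:

describe 'Stop-Service' {
    it 'stops a single service passed to Name' {
    }
    it 'stops multiple services passed to Name' {
    }
    it 'returns the stopped service object when PassThru is used' {
    }
}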
Along the same lines, the Invoke-Parallel function includes a ScriptFile and a ScriptBlock
parameter, which appear to be mutually exclusive based on their definitions in the param() block as
shown below in the code snippet from the function. You know they are mutually exclusive because
they are in different parameter sets.
¹⁶https://github.com/adbertram/pesterbookcode

[Parameter(Mandatory=$false,position=0,ParameterSetName='ScriptBlock')]
[System.Management.Automation.ScriptBlock]$ScriptBlock

[Parameter(Mandatory=$false,ParameterSetName='ScriptFile')]
[ValidateScript({Test-Path $_ -pathtype leaf})]$ScriptFile

Consider another parameter on the Invoke-Parallel function called SleepTimer. You can see from
looking at the code that the only places it uses $SleepTimer is in a Write-Verbose command on line
193 and two Start-Sleep commands on lines 332 and 537 as shown below.

Line 193: Write-Verbose "Throttle: '$throttle' SleepTimer '$sleepTimer' runSpaceTimeout '$runspaceTimeout' maxQueue '$maxQueue' logFile '$logFile'"
Line 332: if($PSBoundParameters['Wait']){ Start-Sleep -milliseconds $SleepTimer }
Line 537: Start-Sleep -Milliseconds $sleepTimer

We don’t need to make sure Start-Sleep and Write-Verbose work correctly, so you probably don’t
need to consider SleepTimer for a test case. Start-Sleep and Write-Verbose are core PowerShell
commands; if they don’t work, you’ve got far more significant problems to worry about, so you can
assume they do work. You can also believe core PowerShell commands will work as intended since
the Microsoft PowerShell team has far more tests for their commands than you do.
Let’s also consider the overall point of the Invoke-Parallel function. The function’s job is to take
a script file or a script block and execute it lots of times in parallel. Some input object is passed to
each parallel instance, and Invoke-Parallel runs whatever code you give it. For testing purposes,
you don’t need to provide it with complex code. You’re not testing all the different permutations of
code that could be fed to the command; you’re testing the command itself. By using simple code
as input to ScriptBlock, for example, you can focus just on the guts of Invoke-Parallel.
Take some time and go through the Invoke-Parallel function. Make sure you understand what it’s
supposed to be doing, and start thinking about some of the things you might want to test.

Designing Tests
Once you’ve got a basic understanding of the code to be tested, it’s now time to design your tests.
It’s always better to sit down with a code editor and start creating a design for your tests. Identify
what you want to test, and start making some notes about how you’ll do so, in plain language. Think
about designing tests around topics like:

• Output tests. You need to make sure that Invoke-Parallel actually runs stuff and produces
output. For example, given ten inputs, it should produce ten outputs. And, given an input, it
should provide the expected output.
• Error tests. There appears to be code designed to capture errors within the parallel execution
runspaces, and so you need to test to make sure that works.
• ImportVariables parameter. The ImportVariables parameter claims to get user session vari-
ables and bring them into the initial state for each runspace. You need to test that. You also
need to test the opposite: that, without ImportVariables, runspaces start empty.
• ImportModules parameter. There’s an ImportModules parameter that needs to be tested similarly
to ImportVariables.
• RunspaceTimeout parameter. There’s a RunspaceTimeout parameter that you need to check to make
sure a long-running runspace is terminated.
• Parameter parameter. There’s a Parameter parameter that should pass new variables into the
runspace.

These ideas were generated from reviewing the parameters and reading the command’s source code
to look for complex tasks. You’re looking for code “paths” here: all of the outcomes the code can
produce and how it gets there. These are just samples we’ve picked; they are by no means an
exhaustive list.
Anything in the function that involves a lot of actual coding, as opposed to just running native
PowerShell commands or invoking simple .NET classes, is a potential test case. The informal “tests”
we’ve thought of as of now might not be the complete set of final tests that you’ll run. That’s fine.
If you run into bugs with the code you’re testing, you can always fix them, go back and add tests
for that bug condition, so you’ll catch the bug more quickly in the future.

Writing the Tests


Now that you have some basic ideas about the initial set of tests you need to write, let’s get down to
building the tests. First, it’s important to start with a scaffold to give you something to fill in as
you go. You’ll start with the basic framework of a Pester test, as shown below. You’ll call
this tests file Invoke-Parallel.Tests.ps1.

## Dot source the script to make it available
. "$PSScriptRoot\Invoke-Parallel.ps1"

describe 'Invoke-Parallel' { }

Output Tests
We need to make sure that Invoke-Parallel actually runs and produces output. For example, given
ten inputs, it should produce ten outputs. And, given an input, it should provide the expected output.
To test this, you’ll feed a bunch of integer objects (10) to Invoke-Parallel since it’s easy, using the
range operator, to produce integers. You’ll jam those into double quotes to force them into a string,
and then count them. You’ll also check to make sure that one of them, at least, is a string as expected.
You’ll add the first it block in the describe block to get started, as shown below.

$out = (0..9) | Invoke-Parallel -ScriptBlock { "x$_" }

it 'should produce ten strings' {
    $out.Count | Should -Be 10
    $out[5] | Should -Be 'x5'
}

Notice how you captured the command’s initial run in the $out variable, and that the
command itself was not piped to the Should assertion. You did that so that you could
run two tests against the same output. You tested that the correct number of objects were
produced and that the sixth one was the string x5. You added the “x” in the string to
ensure it is, in fact, a string; PowerShell would otherwise be happy to equate the number
five and the string 5, which isn’t what you want.
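As a quick illustration of that coercion, which you can verify in any PowerShell console:

PS> 5 -eq '5'
True
PS> 'x5' -eq 5
False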

Error Tests
There appears to be code designed to capture errors within the parallel execution runspaces, so you
need to test to make sure that works. Below is a snippet of lines 276-283 where you can see that if any
errors are found in the runspace, Invoke-Parallel runs Write-Error, which returns them as
non-terminating errors.

if($runspace.powershell.Streams.Error.Count -gt 0) {
#set the logging info and move the file to completed
$log.status = "CompletedWithErrors"
Write-Verbose ($log | ConvertTo-Csv -Delimiter ";" -NoTypeInformation)[1]
foreach($ErrorRecord in $runspace.powershell.Streams.Error) {
Write-Error -ErrorRecord $ErrorRecord
}
}

This one is a little tough. Invoke-Parallel launches parallel runspaces, each of which might produce
an error. Typically, those errors could be lost, and so Invoke-Parallel has to capture them and bring
them back to us. You need to launch Invoke-Parallel and force some error, and then make sure you
do get that error back.

$invokeParallelParams = @{
    ErrorVariable = 'err'
    ErrorAction = 'SilentlyContinue'
    ScriptBlock = { Write-Error 'BOOM' }
}
$out = 0 | Invoke-Parallel @invokeParallelParams

it 'should return runspace errors' {
    $out | Should -BeNullOrEmpty
    $err[0].ToString() | Should -Be 'BOOM'
}

In the example above, you’re piping one integer, 0, to Invoke-Parallel to run it one time,
and specifying the err variable to capture the error. In the ScriptBlock parameter, you’re sending a
non-terminating error. You expect there to be no output (only an error), so you test to make sure
$out is null. Then you check the $err variable, which should be a collection, to make sure the first
item is the error message.

ImportVariables Parameter
Next, the ImportVariables parameter claims to get user session variables and bring them into the
initial state for each runspace. You need to test that. You also need to test the opposite: that, without
the ImportVariables parameter, runspaces start empty. You need to define a variable and see if
Invoke-Parallel will pick it up and import it into the runspaces it creates.

$var1 = 'Hello'
$var2 = 'Goodbye'

$out = 0 | Invoke-Parallel -ImportVariables -ScriptBlock { $var1; $var2 }

it 'should import variables' {
    $out[0] | Should -Be 'Hello'
    $out[1] | Should -Be 'Goodbye'
}

If the ImportVariables parameter isn’t working, then $var1 and $var2 will have no values in the
ScriptBlock parameter, and the tests should fail.

We also need to make sure that variables aren’t imported when you don’t ask for them to be, so you
can use a similar test, as shown below. You perform the same kind of test, but this time you don’t
use the ImportVariables parameter and ensure those variables aren’t returned.

$out = 0 | Invoke-Parallel -ScriptBlock { $var1; $var2 }

it 'should not import variables' {
    $out | Should -BeNullOrEmpty
}

If you run the test above, you’ll find that it fails, telling you that when you reference variables
in the ScriptBlock parameter without importing them, Invoke-Parallel still returns something. Judging
from the output, it appears to return an empty array. Without this test, you’d have had no quick way
to discover this!

[-] should not import variables 197ms
Expected $null or empty, but got @().
46: $out | Should -BeNullOrEmpty
at <ScriptBlock>, C:\Invoke-Parallel.Tests.ps1: lin

ImportModules Parameter
Next, you need to test the functionality of the ImportModules parameter. You’ll use the same basic
idea as you did by testing the ImportVariables parameter, as shown below.

context 'when ImportModules is used' {
    $out = 0 | Invoke-Parallel -ImportModules -ScriptBlock { Get-Module Pester }

    it 'should import and return an imported module' {
        $out[0] | Should -Not -BeNullOrEmpty
    }
}

context 'when ImportModules is not used' {
    $out = 0 | Invoke-Parallel -ScriptBlock { Get-Module Pester }

    it 'should not import and return an imported module' {
        $out | Should -BeNullOrEmpty
    }
}

The Pester module is pretty much guaranteed to exist on the machine where these tests are running,
and you don’t care which module you pass through the test. You care whether any module is imported:
you’re not testing a specific module, you’re testing Invoke-Parallel’s ability to import a module
into the runspace.

RunspaceTimeout Parameter
Next up, there’s a RunspaceTimeout parameter that is supposed to make sure a long-running runspace
is terminated after a certain period. To test that, you need to kick off a runspace that won’t stop and
make sure Invoke-Parallel kills it.

$invokeParallelParams = @{
    RunspaceTimeout = 1
    ErrorVariable = 'err'
    ScriptBlock = { Start-Sleep -Seconds 5 }
    ErrorAction = 'SilentlyContinue'
}
0 | Invoke-Parallel @invokeParallelParams

it 'should time out' {
    $err[0].ToString() | Should -Match "Runspace timed out at*"
}

Above you’ve created a runspace that will wait five seconds and then finish. But since you’ve
specified a RunspaceTimeout of one second, Invoke-Parallel should kill it after a second and
return an error that, based on a review of the code, matches the string you’ve provided. How
did you know what the error is? You can see it on line 304: Write-Error "Runspace timed out at
$($runtime.totalseconds) seconds...".

Parameter parameter
Finally, for the last test for the Invoke-Parallel command, you have the Parameter parameter that
should pass new variables into the runspace. This test will look the same as the variable test.

$invokeParallelParams = @{
    Parameter = 5
    ScriptBlock = { $parameter }
}
$out = 0 | Invoke-Parallel @invokeParallelParams

it 'should pass in the parameter' {
    $out | Should -Be 5
}

That completes the initial round of tests. Now, because Invoke-Parallel’s author, Warren Frame,
already wrote a set of tests¹⁷ for the Invoke-Parallel command, compare what you’ve done here
in this section with his tests and look for any gaps you didn’t cover.
¹⁷https://github.com/RamblingCookieMonster/Invoke-Parallel/tree/master/Tests

Summary
It’s important to restate that the tests you’ve created here are not complete by any means. The
Invoke-Parallel function is 559 lines long. That’s a lot of PowerShell code! A handful of unit tests
isn’t going to come close to testing everything, but it doesn’t have to. Writing unit tests is an agile
process that will force you to prioritize the kinds of tests you create. You only tested four of this
function’s 15 parameters, and within the four parameters you did test, not all code paths and outcomes
were covered. That’s fine!
The key is to build an initial set of tests that prove to be the best use of your time. You should write
tests that cover the most common ways a function is executed and build from there. When you or
your team begins using another parameter, write a test for it. Did you come across a bug? Write a
test for it. Writing tests is an iterative process and is not a one-time thing.
Part 3: Hands-On Design and Testing
By Part III, you’re an old hand at building Pester tests. At this point, you should -Be well aware
of how to build Pester tests in real-world scenarios and know many tricks of the trade. However,
knowing how to build Pester tests and knowing how to design them, along with the code being tested,
are two separate things.
In this part, you’re going to take a step back and cover how to build testable code that works more
easily with Pester and, finally, how to design your Pester tests to be easy to manage.
Building Testable Code
One of the first realizations I had, when I started learning how to test PowerShell with Pester, was
that I needed better coding practices. I thought that I could write tests for the code I had. I was dead
wrong. It turns out that to build tests, you must first have testable code. “Testable” code is a broad
term, but in this chapter you’ll get introduced to some ways you can build your code to make
integrating tests with it seamless.

Modularization
One of the most critical methodologies to follow when writing code is modularization. Modularization
is essential even if you don’t write tests, but Pester will virtually force you to make your code
modular.
What do I mean by modular? I’m talking about breaking code into as many functions as possible.
You will soon find that Pester loves functions; they are almost mandatory when building Pester
tests, as you’ve undoubtedly seen throughout this book.
Pester assumes you’re writing PowerShell code in functions and that your PS1 script files contain
functions. Pester loves functions, and so should you. They are an excellent way to break apart
complex scripts and make them more readable, extensible, and Pester-friendly.

“Helper” Functions
It’s sometimes hard to understand how to break up code into modular components. One way to
start is to break each script down into the various sections that “do” different tasks. Each task
then becomes a candidate for a standalone function. Small functions that are called by another
function to “help” it fulfill its duty are sometimes called “helper” functions.
“Helper” functions are much easier to understand with a good example. Below is what you’ll
be using.

function Enable-IISRemoteManagement {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [string]$ComputerName
    )

    ## Verify that the IIS Management Service Windows feature is enabled.
    $getFeatureParams = @{
        ComputerName = $ComputerName
        Name = 'Web-Mgmt-Service'
    }

    if ((Get-WindowsFeature @getFeatureParams).Installed) {
        # Enable Remote Management via a Registry key.
        $null = Invoke-Command -ComputerName $ComputerName -ScriptBlock {
            $setItemParams = @{
                Path = 'HKLM:\SOFTWARE\Microsoft\WebManagement\Server'
                Name = 'EnableRemoteManagement'
                Value = '1'
            }
            Set-ItemProperty @setItemParams
        }

        ## Set the IIS Remote Management service to start automatically.
        $setParams = @{
            ComputerName = $ComputerName
            Name = 'WMSvc'
            StartupType = 'Automatic'
        }
        Set-Service @setParams

        ## Start the IIS Remote Management service.
        Get-Service -ComputerName $ComputerName -Name 'WMSvc' | Start-Service
    } else {
        throw 'IIS Management Service Windows feature is not enabled'
    }
}

The first task this function does is to check whether a Windows feature is installed. It passes the
remote computer name and a feature name to the Get-WindowsFeature cmdlet. The script then reads
the Installed property of the resulting object and makes a decision based on that result.

$getFeatureParams = @{
    ComputerName = $ComputerName
    Name = 'Web-Mgmt-Service'
}

if ((Get-WindowsFeature @getFeatureParams).Installed) {
    ## Do stuff
} else {
    ## Do other stuff.
}

PowerShell has an approved verb called Test for just this purpose. If you’re checking to see if a
Windows feature is installed, why do you need to use the Get-WindowsFeature cmdlet, and then
check whether its return value has an Installed property? You don’t. You don’t care. You want to
know if that feature is installed. Instead, you can break this part out into a helper function called
Test-InstalledWindowsFeature.

My new helper function fulfills a particular use case, runs very little code, and should be very
straightforward to test. By moving this functionality out of the “main” function, you reduce the
number of tests that the “main” function will need. Below is an example of the Test-InstalledWindowsFeature
helper function that returns a boolean $true or $false value, which is all you need.

function Test-InstalledWindowsFeature {
    param(
        $ComputerName,
        $Name
    )

    $getFeatureParams = @{
        ComputerName = $ComputerName
        Name = $Name
        ErrorAction = 'Ignore'
    }

    if ($feature = Get-WindowsFeature @getFeatureParams) {
        if ($feature.Installed) {
            $true
        } else {
            $false
        }
    } else {
        $false
    }
}

Notice above that you expanded upon the functionality a bit by first checking to see if the feature
was even available on the remote computer. Then, and only then, you check to see whether it
was installed. I was comfortable adding more code for this task because it was broken out into
a helper function. The extra code is “hidden” inside of the helper function, so it’s not distracting
in the primary function. And this additional code doesn’t make the helper function unnecessarily
complicated from a testing perspective.
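To illustrate how little it takes to test a helper like this, here’s a sketch of what its tests might look like, mocking Get-WindowsFeature so no remote computer is needed. This assumes Get-WindowsFeature is available (or otherwise stubbed) on the machine running the tests, and the 'FOO' computer name is just a placeholder.

describe 'Test-InstalledWindowsFeature' {
    context 'when the feature is installed' {
        ## Return a fake object exposing only the property the helper reads.
        mock 'Get-WindowsFeature' { [pscustomobject]@{ Installed = $true } }

        it 'should return $true' {
            Test-InstalledWindowsFeature -ComputerName 'FOO' -Name 'Web-Mgmt-Service' | Should -Be $true
        }
    }

    context 'when the feature is not available at all' {
        ## An empty mock returns nothing, simulating a feature that cannot be found.
        mock 'Get-WindowsFeature'

        it 'should return $false' {
            Test-InstalledWindowsFeature -ComputerName 'FOO' -Name 'Web-Mgmt-Service' | Should -Be $false
        }
    }
}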
The Windows feature install “task” now turns into a more robust process and is more easily testable
from within the primary function:

if (Test-InstalledWindowsFeature -ComputerName $ComputerName -Name 'Web-Mgmt-Service') { }

The other opportunity I see is in changing the registry value. Modifying registry values is something
I do in a lot of my code, and it has a bit more wrapped around it than just running Set-ItemProperty.

$null = Invoke-Command -ComputerName $ComputerName -ScriptBlock {
    $setItemParams = @{
        Path = 'HKLM:\SOFTWARE\Microsoft\WebManagement\Server'
        Name = 'EnableRemoteManagement'
        Value = '1'
    }
    Set-ItemProperty @setItemParams
}

In this example, the point is to change a registry value on a remote computer. You shouldn’t have
to think about Invoke-Command, Set-ItemProperty, and so on. Instead, wouldn’t it be much easier to have a helper
function called Set-RemoteRegistryValue? This way, you’d only have to specify the remote computer
name, the registry key path, the value name, and the value to set.

$remoteKeyParams = @{
ComputerName = $ComputerName
Path = 'HKLM:\SOFTWARE\Microsoft\WebManagement\Server'
Name = 'EnableRemoteManagement'
Value = '1'
}

PS> Set-RemoteRegistryValue @remoteKeyParams

Our helper function would then look like this:



function Set-RemoteRegistryValue {
    param(
        $ComputerName,
        $Path,
        $Name,
        $Value
    )

    $null = Invoke-Command -ComputerName $ComputerName -ScriptBlock {
        Set-ItemProperty -Path $using:Path -Name $using:Name -Value $using:Value
    }
}

Once this helper function is created, it looks like the other tasks are self-explanatory, as they are
already one line and use the obvious Set-Service, Get-Service and Start-Service commands.
Let’s now see what the new example “main” function looks like:

function Enable-IISRemoteManagement {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [string]$ComputerName
    )

    ## Verify that the IIS Management Service Windows feature is installed.
    if (Test-InstalledWindowsFeature -ComputerName $ComputerName -Name 'Web-Mgmt-Service') {
        # Enable Remote Management via a Registry key.
        $remoteKeyParams = @{
            ComputerName = $ComputerName
            Path = 'HKLM:\SOFTWARE\Microsoft\WebManagement\Server'
            Name = 'EnableRemoteManagement'
            Value = '1'
        }
        Set-RemoteRegistryValue @remoteKeyParams

        # Set the IIS Remote Management service to start automatically.
        $setParams = @{
            ComputerName = $ComputerName
            Name = 'WMSvc'
            StartupType = 'Automatic'
        }
        Set-Service @setParams

        # Start the IIS Remote Management service.
        Get-Service -ComputerName $ComputerName -Name 'WMSvc' | Start-Service
    } else {
        throw 'IIS Management Service Windows feature is not installed.'
    }
}

function Test-InstalledWindowsFeature {
    param(
        $ComputerName,
        $Name
    )

    $getFeatureParams = @{
        ComputerName = $ComputerName
        Name = $Name
        ErrorAction = 'Ignore'
    }

    if ($feature = Get-WindowsFeature @getFeatureParams) {
        if ($feature.Installed) {
            $true
        } else {
            $false
        }
    } else {
        $false
    }
}

function Set-RemoteRegistryValue {
    param(
        $ComputerName,
        $Path,
        $Name,
        $Value
    )

    $null = Invoke-Command -ComputerName $ComputerName -ScriptBlock {
        Set-ItemProperty -Path $using:Path -Name $using:Name -Value $using:Value
    }
}

You’ll see that even though you’ve added more code to this script, it’s easier to read. If I were viewing
this code in a code editor, I could also collapse each function so as not to get distracted by all of the
extra code.

Limit Function Input


A fully-tested function accounts for each way the function can be executed. You modify function
behavior at run-time through the use of parameters, so parameter count and test count are directly
related. How parameter arguments can be passed to those parameters (directly, via the pipeline,
and so on) also increases test count. It’s essential to limit the ways the function’s behavior can be
modified. This can be done in a few different ways.

Reduce parameter sets


Let’s say a function has three parameter sets; that’s three different combinations of ways the
parameters in that function can be used. Each parameter set has two parameters, one of which is
mandatory, as shown below.

function Do-Something {
[CmdletBinding()]
param(
[Parameter(Mandatory,ParameterSetName = 'Set1')]
$Foo,
[Parameter(ParameterSetName = 'Set1')]
$Bar,
[Parameter(Mandatory,ParameterSetName = 'Set2')]
$Baz,
[Parameter(ParameterSetName = 'Set2')]
$Something,
[Parameter(Mandatory,ParameterSetName = 'Set3')]
$SomethingElse,
[Parameter(ParameterSetName = 'Set3')]
$OutOfIdeas
)
}

We need to figure out all of the different ways the Do-Something function could be called. Using
the minimum number of parameters, the function could be called three different ways using three
different parameters.

Do-Something -Foo
Do-Something -Baz
Do-Something -SomethingElse

The examples above, though, didn’t account for any of the optional parameters in each set. If you
were to include those, you’d have at least six different ways the function could be called, as shown
below.

Do-Something -Foo
Do-Something -Foo -Bar
Do-Something -Baz
Do-Something -Baz -Something
Do-Something -SomethingElse
Do-Something -SomethingElse -OutOfIdeas

Are all these ways to call this function necessary? Will a single parameter set suffice? These are
questions you must ask yourself before writing a single test.

Reduce Parameters
In the example above, what if you could eliminate two parameter sets and one parameter from the
remaining set? The possible ways the function could be executed would dwindle to only one.
The only way the Do-Something function could be called is by running Do-Something -Foo. Do you
see the pattern? Simplify as much as possible. The fewer ways a function can be called, the fewer
tests necessary.
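A sketch of what that pared-down function might look like; this is purely illustrative:

function Do-Something {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        $Foo
    )
}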

Define a Parameter Type


Once you’ve simplified parameters as much as possible, it’s time to restrict the input to those
parameters to a limited number of values. You’re again taking action to reduce the number of tests
needed.
In the above example, notice that the Foo parameter does not have a type associated with it. Without
explicitly defining a type, you can pass a string, integer, complex object or anything to this parameter
and the function code would have to handle that somehow, thus warranting more tests. The possible
ways this parameter can be used would be significantly reduced by defining the type of Foo.

[Parameter(Mandatory)]
[string]$Foo

You’ve decided the Foo parameter will be a string. This action alone has eliminated thousands of
ways different object types could be used.

Validate Parameters
The next step is limiting what string is used. Do you only anticipate allowing a particular set of
strings? Use the ValidateSet parameter attribute.

[Parameter(Mandatory)]
[ValidateSet('a','b','c')]
[string]$Foo

Perhaps you know the value of the Foo parameter will always match a certain regular expression
pattern. Use the ValidatePattern parameter validation routine. Do you know of any other way to
restrict the values of Foo? Use the ValidateScript parameter validation routine. The point is to use as
much parameter validation as possible to constrain the number of possibilities. Parameter validation
can drastically reduce the number of tests that need to be created.
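For instance, a ValidatePattern attribute might look something like this; the pattern shown is purely illustrative:

[Parameter(Mandatory)]
[ValidatePattern('^[a-c]\d{2}$')]  ## e.g. 'a01' or 'c42'; illustrative pattern only
[string]$Foo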
Let’s say you have a function that has a ComputerName parameter. Inside of that function, it connects
to the remote computer. To do that, the computer must be online. Since you know the remote
computer name passed to the ComputerName parameter must be online for this function to work
properly, there’s no sense waiting for the code inside the function to throw an error before realizing
that the computer you’re running it against is offline. You can check that ahead of time. It’d be
better to get a “friendly” error message and account for that offline computer as early as possible.
To do that, you’ll use the ValidateScript attribute:

[Parameter(Mandatory)]
[ValidateScript({
if (-not (Test-Connection -ComputerName $_ -Quiet -Count 1)) {
throw "The computer [$_] could not be reached."
} else {
$true
}
})]
[string]$ComputerName

Now you can be sure that the computer name passed to the ComputerName parameter will at least
respond to a ping. This is a great first step in limiting the number of strings passed here to those that
are valid.
Another way the input could be restricted is to ensure ComputerName is a valid computer name. If you
were sure this function was only going to be run in an Active Directory domain, for example, you
could ensure that every computer name passed to ComputerName must be fifteen characters or less.
Because you’re aware of this rule ahead of time, you can implement it with parameter validation
using the ValidateLength validation attribute.

[Parameter(Mandatory)]
[ValidateScript({
if (-not (Test-Connection -ComputerName $_ -Quiet -Count 1)) {
throw "The computer [$_] could not be reached."
} else {
$true
}
})]
[ValidateLength(1,15)]
[string]$ComputerName

You now know that there’s no way any computer name passed to the function could be shorter than
one character or longer than 15 characters. You’re also sure that the computer must respond to a
ping before the function even begins running. This input limiting may not necessarily affect the
tests you’re building now, but it is a great pattern to get into the habit of practicing. Going through
this process will save you tons of time in the long run.

Input Via Parameters


Your functions should receive input only through parameters. That means not using a global,
script, or other non-local variables. If your command’s behavior can vary based on some outside
configuration element, that element should be changeable via a parameter. For example, rather than
building a function that accepts computer names from a file like below:

PS> Get-MyComputerInfo -Filename computerlist.txt

Your function should accept the computer names as parameter values. This allows the function to
receive computer names in different ways like via a text file, Active Directory, or whatever other
data source is storing the computer names.

PS> Get-MyComputerInfo -ComputerName (Get-Content computerlist.txt)


PS> Get-MyComputerInfo -ComputerName (Get-AdComputer -Filter *).Name

This structure is much more straightforward to test using Pester and lets you check a wider variety
of scenarios more quickly. Your functions shouldn’t duplicate functionality that’s already present in
another standard command. Instead, use that other command and feed the resulting information to
your function. This philosophy has always been a good practice in PowerShell, in general.

Managing Output Properly


Your function should output everything to the pipeline (output stream) in the form of structured
objects. Don’t output formatted text unless the whole point of your command is to accept objects as
input and then format them as text (e.g., a Format- command, ConvertTo- command, etc.). It’s fine
to write Verbose, Warning, Error, or Information text; but avoid using Write-Host, because host text
can’t be captured or piped, meaning it’s impractical for Pester to evaluate it.
Also, you’ll probably write commands that make some environmental change like creating files or
deleting user objects. Creating a file, in PowerShell terms, isn’t output; it’s a system modification.
Those types of actions are okay. Most PowerShell commands that modify the system don’t also
produce pipeline output unless you ask them to do so using a PassThru switch parameter. Verbose,
Warning, Error, and Information text are fine, but you should generally avoid creating pipeline
output unless the point of your command is to do so. Look at the Stop-Service command, for
example; it produces no pipeline output under normal circumstances.
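You can see the difference at the console; the service name here is only an example:

PS> Stop-Service -Name 'bits'              ## no pipeline output
PS> Stop-Service -Name 'bits' -PassThru    ## returns the ServiceController object it stopped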

Summary
Unfortunately, building tests for your PowerShell code isn’t as simple as just writing the tests. It’s
essential to write testable code and to refactor existing code to become testable. If you were to take
away one lesson from this chapter, it’s functions. When I first started learning how to write Pester
tests, my lack of functions in my code greatly hindered my progress and made writing tests
cumbersome and frustrating.
Remember that even though writing tests requires a significant time commitment, it will force you
to write better code, thus making you a better developer!
Test Design Practices
Throughout this book, you’ve covered many different ways to design tests. You’ve covered the
building blocks Pester provides in Part I and then went over, either directly or indirectly, how to
build tests. This chapter shares some of what I’ve learned about test design with Pester. It covers
concise practice statements, but I’ll also include why I, personally, feel these are useful approaches.
It’s important to understand that some of these practices will be contradictory. Testing, like almost
everything in code, is a process of compromise and trade-offs. You can write your code to handle
every possible error condition, but that’s going to make it a lot tougher to maintain and a lot slower
to write. You could add tests for literally everything that could happen in your code, but that’s going
to take a long time, be hard to maintain, and add relatively little value. I can’t help you figure out
where the line is for your code, but this chapter will help you make better decisions for yourself and
go over some best practices for you to follow.

Ensuring Code “State” Doesn’t Change


It’s easy to think of Pester tests as something you write to test your code right now, to make sure
a component or the whole script is working as expected. But testing lasts forever. Each time you
add something, fix something or modify it in any way, you can quickly rerun your existing tests to
make sure you didn’t break anything. This is commonly referred to as regression testing. With that
in mind, it’s sometimes useful to add tests for things you might not think are entirely necessary at
the time.
For example, suppose you have a function that looks, in part, like this:

function Get-Something {
[CmdletBinding()]
param(
[Parameter(Mandatory)]
[string[]]$ComputerName
)

Would you write a test to make sure that the function doesn’t run without providing a value to the
ComputerName parameter? Your first reaction might be to think since you’ve assigned that parameter
as mandatory, it’s PowerShell’s job to implement that behavior. That is a valid point; however, you’re
not taking into account changes that may happen to that parameter over time.

Building a test to ensure the ComputerName parameter is mandatory is a way to make sure nobody
accidentally removes the mandatory attribute. The code needs a computer name to run so part of the
test must be to make sure the attribute is there. You’re using the test to ensure the original intentions
of the function stay that way.
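One way such a test might look, assuming the Get-Something function above has been dot-sourced into the test script; this is just a sketch, not the only way to assert it:

describe 'Get-Something' {
    it 'should have a mandatory ComputerName parameter' {
        ## Pull the [Parameter()] attribute off the ComputerName parameter and
        ## confirm the Mandatory flag is still set.
        $attribute = (Get-Command Get-Something).Parameters['ComputerName'].Attributes |
            Where-Object { $_ -is [System.Management.Automation.ParameterAttribute] }
        $attribute.Mandatory | Should -Be $true
    }
}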

The Input/Execution/Output Pattern


There are many ways to structure your tests, but one way I have found useful is what I call the
input/execution/output pattern. Using this pattern, you build your Pester test script with a describe
block and three context blocks called input, execution and output as shown below.

describe 'Test group' {
    context 'input' {
    }
    context 'execution' {
    }
    context 'output' {
    }
}

This testing pattern lays out, in a logical, ordered fashion, the stages every script or command has. I
find that this pattern helps me organize my thoughts and make sure I’m writing tests for everything
necessary.

Input
Some functions take input, which typically comes in the form of function parameters. Parameters
are ways to affect the function’s code execution at run-time. Because it’s essential to account for the
many different ways a function can be executed, it’s crucial to account for input parameters.
If you’ve limited the number of parameters defined as much as possible, the tests to create in this
stage will be minimal. These tests are optional. Why? Because how code executes and what it outputs
is more critical. If the input is restricted appropriately, the execution and output tests will usually
catch any unexpected inputs. However, some examples of testing input could be:

• Ensuring that parameter validation attributes throw exceptions correctly
• Ensuring that the function accepts pipeline input
• Ensuring the definition of expected parameter sets

Execution
Testing code execution is vital in unit tests. These types of tests ensure that the logic tree in the
function was followed as expected based on the input provided. The tests also ensure that commands
executed inside the function receive the appropriate parameters, execute the expected number of
times, or even execute at all, depending on the input provided. The execution stage is when decisions
are made as to where the code can go or what paths it follows.
For example, a function that includes a single if/then construct could have two possible ways to go.
It’s essential to test each of these branches.
The execution stage is where you will build most of your mock assertions. This is where you assert
that the commands inside your function are being called as you would expect. Perhaps you’ve got
a simple function that calls another command, and that looks like the below example.

function Foo {
    param( $Thing )

    if (Check-Thing -Thing $Thing) {
        ## Branch 1
    } else {
        ## Branch 2
    }
}

It’s essential to test both branches. To do that, you’d create a mock for Check-Thing and two separate
tests, one for when Check-Thing returns $true and one for when Check-Thing returns $false. In
this case, you’d probably have two it blocks inside your execution context block asserting that the
Check-Thing command was called appropriately depending on the value of $Thing.
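Here’s a sketch of what those two tests might look like. For illustration only, assume branch 1 calls a hypothetical Do-This command and branch 2 calls a hypothetical Do-That command; neither exists in the snippet above.

context 'Execution' {
    mock 'Do-This'    ## hypothetical command called in branch 1
    mock 'Do-That'    ## hypothetical command called in branch 2

    it 'should call Do-This when Check-Thing returns $true' {
        mock 'Check-Thing' { $true }
        Foo -Thing 'whatever'
        Assert-MockCalled 'Do-This' -Times 1 -Exactly -Scope 'It'
    }

    it 'should call Do-That when Check-Thing returns $false' {
        mock 'Check-Thing' { $false }
        Foo -Thing 'whatever'
        Assert-MockCalled 'Do-That' -Times 1 -Exactly -Scope 'It'
    }
}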

Output
This is the most apparent section and will probably be the one that contains the majority of your tests
(if your function returns anything). This is the section where you test what your function returns
based on what inputs it was given. This stage is where you check things like:

• How many objects were returned
• What kind of objects were returned
• The property names on the objects returned (if custom objects)
• The property values on the objects returned

The list could go on and on here depending on the circumstances. Remember the phrase testing
based on what input was given. To have excellent test coverage, it’s crucial not merely to pass a few
mandatory parameters to a function and test the output. Use this section to check for all optional
parameters as well. This is where Pester’s test cases come in handy. You can learn more about them
in the Test Cases chapter.
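As a sketch, output assertions along those lines might look like this for a hypothetical Get-Widget function that returns custom objects:

context 'Output' {
    $result = Get-Widget -Name 'foo'    ## Get-Widget is hypothetical

    it 'should return one object with the expected properties and values' {
        @($result).Count | Should -Be 1
        $result.PSObject.Properties.Name | Should -Contain 'Name'
        $result.Name | Should -Be 'foo'
    }
}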

Choosing When to Create Describe Blocks


As you’re creating a test script, you may be wondering when exactly you should create a describe
block. Should you create a single describe block for each function, break them out and create multiple
describe blocks based on the context for a function, or maybe break things up by context blocks?
These are all design considerations that are highly dependent on the scenario and the test author.
However, I can offer some general guidance by referencing how each Pester entity affects the tests.
After all, if describe, context and it blocks were just ways to make the tests more aesthetically
pleasing, it wouldn’t much matter.
Describe and context blocks primarily separate scope. If a mock or variable is defined in one block,
it won’t be available to the other. Also, the Invoke-Pester command has the TestName parameter
that allows you to selectively run describe blocks from a single test script.
When deciding how to structure your describe blocks, ask yourself:

• Do I need to share a mock?
• Do I need the ability to run a test by itself?

In unit testing, the most popular way is to create describe blocks per function, and optionally use
context blocks to ensure no mock overlap when a single command needs to have multiple mocks
applied to it. Below is an example.

function Get-Thing {
    param()

    Get-ChildItem -Path 'Foo'
}

describe 'Get-Thing' {
    context 'When Thing does not exist' {
        mock 'Get-ChildItem'

        $result = Get-Thing

        it 'should throw an exception' {

        }
    }

    context 'When Thing does exist' {
        mock 'Get-ChildItem' {
            [pscustomobject]@{
                Name = 'file.txt'
                FullName = 'C:\file.txt'
            }
        }

        it 'should do that thing' {

        }
    }
}

The one-describe-block-per-function approach using context blocks solves the mock scope problem
but doesn’t let you select which context blocks to execute. If you need the ability to run a test in one
of those context blocks you could create multiple describe blocks like seen below.

function Get-Thing {
    param()

    Get-ChildItem -Path 'Foo'
}

describe 'When Thing does not exist' {
    mock 'Get-ChildItem'

    $result = Get-Thing

    it 'should throw an exception' {

    }
}

describe 'When Thing does exist' {
    mock 'Get-ChildItem' {
        [pscustomobject]@{
            Name = 'file.txt'
            FullName = 'C:\file.txt'
        }
    }

    it 'should do that thing' {

    }
}

Once you’ve got multiple describe blocks created, you can call Invoke-Pester to only execute one
of the context-specific tests like below.

PS> Invoke-Pester -Path 'C:\Get-Thing.Tests.ps1' -TestName 'When thing does not exist'
PS> Invoke-Pester -Path 'C:\Get-Thing.Tests.ps1' -TestName 'When thing does exist'

Naming Conventions
Admittedly, naming is not one of the most important parts of testing with Pester. After all, if you’ve
got all of the appropriate tests executing and showing results, that’s all that matters. However, you
can follow some conventions to establish some standards among your tests.
The significant advantage of creating a naming convention is that having recognizable names
might prevent you from having to look at the actual test to see what went wrong. Descriptive
and standardized names may allow you to peek at the default Pester output and then immediately
recognize the problem versus having to dig into what the test is doing.
There are several naming conventions possible. Here I highlight some of the popular ones. This is
by no means an exhaustive list, and if you feel like your current naming convention works for you,
then stick with it!
You’ve got two mandatory entities and one optional entity to assign a name to: describe, context
and it blocks. Since describe and it blocks are necessary, you must assign a name to those. If you
choose to use context blocks, you’ll have to name those as well. Naming contexts is highly relative,
so I’m going to forego recommending naming conventions there. I will, however, go over some best
practices when naming describe and it blocks.

Describe Blocks
Describe blocks, as you know, are a way to group it blocks together or a way to arrange individual
tests. When assigning names for describe blocks, think about what that group of it blocks represents.
Nine times out of ten, that’s going to be a single function, though if you’ve chosen to use describe
blocks based on context, that could be different.

A function typically requires multiple tests (it blocks), so naming each describe block as the name
of the function it tests (if you’ve chosen that design pattern) is a wise choice.

It Blocks
Naming it blocks requires more thought. It blocks are the actual tests, and the names need to be as
descriptive as possible, yet not overwhelming. By breaking down the components of a test, you can
come up with a descriptive naming convention.
The code inside of an it block decomposes into two rough components:

• State Under Test (SUT)
• Expected behavior

It’s essential to include a label for each of these components when assigning names to your tests.
But what do these elements mean anyway? These components come from the traditional software
development testing world and may need a little bit of expansion:
The state under test (SUT) is a label representing how the function is called. It’s a particular state
the function will be in when the test is run. For example, a “state” might be when the function is
called with a specified parameter, when a global variable is set/not set, when a registry key has a
particular value, etc. Notice each of these examples was prefaced with “when.” That’s a clue. The SUT
represents a variable condition.
The expected behavior of a test is more precise. The predicted response is what the function is
supposed to do. For example, it could return an object, return nothing at all, call a command inside
of the function, set a registry value, restart a service, etc. The expected behavior is the result once
the function has completed.
Here are some examples of representing each of these components in your tests:

• Should_Return_ObjectOfPSCredential
• Should_Call_FunctionXYX
• Should_Restart_WindowsServiceXYZ
• Should_Return_1Item_When_DatabaseABCContains2Rows
• Should_ThrowException_When_NoConnectionToServer
• Given_NoConnectionToServer_Should_ThrowException

It’s entirely optional to include the should reference. This sometimes makes the test name sound
more fluid. The most important part is defining the expected behavior (return, call, restart, throw
exception) and usually under which condition (NoConnectionToServer, DatabaseABCContains2Rows,
etc.). Being as specific and as narrow as possible is essential. Even though I list Should_Call_-
FunctionXYX, this technically isn’t perfect because it does not describe the state under test, the
condition under which the function runs. It would be much better to name this test Should_Call_-
FunctionXYX_When_Parameter123IsPassed, for example.

Summary
I hope this chapter has provided you with some new ideas to help you design your tests better. This
is by no means an exhaustive list, but it should give you the necessary foundation to begin planning
and creating better Pester tests.
Part 4: Pester Cookbook
If you need more examples of how to use Pester to test all kinds of scenarios, this part of the book
is for you. Here you’ll find various “recipes” covering different testing scenarios that’ll help you
cook up the best Pester tests!
Recipe: Testing for Live Conditions
When you talk about a command being deterministic, you mean that a given set of inputs will
always create the same output. So, 1+2 is deterministic, because the result should always be 3. But
Get-Date isn’t deterministic, because given the same input (none), it’ll return different results at
different times.
So what about a command that’s testing against live production data, such as querying information
from WMI/CIM? For example, checking a computer’s model or manufacturer might return different
results depending on where the code runs. Mentally, this feels like it’s somewhere in between deterministic
and non-deterministic, right?
Regardless, it’s still easy to test in Pester.
Imagine a very brief function like this one:

function Get-SystemInfo {
Param(
[string]$ComputerName
)
$os = Get-CimInstance -Class Win32_OperatingSystem -comp $ComputerName
$cs = Get-CimInstance -Class Win32_ComputerSystem -comp $ComputerName
$props = @{
'ComputerName'=$ComputerName
'OSVersion' =$os.Version
'Model' =$cs.Model
}
New-Object -Type PSObject -Prop $props
}

How can I test to make sure the output’s “Model” property is correct, when I don’t know in advance
where it will run during testing? The answer is: I don’t need to care. Here’s a possible test for just
that piece:

Describe "Get-SystemInfo" {
It "queries Model info" {
$result = Get-SystemInfo -comp localhost | Select -Expand Model
$shouldbe = Get-CimInstance Win32_ComputerSystem | Select -Expand Model
$result | should -Be $shouldbe
}
}
Recipe: Testing for Installed Software
A common task to automate is installing or removing software. Unfortunately, doing so is never cut
and dried. “Installed” software can mean ten different things to ten different people. “Installed” may
mean an MSI was executed and Windows Installer did its thing, registering the application in the
registry. It could also mean simply copying an EXE to the right location and being able to run it.
For our purposes, I’m going to define “installed” as anything that shows up in the Programs and
Features window.
Since accurately finding installed software can be a daunting task, we’re going to use a
community module called SoftwareInstallManager, freely available by running Install-Module
SoftwareInstallManager. This is a module built by yours truly that not only enumerates installed
software but also handles a lot of other software management tasks.
Let’s say I have a script that installs a piece of software. Rather than just noticing that it doesn’t
throw an error, I’d like to see if it actually installed the software. The first thing I’d do is install the
software manually. I perform the install first manually because I need to figure out how the software
registers itself in the registry. For this recipe, I’ll use Notepad++.
I installed Notepad++ 7.4 on my machine, went to Control Panel -> Programs and Features, and
took note of the exact name and version that’s displayed.

Notepad++ as Displayed in Programs and Features

It looks like the title is Notepad++ (32-bit x86) and the version is 7.4. Great! Now, let’s run
the Get-InstalledSoftware command that comes in the SoftwareInstallManager module we
downloaded earlier to see if we can find that instance.

Get-InstalledSoftware | where {$_.Name -match 'notepad'} | select name,version

Name                   Version
----                   -------
Notepad++ (32-bit x86) 7.4

Since I didn’t know for sure what the exact title would be, I just used the -match operator and limited
the properties to Name and Version to find the exact name and version we’ll need to look for in our
test.
Now that we have the code necessary to figure out if the expected software is installed or not, we
can then wrap this into a Pester test. We’ll create a boolean (yes/no) assertion in a test to see if that
exact software title and version is installed.

describe 'Installed Software' {

    ## Invoke your script/function here

    context 'Notepad++' {

        $installedNPlusPlus = Get-InstalledSoftware | where {$_.Name -eq 'Notepad++ (32-bit x86)' -and $_.Version -eq '7.4' }

        it 'should install the expected version' {
            $installedNPlusPlus | should not benullorempty
        }
    }
}

Notice that I didn’t create a describe block for just Notepad++ and, instead, created a
context block for it. Since we may eventually test for numerous pieces of software, it’s
always a good idea to build your tests for extensibility. Designing your tests like this
allows you to easily add and remove various pieces of software at will without changing a
lot of code around it.
Recipe: Ensuring all Functions in a
Module Have Tests
As you begin to write more tests, there usually comes a time when you need to leverage Pester to
test itself. We’re not talking about writing Pester tests for the Pester module but rather using Pester
to ensure you’ve got enough tests created. Similar to the concept of code coverage, this just means
creating a test to ensure you’ve got enough tests. Confused yet? If so, I promise it’s not as hard as it
sounds.
A general rule when writing unit tests for modules is that you should write at least a single unit test
for each function inside of the module. Sometimes, though, during development you get busy and
forget to write tests for various functions. You invoke the module’s functions, a bug crops up, and
you wonder why a test didn’t catch it ahead of time. Come to find out, you forgot to write the tests
for that function to begin with! For this recipe, since I have a single accompanying tests script for
each module, I can include a standard test in each script to ensure that each function in that module
has at least one test.
When I write tests for modules, I always have a single tests script per module. So, for example, let’s
say I have a module called Foo and my file structure would look like the following:

Foo
/Foo.psm1
/Foo.psd1
/Foo.Tests.ps1

Inside of this module, I have a few functions. What the functions are and what they do doesn’t
matter for my purposes here. Let’s say they’re called Do-Thing, Set-Thing, and New-Thing. I then
have tests for two of these, but I forgot to author one as you can see in my Foo.Tests.ps1 tests script
below:

describe 'Do-Thing' {
it 'does some important stuff' {

}
}

describe 'Set-Thing' {
it 'does some important stuff' {

}
}

You can see that I forgot to create a describe block filled with it blocks for the function New-Thing.
I want to ensure that my tests do not depend on me remembering to include tests for every function.
Instead, I’d like to create a test to ensure every function inside of this module has an associated
describe block.

To create this test, we’ll have to depend on a standard naming convention for the tests scripts, and
we’ll have to use the Select-String command to discover all of the describe blocks dynamically.
Since this test will be included in every module’s tests script, I need to make it generic. The
first step is to figure out which module we’re in the middle of testing. To discover the module
inside of the tests script, I can use the $MyInvocation automatic variable. PowerShell automatically
fills the $MyInvocation variable with lots of useful information about the execution of the current
script.

$ThisModule = "$($MyInvocation.MyCommand.Path -replace '\.Tests\.ps1$', '').psm1"
$ThisModuleName = (($ThisModule | Split-Path -Leaf) -replace '\.psm1')
$module = Get-Module -Name $ThisModuleName

$module should now represent the Foo module. Once I’ve got the module name captured, we’ll then
need to figure out where the associated tests file is for this module. Since we’re following a standard
naming convention for my tests (and I’m sure you are too), I can predict that the test script is located
in the same folder as my module and ends with .Tests.ps1.

$testFile = Get-ChildItem $module.ModuleBase -Filter '*.Tests.ps1' -File

I now have the file path to the test file captured. At this point, I need to search the test script for
describe blocks. This search is done with the Select-String command and a little RegEx. We’re
using the PSParser class here to strip the quotes from each describe block’s name.

$testNames = Select-String -Path $testFile.FullName -Pattern 'describe\s[^\$](.+)?\s+{' | ForEach-Object {
    [System.Management.Automation.PSParser]::Tokenize($_.Matches.Groups[1].Value, [ref]$null).Content
}

Once I have all of the test names captured, I can gather all of the exported functions using
Get-Command (exported functions, because Get-Command does not return private functions) and then
use Compare-Object to compare the exported functions in the module to all of the describe block
names.

$moduleCommandNames = (Get-Command -Module $ThisModuleName).Name

Compare-Object $moduleCommandNames $testNames |
    where { $_.SideIndicator -eq '<=' } |
    select InputObject | should -BeNullOrEmpty

To wrap this all up, my final describe block with it block inside would look like this:

describe 'Module tests' {

    $ThisModule = "$($MyInvocation.MyCommand.Path -replace '\.Tests\.ps1$', '').psm1"
    $ThisModuleName = (($ThisModule | Split-Path -Leaf) -replace '\.psm1')
    $module = Get-Module -Name $ThisModuleName

    $testFile = Get-ChildItem $module.ModuleBase -Filter '*.Tests.ps1' -File

    $testNames = Select-String -Path $testFile.FullName -Pattern 'describe\s[^\$](.+)?\s+{' | ForEach-Object {
        [System.Management.Automation.PSParser]::Tokenize($_.Matches.Groups[1].Value, [ref]$null).Content
    }

    $moduleCommandNames = (Get-Command -Module $ThisModuleName).Name

    it 'should have a test for each function' {
        Compare-Object $moduleCommandNames $testNames |
            where { $_.SideIndicator -eq '<=' } |
            select InputObject | should -BeNullOrEmpty
    }
}
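
With that test in place, running the module’s tests file is a normal Invoke-Pester call. A minimal sketch, assuming the Foo layout shown earlier and that the module has already been imported into the session:

## Minimal sketch, assuming the Foo module layout shown earlier
Import-Module -Name C:\Foo\Foo.psd1 -Force
Invoke-Pester -Path C:\Foo\Foo.Tests.ps1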
Recipe: Testing External Applications
Let’s quickly look at ways to run tests that include external, non-PowerShell applications, such as
scripts or EXE files.

What to Test
Our script contains a reference to some other executable, like an EXE file. This EXE can return
many different exit codes depending on what parameters I pass to it. I need a way to ensure it
returns the exit code I expect.

How to Test It
Example function:

Function PingWrapper {
    param($IpAddress)
    Start-Process -FilePath ping.exe -NoNewWindow -Wait -ArgumentList "$IpAddress -n 1" -PassThru
    ## Do other stuff
}

I’ve determined that ping.exe returns an exit code of 0 when a host replies successfully and 1 when it
does not. I will test for this scenario by reading the ExitCode property of the System.Diagnostics.Process
object that this function returns.
Pester test:

Describe 'PingWrapper' {
    it 'an exit code of 0 is generated when it can ping an IP address' {
        $defaultGateway = (Get-NetRoute -RouteMetric 0).NextHop
        $result = PingWrapper -IpAddress $defaultGateway
        $result.ExitCode | should -Be 0
    }

    it 'an exit code of 1 is generated when it cannot ping an IP address' {
        $result = PingWrapper -IpAddress '9.9.9.9'
        $result.ExitCode | should -Be 1
    }
}

The Background
There are a few different ways to test an exit code from an external program. Here I’ve chosen
to invoke the process by wrapping the external command in a function that uses the Start-Process
cmdlet. I used Start-Process because, when run with the Wait and PassThru parameters, it returns a
process object whose ExitCode property holds the information I need. I could have used $LastExitCode
here as well, but that would only allow me to test the very last exit code of the external commands run
inside of the function. If I had more than one EXE reference in my function, $LastExitCode would only
yield the result of the last external command to run; the first would be untestable, as the sketch below illustrates.
If using external commands, I recommend always wrapping them in a function. Once in a function,
testing is considerably easier to do.
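
Here’s a hypothetical illustration of that limitation; the function name and the second EXE are made up purely for the example:

## Hypothetical function that runs two external commands in a row
Function Invoke-TwoExes {
    ping.exe '192.168.0.1' -n 1
    ipconfig.exe /all
}

Invoke-TwoExes | Out-Null
$LastExitCode ## reflects ipconfig.exe, the last EXE to run; ping.exe's exit code is lost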

Testing Exit Codes from Standalone EXE files

What to Test
I have a standalone EXE file that I need to invoke that’s not part of a PowerShell script. I need to
ensure that it returns the exit code I expect when I execute it.

How to Test It
Since this will not be part of a PowerShell script, I can directly invoke powershell.exe from cmd to
run the required EXE.

powershell.exe -Command { ping.exe 192.168.0.1 -n 1 } -NoProfile -NonInteractive

I’ve determined that ping.exe returns an exit code of 0 when the host replies successfully and 1 when
it does not. I will test for this scenario by checking the $LastExitCode value.
Pester test:

Describe 'ping.exe execution' {

    it 'an exit code of 0 is generated when it can ping an IP address' {
        powershell.exe -Command { ping.exe 192.168.0.1 -n 1 } -NoProfile -NonInteractive
        $LastExitCode | should -Be 0
    }

    it 'an exit code of 1 is generated when it cannot ping an IP address' {
        powershell.exe -Command { ping.exe 9.9.9.9 -n 1 } -NoProfile -NonInteractive
        $LastExitCode | should -Be 1
    }
}
Recipe: Testing Syntax
Pester can test just about anything in PowerShell, but one thing must happen before testing can
begin: the code to be tested must be executed. This is obvious, right? Without running the code, it
can’t be tested. But what if that code can’t be invoked at all? What if there’s a syntax error in the
code so severe that PowerShell can’t even start execution? In that case, a test isn’t even possible.
What happens when a syntax error occurs? In the PowerShell world, an error exception is thrown
and displayed in red text. And, as luck would have it, Pester can test for exceptions. But Pester
cannot capture the exceptions thrown by syntax errors without a little help.
In this recipe, we’ll take a step back and think about performing testing before the script is actually
“executed.” We’ll perform a simple Pester test to ensure your script can be executed successfully
before we even begin to test code flow.
This recipe will test for errors like:

The term ‘ohno!’ is not recognized as the name of a cmdlet, function…

To test script execution, PowerShell must “convert” pure text into executable code. This is done
through expressions or scriptblocks. Inserting code into a scriptblock tells PowerShell that the
text is executable; PowerShell will then attempt to execute whatever code is inside of that scriptblock.
We can use this technique to our advantage.
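
As a quick illustration of the idea (not part of the recipe itself), compare a plain string to a scriptblock built from that same string:

## A plain string is just text; a scriptblock built from it is executable code
$text = 'Get-Date'
$block = [scriptblock]::Create($text)

$text.GetType().Name  ## String
$block.GetType().Name ## ScriptBlock
& $block              ## actually runs Get-Date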
For our demonstration, let’s create a simple script with an obvious syntax error in it. Since ohno!
isn’t even enclosed in quotes, PowerShell will attempt to run it as a command and fail. I’ll create a
script with a single line in it and save it as C:\SyntaxError.ps1.

ohno!

First, we have to “convert” our small script’s text stored in a PS1 file to a string. One way to do this is
by using the Get-Content command with the -Raw parameter. By default, Get-Content returns a text
file as an array with each element being a single line. When using the -Raw parameter, Get-Content
will return the entire file as a single string.

$scriptContents = Get-Content -Path C:\SyntaxError.ps1 -Raw
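
To see the difference, here’s a quick comparison using a hypothetical multi-line script (the file name is just an example):

(Get-Content -Path C:\SomeScript.ps1).GetType().Name      ## Object[] - one element per line
(Get-Content -Path C:\SomeScript.ps1 -Raw).GetType().Name ## String   - the whole file as one string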

Now that we have the contents of the script stored as a string in memory, we need to execute
it and see if it throws an exception. Typically, when you need to execute a string in PowerShell, you
can use Invoke-Expression or the call operator (&). Wrapping this into a Pester test that checks whether
an exception is thrown might look like this:

describe 'Syntax Test' {
    it 'should not throw an exception' {
        & $scriptContents | should throw
    }
}

However, this test does not work because the exception is thrown before the should assertion ever gets a chance to evaluate it.

[-] should not throw an exception 201ms
    The term 'ohno!' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
    at <ScriptBlock>, : line 3
    3: & $scriptContents | should throw

To ensure the script can simply be executed without throwing an exception, we have to roll this
string into a scriptblock and execute that instead. We can create a scriptblock from a string by using
the Create static method on the [scriptblock] type accelerator. You can see below that I’m reading
the contents of my script, which is then passed as the argument to the Create method. This returns
a scriptblock.

[scriptblock]::Create((Get-Content -Path 'C:\SyntaxError.ps1' -Raw))

Once we have the scriptblock created, we simply need to execute it. Now we can use the & operator
again, but this time, instead of directly invoking a string, we invoke a scriptblock. Below is an
example of invoking this scriptblock inside of a Pester test and ensuring an exception is not thrown.

describe 'Syntax Test' {
    it 'should not throw an exception' {
        { & ([scriptblock]::Create((Get-Content -Path 'C:\SyntaxError.ps1' -Raw))) } | should not throw
    }
}

When invoked, Pester can now capture that exception just like any other and report on it.

[-] should not throw an exception 234ms
    Expected: the expression not to throw an exception. Message was {The term 'ohno!' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.}
    from line:1 char:1
    + ohno!
    + ~~~~~
    at <ScriptBlock>, : line 3
    3: { & ([scriptblock]::Create((Get-Content -Path 'C:\SyntaxError.ps1' -Raw))) } | should not throw

Taking this a small step further, how about we ensure that our script doesn’t contain any
unrecognized commands?

describe 'Syntax Test' {
    it 'should not throw an exception' {
        { & ([scriptblock]::Create((Get-Content -Path 'C:\SyntaxError.ps1' -Raw))) } | should not throw 'is not recognized as the name of a cmdlet'
    }
}

Any exception message can be tested for here by using the should -throw assertion.
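
If you’d rather flip the assertion around and prove that a known-bad script does surface a particular message, a quick sketch (still using the C:\SyntaxError.ps1 example) might look like this:

describe 'Syntax Test' {
    it 'reports the unrecognized command' {
        ## Inverse check: assert that the expected error message *is* thrown
        { & ([scriptblock]::Create((Get-Content -Path 'C:\SyntaxError.ps1' -Raw))) } | should throw 'is not recognized as the name of a cmdlet'
    }
}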
Recipe: Testing Remote Script Blocks
When we write PowerShell scripts, it’s standard practice to use scriptblocks with commands like
Invoke-Command. Using scriptblocks is an easy way to invoke code on another computer using
PowerShell remoting. Unfortunately, they aren’t easy to test in Pester. Since Pester relies on code
invocation on the local machine, code executed in a scriptblock on a remote machine is impossible to
test accurately. Pester (at this time) does not have the ability to know how that code is being executed
on the remote machine. However, there are some workarounds we can use to perform some basic
testing.

Do Not Use $Using


I might have a scriptblock that runs a small piece of code on a remote machine using Invoke-Command.
Perhaps this code looks for the value of a registry key. Because the registry key may change, I’m
assigning it to a variable locally and then using the $using scope modifier to pass the value of that
variable to the remote scriptblock.

$regPath = 'HKLM:\SOFTWARE\Microsoft'
Invoke-Command -ComputerName DC -ScriptBlock { Get-Item -Path $using:regPath }

This snippet works as expected. But now I want to write a Pester test. I’d like to ensure that Get-Item
is receiving the correct argument for the Path parameter. If this were not running in a remote
scriptblock, I could create a mock for Get-Item with a parameter filter and assert that the mock was
called with Assert-MockCalled, as shown below.

describe 'Remote scriptblock Testing' {

    mock 'Get-Item'

    $regPath = 'HKLM:\SOFTWARE\Microsoft'
    Invoke-Command -ComputerName DC -ScriptBlock { Get-Item -Path $using:regPath }

    it 'should ping the expected computer' {
        $assertParams = @{
            CommandName     = 'Get-Item'
            Times           = 1
            Exactly         = $true
            ExclusiveFilter = { $PSBoundParameters.Path -eq 'HKLM:\SOFTWARE\Microsoft' }
        }
        Assert-MockCalled @assertParams
    }
}

However, when I invoke my test, it fails.

Describing Remote scriptblock Testing
    [-] should ping the expected computer 2.18s
        Expected Get-Item to be called 1 times exactly but was called 0 times
        at <ScriptBlock>, : line 15
        15: Assert-MockCalled @assertParams

Why? The reason is that $using:regPath is expanded on the remote computer, and Pester has no idea
about that. The code needs to be refactored so that the value of the registry path is evaluated locally
rather than remotely. To do that, we have to scrap $using and instead use Invoke-Command’s
ArgumentList parameter with $args inside of the scriptblock.

Below is a refactor of the test above using the ArgumentList parameter instead. You can see there’s
no need to mock Get-Item because Pester can’t see it anyway. To figure out what value is getting
passed to Path, we’ll catch it before it goes to the remote computer. We’re not technically
asserting that Get-Item was called with the expected Path parameter, but we can infer that Get-Item
got the right value since we know the behavior of the $args[0] variable.
Inferring that a command was called is a technique used in situations where it’s impossible to assert
directly that a particular command was called. This is a great example.

describe 'Remote scriptblock Testing' {

    mock 'Invoke-Command'

    $regPath = 'HKLM:\SOFTWARE\Microsoft'
    Invoke-Command -ComputerName DC -ScriptBlock { Get-Item -Path $args[0] } -ArgumentList $regPath

    it 'should get the expected reg path' {
        $assertParams = @{
            CommandName     = 'Invoke-Command'
            Times           = 1
            Exactly         = $true
            ExclusiveFilter = { $PSBoundParameters.ArgumentList -eq 'HKLM:\SOFTWARE\Microsoft' }
        }
        Assert-MockCalled @assertParams
    }
}

Testing Command Invocation in a Remote Scriptblock


Another example of using Pester to test code inside of a remote scriptblock is asserting that a
command was called inside of the scriptblock. Using our example from above, perhaps I’d like to
know whether Get-Item actually was executed on the remote computer. I could create a mock
for Get-Item just as I did with Invoke-Command and perform a mock assertion on it, but it wouldn’t
matter. Pester is blind to the code executed remotely.
Even though we can’t use our standard Assert-MockCalled command to ensure a command was
invoked, we can still detect if the command was executed on the remote computer. Although not
beautiful, this method still works.
Instead of using Assert-MockCalled, we’ll use simple string matching to look at the contents of
the remote scriptblock before Invoke-Command is run to see if there’s a reference to Get-Item. We’re
technically not ensuring that it was executed, but we can infer that it was since it’s in the scriptblock.
To search for strings inside of a scriptblock, though, we must first convert it into a string. We do this
with the ToString() method.

PS> $scriptBlock = { Get-Item -Path $args[0] }
PS> $scriptBlock.ToString()
Get-Item -Path $args[0]

Converting the scriptblock to a string gets it into a state where we can now look for strings inside of
it. For this instance, I want to see if Get-Item is being called inside of the scriptblock. Again, since I
can’t use mock assertions here, I’ll instead just see if the Get-Item string is inside of the scriptblock
using the match operator.

$scriptBlock = { Get-Item -Path $args[0] }

if ($scriptBlock.ToString() -match 'Get-Item') {
    $true
} else {
    $false
}

True

I can now roll this code into my describe block.



describe 'Remote scriptblock Testing' {

    mock 'Invoke-Command'

    $regPath = 'HKLM:\SOFTWARE\Microsoft'
    $scriptBlock = { Get-Item -Path $args[0] }
    Invoke-Command -ComputerName DC -ScriptBlock $scriptBlock -ArgumentList $regPath

    it 'should invoke Get-Item' {
        $scriptBlock.ToString() | should match 'Get-Item'
    }
}

Describing Remote scriptblock Testing
    [+] should invoke Get-Item 181ms

Notice how I created the $scriptBlock variable first and then used that variable both for the
Invoke-Command invocation and for the test. By setting up a single source for the code
to be tested and for the test itself, you can be sure the code being tested is exactly the same.
Resources
I hope that you now have a solid understanding of how to write and perform tests for PowerShell
scripts. In this book, we’ve covered a lot of information on testing methodologies, the mindset
behind testing, and especially how to apply that mindset using the Pester testing framework.
This book was intended to give you everything you need to know to get started testing with
PowerShell; however, I realize that not everyone learns the same way, and some readers will want
other resources to digest this content.
In this section, I have gathered some of my other favorite resources for learning Pester in both written
and video form.

Video Courses
• Testing PowerShell with Pester (Microsoft Virtual Academy)¹⁸
By the author, this is a Microsoft Virtual Academy course that goes over the basics of
PowerShell all the way up to an example of automating Pester tests within a build pipeline.
• Testing PowerShell with Pester (Pluralsight)¹⁹
By Robert Cain, this is a Pluralsight course that goes over the basics of unit testing with
PowerShell.
• Infrastructure Testing with Pester (Pluralsight)²⁰
By the author, this is a Pluralsight course that explains how to use Pester not for unit testing
but for integration/acceptance or what’s commonly known as infrastructure testing.

Articles
• 4SysOps Pester Series²¹
By the author, learn via short articles about Pester with titles like Unit Tests vs. Integration
Tests, Working with Infrastructure Dependencies and more.
• Using Pester with Visual Studio Team Services²²
By Mike Kaufmann, this is a set of in-depth articles around how to automate your Pester testing
by integrating into a Visual Studio Team Services build.

¹⁸http://bit.ly/2qz05vQ
¹⁹http://bit.ly/2s7ai4T
²⁰https://www.pluralsight.com/courses/pester-infrastructure-testing
²¹http://bit.ly/2qsqvEz
²²http://bit.ly/2raWpUJ
Release Notes
06/24/19

• Replaced older Contains and ContainsExactly should operators with FileContentMatch and
FileContentMatchExactly references. Thanks Marc van Gorp!
• Added section on Operational Validation Framework
• Added TestRegistry
• Added Custom should assertions
• Added tagging examples
• Added all of the latest Pester syntax and features
• MAJOR editing work cleaning up grammar and syntax
• Reorganized chapters to make them flow better

10/06/17

• Added the InModuleScope and Test Script Parameters section in the mocks chapter

07/23/17

• Added Gherkin introduction. This will act as the start of all future Gherkin material.

07/02/17

• Added recipe: Testing for Syntax Errors

06/25/17

• Added Parameter Filter “Inheritance” section to mocks

06/10/17

• Removed the start of the Testing GUIs chapter


• Added a cookbook recipe: installed software.
• Cleaned up unused sections and formatting tweaks
• Added a cookbook recipe: testing remote scriptblocks

05/29/17

• Added Other Resources chapter

05/28/17

• Removed Pester Case Study chapter


• Added Simplifying Mock Assertions and Sending Variables to the Console to Troubleshooting
Tests chapter
• Added Using $PSBoundParameters in Parameter Filters in Mocks

05/14/17

• Added Using Debugger Breakpoints During Tests to Troubleshooting Tests chapter


• Added Set-TestInconclusive to it blocks chapter

05/06/17

• Added Option #4: Asserting Mocks Transitively in Mocking the Unmockable


• Added Assert-MockCalled’s ExclusiveFilter parameter explanation to Mocks

05/01/17

• Added Mock Assertions


• Lots of typos and formatting fixes in mock chapter

04/29/17

• Added “Choosing When to Create Describe Blocks” in Design Practices


• Added Working with Pester Output chapter

04/29/16

• Changed all of the “we”, “us” and “our” references to “me” and “I”

04/29/13

• Don is stepping away from the book; Adam will be sole author.

02/20/17

• Fixes to “Adding Tests for a Function”


• Adding “Improving Code Coverage”
• Recipe, “Testing External Applications”
• Formatting fixes and typos squished
• Added “The Test Drive”
• Added code sample notes to front matter
• Note that the online version may not provide access to front matter; I urge readers to rely
primarily on one of the downloadable formats
• Starting “Pester Case Study”

01/30/17

• “Mocking the Unmockable” added


• Extensive additions to “Design Practices”

01/16/17

• Release notes moving to reverse chronology


• Added “Infrastructure Validation” chapter

01/14/17

• Fixed ContainsExactly bug in “Should” chapter


• Added “BeIn” assertion to “Should” chapter

01/12/17

• Expanded “Design Practices”


• Fixed some wording issues and missing references in Designing for Testing
• Expanded outline a lot, again
• Added some boilerplate to in-progress chapters, so you’ll know they’re not “done”

01/10/17

• Added “Adding Tests - Again,” a very long process-focused chapter.


• Stubbed out additional chapters for future content
• Note that code is in the process of being moved to a public repo - appreciate your patience
while I make this change

01/08/17

• Initializing release notes, which will form a change log for upcoming releases.
• Added Why code coverage
• Added Measuring code coverage
• Starting design practices chapter
• Stubbing out more Part 2 content
• Expanded into chapter on Mocks
• Overall ToC expansion and reorganization
• Fixing Release Notes formatting (grr)
• Standardizing on code font for keywords (vs bold)

10/18/18

• Professional edit of design practices chapter with more to come!
