
Exploratory Testing

Maaret Pyhäjärvi
This book is for sale at http://leanpub.com/exploratorytesting

This version was published on 2020-04-15

This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing
process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and
many iterations to get reader feedback, pivot until you have the right book and build traction once
you do.

© 2015 - 2020 Maaret Pyhäjärvi


Contents

Dedication
Why This Book
The LeanPub way
What is Exploratory Testing?
  An Approach to Testing
  Specifying It By Contrast
  The Three Scopes of Exploratory Testing
  Listen to Language
What is Exploratory Testing - the Programmer Edition
  What Testing Gives You?
  Exploratory Testing, eh?
The Two Guiding Principles
  Learning
  Opportunity Cost
Place of exploration
  An example
  Separate Tester Role
  Future is Here, Just Not Evenly Distributed
How to Explore with Intent - Exploratory Testing Self-Management
  Intertwining Different Testing Activities
  Learning To Self-manage
  You're In Control
Exploratory Testing an API
  Why Is Exploratory Testing An API Relevant?
  Testing as Artifact Creation
  Testing As a Performance (aka Exploratory Testing)
  What Does Testing Give Us?
  An Example API with ApprovalTests
  13 Patterns To Help You Explore An API
  Summary
Dedication
I have three dedications to give; this book would not exist if any of you were missing.
Huib Schoots, thank you for your support and trust in me writing this book. As I started my writing
journey on Exploratory Testing, I learned that a respected colleague in the theme had beat me to the
obvious name on LeanPub. He not only passed the name on to me, but also encouraged me
to fill it. This book could not exist without you.
People paying for this book have provided me critical support in writing, and in coming back to add
to and improve the written understanding. Every single email telling me someone paid for the work I'm
doing makes me feel valued. Thank you. While this book is what you directly pay for, indirectly
your support shows up in all the work I do.
I talk exploratory testing with many people, and compare notes. I want to mention some of
them by name: Alex Schladebeck, Ru Cindrea, Anne-Marie Charrett, Kristine Korbus - you've
shared my journey in relevant ways and inspired and taught me.
Why This Book
Exploratory testing - to me - is the way we think around software systems to find information
relevant to various stakeholders. The thinking starts with the idea that if something was easy and
straightforward to know, it could already be known. So we need to dig deeper to build systems that
provide the value.
I’ve learned something about testing in two decades that I still can’t find in the books I’m reading.
I share some of it in talks, but I have only so much time traveling around. I share some of it in
blog posts, but reading things as they are in flux is a different to a book. With this book, I want to
structure my experiences, stories and tricks into something I hope other testers will find useful.
If you have not read Elisabeth Hendrickson’s wonderful book Explore IT yet, please do so. We write
from different experiences, believing in a lot of similar things.
I think of this book as my lessons learned book on testing. I call it exploratory testing because the
way I approach testing with an exploratory mindset is different. It’s different to the artifact-oriented
mindset of the past (test cases) and artifact-oriented mindset of today (test automation). While all
testing may be exploratory, not all testing is done exploration first. Not all testing focuses on learning
as much as good exploratory testing does.
Back in the day, there was the guitar. Then someone invented the electric guitar. The electric guitar was
clearly different, but it was still a guitar. So we started calling the original guitar the acoustic guitar. Exploratory
testing is to testing what the acoustic guitar is to guitars.
It’s the specialty thinking more commonly found in specialist testers, and a set of skills anyone can
choose to learn, with practice. Don’t expect it to be quick and easy though. Deep learning in layers
takes time, experimenting and reflection.
The LeanPub way
I used to have a book deal with a publisher, and the main thing that came out of that plan of a book
was a feeling of an overwhelming barrier to publishing. I remember thinking nothing was ever good
enough. I knew I would learn more, I knew I needed to learn more, and I expected there would be a
day when I had learned enough and I could then just get the book done.
The more I’ve learned, the more I’ve also realizes that my learning will never stop. I can proudly
show my old texts and my new texts indicating experiences that have caused a 180 turn in my
perspectives. I can only speak with the experience I have. I expect you, the reader, to read it with
the experience you have, and take everything with a grain of salt. I’m sharing lessons from my
experience.
I found LeanPub with another book I paired up to write (check out the Mob Programming Guidebook by me
and Llewellyn Falco), and writing on this platform is different.
I publish pieces that could be useful as I write them, giving myself permission to iterate and
increment the book.
You will see pieces in the book that I’ve published as articles with Ministry of Testing. Their
contribution has been invaluable in getting me started with writing beyond blogging.
My readers can choose to read as I write or at any point later. I can make the book available for free,
and those readers who pay for the book, even a few dollars, are a priceless source of motivation for
me to keep adding stuff.
This way, I discover the book. With you.
I would love to hear your feedback, comments and questions. You can always reach me by email:
maaret@iki.fi, and if you want to give a boost to my writing, please tweet about this book. I use the
hashtag #ExploratoryTestingBook.
What is Exploratory Testing?
It has been 34 years since Cem Kaner coined the term to describe a style of skilled multidisciplinary
testing common in Silicon Valley. I’ve walked the path of exploratory testing for 25 years and it has
been a foundational practice in becoming the testing professional I am today. Let’s look at what it
is, to understand why it still matters — more than ever.

An Approach to Testing
Exploratory Testing is an approach to testing that centers the person doing the testing by emphasizing
intertwined test design and execution, with continuous learning where the next test is influenced by
lessons from previous tests. As an approach, it gives us a frame for how we do testing in a skilled way.
We use and grow multidisciplinary knowledge for a fuller picture of the empirical information testing
can provide. With the product as our external imagination, we are grounded in what is there but
inspired to seek beyond it.
We learn with every test about the application under test, ourselves as the tool doing the testing,
other tools helpful in extending our capabilities and the helpful ways we can view the world the
application lives in. We keep track of testing that has been done, needs doing and how this all
integrates with the rest of the people working on similar or interconnected themes.
What makes our activity exploratory testing, over other exploratory approaches founded in curiosity,
is the intent to evaluate. We evaluate, seeking information we are missing and making sure what we
know is real with empirical evidence.

Specifying It By Contrast
It’s not a surprise that some folks would like to call exploratory testing just testing. In many ways it
is the only way of testing that makes sense — incorporating active learning is central to our success
with software these days.
To contrast exploratory testing with what people often refer to as manual testing, exploratory testing
as a skilled approach encompasses use of programming for testing purposes. We use our brains,
our hands as well as programs to dig in deep while testing. Sometimes the way of including test
automation happens through means of collaboration, where ideas from exploratory testing drive
implementation of automation that makes sense.
To contrast exploratory testing with what people refer to as scripted testing, exploratory testing isn't
driven by scripts. If we create scripts from exploratory testing, we know to use them in an exploratory
fashion, remembering that active thinking should always be present even when the script supports
us in remembering a basic flow. We've talked a lot about scripted testing as an approach where we
separate design (deciding what to test) and execution (making the test happen), and thus lower
our chances of active learning targeting the most recent understanding of the risk profile.
Another contrast to programming-centric views of testing comes with embracing the multidisciplinary
view of testing, where asking questions like "is my application breaking the law today
after recent changes?" is something routinely encoded into exploratory testing, but often out of
scope for a group of automators.

The Three Scopes of Exploratory Testing


As an approach, we can time and integrate this into our development efforts in many ways.
I find there are three ways of scoping:

• exploratory testing as a way of extending existing test cases
• exploratory testing as a limited timebox
• exploratory testing as the frame of all thinking
When exploratory testing is scoped as a way of extending existing test cases, the way of working
does not look very exploratory. In places like this, you find yourself wondering why Tina is so
successful with the same tests Toni misses problems with. The secret is that one of them actively
extends how they understand the test cases, and refuses to follow exact instructions, or only the instructions.
They learn and find new perspectives.
When exploratory testing is scoped as a limited timebox, you find people feeling they can only
be free on Friday afternoons. These are moments in project life that are structured separately,
with different expectations of how things flow and where the focus is. This is a great way of scoping
exploratory testing into a process where the natural inclination is to believe we know where the
tasks begin and end, and we need explicit encouragement to see if that holds true.
When exploratory testing is the frame of all thinking, it encompasses all considerations about testing.
We start with exploratory testing to identify something that needs documenting, and take time
to document it — perhaps with automation, the modern way of documenting as executable
specifications.
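As a sketch of what "documenting with automation as an executable specification" can look like, here is a concrete example written as a runnable test. The behavior below (a hypothetical search-term rule) is invented for illustration and is not from the book.

```python
# Hypothetical example: while exploring, we learned that blank search terms
# are rejected and real terms are accepted. We capture that lesson as an
# executable specification - a test anyone can run later.

def is_valid_search_term(term: str) -> bool:
    """Invented rule for illustration: a term must be non-empty after
    trimming and at least two characters long."""
    return len(term.strip()) >= 2

def test_blank_search_is_rejected():
    assert not is_valid_search_term("   ")

def test_short_but_real_term_is_accepted():
    assert is_valid_search_term("ab")
```

Run with a test runner such as pytest; the value is that the discovered behavior stays documented in a form that executes.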

Listen to Language
If you hear: "My boss asked me to test search so I searched for something that was found and something
that wasn't and reported I was done", you may be witnessing very low quality exploratory testing. It
relies on following high-level orders to the extent the tester can imagine based on their knowledge.
If you hear: "My colleague wrote me 50 cases of what to try out with search and I tried them
and reported I was done", you may be witnessing testing that isn't exploratory. There is no hint
of learning, or of owning responsibility for the quality of the testing that happens.
If you hear: "My boss asked me to test search so I came back with 50 quick ideas of what could
be relevant and we figured out I'd just do 10 of them before we decided if going further was
worthwhile", you are likely to be witnessing exploratory testing.
Similarly, in the automation first space, if you hear: “I got a Jira ticket saying I should automate this.
I did, and found some problems while at it, and extended existing automation because of the stuff I
learned.”, you may be seeing someone who is exploratory testing.
If you hear: “The Jira ticket said to automate A, B, and C, and I automated A and B, C could not be
automated.”, you may be witnessing testing that isn’t exploratory.
Look at who is in the center: is the tester doing the work and learning, actively applying a better way
of doing the overall testing? If yes, that is exploratory testing.
As a skilled approach, it is only as good as the skill of the person applying it. With a focus on learning
throughout, skill may be a problem today, but it is improved upon every day as exploratory testing is being
done. If you find yourself not learning, you most likely are not exploring. With the product as your
external imagination, you should find yourself imagining new routes through the functionalities,
new users with new perspectives, and relevant information your project teams would be happy to
use in making justified decisions on their take on the risk. With and without automation, in a good balance.
What is Exploratory Testing - the Programmer Edition
In the new software world regime where programmers find themselves taking a lot more responsibility
for testing, we need to understand what exploratory testing is, as it extends what most
programmers find their tests covering, and causes us to talk past each other in understanding what
testing is.

What Testing Gives You?


As a programmer, you know what you’re implementing. When you don’t know, you’ll learn. You
explore the problem to figure out a solution.
Back in the days, some bad organizations told you that they'd hired a group of testers. They might
even say that since they pay other people for testing, you shouldn't be bothering yourself with it.
But you know you want to test, because testing has direct value to you as a programmer. It gives
you four things:

• Specification
• Feedback
• Regression
• Granularity

Specification means that the tests you write can be concrete examples of what the program you're about
to write is supposed to do. No more fancy words around high-level concepts: give me an example
and what is supposed to happen with it. And when moving on, you have the specification of what
was agreed. This is what we made the software do; change is fine, but this was the specification you
were aware of at the time of implementation.
Feedback means that as the tests are around and we run them — they are automated, of course —
the tests give us feedback on what is working and what is not. The feedback, especially when we
work with modern agile technical practices like test-driven development, gives us a concrete goal of
what we need to make work and whether it is working. The tests help us anchor our intent so that, given
interruptions, we can still stay on the right path. And we can figure out if the path was wrong: this
is the test that passes, yet you say it isn't right. What are we not seeing?
Regression means that tests don't only help us when we're building something for the first time;
they also help us with changes. And software that does not change is dead, because users using
and loving it will come back with loads of new ideas and requirements. We want to make sure that
when we change something, we make only the changes we intended. Regression is the risk
of doing more than we intended and, without tests, not knowing.
Granularity comes into play when our tests fail for a reason. Granularity is about knowing exactly
what is wrong without having to spend a lot of time figuring it out. We know that small tests pinpoint
problems better. We know that automation pinpoints problems better than people. And we know
that we can ask people to be very precise in their feedback when they complain something doesn't
work. Not having to waste time figuring out what is wrong is valuable.
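The granularity point can be shown with a minimal sketch (hypothetical code, not from the book): each small test checks one concrete behavior, so a failure pinpoints exactly what broke.

```python
def normalize_name(name: str) -> str:
    """Trim surrounding whitespace and collapse inner runs of spaces."""
    return " ".join(name.split())

# Two small, granular tests: if only the second one fails, we know the
# collapsing of inner spaces broke, without any further debugging.
def test_trims_surrounding_whitespace():
    assert normalize_name("  Maaret  ") == "Maaret"

def test_collapses_inner_spaces():
    assert normalize_name("Exploratory   Testing") == "Exploratory Testing"
```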
This is not exploratory testing. Exploratory testing often guides this type of testing, but this is testing
as artifact creation. Exploratory testing focuses on testing as performance — like improvisational
theatre — in a multidisciplinary way beyond computer science.

Exploratory Testing, eh?


I get all of this good stuff from testing as I know it, unit tests and test automation. What is this
exploratory testing stuff then?
Exploratory testing is basically saying that by spending time thinking with your application, your
APIs and your environments as an external imagination, you will find things you did not realize you
were missing. And time thinking with something you've implemented is real, empirical. Not just
something you wish was true, like most of our designs.
Exploratory testing is multidisciplinary, basically saying that it is not enough for you to take orders
from whoever gives you requirements; you have to think critically about why they are asking what
they are asking and about its practical implications. If someone asks you to do something that is illegal,
you need to point it out. If you don't know whether it is illegal, you need to spend time figuring out what
is and isn't illegal, and how that would show up in the application you are creating. GDPR is a great
example of this: not caring for privacy can cost your organization a lot. And it is not just law you
need to care for. You need to care for business, users, environments your software runs in, ethics,
psychology — and many more. You want to do no harm, and be conscious about what you are doing.
Intentional, not accidental.
With the way you look at testing as a programmer, what more is there that exploratory testing
promises to give you? It's an approach that focuses on learning while evaluating what you know
and don't know, giving yourself chances of learning the things you don't know you don't know, but
can see while using the application like your users would.
It gives you four things:

• Guidance
• Understanding
• Models
• Serendipity
Guidance is not just about the specification, but general directions of what is better and what is
not. Some of it you don’t know to place in yes/no boxes, but have to clarify with your stakeholders
to turn them into something that can be a specification.
Understanding means that we know more about the place of our application in the overall world
of things, why people would find it valuable, how others' design decisions can cause us trouble, and
how things that should be true if all were right are not always so. It helps us put the details we're asked
to code into a bigger picture — one that is sociotechnical and extends beyond our own organization
and powers.
Models are ways of encoding knowledge, so that we can make informed decisions, understand
things deeper and learn faster next time or with next people joining our teams.
Serendipity is the lucky accident, running into information you did not expect to find. The lucky
accidents of new information about how things could and do go wrong when using your application
emerge given enough time and variety in the use of your application. And knowing helps you avoid
the escalations that wake you up to critical maintenance tasks, because who else would fix all of this
than the programmers?
An Approach To Testing
Exploratory testing is an approach to testing. It says whoever tests needs to be learning, and learning
needs to change what you are doing. You can't separate designing tests from executing them
without losing the learning that influences your next tests. It is an approach that frames how we do
testing in a skilled way.
We don’t do just what is asked, but we carefully consider perspectives we may miss.
We don’t look at just what we are building, but the dependencies too, intentional and accidental.
Unsurprisingly, great programmer teams are doing exploratory testing too. Creating great automation
relies on exploratory testing to figure out all the things you want to go check. While with
exploratory testing we believe that premature writing of instructions hinders intellectual processes,
we also know that writing that stuff down as code that can be changed as our understanding grows
frees our mental capacity to think of other things. The executable test artifacts give us that peace of
mind.
Programmer teams are also doing what I would consider low quality exploratory testing, with
limited ideas of what might make a difference in new information being revealed. And that is where
testers often come in: with mindspace free of some of the programmer burdens, they focus their energies
elsewhere, raising the overall quality of the work coming out of teams.
Finally, I want to leave you with this idea: bad testing is still testing. It just does not give much
of any of the benefits you could get from testing. Exploratory testing, and learning actively,
transform bad testing into better.
The Two Guiding Principles
Whenever I need to define exploratory testing, I bow to people who have come before me.
Cem Kaner introduced me to the idea of exploratory testing with the first testing book I ever read:
Testing Computer Software. He defines exploratory testing as:
Exploratory software testing is a style of software testing that emphasizes the personal
freedom and responsibility of the individual tester to continually optimize the value of her
work by treating test-related learning, test design, test execution, and test result interpretation
as mutually supportive activities that run in parallel throughout the project.
Elisabeth Hendrickson et al. created an invaluable resource, a Cheat Sheet, to summarize some ideas
common to starting with exploratory testing. She defines exploratory testing as:
Exploratory testing is a systematic approach for discovering risks using rigorous analysis
techniques coupled with testing heuristics.
A lot of writing on the topic and techniques are part of Rapid Software Testing Methodology that
James Bach and Michael Bolton have created. They define all testing as exploratory and have
recently deprecated the term.
Exploratory testing, to me, emphasizes the difference to other testing that Julian Harty very clearly
points out: "Most of the testing I see is worthless. It should be automated, and the automation
deleted." Exploratory testing isn't that testing. A lot of that testing is around, though.
I find myself talking about two guiding principles around exploratory testing again and again. These
two guiding principles are learning and opportunity cost.

Learning
If we run a test but don't stop to learn and let the results of the test we just ran influence our choices
on the next test, we are not exploring. Learning is core to exploring. Exploring enables discovery
of information that is surprising, and the surprise should lead to learning.
The learning attitude shows in the testing we do: there is testing against the risk of regression,
but a lot of the time that risk isn't best addressed by running the exact same tests again and again.
Understanding the change, and seeking out the various perspectives in which it might have impact and
introduce problems that were not there before, is the primary way an exploratory tester thinks.
Whatever I test, I approach it with the idea of actively avoiding repeating the same test. There’s so
much I can vary, and learning about what I could vary is part of the charm of exploratory testing.
When we optimize for learning and providing as much relevant information as we can with whatever
we have learned by that time, we can be useful in different ways at different phases of our learning
with the system.
Opportunity Cost
Whatever we choose to do is a choice away from something else. Opportunity cost is the idea of
becoming aware of your choices, which always have more dimensions than the obvious ones.
There are some choices that remove others completely. Here's a thought experiment to clarify what
I mean: imagine you're married and having a hard time with your spouse. You're not exactly happy.
You come up with ideas of what could be changed. Two main ideas are on the table: one is to go
to counseling and the other is to try an open relationship. If you choose the latter and your spouse
feels strongly against it, the other option may no longer be available. The system has changed.
There are some choices that you can do in different order and they still are both relevant options.
If you choose to test first with a specification, you will never again be the person who has never
read the specification. If you choose to test first without the specification, you will never have the
experience of what you would notice if your first experience was with a specification.
There are some choices that leave others out of scope. If you choose to use all your time creating
automation, avoiding the exploration ideas you generate while automating because the basic
cases already require effort, you leave the information exploring could provide out of scope. If you
choose to explore and not automate, you leave the repetitive work of the future to be done manually,
or left undone.
The idea of being aware of opportunity cost emphasizes a variety of choices where there is no one
obviously correct choice in the stream of small decisions. We seek to provide information, and we
can do so with various orders of tasks.
It’s good to remember that rarely we have an endless schedule and budget. So being aware of
opportunity cost keeps us focused on doing the best testing possible with the time we have available.
Place of exploration
Over the years, I've worked with places where release cycles grow ever shorter. From integrating all
changes into builds a couple of times a week, we've moved over to continuous integration. Each
change gets integrated into the latest system and made available for testing as soon as the build
automation finishes running. We don't get to test the exact same thing for a very long time, or if
we do, we spend time on something that will not be the final assembly delivered. Similarly, we've
moved from giving those assemblies to customers once every six months to continuous delivery,
where the version in production can change multiple times a day.
In the fast-paced delivery world, we turn to look heavily at automation. As we need to be able to
run our tests again and again, and deliver each change as soon as it has been checked in and run
through our automated build and test pipeline, surely there is no place for exploratory testing? Or if
there is, maybe we just do all of our exploratory testing against a production version? Or maybe, on
top of all this automation, exploratory testing is a separate activity, happening just before accepting
the change into the assembly that gets built and pushed forward? Like a timebox spent on testing
whatever risks we saw the change potentially introducing that our automation may not catch?
Think of exploratory testing as a mindset that frames all testing activities - including automation.
It’s the mindset that suggests that even when we automate, we need to think. That the automation
we are creating for continuous testing is a tool, and will be only as good as the thinking that created
it. Just like the application it tests.

An example
We were working in a small team, building the basis for a feature: managing Windows Firewall
remotely. There were four of us: Alice and Bob were the programmers assigned to the task. Cecile
specialized in end-to-end test automation. David could read code, but on most days chose not to, and
thought of themselves as the team's exploratory testing specialist.
As the team was taking in the new feature, there was a whole-group discussion. The group talked
about existing APIs to use for the task at hand, and figured out that the feature had a core: there
was the existing Windows Firewall, there was information about rules to add being delivered, and those
rules needed to be applied, or there would not be a feature. After drawing some boxes on the wall
and having discussions about overall and minimal scope, the programmers started their work of writing
the application code.
It did not take long until Alice checked in the module frame, and Bob reviewed the pull request,
accepting the changes and making available something that was still just a frame. Alice and Bob paired
to build up the basic functionality, leaving Cecile and David listening to them bouncing ideas off each other about
what was the right thing to do. As they introduced functionality, they also included unit tests. And as
they figured out the next slice of functionality to add, David noticed how much exploratory
testing the two did within the pair. The unit tests would surprise them on a regular basis, and they
took each surprise as an invitation to explore in the context of code. Soon the functionality of adding
rules was forming, and the pull requests were accepted within the pair.
Meanwhile, Cecile was setting up possibilities to run the Windows Firewall on a multitude of
Windows operating systems. They created a script that introduced five different flavors of Windows
that were supposed to be supported, for the new functionality to be run as jobs within the continuous
integration pipeline. They created libraries that allowed them to drive the Windows Firewall on those
operating systems, so that one could programmatically see what rules were introduced and shown.
Since the team had agreed on the mechanism by which the rules would be delivered from the outside,
they also created mechanisms for locally creating rules through the same mechanism.
As soon as the module could be run, Alice and Bob would help out Cecile on getting the test
automation scripts running on the module. David also participated as they created the simplest
possible thing that should work: adding a rule called “1” that blocked ping and could be verified in
system context by running ping before and after. Setting up the scripts on top of Cecile’s foundation
was straightforward.
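The "simplest possible thing that should work" could look roughly like the sketch below. The helper functions are hypothetical, invented for illustration, though the netsh command syntax for blocking ICMPv4 echo requests (ping) is the standard Windows Firewall command line.

```python
# Sketch of the team's first end-to-end check: build the Windows Firewall
# commands for a rule that blocks ping, so the rule's effect can be
# verified by running ping before and after. Helper names are hypothetical.

def add_block_ping_rule(name: str) -> str:
    """Command that adds an inbound rule blocking ICMPv4 echo requests (type 8)."""
    return (
        f'netsh advfirewall firewall add rule name="{name}" '
        'dir=in action=block protocol=icmpv4:8,any'
    )

def delete_rule(name: str) -> str:
    """Command that removes the rule again, restoring the baseline."""
    return f'netsh advfirewall firewall delete rule name="{name}"'

# The team's first charter: a rule literally called "1".
print(add_block_ping_rule("1"))
```

A test script would run ping, execute the add command, run ping again expecting a timeout, then clean up with the delete command.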
Cecile wanted to test their scripts before leaving them to run on an hourly schedule as a baseline, and manually started a run on one of the operating systems, visually verifying what was going on. They soon realized that there was a problem they had not anticipated, leaving the
list of rules in an unstable state. Visually, things were flickering when the list of rules was looked
at through the UI. That was not what happened when rules were added manually. And Cecile had
explored enough of the system to know what adding a rule through the existing user interfaces
should look like. Something was off.
Cecile, Bob and Alice figured out that the problem was related to naming the rules. If the rule name was shorter than three characters, there was a problem. So Bob introduced a fix enforcing a minimum rule name length, Alice approved the change, and Cecile changed the rules to have names of at least three characters. Cecile used the Windows Firewall more to figure out different types of rules, and added more cases by exploring what was the same and what was different, making sure they would test things both locally and remotely - end to end.
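A fix of the kind Bob introduced might look like this minimal Python sketch. The function name and error message are invented; the three-character minimum comes from the story.

```python
# Hypothetical sketch of the fix: reject rule names shorter than
# three characters before they ever reach the firewall.
MIN_RULE_NAME_LENGTH = 3

def validate_rule_name(name: str) -> str:
    """Return the name unchanged, or raise if it is too short to be safe."""
    if len(name) < MIN_RULE_NAME_LENGTH:
        raise ValueError(
            f"rule name {name!r} is shorter than {MIN_RULE_NAME_LENGTH} characters"
        )
    return name
```

Note how the exploratory finding (a flickering rule list) turned into a guard that the unit tests could then pin down permanently.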
David had also started exploratory testing the application as soon as there were features available. They had learned that Alice and Bob did not introduce logging right from the start, and once they did, that the log wasn't consistent with other existing logs in how it was named. They were aware of the things being built into the automation, and focused their search on things that would expand the knowledge. They identified that there were other ways of introducing rules, or of locking the Windows Firewall so that rules could not be introduced through external mechanisms. They paid attention to the rule wizard functionality in the Windows Firewall, which enforced constraints on what makes a legal rule, and made notes of those, only to realize through testing that Alice and Bob had not considered that not all combinations were legal. The things David found were not bugs as the team defined a bug - programming errors - but rather missing functionality, stemming from a lack of information about the execution environment.
David would make lists of tests for Cecile to add to the test automation, and pair with Cecile to
add them. As they were pairing, the possibility of automatically creating a lot of rules sparked their imagination, and they tried introducing a thousand rules to note performance concerns. And as adding was so much fun, obviously removing some of them would make sense too. They would also add changing rules. And as they were playing with the numbers, they realized that they had uncovered a security issue: rules were there to protect, and the timing windows would allow for moments of unprotection.
The team built it all to a level they felt was the minimum to be delivered outside the organization. Unit tests and test automation allowed them to make changes while trusting that the covered cases still worked. They could explore around every change, and every idea.
The functionality also included a bit of monitoring, allowing them to see the use of the feature in production. After the feature was running in production, monitoring provided extra ideas of what to explore to understand the overall system better.
What this story shows:

• Everyone explores; not everyone calls it exploratory testing, even when it is that.
• We explore before we build, during building, while we automate and separately from building and automating, as well as in production.
• Exploration can happen in the context of code, in the context of the system we're building, and in the context of use in the overall system.
• Without exploratory testing, we don't give ourselves the best chance of knowing what we're building.

Separate Tester Role


In software development, we translate ideas to code. We focus on the idea, the translation process, and the result of that translation, each separately. Seeing things in many dimensions, in their full potential, occupies quite a lot of headspace. In fact, our ability to focus and think is heavily limited.
It has been a common experience that having people who use their headspace on building the solution, separately from people focusing on finding its weak points and understanding it in context, gives us better results in a shorter timeframe. For this to work to its full potential, we need collaboration, as closely knit as possible. Diverse views and focuses bring in a fuller picture when everyone's voices are valued and heard.
With limited headspace for learning, it makes sense that we diversify our learning focus. From personal experience, I can speak to the extensive practice needed to become a tester able to do deep (as opposed to shallow) exploratory testing. I can also speak to the experience of needing fewer specialized testers in teams in general, as those who do and practice deep exploratory testing can proceed much faster in the modern ways of working, where everyone tests and the basic quality of the products is not the time-consuming obstacle. Where there used to be a need for a tester per programmer, with the new division of labor ratios such as 1:10 are common.
Future is Here, Just Not Evenly Distributed


There are teams where collaboration is taken to a positive extreme and the whole team works together, even through a single computer used to do the work as a group. We call this way of collaborating mob programming. With mob programming, deep exploratory testing, as well as the typical insights of a tester's perspective, are contributed as soon as they emerge, while working on the code and solution together.
Just as there are these new closely knit teams, there are teams where people in different roles are highly isolated. There are places in which developers barely test, and the organization guides them to leave testing work to the separate role of testers. There are places where developers try to test, but the results show that they miss a lot of relevant issues. While a team learns to build software well, the feedback on quality typically provided by testers can be a make-or-break practice for both customer happiness and team effectiveness.
In places with long delivery cycles, the reliance on testers is often even higher. In systems requiring specialized domain knowledge the development teams don't have, it is essential that the domain specialists participate in the testing efforts to make sure the system serves the actual needs, fulfilling the expectations of why it was built in the first place.
No matter which version of software development we live in, exploratory testing plays a key role.
It just shows itself in different packaging in its projectized versus continuous flow formats.
How to Explore with Intent -
Exploratory Testing Self-Management
This article was published in Ministry of Testing Testing Planet in 2016. Appropriate pieces of it will
find their place as part of this book.
Exploratory testing is the wonderful idea that we can use our freedom of choice while testing, to
learn as we go on and let that learning influence the choices we make next. The system we test
is our external imagination, and it’s our responsibility to give it the chance to whisper out all the
information there is.
When we test, everyone is allowed to stretch their assigned boxes with exploration at least a little.
Even the most test case oriented organizations will ask you to think, learn, and look around while
executing your assigned tests. That’s what makes you good at testing in the organization.
For more of a stretch, these organizations will allow for a few hours of freedom from the assigned
box, to do some time-boxed exploratory testing for finding gaps your usual test case harness keeps
you from spotting.
Others, like myself, work in the exploratory testing mode full time. In this mode, test cases (if they exist at all) are an output of the process instead of an input, created at the time we know the most about a feature or product. We've learned a lot by the time we're done testing.
Regardless of whether your mode of exploratory testing is using it as a technique (extending your test cases), as a task (time-boxing exploration) or as an approach (engulfing all your thinking about
testing), there’s a critical skill of self-management you’ll need to develop. You’ll want to explore
with intent, keep track of what you know and learn, and what more there is to learn. All of this will
grow iteratively and incrementally as you do this type of testing.

Intertwining Different Testing Activities


With years of practice in skilled exploration, I now find it possible to do different activities simultaneously. I can strategize on the testing big picture and create tasks out of the ideas. I can execute testing on some of those ideas, configuring the environments, and learn from the different types of thinking. It's not really simultaneous; it's intertwined into tiny bits of tasks, allowing my mind to wander and categorize things into a frame of reference.
It was not always possible. Actually, it was really hard. In particular, it is really hard to intertwine long-term thinking (looking into future work) and short-term thinking (looking at what is going on now), which are very different in nature. That's OK, because the ability to intertwine is not a requirement to get
started. You would do well to acknowledge where your abilities are and to develop them further by practicing intertwining, while also allowing yourself time to focus on just one thing. With exploratory testing, the formula includes you: what works for you, as you are today.

A Practical Example
Imagine learning to drive a car. You're taking your first lessons at the driving school, and after some bits of theory you know the basic mechanics of driving but have never done any of it.
You’ve been shown the three pedals, and when you stop to think, you know which one is which.
You know the gear shifter and it’s clear without telling what the steering wheel does (as long as you
drive forward, that is). And finally comes the moment you’re actually going to drive.
The driving instructor makes you drive a couple of laps around the parking lot and then tells you to drive out, amongst the other cars. With the newness of all this, your mind blanks and you remember nothing of the following half an hour. And if you remember something, it's the time your car stalled at an embarrassing location because it was too hard to do the right combination of clutch and gears.
All the pieces are new, and doing the right combination of even two of them at the same time is an effort. Think about it: when you looked to see if you could turn right, didn't you already start turning the wheel? And when you were stopped at the lights to turn, didn't it take significant effort to get moving and turn at the same time?
After years of driving, you’re able to do the details without thinking much, and you’re free to use
your energy on optimizing your route of the day or the discussion you’re having with the person
next to you. Or choosing a new scenic route without messing up your driving flow.
It’s the same with testing. There’s a number of things to pay attention to. The details of the
application you’re operating. The details of the tools you need to use. The uncertainties of
information. All your thoughts and knowledge. The information you get from others, and whether
you trust it or not. The ideas of what to test and how to test it. The ideas of what would help you test
again later. The expectations driving you to care about particular type of information. Combining
any two of these at a time seems like a stretch and yet with exploratory testing, you’re expected to
keep track of all of these in some way. And most essentially from all the details, you’re expected to
build out and communicate both a long-term and a short-term view of the testing you’ve done and
are about to do.

Learning To Self-manage
I find that a critical skill for an exploratory tester is the skill to self-manage, and to create a structure
that helps you keep track of what you’re doing. Nowadays, with some years of experience behind
me, I just create mind maps. There is a simple tool I found to be brilliant for learning the right kind
of thinking, and that tool is what I want to share with you.
When I say tool, I mean more of a thinking tool. The thinking tool here, though, has a physical structure.
For a relevant timeframe, I was going around testing with a notebook for a very particular purpose.
Each page in the notebook represented a day of testing, and provided me a mechanism to keep track
of my days. A page was split into four sections, with invisible titles I've illustrated in the picture: Mission (why am I here?), Charter (what am I doing today?), Details (what am I keeping track of in the details?) and Other Charters (what should I be doing before I'm done?).
At the start of a day of testing, I would open a fresh page and review my status after letting earlier
learning sink in. Each of the pages would stay there to remind me of how my learning journey
developed as the application was built up, one day at a time.
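The four-quadrant page can be modeled as a simple data structure. This is a hypothetical sketch; the quadrant names come from the text, while the example contents are invented.

```python
# A notebook page for one day of exploratory testing, with the four
# quadrants described in the text as fields.
from dataclasses import dataclass, field

@dataclass
class NotebookPage:
    mission: str                  # top left: why am I here?
    charter: str                  # top right: what am I doing today?
    details: list = field(default_factory=list)         # bottom right: bugs, questions, automation ideas
    other_charters: list = field(default_factory=list)  # bottom left: work to park for later

# Invented example contents, echoing the installations-team story below.
page = NotebookPage(
    mission="Cover the installations area for relevant findings",
    charter="20 shallow installations",
)
page.details.append("BUG: installer leaves temporary files behind")
page.other_charters.append("Deep dive into what actually gets installed")
```

The point of the structure is not the code but the discipline: every note lands in one of four places, each reviewed at its own rhythm.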

Notebook illustration

Mission
In the top left corner, I would stick a note about my mission, my purpose, or as I often liked to think of it, the sandbox I was hired to play in. What did the organization expect of me in terms of the information I would provide, having hired me as an exploratory tester? How could I describe that in just a few sentences?
For example, I was hired in an organization with ten teams, each working on a particular area of the
product. My team was specializing in installations. That little note reminded me that while I could
test anything outside the installations if I so wished, there was a sandbox that I was supposed to
cover for relevant findings and it was unlikely that others would feel the urge to dig deep into my
area.
They were likely to travel through it, but all the special things in the area they would probably rather avoid. If I were digging through someone else's area, nothing would stop me. But I might leave mine unattended. I might feel that I had used all this time, and therefore I was done, even if I had only shallowly covered my own area.
The mission note reminded me of the types of information the organization considered relevant, and the area of responsibility I felt I had accepted. It served as an anchor when the whispers of the product led me elsewhere to explore.

Charter
In the top right corner was my note about the work of the day: the Charter. Each morning I would imagine what I was trying to achieve that day - only to learn most evenings that I had done something completely different. A charter is a framing of what I'm testing, and as I learn, it changes over time. It's acceptable to start out with one idea and end up with something completely different when you are finished.
The note of the day was another anchor keeping me honest. With exploration, I’m not required to
stick to my own plans. But I’m required to be in control of my plans in the sense that I don’t fool
myself into believing something is done just because the time is used.
Continuing on my example with the Installations team, I might set up my charter of the day to
be 2 installations with a deep dive into what actually gets installed. Or I might set it up to be 20
installations, looking through each shallowly. Or I might decide to focus on a few particular features
and their combinations. If I saw something while testing that triggered another thought, I could
follow it. But at the end of the day, I could review my idea from the morning: did I do 20 shallow
installations like I thought I would? If I didn’t, what did I do? What am I learning for myself from
how things turned out?

Details
In the bottom right corner, I would pile up notes. At first, these were just lines of text that would often fill the page next to the one I was working on. Later I realized that, for me, there were three things I wanted to make notes of - the bugs, the questions, and the ideas for test automation or test cases - and my notes extended to have a categorization shorthand.
With any of the detailed ideas, I could choose to stop doing the testing I was doing, and attend to
the detail right away. I could decide that instead of focusing on exploring to find new information,
I could create an automated test case from a scenario I cooked up from exploration. I could decide
that instead of completing what I was planning on doing today, I would write the great bug report
with proper investigation behind it. I could decide to find a product owner, a support representative,
a programmer, or my manager to get an answer for a burning question I had. Or, I could make note
of any of these with minimum effort, and stick to my idea of what I would do to test the application
before attending to the details.
I learned that people like me can generate so many questions that, without a personal throttling mechanism, I can block others from focusing on their own work. So I realized that collecting the questions and asking them at regular intervals was a good discipline for me. And while looking through my questions, I would notice that I had answers to more of them myself than I first thought.
With each detail, the choice is mine. Shall I act on this detail immediately, or could it wait? Am I losing something relevant if I don't get my answer right away? Is the bug I found something the developer would rather know about now than at the end of my working day? Do I want to stop being in exploratory mode to improve my documentation, or to pair with a developer on a piece of test automation, or would I rather time-box that work for another day, based on the idea I had while testing?

Other Charters
In the bottom left corner, I would make notes of exploratory testing work I realized needed doing while I was testing. I would write down ideas small and large that I would park for future reference, sometimes realizing later that I had already covered some of them and just forgotten. Sometimes I would add them to my backlog of work to do, and sometimes tune the existing backlog to support choosing focus points for upcoming testing days.
Some of my ideas would require creating code to extend the reach of exploration. Some would require getting intimately familiar with the details of log files and database structures. Each new idea would build on the learning that had happened before, making me reassess my strategy of which information I would invest in having available first.

You’re In Control
The tool isn’t there to control you, it’s there to give you a structure to make your work visible for
you. You get to decide what happens when you explore, and in what order. If you need to go through
a particular flow 15 times from various angles, you do that. If you find it hard to think about strategy and the importance of particular tasks while you're deep in testing, you reserve separate time for strategic thinking.
With the days passing and notes taken, I could go back and see what types of sessions I would typically have. There would be days where I'd just survey a functionality, to figure out a plan of charters without focusing on details. There would be target-rich functionalities, where the only detail I could pay attention to was the bugs. Over time, I could pay attention to doing things intentionally
with particular focus, and intentionally intertwined. I could stop to think about how different days and different combinations made me feel. I learned to combine things in ways that were useful for my organization, but that also maximized the fun I could have while testing in a versatile manner.
While most of the value was in learning to self-manage my testing work around learning, there was also a side effect. When someone showed up to ask what I had done and was doing, I could just flip a page and give an account of what had been going on. Seeing the structure created trust in those who were interested in my progress.
As an active learner, you will get better every day you spend on testing. Exploratory testing treats test design, test execution and learning as parallel, mutually supportive activities for finding unknown unknowns. Doing things in parallel can be difficult, and the testing needs to adjust to the tester's personal skill level and style. Your skill in self-managing your work and your learning - making learning and reflection a habit - is what differentiates skilled exploratory testing from randomly putting testing activities together.
I believe that the thing that keeps us testers from being treated as a commodity is learning. It's the same with programmers. Learners outperform those who don't learn. Exploratory testing has learning at its core.
Exploratory Testing an API
This article was published in Ministry of Testing Testing Planet in 2016. Appropriate pieces of it will
find their place as part of this book.
As an exploratory tester I have honed my skills in testing products and applications through a
graphical user interface. The product is my external imagination and I can almost hear it whispering
to me: “Click here… You want to give me a different input… Have you checked out the log file I’m
producing?”
Exploratory testing is a systematic approach to uncovering risks and learning while testing. The whispers I imagine are heuristics, based on years of experience and learning how to model the product in ways that identify the information relevant to stakeholders. While the product is my external imagination when I explore, I am my programmer's external imagination when they explore. They hear the same, unspoken whispers: "You'd want me to do this… I guess I should then." and they then become better testers.
I’ve only recently started applying this skillset on APIs - Application Programming Interfaces. An
application programming interface is a set of routines, protocols, and tools for building software and
applications. What triggered this focus of exploration effort was an idea to show at a conference how
testing something with a code interface is still very similar to testing something with a GUI.
With an API call, I can just fill in the blanks and figure out how the API sits in the bigger picture.
There should be nothing that stops me from exploring through an API, but why haven’t I done
it before? And then as I started exploring an API with a testing mindset, I started seeing APIs
everywhere, and finding more early opportunities to contribute.

Why Is Exploratory Testing An API Relevant?


The way I look at the world of testing, I see two ways we look at good testing: testing as creating artifacts and testing as a performance. I learned this working side by side with Llewellyn Falco, a test-infected developer. Whenever he would explain testing as he knew it, it was clear he was not speaking of testing as I knew it. We both talked about good testing, just with very different ideas of it.

Testing as Artifact Creation


People with automation-first ideas about testing see testing as an activity around creating artifacts. In the times before agile and extensive automation, the same testing was supposed to be
covered by detailed test cases. I've lived with agile ideals long enough to have moved to the idea that whatever is worth documenting in detail is probably worth documenting as test automation. This way of looking at testing tends to focus on what we know.
When we approach testing as artifact creation, our focus is primarily on solving the problem of creating the right artifacts: what kinds of things would be useful to automate? Where is the low-hanging fruit, and what kind of automation would help us drive and support the development?
The test automation artifacts at best give us:

• Spec - we know what we're building
• Feedback - we know when it's built as we specified
• Regression - we check that things stay true over time
• Granularity - we can pinpoint what went wrong when the tests fail

Testing As a Performance, (aka Exploratory Testing)


People with an exploratory testing background are more inclined to see testing as a performance, like a show with improvisation, improving with practice and revealing new layers to how the performance goes. There's nothing that stops you from creating automation with that approach, but it starts with valuing different things. It starts with a focus on the things we don't know, and the illusions we hold true without empirical evidence.
In contrast to testing as artifact creation, exploratory testing gives us:

• Guidance - not just yes/no, but a direction to better
• Understanding - being able to place the information into a bigger picture
• Models - ways to learn faster with supporting ideas or documents
• Serendipity - the lucky accidents that no one thought of, which emerge given enough time and variety with any application
What Does Testing Give Us?

What Testing Gives Us

We need both sides of the coin. Exploratory testing is a process of discovery, and it is entirely possible
to discover information we did not have from extensive test automation using APIs as our external
imagination.
There’s inherently nothing in exploratory testing that requires we must have a user interface or
have finalized features available. Still often I hear people expressing surprise at the idea that you
can explore an API.
In addition, exploring an API is often thought of as something intended for programmers. An even more specific misconception is that exploratory testing could not use programming or automation as part of the exploration - that there is something inherently manual required for exploration. We must, as software testers, help team members understand that we can explore software in a variety of manual, automated and technical ways.

An Example API with ApprovalTests


After some time and research, considering a REST API or even some old Cobol APPC API or a framework/library programmers typically use, I ended up with ApprovalTests. It is created
by a developer friend with significant reliance on the greatness of his unit tests, and I welcomed the challenge of finding problems through the means of exploratory testing.
ApprovalTests is a library for deciding whether your tests pass, with a mechanism of saving a result to a file and then comparing to the saved result. It also offers mechanisms for digging into the differences on failure. It has extensive unit tests, and a developer who likes to brag about how awesome his code is. The developer is a friend of mine and has a great approach to his open source project: he pairs with people who complain, to fix things together.
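The save-and-compare mechanism can be illustrated with a minimal sketch in plain Python. This is not ApprovalTests' own implementation or API, just the underlying idea: approved results live in files, and mismatches leave a received file behind for a human to inspect.

```python
# Approval-style checking, sketched: compare a result to a previously
# approved file; on mismatch (or first run), write a .received file
# that a human can inspect and promote to .approved if it is correct.
from pathlib import Path

def verify(name: str, result: str, directory: Path = Path(".")) -> bool:
    approved = directory / f"{name}.approved.txt"
    received = directory / f"{name}.received.txt"
    if approved.exists() and approved.read_text() == result:
        received.unlink(missing_ok=True)  # clean up any stale diff
        return True
    received.write_text(result)  # leave the diff behind for analysis
    return False
```

A reporter, in this framing, is whatever you bolt on to show the difference between the approved and received files when `verify` returns False.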
ApprovalTests has a couple of main connecting points:

• There are the Approvals, which are specific to the technology you're testing. For example, my company used ExcelApprovals, which packaged a solution to the problem of results that are different yet the same with every run.
• And then there are the Reporters, which are a way of saying how you want to analyze your tests if they fail.

I personally know enough about programming to appreciate an IDE and its automatic word completion feature. The one in Visual Studio is called IntelliSense. It's as if there is a GUI: I write a few letters, and the program already suggests options. That's a user interface I can explore, just as much as any other! The tool shows what the API includes.

Picture 1

Using the IDE word completion, I learn that Approvals has a lot of options in its API. For example, to test specific technologies with Approvals, you would make different selections. Documentation reveals that Approvals.Verify() would be the basic scenario.
Exploratory Testing an API 26

Picture 2

I look at Reporters with the same idea of just opening up a menu, and find it hard to figure out which items in the list are reporters.
I later learn that it's because of the word that comes before, and that naming everything ReportWith would help make the reporters more discoverable.
I also learn that the names could better convey their intent; for example, some are supposed to be silent - to run in continuous integration.

Picture 3

I go for the online examples, and learn that they are images - not very user friendly. I try to look for in-IDE documentation, and learn it's almost non-existent. I run the existing unit tests to discover they don't work on the first run, but the developer fixes them quickly. And browsing through the public API with the tool, I note a typo that gets fixed right away.
The API has dependencies on other tools, specifically test runners (e.g. NUnit and MSTest), and I want my environment to enable exploring the similarities and differences between the two. A serendipitous
order of installing the pieces reveals a bug in the combination of using two runners with a delivery channel (NuGet). Over the course of testing, I draw a map of the environment I'm working with, around ApprovalTests. The map is a source of test ideas on the dependencies.
I don’t only rely on the information available online, I actively ask for information. I ask the
developer what Approvals and Reporters do, to get a long list of things that some do and some
don’t - this becomes a great source for more exploratory testing. Like a checklist of claims, and a
great source for helping him tune up his user documentation.
Even a short exploration gave me ideas of where to dig deeper, and issues to address with the developer. Additional sessions with groups at conferences revealed more problems, and showed the power of exploratory testing on an extensively automation-tested API.

13 Patterns To Help You Explore An API


It’s OK if you are unfamiliar with the concepts above in the specific example, but I think the patterns
below will help. They sum up my lessons learned on exploring an API myself, coming from a GUI-
focused viewpoint.

1 - Focus: Working with limited understanding


As an exploratory tester, your goal could be to provide value with a limited amount of effort. As you
are learning in layers, you need to choose the layers intelligently. There’s no right or wrong answer
on what to test first, but there are some usual candidates.
Some testers start looking at input fields. It’s like “fill in the blank” or like testing Google search.
What are the relevant things I could enter here?
How do I find the APIs to explore in the first place? Asking around, picking something people call an API or a library? Using tools like Fiddler to track a particular technology's API calls from applications?
Others choose to start with documentation and collecting claims.
And others try to figure out a representative use case to get a basic feeling of what it would look
like when this works.
Notes are useful here. Park some ideas that you could make use of later. Choose some to take action
on. Continue to add more as you progress.
Regardless of what you do, you need to focus. Choose something. Learn. Choose something different.
Learn more. Continue until you’re satisfied with what you’ve tested or until you run out of time. The
idea is that when time runs out, you’ve already done the best testing you could in that timeframe.
2 - Finding your building blocks


Tweak around a little, and you'll see it. There are calls and operations. There are inputs and outputs. There are requests and responses. There are exceptions. There are dependencies. And all exist to serve a purpose. Understand what the dials are in the API that you can turn while testing.
You could play with the building blocks without understanding the purpose. You might discover the
purpose through the building blocks. But the building blocks are your blanks to fill in: what you call,
with what values, and what you expect to get back. Make a model of that.
You might call your inputs requests and your outputs responses. Find your vocabulary.
Also, look around for ways to extend: could you build something on top of these blocks?
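A sketch of what such a model of the building blocks could look like in Python; all the names here are illustrative, not from any specific API:

```python
from dataclasses import dataclass, field

# One "blank to fill in": what you call, with what values, and what
# you expect to get back. Notes capture what you learn along the way.
@dataclass
class Call:
    operation: str                                 # what you call
    inputs: dict = field(default_factory=dict)     # with what values
    expected: object = None                        # what you expect back
    notes: str = ""                                # observations while testing

# Filling in the blanks for one operation gives a concrete test idea.
idea = Call(operation="getUser", inputs={"id": -1},
            expected="error: not found")
```

Even this small a model makes it visible which dials you have turned and which you have not.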

3 - The environment it depends on


Your API probably does not exist alone. It depends on an environment. There’s a specific syntax
coming from the language in play. There are things it uses, and things that use it.
Understand what’s in your scope (“your API”) and what is outside it (“ecosystem in which your API
provides value”). Draw a picture. Extend that picture as you’re learning more.

4 - Specific user with a specific purpose


APIs have users. Often, the users are developers. And developers are people with specific habits and
expectations.
They expect things to be language-wise idiomatic, to follow the conventions they are used to living
with. They expect particular styles of documentation. They expect discoverability.
Some of the best APIs are ones where the time to hello world (seeing it could work with some example)
is short and does not require stepping out of your IDE.
Talk to developers to learn some of their quirks. Listen to what they say about APIs they use and
don’t use. Listen to what they say about how they extend the APIs or work around their limitations.
In addition to intended use, there’s also misuse. What could go wrong if people misused your API?
Also, pay attention to names. If your API has something called ReadStuff() and it ends up creating
a lot of new data, that goes against what people expect. People rely on names to guess behavior.
What could get in the way of that expectation, and what should?
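A hypothetical sketch of the ReadStuff() problem, and the kind of check an exploratory tester might run to catch it; the names and store are made up for illustration:

```python
# Hypothetical example: a badly named operation whose behavior
# contradicts its name.
store = []

def read_stuff():
    # Against expectation, this "read" also creates data.
    store.append("new record")
    return list(store)

# The check: a read should not change state.
before = len(store)
read_stuff()
after = len(store)
side_effect = after != before  # True here reveals the naming lie
```

The check is trivial, but the question behind it (does this name tell the truth?) is worth asking of every operation in the API.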

5 - Usability of an API
There’s great material out there on what makes a good API. It’s really a usability discussion!
When using the different commands/methods, what if they were consistent in naming and in
parameter ordering, so that programmers using them would make fewer mistakes?

What if the method signatures didn’t repeat many parameters of the same type, so that programmers
using them wouldn’t get the order mixed up?
What if using the API incorrectly failed at compile time, not only at run time, to give faster feedback?
What if the methods followed a conservative overloading strategy, the idea that two methods
with the same name never take the same number of arguments, so that users cannot confuse the
inputs? I had never heard of the concept before exploring an API, and ran into the idea while googling
for what people say makes APIs more usable.
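Python has no compile-time overloading, but the same mix-up risk appears whenever two parameters share a type. A sketch of one way an API can make the mistake fail fast; the transfer operation and its names are hypothetical:

```python
# Easy to misuse: two parameters of the same type, position decides meaning.
def transfer(source: str, target: str) -> str:
    return f"{source} -> {target}"

# Harder to misuse: keyword-only parameters force callers to name each one,
# so silently swapping them is impossible.
def transfer_safe(*, source: str, target: str) -> str:
    return f"{source} -> {target}"

# Positional callers of transfer() can swap accounts without any error:
mixed_up = transfer("savings", "checking") != transfer("checking", "savings")
# transfer_safe("savings", "checking") raises a TypeError immediately,
# failing fast instead of silently moving money the wrong way.
```

The design choice is the same usability idea as conservative overloading: make the wrong call impossible to write, not merely unlikely.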
There’s even a position in the world called Developer Experience (DX), applying user experience
(UX) concepts to the APIs developers use and focusing on things like Time to Hello World (seeing it
run in your environment) and Time to Working Application (using it for something real). These ideas
come naturally with an exploratory testing mindset.

6 - Why would anyone use this?


APIs exist for a purpose. Even free open-source APIs exist for a purpose. Many open source
libraries hope to see more developers using them. Some may even be puzzled about why something
this excellent isn’t getting more traction. Sometimes, with a lot of choice, you wonder why you
would choose something that is difficult or does not serve your exact needs.
It’s important that an exploratory tester asks the question of purpose, even when faced with a
programming interface:

• Why does this exist?
• Who finds this valuable, and in what way?
• What might come between expected and received value?

7 - Think Lifecycle
You’re probably testing a version you have just kindly received into your hands. There were probably
versions before and there will be versions after. The only software that does not change is dead. How
does this make you think about testing?

• How would change work on this API?
• Is there a way for the user to recognize its version?
• How about removing functionality or replacing API calls: how long would you keep things
marked deprecated in your interface?
• And does that stop you from correcting even simple typos?

8 - Google for concepts


The world is full of sources. If you see a word you don’t understand, Google is your friend. I ended up
googling “Conservative Overloading Strategy” after learning it’s something considered a feature
of a good API.
Answers are usually only a few clicks away.

9 - Collaborate: Fast-track your understanding


You’re not alone. Find someone to pair with. Find someone you can ask questions. Even if you
start testing all on your own, you don’t have to stay that way. Especially when working on your own
organization’s APIs, try pairing with a programmer.
I often specifically recommend Strong-Style Pairing. If your pairing partner has better ideas on how
to test, you take the keyboard. Then they speak, and the ideas travel from their head through your
hands into the computer.
It’s not just hands-on testing you could collaborate on. You could collaborate on coming up with
things you should know to test about. Mindmap creation together is a wonderful way of building
that information to feed exploratory testing.

10 - Documentation matters a lot for APIs


For programmers to be able to use an API, they look at documentation with examples. If your
examples are in many languages and scattered all over the place, you’re not making it easy to find
the basic stuff.
If there’s no documentation, the usability and discoverability of your API had better be good. Then
again, it could be that your users simply have no options for which API to use for their purposes.
That seems popular too.
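One way to keep examples findable and honest, sketched here in Python: put the basic usage in the docstring and let doctest execute it, so the documentation and the behavior cannot quietly drift apart. The greet() function is illustrative only:

```python
import doctest

def greet(name: str) -> str:
    """Return a greeting for the given user.

    >>> greet("Ada")
    'Hello, Ada!'
    """
    return f"Hello, {name}!"

# Run every example found in greet's docstring and count failures.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
for test in finder.find(greet, "greet"):
    runner.run(test)
failures = runner.failures
```

Executable examples are the "basic stuff" a new user needs, placed where they will be found first.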

11 - Explore to create documentation


Programmers might not find documentation creation their strong point, and while exploring,
you might become a subject matter expert in the use of the API you’re testing.
Turn the things you’ve learned into things that make the product better.

• Find the core of a new user’s experience and describe that clearly
• Add the deeper understanding of what the functionalities do (and why) into the API
documentation
• Clean up and remove things from automatically generated API documentation

12 - Some patterns become visible with repetition


I find myself doing almost or exactly the same things many times before I understand what the API
does for a specific type of case. Sometimes repetition helps me see subtle differences that should be
tested for separately. Sometimes repetition helps me see how things are the same. I find that while
exploring, I need the patience to do things more than once, with my mind paying attention to
all the subtle clues the API can give me. Learning is the key. When no longer learning, move on. But
give yourself time, as repetition is necessary.
My two favorite quotes on this are:

• It’s not that I’m so smart, it’s just that I stay with problems longer (Albert Einstein)
• The more I practice, the luckier I get (Arnold Palmer)

I find a lot more insights digging in deeper than what is first visible. Manual repetition may be key
for this insight, even on APIs.
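A sketch of how repetition exposes what a single call never would; lookup() is a hypothetical operation whose answer depends on hidden state left behind by earlier calls:

```python
# Hypothetical operation with hidden state: a cache the caller never sees.
_cache = {}

def lookup(key: str) -> str:
    # First call for a key is a "miss"; later calls are served differently.
    if key not in _cache:
        _cache[key] = "miss"
        return "miss"
    return "hit"

# Only by repeating the exact same call does the difference become visible.
observations = [lookup("user-42") for _ in range(3)]
```

One call would have told you nothing; the third call confirms the pattern rather than a fluke, which is exactly why patience with repetition pays off.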

13 - Disposable test automation


When you create these tests, they may end up being automated. Many times you need some level
of automation just to access your API. You have requests and responses, and you may have code around
those. Some APIs you can’t run at all without code. Your code might not include automatic result
checking if you are just driving the API rather than verifying responses. But your code may also
encode theories that must hold true, like exploring whether a rare error message over a large data
sample really is rare.
Most of all, with this automation you’re creating, you need to critically ask yourself:

• Could it be disposable, for one-time use?
• Is its value in the learning it provided right now?
• Are these tests that you want to see yourself maintaining?

Your set of automated tests will require maintenance. When tests fail, you will look into them.
Choose wisely, and keep choosing as you learn what wise means.
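A disposable sketch of the “is this rare error really rare” idea; flaky_call() stands in for the real API, and the failure rate is invented for illustration:

```python
import random

random.seed(7)  # fixed seed so the exploration run is repeatable

def flaky_call() -> bool:
    # Stand-in for the real API call; fails roughly 1 time in 100.
    return random.random() >= 0.01

# Drive the call over a large sample to learn how rare the error is.
runs = 10_000
failures = sum(1 for _ in range(runs) if not flaky_call())
error_rate = failures / runs
# The learning is the number itself; the script can then be thrown away.
```

A script like this answers one question, today. Keeping it in a maintained suite would cost more than the answer is worth, which is what makes it a good candidate for disposable automation.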

Summary
Exploring APIs gives you the power of early feedback and easy access to new skills closer to code.
I suggest you give it a chance. Volunteer to work on something you wouldn’t usually work on as a
non-programmer. You’ll be surprised to see how much of your knowledge is transferable to a “more
technical” environment.
